
Implementation of an OpenVDB based cache system for smoke simulations.
ClosedPublic

Authored by Kévin Dietrich (kevindietrich) on Jan 9 2016, 3:04 PM.

Diff Detail

Repository
rB Blender

Event Timeline

Kévin Dietrich (kevindietrich) retitled this revision to Implementation of an OpenVDB based cache system for smoke simulations.
Kévin Dietrich (kevindietrich) updated this object.

A few quick comments.

Currently a file is output for every frame, though we might have an option to export all frames in a single file, if this behavior is desired.

I wouldn't bother with this, I've never seen VDB files used that way and I don't know any renderers that support such files.

High resolution simulation:
the same fields as for the low resolution version, suffixed with "High" (e.g. "Density" -> "Density High")
Texture Coordinates (used for turbulence)

I think it would be better to give a suffix to the low resolution fields instead of the high resolution fields. Generally for rendering you just want to render the highest resolution in the file, and it would be inconvenient to have to use different grid names depending if high resolution is used or not.

On top of that it is possible to use Zip compression, or Blosc compression if OpenVDB was built with Blosc support. The only real difference between the two is that Blosc is multi-threaded.

As far as I know Blosc is the much better compression for OpenVDB because of its decompression speed, and also the default in Houdini. So I think Blosc should be the default.

A metadata called "creator", set to "Blender/OpenVDBWriter", is written in every file to help identify where the file comes from when reading.

Maybe use Blender/Smoke?

I see Houdini uses a value like Houdini/SOP_VdbAtomWriter, which makes sense as SOP_VdbAtomWriter is the name of a node / SOP that users can choose to use. But OpenVDBWriter in Blender is not a concept that means anything to users.

source/blender/makesrna/intern/rna_smoke.c
373–375

Perhaps the description should be something like "Blender specific point cache file format" and "OpenVDB file format"

644–647

RNA_def_property_ui_text is called twice for the same property.

I would name this property cache_file_format which is a bit more specific than cache_type.

I haven't analysed the code in depth, but could you explain why this duplicates so much point cache code? I was expecting VDB to be just an alternative file format, which I wouldn't expect to require its own baking operators, caching logic or depsgraph hooks.

CMakeLists.txt
246

It might be best to enable this by default. For proper interop with other applications Blosc isn't really optional I think.

intern/openvdb/intern/openvdb_dense_convert.h
86

I think this needs to handle the case where a grid with the given name does not exist in the file.

intern/openvdb/openvdb_util.cc
31–57

openvdb::util::CpuTimer could be used to avoid this platform specific code.
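To illustrate the shape of what replaces that platform-specific code, here is a scoped-timer sketch using std::chrono (purely illustrative; the actual suggestion is openvdb::util::CpuTimer, which offers the same start/elapsed idea, and the ScopeTimer name is made up):

```cpp
#include <chrono>
#include <cstdio>

/* Illustrative scoped timer built on std::chrono instead of per-platform
 * clocks; openvdb::util::CpuTimer provides equivalent functionality. */
struct ScopeTimer {
	const char *label;
	std::chrono::steady_clock::time_point start;

	explicit ScopeTimer(const char *l)
	    : label(l), start(std::chrono::steady_clock::now()) {}

	double elapsed_ms() const
	{
		using namespace std::chrono;
		return duration<double, std::milli>(steady_clock::now() - start).count();
	}

	/* Print the elapsed time when the timer goes out of scope. */
	~ScopeTimer() { std::fprintf(stderr, "%s: %.3f ms\n", label, elapsed_ms()); }
};
```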

A few quick comments.

Currently a file is output for every frame, though we might have an option to export all frames in a single file, if this behavior is desired.

I wouldn't bother with this, I've never seen VDB files used that way and I don't know any renderers that support such files.

I remember being asked if this was feasible; it is, but it's quite impractical indeed. I think I'll remove this line.

High resolution simulation:
the same fields as for the low resolution version, suffixed with "High" (e.g. "Density" -> "Density High")
Texture Coordinates (used for turbulence)

I think it would be better to give a suffix to the low resolution fields instead of the high resolution fields. Generally for rendering you just want to render the highest resolution in the file, and it would be inconvenient to have to use different grid names depending if high resolution is used or not.

Can do.

On top of that it is possible to use Zip compression, or Blosc compression if OpenVDB was built with Blosc support. The only real difference between the two is that Blosc is multi-threaded.

As far as I know Blosc is the much better compression for OpenVDB because of its decompression speed, and also the default in Houdini. So I think Blosc should be the default.

See my inline comment.

A metadata called "creator", set to "Blender/OpenVDBWriter", is written in every file to help identify where the file comes from when reading.

Maybe use Blender/Smoke?

That sounds a bit better, and more descriptive.

I haven't analysed the code in depth, but could you explain why this duplicates so much point cache code? I was expecting VDB to be just an alternative file format, which I wouldn't expect to require its own baking operators, caching logic or depsgraph hooks.

While @Lukas Toenne (lukastoenne) was working on Alembic during Gooseberry, he gave me the idea of making an OpenVDB exporter for smoke sims, since it would be better suited than Alembic for this. After that, people wanted it to have its own cache system next to the point cache, without reusing the point cache code, so that the point cache could eventually be removed. That followed the idea that Alembic would replace the current point cache system for everything else (cloth, hair...). So I turned my little exporter into a little cache system. Unless I misunderstood what they (the Gooseberry team and other developers) wanted/expected.

The depsgraph hooks are apparently required to invalidate the cache when you modify a flow/collision object while the simulation is running and cached on the fly (before a final bake). I thought I could get away without those.

CMakeLists.txt
246

Blosc is optional as far as compiling OpenVDB is concerned, since it is a separate library, which is why I added this compile option. I think it'd be better to always compile OpenVDB with Blosc and remove this option. I might add a page or section to the wiki documenting what is required to compile OpenVDB (linking with Blosc, jemalloc...). Most people only compile it with OpenEXR and TBB.

intern/openvdb/intern/openvdb_dense_convert.h
86

In that case an exception (openvdb::KeyError) will be thrown and not caught, so Blender will crash. I can refactor that and add some exception safety if desired, though. For now the code assumes it will only read files output by Blender, and everything is hardcoded anyway.

intern/openvdb/openvdb_util.cc
31–57

Right, will do.

While @Lukas Toenne (lukastoenne) was working on Alembic during Gooseberry, he gave me the idea of making an OpenVDB exporter for smoke sims, since it would be better suited than Alembic for this. After that, people wanted it to have its own cache system next to the point cache, without reusing the point cache code, so that the point cache could eventually be removed. That followed the idea that Alembic would replace the current point cache system for everything else (cloth, hair...). So I turned my little exporter into a little cache system. Unless I misunderstood what they (the Gooseberry team and other developers) wanted/expected.

The point cache system was designed to be a unified physics caching and baking system, to avoid duplicating code, to make the UI and behavior consistent, and to make it possible to bake all physics systems in one go so that they can all interact properly.

I wasn't involved with that Alembic point cache design or code so I don't know the rationale, but I would hope the eventual goal would still be to keep the physics caching and baking unified, even if some systems use different file formats? If the idea was to eventually have two separate Alembic and OpenVDB systems then that seems wrong to me.

I feel a bit bad coming late into this, but reading T37578: Point Cache replacement based on Alembic and Alembic branch code I still don't see a good reason to have a separate system even as an intermediate step. To me it seems we'd end up with duplicated code which will only be more work to unify again later.

Kévin Dietrich (kevindietrich) marked 5 inline comments as done.Jan 11 2016, 8:35 AM

I wasn't involved with that Alembic point cache design or code so I don't know the rationale, but I would hope the eventual goal would still be to keep the physics caching and baking unified, even if some systems use different file formats?

Yes. To give you some context, if I recall correctly, Lukas' goal was to make a unified cache library, where various file types would be used. It's only when I asked about volumes and Alembic that he got me into working on OpenVDB caching (as I had already worked on an OpenVDB patch which didn't make it (D1008)). To say the least, a couple of functions in this patch were copied/adapted from his branch.

Since this cache library work is pretty much on hold, I just merged my work with the point cache, which will make the patch smaller and code review faster ;) (I guess I could have done that before.)


There's one issue now: the point cache does not read .vdb files. For some reason PointCache.cached_frames is set to zero after a file is written. By commenting out some code I managed to get the point cache to read .vdb files, so it works, but I don't know where/why it fails to register written .vdb files. I'll upload a new patch shortly for people to have a look.

Also there's another issue (an old one), which I kept for visual debugging: the smoke read from vdb files is darker. That's an easy fix, but I keep it to tell where the cache is being read from.

  • Merge branch 'master' into openvdb_smoke_cache
  • Move OpenVDB file read/write code to the point cache.
  • Remove dedicated OpenVDB cache code (bake operators...).
  • Add back OpenVDB file write options.
  • Take care of Brecht's review points.
Brecht Van Lommel (brecht) requested changes to this revision.Jan 11 2016, 10:24 PM

I didn't think you'd be able to re-unify the code so quickly, nice :)

More comments from reading through the code.

intern/openvdb/intern/openvdb_dense_convert.cc
141–142

Same here.

intern/openvdb/intern/openvdb_dense_convert.h
75

Perhaps errors have to be caught here as well, when e.g. running out of disk space.

I'm not sure which functions can throw exceptions, but we should try to catch all of them ideally.

88–91

Perhaps also memset the grid to zero? And the error message could be modified a bit to give more context and print to stderr:

std::fprintf(stderr, "OpenVDB grid %s not found in file!\n", name.c_str());
memset(*data, 0, sizeof(T) * res[0] * res[1] * res[2]);
intern/openvdb/intern/openvdb_reader.cc
49

I suggest setting this to 0, because from experience this isn't really helpful, and the implementation of this feature has various issues, particularly on Windows.

intern/openvdb/openvdb_capi.cc
200

This line can be removed.

source/blender/blenkernel/intern/pointcache.c
923

I'm not sure if the domain object matrix should be included in the VDB grid transform. I'd expect the volume to be stored in object space and more or less centered around the origin, so that it's easy to instance.

946–957

This metadata could be namespaced somehow, like "blender/smoke/active_fields"?

As I understand it this is mostly metadata internal to the smoke simulator.

1852–1859

Can you create a utility function for this so it's just const char *ext = point_cache_file_extension(pid);? The memory alloc / free can be avoided too, I think.
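A minimal sketch of such a helper (the function name follows the suggestion above, but the boolean parameter is a placeholder; the real function would presumably inspect the PTCacheID to pick the format):

```cpp
#include <cstring>

/* Hypothetical helper: return the cache file extension without any
 * allocation, so callers can write
 *     const char *ext = point_cache_file_extension(pid);
 * The bool stands in for whatever check on pid selects the format. */
static const char *point_cache_file_extension(bool use_openvdb)
{
	return use_openvdb ? ".vdb" : ".bphys";
}
```

Returning pointers to string literals is what makes the alloc/free avoidable here.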

source/blender/makesdna/DNA_smoke_types.h
81–83

If you give VDB_COMPRESSION_BLOSC the value 0, it will be the default on existing files with smoke modifiers without the need for version patching.
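The trick relies on DNA fields reading as zero in files saved before the field existed, so whichever enum value is 0 becomes the effective default. A sketch (the enum names come from the discussion; the exact values are an assumption illustrating the point):

```cpp
/* Giving Blosc the value 0 makes it the default for smoke modifiers
 * loaded from older .blend files, whose compression field reads as 0,
 * with no version patching needed. */
enum {
	VDB_COMPRESSION_BLOSC = 0,
	VDB_COMPRESSION_ZIP   = 1,
	VDB_COMPRESSION_NONE  = 2,
};
```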

source/blender/makesrna/RNA_access.h
443 (On Diff #5795)

Can be removed now.

This revision now requires changes to proceed.Jan 11 2016, 10:24 PM
Kévin Dietrich (kevindietrich) edited edge metadata.
Kévin Dietrich (kevindietrich) marked 8 inline comments as done.
  • Take care of some review points.
  • Remove some diff noise.
  • Cleanup: pass params by reference.
  • Fix compilation errors when WITH_OPENVDB is off.
  • Change UI tooltip.
  • Add suffixes to low resolution fields if high res is used, add a
  • Merge branch 'master' into openvdb_pointcache
intern/openvdb/intern/openvdb_dense_convert.h
75

This just pushes the grid into a std::vector of shared_ptr. Writing the file is done later, once all the grids have been converted and pushed into the vector. IIRC, OpenVDB only throws an exception if it can't open the file (e.g. bogus filename, relative path...). The grids themselves are written through std::ostream, which I don't think throws exceptions. But it's worth checking what can be done here.

source/blender/blenkernel/intern/pointcache.c
923

I remember having some issues making transforms work, especially when adaptive domain is used (overall, the smoke code isn't really well documented...). I think I added that to get grids in the right place for rendering in Cycles, for some reason. Not sure, I'll check on that.

Brecht Van Lommel (brecht) requested changes to this revision.Jan 12 2016, 11:29 PM
Brecht Van Lommel (brecht) edited edge metadata.

Some build warnings to fix, no actual bugs in there; for the last one, using (T)TOLERANCE should be fine.

intern/openvdb/intern/openvdb_reader.h:31:1: warning: 'OpenVDBReader' defined as a class here but previously declared as a struct [-Wmismatched-tags]
intern/openvdb/intern/openvdb_writer.h:31:1: warning: 'OpenVDBWriter' defined as a class here but previously declared as a struct [-Wmismatched-tags]
intern/openvdb/openvdb_capi.cc:80:44: warning: struct 'OpenVDBWriter' was previously declared as a class [-Wmismatched-tags]
intern/openvdb/openvdb_capi.cc:123:30: warning: struct 'OpenVDBReader' was previously declared as a class [-Wmismatched-tags]
intern/openvdb/intern/openvdb_dense_convert.h:62:49: warning: implicit conversion from 'float' to 'typename Tree<RootNode<InternalNode<InternalNode<LeafNode<int, 3>, 4>, 5> > >::ValueType' (aka 'int') changes value from 0.001 to 0 [-Wliteral-conversion]
        tools::copyFromDense(dense_grid, grid->tree(), TOLERANCE);
        ~~~~~                                          ^~~~~~~~~

Actually testing this patch with the default Quick Smoke gives me some visual artifacts, for example frame 3 looks like this. Other than these dark patches the shape of the fire and smoke looks about correct to me. If you're not seeing those artifacts I can try to track down the cause. It might be a mistake in my OpenVDB library build or a bug that only happens on OS X or something else in my environment.

source/blender/blenkernel/intern/pointcache.c
1106

Forgot to mention this earlier: VDB files from Houdini and OpenVDB seem to use lower case names, so perhaps we should follow that. Some grid names from my collection of VDB files:

density, heat, fuel, temperature, v, vel, velocity, vel.x, vel.y, vel.z, velocity_x, velocity_y, velocity_z.

Lots of inconsistency for velocity names, so "velocity" as already used seems reasonable to me.

This revision now requires changes to proceed.Jan 12 2016, 11:29 PM

Those compile warnings about class/struct are a bit nitpicky :) But that's an easy fix.

The dark patches that you see are most likely the drawing issue I mentioned in one of my previous posts. I suspect the issue comes from the clipping of the shadow grid.

source/blender/blenkernel/intern/pointcache.c
1106

The reason I went for camel case is that the OpenVDB node in Cycles gets its socket names from the grids in the .vdb file (like the OSL script node does), so it was to follow Blender's UI naming convention. Either way I'm fine. If other software, or people using other software, expect lower case names, I guess we could settle for lower case names. (I think Maya also exports with lower case names.)

source/blender/blenkernel/intern/pointcache.c
1106

Hm yes, data layers typically have upper case names in Blender. On the other hand Cycles is already using lowercase names for the Attribute node to access smoke attributes like density or flame.

Not sure what's best here, I would go with lower case names then, but I don't feel strongly about it either way.

Kévin Dietrich (kevindietrich) edited edge metadata.
  • Fix compile warning on OSX.
  • Fix drawing issue.
  • Use lower case names for the grids for consistency with vdb files

I went for lower case names, to be consistent with other .vdb files.

Does caching work for you, though? I ask because the only way for you to notice the drawing issue is if the vdb files are read, but here they are not read unless I comment out these two lines: https://developer.blender.org/diffusion/B/browse/master/source/blender/blenkernel/intern/pointcache.c;e2715b129c1e62c8076ab4a212634b2a99cd8bc3$2696-2697

I tried to figure out why there is this issue, but haven't found the cause yet.

It seems the caching worked in a file with an existing smoke sim, but trying now in a new file I got an openvdb::v3_2_0::ArithmeticError exception and crash.

It seems obmat is entirely zero, commenting out these lines makes it work for me:

//mul_m4_m4m4(sds->fluidmat, sds->obmat, sds->fluidmat);
//mul_m4_m4m4(sds->fluidmat_wt, sds->obmat, sds->fluidmat_wt);

The drawing issue appears to have been fixed though I can't tell which change did it.

I think obmat is zero because it needs to read it from the cache. And I just fixed the caching issue: some code was relying on the extension being 5 chars ("bphys").

(The drawing issue was fixed by not clipping the shadow grid (the NULL parameter in OpenVDB_export_grid_fl(..., "shadow", ...)). We clip the grids based on the density grid to optimize file size.)

Kévin Dietrich (kevindietrich) edited edge metadata.
  • Fix cache file not being marked as cached when using OpenVDB.

Ok, the patch seems to be in pretty good shape now.

From my side the two remaining questions are regarding using the object matrix for the grid transform, and ensuring we handle all exceptions in case of I/O failures.

source/blender/blenkernel/intern/pointcache.c
3205–3206

Ha, that was nice and obscure. We can get rid of some magic numbers here.

const int frame_len = 6;
const int ext_len = frame_len + strlen(ext);
...
char num[frame_len + 1];

For the object matrix, I'm 100% sure I added it for rendering, but I don't remember exactly why or what I was trying to fix. So I'm just gonna remove them from the volume matrix computation, and see what's up when I go back to the Cycles implementation. I do feel bad about having them here, actually.

For I/O failures we have a few cases:

  • a name for the grid is not found (handled)
  • a file can't be opened because a relative path was given; here I think we should trust Blender to do the right thing and always produce an absolute path. We can still add a try/catch block just in case.
  • out of disk space/memory, here I don't know how to handle that, the OpenVDB API is not clear about that case.
  • metadata operations can throw too, e.g. if two metadata entries have the same name but not the same type, or if no metadata is found for the given name (these likely need to be handled too)
  • lastly, if we read or write a file with Blosc compression, but OpenVDB wasn't compiled with Blosc support, an exception is thrown (not sure how to handle that one either, except telling people to always compile with Blosc).
source/blender/blenkernel/intern/pointcache.c
3205–3206

I just noticed there are 2 other places which do that, so I'm gonna try to turn this into a utility function, e.g.

int get_frame_num_from_filename(const char *filename, const char *ext);
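A self-contained sketch of what that helper could look like, assuming frame numbers are printed as six digits right before the extension (the name and signature follow the proposal above; the 6-digit assumption mirrors the earlier frame_len snippet, and this is not the committed code):

```cpp
#include <cstdlib>
#include <cstring>

/* Hypothetical implementation: parse the frame number out of names like
 * "smoke_000012.vdb". Returns -1 if the name is too short to hold one. */
static int get_frame_num_from_filename(const char *filename, const char *ext)
{
	const int frame_len = 6; /* frames are assumed printed with %06d */
	const size_t name_len = strlen(filename);
	const size_t ext_len = strlen(ext);

	if (name_len < ext_len + (size_t)frame_len) {
		return -1;
	}

	/* Copy the six digits preceding the extension and convert them. */
	char num[frame_len + 1];
	memcpy(num, filename + name_len - ext_len - frame_len, frame_len);
	num[frame_len] = '\0';
	return atoi(num);
}
```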

For the object matrix, I'm 100% sure I added it for rendering, but I don't remember exactly why or what I was trying to fix. So I'm just gonna remove them from the volume matrix computation, and see what's up when I go back to the Cycles implementation. I do feel bad about having them here, actually.

I just noticed that in the Quick Smoke setup the domain object is scaled, and that the smoke domain voxels are scaled to stay cube shaped in world space. So from that I think actually the scaling should be included in the grid transform? Otherwise it will look squashed when the VDB is loaded in another app?

But to me it still makes sense to have the VDB contain object space data, and let Cycles use the object matrix to transform things into world space, just like meshes. That way you could instance the smoke without any special exceptions I think, which seems more messy if (part of) the object matrix is baked into the VDB file.

I'm not immediately sure about the right solution, need to think about it more.

  • lastly, if we read or write a file with Blosc compression, but OpenVDB wasn't compiled with Blosc support, an exception is thrown (not sure how to handle that one either, except telling people to always compile with Blosc).

An error print in the console is probably enough in that case, telling people that they should have compiled with Blosc.

I just noticed that in the Quick Smoke setup the domain object is scaled, and that the smoke domain voxels are scaled to stay cube shaped in world space. So from that I think actually the scaling should be included in the grid transform? Otherwise it will look squashed when the VDB is loaded in another app?

Now I think it was partly to get uniform voxels, as for some obscure reason the openvdb::tools::VolumeRayIntersector only works with uniform voxels. But even in that case you can still get non-uniform voxels, depending on the object's dimensions with respect to the domain's resolution. Maybe we could just multiply the voxel size by the object matrix, and leave the translation untouched.
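To illustrate the "multiply the voxel size by the object matrix, leave the translation untouched" idea, a hedged sketch (the function name is made up; it assumes Blender's convention of row-major 4x4 matrices whose first three rows are the basis axes):

```cpp
#include <cmath>

/* Hypothetical sketch: scale a base voxel size by the length of each
 * axis of the object matrix, ignoring the translation row obmat[3]. */
static void voxel_size_from_obmat(const float obmat[4][4], float base, float r_size[3])
{
	for (int i = 0; i < 3; i++) {
		const float len = sqrtf(obmat[i][0] * obmat[i][0] +
		                        obmat[i][1] * obmat[i][1] +
		                        obmat[i][2] * obmat[i][2]);
		r_size[i] = base * len;
	}
}
```

With a uniformly scaled domain all three results are equal, which is the uniform-voxel case VolumeRayIntersector wants; a non-uniformly scaled domain yields per-axis sizes.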

But to me it still makes sense to have the VDB contain object space data, and let Cycles use the object matrix to transform things into world space, just like meshes. That way you could instance the smoke without any special exceptions I think, which seems more messy if (part of) the object matrix is baked into the VDB file.

In case you don't know, VDB files do support instancing (by simply storing the instances' transforms alongside the original grid). However, I don't know how that would fit into Cycles' and Blender's instancing paradigms, so to speak.

Another thing worth mentioning is that most exported vdb files also have an is_local_space metadata associated with each grid. (All of the non-Blender vdb files that I have here have that metadata set to false.)

  • Fix some more issues related to file extension length and frame number.
  • Fix compression flags not having the right values.
  • Add some exception safety on file reading/writing.
  • Add some exception safety for metadata read/write.
  • Rework/improve UI names and tooltips.
Brecht Van Lommel (brecht) edited edge metadata.

Let's keep the object matrix in the grid transform for now then. Later on we can always decide to change the behavior, perhaps using a new option to choose between local/global transforms.

One more thing before this is committed, it's best to leave OpenVDB OFF in CMakeLists.txt and blender_full.cmake until the OpenVDB libraries are available for all platforms, to avoid breaking the build.

As far as I'm concerned this looks ready to commit.

This revision is now accepted and ready to land.Jan 14 2016, 10:11 PM
Campbell Barton (campbellbarton) edited edge metadata.

Went over this patch and made various minor edits.

  • FindOpenVDB: Check for /opt/openvdb, use 2 spaces indenting (convention for find modules)
  • No need to use sets for string comparison
  • Minor edits and comments
  • Make OpenVDB depend on Boost, also list headers in source
  • Use depth enum for smoke depth (matches EXR setting)
  • Reduce scope of bounding box vars (closer to how it was in master)
  • Cleanup: indentation
  • Correct typo
This revision was automatically updated to reflect the committed changes.