
Point Cache replacement based on Alembic
Closed, Invalid (public design task)

"Love" token, awarded by mistajuliax."Love" token, awarded by bnzs."Love" token, awarded by MobyMotion."Love" token, awarded by ofuscado."Like" token, awarded by gandalf3."Like" token, awarded by sparazza."Doubloon" token, awarded by nozzy."Like" token, awarded by obadonke."Like" token, awarded by k9crunch."Like" token, awarded by vicentecarro."Like" token, awarded by szap."Like" token, awarded by bassamk."Like" token, awarded by aditiapratama."Love" token, awarded by zanqdo."Like" token, awarded by tychota."Mountain of Wealth" token, awarded by looch."Like" token, awarded by italic_."Like" token, awarded by brachi."Like" token, awarded by aermartin."Like" token, awarded by jasperge."Like" token, awarded by monio.
Authored By


The goal of this project is to implement a new Point Cache system based on the Alembic library and unify the various caching methods used in Blender.

This is a prerequisite for further work on particle systems. Having a working Alembic exporter/importer will also greatly benefit integration of Blender into larger pipelines.

Current Issues

  • Point cache is used for most physics simulations, but does not include mesh caching. For this there is a separate Mesh Cache modifier, using mdd or pc2 file formats, with its own limitations. Using a unified point cache system would help scene management.
  • Alembic, unlike the current point cache binary format, supports arbitrary data attributes. This is very important for flexible simulations and user-defined data in particles and mesh data (vertex groups, custom data layers, etc.).
  • Point cache is limited to fixed-size data sets. This is sufficient for many simulations which only deform geometry (cloth, hair), but for particles and constructive modifier caching variable point data sets are important.
  • Compression in the point cache file format leaves a lot to be desired. Alembic is specifically designed to have a small memory/disk space footprint and make good use of threading for fast export/import.
  • Point cache binary format is unique to Blender. While it is fairly simple to interpret, it does not facilitate import/export and integration into larger pipeline workflows. Alembic on the other hand is a commonly used tool and can leverage a large developer community.
  • Settings for caches are object-centric, each simulation writes its own separate cache files. Baking all caches is simply a loop over all objects, which is not very efficient. Having scene-level settings and operators would make Alembic export/import easier.


  1. Keep basic cache settings on Scene level:
    • "Disk Cache" vs. "Memory Cache": This could be reframed as having cache file output by default and use ''packing'' in .blend files like we do with images.
    • Output directory setting for cache files (with nice defaults)
    • Frame ranges
    • Automatic cache recording settings (see below)
  2. Optional overrides on object/sim level, if deviating settings are needed for individual simulation caches. By default it should not be necessary to tweak each cache individually.
  3. Question: How do we handle automatic caching? The current approach is to automatically record the point cache when stepping through frames, unless the cache is explicitly baked. This breaks easily when jumping over frames and/or scrubbing in the timeline, because simulations need a contiguous, ordered frame range for correct results. Does this work well enough to justify keeping it? Are there alternatives to this behavior? One suggestion was to make simulations independent from animation playback by default, but this makes it difficult to sync simulation results to keyframe-animated objects and to record valid caches on the fly.
  4. Question: Do we really need multiple caches for each object? This feature adds a list of point caches to every object; only the active cache is used, and the others are basically a version control system. Could this be moved to the scene level? The problem with this way of "versioning" is that there is no real way to roll back to a previous version: you can compare the cached results, but you cannot revert the settings that produced them.
Directory and File Names

How should directories and file naming work?
The current naming scheme works per object and uses a combination of output directory + file name:

  • Directory is chosen in this order:
    1. explicit path if specified by the user
    2. or constructed as //blendcache_<blend file name>/ (if .blend is saved and thus relative paths work)
    3. or using a temp folder like <tmp folder>/blendcache/
  • File name is generated as <basename>_<frame number>_<cache index>.bphys
    • <basename> can be specified by the user; otherwise the ID datablock name (usually the Object name) is used
    • <cache index> is generated automatically by default (-1), but is needed if multiple caches exist in the same ID block.
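The directory fallback and file naming rules above can be sketched in a few lines of Python. The helper names (`cache_directory`, `cache_filename`) and the exact zero-padding are illustrative assumptions, not Blender's actual code:

```python
import os
import tempfile

def cache_directory(user_path, blend_path):
    """Pick the cache directory using the documented fallback order."""
    if user_path:
        # 1. explicit path specified by the user
        return user_path
    if blend_path:
        # 2. //blendcache_<blend file name>/ next to the saved .blend
        blend_name = os.path.splitext(os.path.basename(blend_path))[0]
        return os.path.join(os.path.dirname(blend_path),
                            "blendcache_" + blend_name)
    # 3. fall back to a temp folder
    return os.path.join(tempfile.gettempdir(), "blendcache")

def cache_filename(basename, frame, cache_index=-1):
    """<basename>_<frame number>_<cache index>.bphys (padding is assumed)."""
    return "%s_%06d_%02d.bphys" % (basename, frame, cache_index)
```

For an unsaved .blend with no explicit path, everything lands in the temp folder, which is why saving the file before heavy baking is usually a good idea.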
Frames, Times and Sample Indices

There are 3 different modes of interpreting "time" that need to be managed in a clean way:

  • Blender usually defines data in terms of frames. Any notion of "time" is secondary, defined by the frames-per-second (fps) setting in the scene and additional start/end frames. Some simulations also add their own interpretation of time on top of this, by using time scale factors (e.g. the smoke sim time scale options), although these factors are mostly for the internal simulation time step and don't have much consequence for mapping sample times.
  • Alembic samples can be accessed either by sample index or by a time value; each object stores a "time sampling" that maps sample indices to times (by default interpreted as seconds).

While it would be possible to largely ignore time sampling for Blender cache files and simply read each cached sample as a frame, it is probably more reliable and flexible to calculate time values as defined by the scene render range and fps settings. This would allow using externally generated Alembic files which use a different sampling rate than the current scene and mapping them into the scene's frame range.
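The frame-to-time mapping described above amounts to simple arithmetic on the scene's fps and start frame. A minimal sketch (function names are hypothetical):

```python
def frame_to_time(frame, fps, start_frame=1):
    """Map a scene frame number to a time value in seconds."""
    return (frame - start_frame) / float(fps)

def time_to_frame(t, fps, start_frame=1):
    """Inverse mapping; fractional results indicate sub-frame samples."""
    return t * fps + start_frame

# Remapping a sample from a 30 fps Alembic file into a 24 fps scene:
sample_time = frame_to_time(31, fps=30)           # 1.0 second into the cache
scene_frame = time_to_frame(sample_time, fps=24)  # lands on frame 25.0
```

This is exactly the flexibility mentioned above: an external file sampled at 30 fps still maps to well-defined (possibly fractional) frames in a 24 fps scene.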

Ultimately it could be very useful to integrate cached data into the NLA editor as a track. At that point the direct sample<->frame relationship that might still exist for cached data from the same scene is useless anyway.

Proposed Changes

  • Output directory is by default specified on scene level, using the same rules as before.
  • Alembic cache files don't store a file for each individual frame; instead one file is used for the whole cached frame range.
  • Two possible behaviors:
    • Each cached simulation/modifier creates its own file in the cache folder.
    • Combine caches in a single scene cache file. This is more in line with standard Alembic use, but makes it more difficult to cache objects separately.
  • In both cases, identifier names for cache data are a problem. This is already the case with current point caches: renaming an object severs the mapping to file names, leaving unused files on disk and making a re-bake necessary.

Almost all cacheable data is located somewhere in modifier stacks, so these require some special attention. While caching the final result of mesh modifier evaluation (final_dm) is obviously useful, we also want to support partial caching and layered evaluation of mesh stages and simulation.


  • Add a "Point Cache" modifier for defining intermediate stages that can be cached. It would have almost no settings and just let users define fixed points, rather than caching every modifier (too much data) or relying on some heuristic of modifier "cost" to determine suitable stages.
  • By default there is one virtual Cache modifier instance shown at the end of the stack. This does not have to be a real modifier, but just reflect caching of the final_dm result. Caching can be disabled entirely by disabling this modifier. [Is it preferable to leave this completely to the user and not do any modifier caching at all by default?]
[Diagram: base modifier stack with cache]
[Diagram: modifier stack with intermediate caching]
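To make the intended behavior of cache points in a modifier stack concrete, here is a rough sketch. The dict-based modifier representation and the `evaluate_stack` helper are purely illustrative, not Blender's evaluator; the key idea is that a valid cached sample lets evaluation skip everything before the cache point:

```python
def evaluate_stack(mesh, modifiers, cache, frame, locked=False):
    """Evaluate a modifier stack with optional 'CACHE' entries.

    A cache point with a valid sample for this frame short-circuits all
    earlier modifiers; otherwise the intermediate result is recorded
    (unless the cache is locked against writes).
    """
    # Find the last cache point that already has a valid sample.
    start = 0
    result = mesh
    for i in reversed(range(len(modifiers))):
        mod = modifiers[i]
        if mod["type"] == "CACHE" and (mod["name"], frame) in cache:
            result = cache[(mod["name"], frame)]
            start = i + 1
            break

    # Evaluate the remaining part of the stack.
    for mod in modifiers[start:]:
        if mod["type"] == "CACHE":
            if not locked:
                cache[(mod["name"], frame)] = result
        else:
            result = mod["func"](result)
    return result
```

With one virtual cache point appended at the end of the stack, this degenerates to caching `final_dm`, matching the default behavior proposed above.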

Open Questions

  • How should multiple successive cache modifiers be treated? Can/should this be prevented, or should redundant instances simply be ignored? The same applies to caches at the beginning of the stack, where they are useless.
  • Should these caches also handle the internal data of simulation modifiers (smoke/fluid voxels, particles, hair, ...)? Or should they only handle DerivedMesh caching and leave sim data to explicit cache instances elsewhere (physics tab)? [I would prefer the latter, to keep the design clean and avoid nasty corner cases.]
Caching vs. Export

These two cases may need to have somewhat different requirements in terms of necessary data attributes, sampling, etc. (while running the same backend input/output functions).

  • Caching is an internal procedure. The data being written only has to be interpreted by Blender itself (although the files would largely follow Alembic schemas where applicable). Some data attributes (e.g. normals) might be omitted here, since reconstructing them is fairly cheap. [confirm?]
  • Export is a one-way procedure to describe Blender data in a generic format for other software. All possible data attributes should be included to ensure compatible results.


Summary (somewhat outdated)

  1. Scene stores global settings for caching
  2. Simulations and modifiers can have own settings if needed
  3. Cache data is separate from other DNA by default, but can be packed into blend files like images
  4. Point Cache is a thin API, using Alembic as a backend
  5. Different schema implementations (writer/reader) are created for the various DNA types and to handle interpolation (comparable to current PTCacheID)
  6. Scene writer/reader ties everything together for exporting the scene as a whole, possibly using object hierarchy (flattening scene structure for export would also be possible)


Event Timeline

Lukas Toenne (lukastoenne) claimed this task.
Brecht Van Lommel (brecht) lowered the priority of this task from 90 to Normal.Nov 22 2013, 1:50 PM

In my development branch I have now removed the point cache ListBases. As noted in the task description, this feature does not add much value, and it made the code unnecessarily complicated.

Older cache data is only useful for comparing to newer results, but one cannot actually revert to the previous settings just from the cache; for this you need to save a different .blend file version, which has its own cache anyway. Such versioning functionality needs to be handled as part of asset management, or simply by making cache data backups outside of Blender.

The RNA code for accessing the point_caches collection was also causing recursion issues: basically, the point cache contained a reference to itself, which is bad. Without the need to handle potential lists everywhere, both the code and the workflow become much simpler.

I'd like point caches to work as unobtrusively as possible. However, in order to store a valid state of the simulation/modifier results and stay in sync with the user settings, some rules need to be established.

The current scheme works like this:

  • When the cache is "baked" the simulation is calculated explicitly for the whole frame range. After that the user settings are locked superficially in the UI to indicate that they won't have any effect until "free bake", i.e. the cache is released.
  • Without a baked cache, the results are still calculated if a the frame range is walked through (animated with alt+a) in a valid order from the start frame onward. But any change to settings or a dependency (e.g. controller object) will invalidate the cache.
  • Starting caching on a later frame than the start frame doesn't work very well. The cache will store results from the current frame onward if invalidated, but then the results in earlier frames are useless. Sometimes the cache will not be properly invalidated once going back, i can't quite figure out when this happens. So it becomes a hit-and-miss affair which is not good. Better have a limited system with reliable results than a sophisticated machine that you cannot trust ...

I'd like to know if and how this system could be improved.

For a start, it's not really necessary to perform a full simulation over the whole frame range if the cache is already valid. Then we could have a "lock"/"unlock" option instead of the "bake"/"free bake" operators. What this means is that, when locked, a simulation/modifier will never perform calculations but will try to read results from the cache. If no valid cache result is available for a frame, this should be clearly indicated in both the property buttons and the 3D viewport (instead of the misplaced particles etc. that often appear now).

Notice how "lock" refers to the simulation settings rather than the cache! It can be seen as a 3rd option in addition to completely enabling/disabling a simulation - expensive calculations are avoided and the simulation is treated as read-only data.

In addition to, and separate from, this, there can be a "bake"/"full update" operator. What this does is simply run through the simulation frame range once to calculate correct results for every frame and cache them. This can be a scene-wide operator, since generally a simulation is calculated in relation to other objects, so there is no need to re-run it for every simulation on its own. Essentially this is a slightly modified "run animation" (Alt+A) operator, which runs through the frame range once instead of looping.

After performing this "full update" there will be valid results for every frame, so it makes sense to lock the simulation settings. But it's not necessary and could be left to the user. A clear indicator for "all frames cached" should help.
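The lock/"full update" semantics described above can be sketched in a few lines. This is a minimal illustration under the assumption that "locked" only guards writes (as proposed later in this thread); all names are hypothetical:

```python
class SimCache:
    """Minimal sketch: 'locked' silently ignores writes, never affects reads."""

    def __init__(self):
        self.samples = {}   # frame -> simulation result
        self.locked = False

    def write(self, frame, result):
        if self.locked:
            return False    # ignored internally; callers need no lock checks
        self.samples[frame] = result
        return True

    def read(self, frame):
        return self.samples.get(frame)

def full_update(cache, simulate, frame_start, frame_end):
    """'Bake': run the sim once over the whole range in order, then lock."""
    cache.locked = False    # the explicit operator may override the lock
    state = None
    for frame in range(frame_start, frame_end + 1):
        state = simulate(frame, state)  # contiguous, ordered stepping
        cache.write(frame, state)
    cache.locked = True
```

Note how the ordered frame loop in `full_update` mirrors the requirement that simulations need a contiguous frame range, while the lock afterwards protects the now-valid results from accidental invalidation.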

Hi Lukas, I'm thinking about this more from the character animation workflow than the simulation workflow. I think there can be two types of point cache: the manual persistent cache and the automatic disposable cache. This could create a solid difference between an actual simulation cache and mere playback data, like frame-scrubbing a sim that can easily be recreated.

The disposable cache could be set to automatically cache even general animation data (from modifier stack) so that for example consecutive playbacks with a few heavy characters are greatly accelerated, only recalculating the needed parts or characters and completely skipping calculation of the rest.

The persistent cache is more like when you decide to bake and lock a character so that it becomes uneditable unless you decide to release the persistent cache, make some edits and maybe bake again.

The persistent cache can have the added ability of remapping the timing and other nice tricks.

Edit: I just read your last comment @Lukas Toenne (lukastoenne), you're already planning some kind of locking behavior. Great, just hoping we can extend this concept to anything worth caching and not only to simulations.

Thinking from a project pipeline perspective: It would be invaluable at render time to render only caches, not rigged characters. Several advantages accrue:

  1. Scrubbing for lighters is real time.
  2. No matter the complexity of dependencies, links, and rigs, lighting files are super reliable.
  3. No issues on render farms with linking, Python scripts, drivers, etc. that might require the 'unsafe' setting to render correctly.

Secondary thing that would be really cool is sculpting on caches, keying caches etc.

I'm curious how this would play into final rendering, such as motion blur and sub-frame animation. I know it's possible to export a cache in sub-frame increments, but how motion blur will work is beyond me. Does Alembic have any provisions for this or will it be entirely up to Blender (or other software of preference) to interpret and interpolate between cache steps?

@Daniel Salazar (zanqdo): Yes, a form of locking could certainly be helpful to prevent accidentally losing cached data, combined with operators that calculate the whole cache to ensure it is valid in every frame. I just think that we need to keep the cache away from the simulations for the sake of design.

  • When the cache is "locked" it means it will not accept overwriting. The "full bake" operator can temporarily disable locking as part of an explicit user action. This is almost what the current "baked" flag means, the subtle difference being that we don't expect the cache to contain everything per se, although the operator would usually do a full calculation to ensure a valid cache state before locking it. It makes the code a lot simpler if we just view locked as a mechanism to prevent cache writes instead of making assumptions about validity in every frame. It would also be an internal cache feature, cache write attempts would just have no effect and locked does not have to be checked by callers everywhere.
  • To indicate the locked cache state in the UI we can disable buttons that would invalidate the cache, as happens now when the cache is baked. Just be aware that it is difficult to make this absolutely watertight: generally a simulation/modifier can depend on a lot of things outside of its own settings scope, like controller objects, other mesh surfaces, etc. So this should be seen more as an independent feature of the sim (in self-contained character rigs it could work fairly well). The "bake cache" button could be made available in the locked state as well, so you can re-bake the cache explicitly if you know external influences have changed and require a cache update.
  • Remapping time is again a feature of the cache user (sim/modifier/rig/...), and the cache itself should not need to know about it. I'll have to look into this more closely to figure out how current NLA tools could do that. Perhaps such uses of cache data should be entirely separate from the original "cache creator": you'd have one object that defines the setup and writes to the cache, and then any number of instances which can use the cache data read-only, do time remapping, and so on. Otherwise I can see this becoming quite complicated when you always have to take into account what sort of timing a rig uses.

@bassam kurdali (bassamk): I think this goes in the same direction as the last point above: Keep rig/mod/sim settings (collectively "cache writers") separate from instances ("cache readers"), which can add further modifications on top of the cached data (lighting, timing, materials, ...). I'll try to make a coherent design proposal for this.

Sculpting on cached data is a cool idea, which is only partially realized in "particle edit mode" so far. There are some tricky challenges though, like how to visualize the time domain in a meaningful way. Compare it to regular sculpting: there you have a brush with some spatial extent whose effect can be shown all at once (all mesh vertices visible at the same time). But if you modify cache data at one point in time, it's not so easy to show how this changes in the time dimension. "Ghost frames" are one approach, but they are most suitable for point data (like particles); I think it becomes difficult for surface geometry. Maybe there are proven solutions.

@Jeffrey Hoover (italic_): Alembic does not do interpolation itself, but it is quite flexible in how it stores sub-frame samples. An Alembic file can have multiple different "time samplings", which usually relates to the frames. The standard would be 1-sample-per-frame, but we can add more fine-grained sampling next to it for fast moving objects that require detailed motion blur.

How this data is then interpreted for rendering etc. is up to Blender. IIRC our current renderers use spline segments to model the subframe motion of objects, which is good enough in most cases.
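As a rough illustration of how a renderer might resolve an arbitrary (sub-frame) time against stored cache samples, here is a linear interpolation sketch; as noted above, real renderers tend to use spline segments instead, and the function name is hypothetical:

```python
import bisect

def sample_at_time(times, values, t):
    """Interpolate cached samples at an arbitrary time t.

    `times` is a sorted list of sample times (one entry per stored
    sample), `values` the corresponding scalar samples. Linear
    interpolation is used here purely for simplicity.
    """
    if t <= times[0]:
        return values[0]          # clamp before the first sample
    if t >= times[-1]:
        return values[-1]         # clamp after the last sample
    i = bisect.bisect_right(times, t)
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0)      # blend weight between neighbors
    return values[i - 1] * (1.0 - w) + values[i] * w
```

A denser "time sampling" for fast-moving objects simply means more entries in `times`, so motion blur quality degrades gracefully with sampling rate.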

Hello Lukas! When you ask "Do we really need multiple caches for each object?", there is an important consideration: when we work in teams, animators often overwrite the point cache for only one character, and the render team sees the updates in its scene. Would this be possible if there is only one big cache file for the scene? Would it be possible to link a character from one file and a set from another? And would it be possible to overwrite a file while someone else is reading it?

@sarazin jean francois (dddjef): The "multiple caches for each object" refers to the list feature in every single object. This has very limited use and causes a number of complications in the code.

That being said, I like the concept of instancing cached data. I have outlined this a bit above, but it needs a more precise design. Basically the current caches are very closely attached to the objects that generate them; making an object that works only as a cache instance is really cumbersome at the moment (it requires using "external" caches, and mismatched settings can easily break it).

Reading/writing cache files simultaneously should be as smooth as possible. I have to investigate the details of how Alembic files can be accessed more-or-less simultaneously, but generally it should follow a one writer / multiple readers model. Using the cached result of a writer in the same frame would be nice; the depsgraph should enable us to let the writer finish writing the result in that frame before readers access it for instancing.
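The in-process coordination described here (the writer finishes a frame before readers consume it) could be modeled with a "last complete frame" watermark. This sketch only models scheduling within one Blender process; it says nothing about Alembic's file-level access rules (which, as noted later in this thread, are much more restrictive), and all names are hypothetical:

```python
import threading

class FrameWatermark:
    """One-writer/many-readers sketch guarded by a completion watermark."""

    def __init__(self):
        self._cond = threading.Condition()
        self._last_complete = -1   # highest frame fully written so far
        self._samples = {}

    def publish(self, frame, data):
        """Writer side: store a finished frame and wake up readers."""
        with self._cond:
            self._samples[frame] = data
            self._last_complete = frame
            self._cond.notify_all()

    def read(self, frame, timeout=1.0):
        """Reader side: block until the writer has completed this frame."""
        with self._cond:
            self._cond.wait_for(lambda: self._last_complete >= frame, timeout)
            return self._samples.get(frame)
```

In depsgraph terms, `read` blocking on the watermark corresponds to the reader nodes depending on the writer node for the same frame.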

This writer/reader concept seems great.

Instancing caches could be like image management; it sounds good to me. By the way, I think caches should be written with absolute frames, not in seconds. Time management should be the software's task. I'd rather keep things simple: for compositing it is easier to work with image sequences, since as soon as you work with movie files you get framerate issues between projects.

@sarazin jean francois (dddjef): The "time" values in Alembic are not per-se seconds. We can also interpret the time values as frames (1 frame = time interval of 1.0) instead of scaling by the fps (1 frame = time interval of 1/fps). This can be changed fairly easily though, so we can have a look at what would work best for interchangeability (the ominous "industry standard").

I just discovered the point animation features in messiah studio.

Would the task here bring Blender closer to implementing functionality like this?

This would make animation extremely powerful, especially for lip sync, muscle animation, small cloth effects, and so on.

@Warren Bahler (russcript) Just so you know, you can do this with the AnimAll addon.

@Daniel Salazar (zanqdo) Wow, I didn't realize you could actually edit the mesh with AnimAll. Very cool work, thanks!

@Warren Bahler (russcript) @Daniel Salazar (zanqdo) Actually it's a good question: is there anything in the design above that would prevent sculpting directly on the cache (not on the underlying mesh)? The diagrams don't show this, but they don't explicitly prevent it either.

I generally agree that scene-level baking is good, so I don't have many comments on what is written in the proposal.

Some feedback:

re: simulations independent from animation playback by default - This seems fine to me. If you want real results you need to play from the start or bake.

re: Do we really need multiple caches for each object? - I think not, but it's worth considering how/if we might use multiple Alembic files per scene.

Other comments: questions, in fact!

  • Question: How to handle different resolutions for render/viewport? For example, if you have a subsurf set at different resolutions before the cache modifier and bake for viewport display, but want to use the cache for rendering.
  • Question: How to handle linked library data? (You make an animation on a character in another scene and want to link it in.) Issue raised by Ton today.
  • Question: How to handle garbage collection? (You have some data in an Alembic file and the object gets removed.)
  • Question: (Relates to the depsgraph and may not need to be solved now.) What might be the workflow for using multiple Alembic files on one mesh? Maybe link-duplicate the mesh and rename.
  • Question: You already covered it mostly, but: per-simulation frame range vs. global Alembic frame range (assuming there would be some way to toggle between them).

Reading/writing cache files simultaneously should be as smooth as possible. I have to investigate the details of how Alembic files can be accessed more-or-less simultaneously, but generally it should follow a one writer / multiple readers model.

Alembic explicitly does not allow changing an existing file, and from my experience does not even allow opening a file for reading unless it has been properly closed for writing (my hunch is that it writes some index only on closing). There are no provisions for simultaneous access, and writing to a file that's opened for reading will actually cause an exception (T51141). As a result, I have serious doubts about the usability of Alembic to replace the current Point Cache.

Could we also have the option of adding an instancing modifier? It would let us choose an Alembic file, read attributes like orientation and scale, and let us choose the instance object (a collection or an object). That is how we could have instancing based on point clouds, which would be very useful.


  • Ability to use attributes like orient/scale to drive instances.

@Maciej Jutrzenka (Kramon) This is not the place to ask for new features.

I'm closing this task, as my concern about using Alembic for such caches hasn't been replied to in 1½ years.

Since Lukas did think about things and wrote down some designs, I do feel that this is still useful information, so I've added a link to my personal Alembic page on the wiki so that it's not forgotten.