"Compress" shapekeys on filesave - WARNING: fileformat compat breacking!
Needs Revision · Public

Authored by Bastien Montagne (mont29) on Nov 1 2014, 10:12 PM.

Details

Summary

This patch is extracted from soc-2014-shapekey branch.

The basic idea is to only write a shapekey vertex when it actually differs from the original mesh data, instead of writing coordinates for all vertices of all shapekeys.
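As a very rough sketch of that idea (not the actual patch code; the CompressedMeshVertex layout and the helper below are assumptions, using Blender's BLI math helpers):

```c
#include "BLI_math_vector.h"

/* Hypothetical layout for a compressed shapekey element: the vertex index
 * plus its coordinate, written only when it differs from the basis mesh. */
typedef struct CompressedMeshVertex {
	int index;
	float co[3];
} CompressedMeshVertex;

/* Sketch of the write side: copy only the elements that differ from the
 * basis data, returning how many were actually stored. The comparison must
 * be exact, so writing a file never implicitly modifies the keys. */
static int keyblock_compress(const float (*basis)[3], const float (*key)[3],
                             int totvert, CompressedMeshVertex *out)
{
	int totwritten = 0;
	for (int i = 0; i < totvert; i++) {
		if (!equals_v3v3(basis[i], key[i])) {
			out[totwritten].index = i;
			copy_v3_v3(out[totwritten].co, key[i]);
			totwritten++;
		}
	}
	return totwritten;
}
```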

It benefits file size hugely in some cases (heavy meshes with tens of shapekeys, like this one: a face with many expression keys each limited to an area like lips, eyes, nose, etc.). With this mesh I get:

Current format, uncompressed:  10.8 MB
Current format, compressed:     8.1 MB
New format, uncompressed:       4.2 MB
New format, compressed:         2.0 MB

The new format quite obviously breaks forward compatibility, exactly like BMesh did some time ago. So the patch also adds a 'compat' flag to force writing shapekeys in the 'old' format.

Now, aside from the code itself (which is quite straightforward, I think), the main question is: do we accept to break the file format for this? If so, do we do it now, or rather wait for 2.8 and do it together with other forward-compat breaking changes?

Diff Detail

Bastien Montagne (mont29) retitled this revision to "Compress" shapekeys on filesave - WARNING: file format compat breaking!. Nov 1 2014, 10:12 PM
Bastien Montagne (mont29) updated this object.
Bastien Montagne (mont29) set the repository for this revision to rB Blender.
Sergey Sharybin (sergey) requested changes to this revision. Dec 3 2014, 7:08 PM

Well, code-wise it seems ok, however:

  • Is it correct that an old Blender will simply crash trying to open a file with compressed shapekeys?
  • Memory file seems not to use the compression?

Maybe just enable this for undo pushes, but keep file IO as is?

source/blender/blenloader/intern/readfile.c
3094

Just to double-check, is it correct that curve datablocks use 3 floats per element as well?

3110

It's unclear why the loop above wouldn't crash then?

This revision now requires changes to proceed. Dec 3 2014, 7:08 PM

Hey Sergey, thanks for feedback.

All in all, re-reading that patch, I found a few points that should be enhanced, will try to find some time to get a new patch.

  • Old Blender reading: this won't crash, it will just produce completely rubbish results (kb->totelem stores the number of compressed items, and the CompressedMeshVertex struct is bigger than a float[3]); see the size sketch below.
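
To spell out the sizes (assuming the hypothetical 16-byte struct layout sketched earlier):

```c
/* Assuming the hypothetical struct sketched above (4-byte int + 3 floats): */
BLI_assert(sizeof(CompressedMeshVertex) == 16);  /* new format element size */
BLI_assert(sizeof(float[3]) == 12);              /* old format element size */
/* An old Blender reads kb->totelem * 12 bytes from data that actually holds
 * kb->totelem * 16 bytes: in rough terms it stays within the block, but it
 * mixes indices and coordinates, hence garbage shapes rather than a crash. */
```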

Regarding undo (memfile)... I have to run some tests, I'm not quite sure the gain in size wouldn't be counterbalanced by the time taken to compress the keys...

source/blender/blenloader/intern/readfile.c
3094

No, but it does not matter in this patch, since only mesh keyblocks may be compressed.

Would probably be worth adding same feature for lattice & curves, though...

3110

If verts is NULL, kb->totelem is 0, so the above for loop is never run.

Well, I'm not so fussed about disk space usage; memory usage for undo is much more beneficial IMO.

So if it were me, I'd go and check whether enabling this option for undo pushes makes any difference :)

source/blender/blenloader/intern/readfile.c
3110

I'd be paranoid and put a BLI_assert(kb->totelem == 0) there (something like the sketch below), as sort of a tip for developers.
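
A minimal sketch of that suggestion (variable names taken from the surrounding discussion, exact placement in readfile.c assumed):

```c
if (verts == NULL) {
	/* A NULL verts array should only ever come with an empty keyblock,
	 * so the decompression loop above is a no-op; assert it to catch
	 * inconsistent data early in debug builds. */
	BLI_assert(kb->totelem == 0);
}
```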

OK, so I basically rewrote that patch from scratch (it had one big issue: it used way too big a value when checking differences between the original data and the shapekey data, and such 'cleanup' should never be done implicitly on file write; we probably want an operator doing that, though). It now also handles curve and lattice compression.

I also made it work only on memwrite (i.e. undo/redo). BUT I'm not convinced in the end.

Facts: a *big* mesh with *many* skeys (157 skeys; 280000 verts, 300000 faces, 560000 tris).

Each undo step takes about 570ms to write the memfile, of which 412ms are spent just compressing the shapekeys. In theory it would save us about 274MB of RAM per step (that's the difference between compressed and uncompressed shapekeys), but I could not see any actual benefit when looking at the app's memory usage. :(
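
As a rough sanity check on those numbers (my own back-of-the-envelope estimate, assuming 4-byte floats and that every key currently stores every vertex uncompressed):

  157 keys × 280,000 verts × 3 floats × 4 bytes ≈ 527 MB of raw shapekey data per step,
  so a ~274 MB saving would mean roughly half of the key elements match the basis mesh.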

Unless someone knowing better what happens in this area can show some real benefits to this patch, I'd say it’s not worth it.

Not really convinced this patch is worth having.

  • For runtime (undo), I'd rather we have some more generic customdata-layer-level code which can do binary diffs better, to save memory.
  • For disk, it's OK... but it doesn't seem worth breaking file compatibility for.

Think the patch could be saved for a version jump (2.80?) - where some other file format changes are already happening.

Yes, quite agree here too. Let's keep it in the fridge until then! :)

Sergey Sharybin (sergey) requested changes to this revision. Jan 14 2015, 5:50 PM

Requesting changes so it doesn't bother me in the "to be reviewed" list until we get back to it :)

This revision now requires changes to proceed. Jan 14 2015, 5:50 PM