
Crash and file corruption after calc_normals_split, calc_tessface execution.
Closed, Resolved · Public

Description

System Information
Ubuntu 12.04 and Debian Squeeze

Blender Version
Broken: 2.74 official release build

Short description of error
UVMap disappears; sometimes Blender crashes

Exact steps for others to reproduce the error
I could not reproduce the exact steps that created the blend file, but I have attached a prepared one. It was created and edited in previous Blender versions, but the strange behavior appeared in 2.74, when I tried to use the Python API for split normals.
The attached file includes a script that reproduces this behavior.
Tested with the official blender-2.74-linux-glibc211-x86_64 build on Ubuntu 14.04.2 LTS and Debian Squeeze.

  1. If I call calc_normals_split on the active object, something happens to UVMap_0: it disappears from the "UV Maps" panel when the mouse cursor is dragged over it. Sometimes Blender crashes.
  2. If I call calc_tessface, Blender crashes.

If, after step 1, you save the file and reopen it, the names of the UV maps change.
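
For reference, a minimal sketch of the two calls involved (assuming the object in the attached file is named 'sunglasses', as in the script later in this thread):

import bpy

mesh = bpy.data.objects['sunglasses'].data

# Step 1: computing split (loop) normals makes UVMap_0 misbehave.
mesh.calc_normals_split()

# Step 2: computing tessellated faces crashes Blender outright.
mesh.calc_tessface()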

Alexander (Blend4Web Team)

Event Timeline

Bastien Montagne (mont29) triaged this task as Needs Information from User priority. · Apr 20 2015, 7:28 PM

Hrrrrrm… With this file and a release build I have no issue at all and cannot reproduce your crash. However, when I try to open it in a debug build with ASan enabled, it crashes on reading. This means the file itself is corrupted, and we cannot do much with a corrupted file. What we need in such cases is a way to reproduce the corruption - the bug is there, not in the later crash.

So I need steps to reproduce that file.

This bug reproduces only on Linux. It was tested on Ubuntu, Debian and Arch. The interesting thing is that if we build Blender ourselves, the crash disappears. Probably some issue with Linux libraries?

I’m on Linux too… But again, the issue is not the crash itself - a corrupted file is likely to crash sooner or later. I need to know how to generate such a corrupted file; otherwise there’s nothing we can do.

Thanks for your answer. Unfortunately the problem is not limited to this single corrupted file; some of the Blend4Web demos are affected too. The Blend4Web SDK contains hundreds of files, so it would be very helpful to have a method to tell whether a file is corrupted, and another to fix the issue. Our team will look at this in detail, but I fear something really bad has happened.

Knowing whether a file is corrupted is pretty simple (assuming it has corruption similar to the one you posted here): just build a debug version of Blender with ASan enabled (on Linux or OS X, using gcc or clang, see http://wiki.blender.org/index.php/Dev:Doc/Tools/Debugging/GCC_Address_Sanitizer); it will crash on loading the file (complaining about reading past allocated memory or so, IIRC).

Thank you for your help. Using a custom Blender build and a simple script, we have discovered approximately 10 corrupted scenes. Now we are looking for a way to fix them. Anyway, would it be possible to implement automatic validation of loaded/saved data in the future?
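
A rough sketch of such a batch check (the build path and scenes directory here are hypothetical; an ASan-instrumented Blender aborts with a non-zero exit code when it hits heap corruption while loading a file):

import subprocess
from pathlib import Path

BLENDER = "/path/to/asan-debug-build/blender"  # hypothetical ASan-enabled build

for blend in sorted(Path("scenes").rglob("*.blend")):  # hypothetical SDK dir
    # Load the file headlessly; an ASan abort flags it via the exit code.
    result = subprocess.run([BLENDER, "--background", str(blend)])
    if result.returncode != 0:
        print("possibly corrupted:", blend)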

The thing is, such corruption should *never* happen; we cannot check the validity (as in, lengths of data arrays etc.) of data chunks in a .blend file when loading - it would be heavy and not very helpful.

The bug here is the generation of such a corrupted file - if you cannot give us a way to reproduce that, I’m afraid we can’t do anything… :/

The newly attached file produces no ASan messages when opened. After the steps below, ASan reports a heap-buffer-overflow.

  1. open the .blend file
  2. run the script
  3. save the file
  4. reopen the file
  5. run the script again
  6. save the file again
  7. try to reopen the file; at this point I get a heap-buffer-overflow

script:

import bpy

# Compute split (loop) normals on the mesh. In 2.74 this leaves
# temporary custom-data layers behind, which corrupts the file on
# the next save (see the fix discussion below).
d = bpy.data.objects['sunglasses'].data
d.calc_normals_split()
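
The same sequence can also be driven from a single Blender session; a sketch, assuming the attached file lives at a hypothetical path:

import bpy

path = "/tmp/sunglasses.blend"  # hypothetical location of the attached file

for _ in range(2):                       # steps 1-6: two open/run/save rounds
    bpy.ops.wm.open_mainfile(filepath=path)
    bpy.data.objects['sunglasses'].data.calc_normals_split()
    bpy.ops.wm.save_mainfile(filepath=path)

bpy.ops.wm.open_mainfile(filepath=path)  # step 7: this load trips ASan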

Bastien Montagne (mont29) raised the priority of this task from Needs Information from User to Normal. · Apr 22 2015, 8:27 PM

Thanks, will check later.

It has been hard, but I found the issue. It’s not really related to lnors, actually; it was a bug hidden deep in how CD layers are saved (written) - basically, when temp/no-free layers were removed for saving, the mesh would still write the original number of layers instead of the updated (reduced) one…
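
Conceptually, the broken pattern looked roughly like the following Python illustration (the real code is Blender’s C CustomData writer; the names and record layout here are invented for the example):

import struct

def save_layers(layers):
    # Illustrative writer: a count header followed by fixed-size records.
    count = len(layers)                                 # counted before filtering
    to_save = [name for name, temp in layers if not temp]  # temp layers dropped
    blob = struct.pack("<i", count)                     # BUG: stale count written
    for name in to_save:
        blob += name.encode().ljust(16, b"\0")
    return blob

def load_layers(blob):
    (count,) = struct.unpack_from("<i", blob)
    # The reader trusts the header, so a stale count makes it read past
    # the stored data - in C this is exactly the heap-buffer-overflow.
    return [blob[4 + i * 16 : 4 + (i + 1) * 16].rstrip(b"\0").decode()
            for i in range(count)]

layers = [("UVMap_0", False), ("tmp_split_normals", True)]
print(load_layers(save_layers(layers)))  # -> ['UVMap_0', ''] (garbage record)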

Note that the fix solves the corruption of data itself, but cannot do anything for files that are already broken. :|

Anyway, thanks a bunch for the report!

In the interests of partial recovery of corrupted files, is it possible to abort loading of corrupted data blocks, such that only unmodified assets are loaded?

As I understand it, aborting the loading of a single block is impossible, because you can’t know the offset of the next block without reading the previous one. And even if it were possible, avoiding broken links between the remaining data blocks is, in general, a non-trivial issue.
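
To illustrate why (a sketch assuming a 64-bit little-endian .blend and the usual top-level block-header layout; not authoritative): each block header stores only the length of its own data, so reaching block N+1 means trusting the len field of block N, and one corrupted length derails everything after it.

import struct

def walk_blocks(path):
    # Sketch: iterate top-level .blend blocks (64-bit little-endian assumed).
    with open(path, "rb") as f:
        f.seek(12)                  # skip the 12-byte "BLENDER..." file header
        while True:
            head = f.read(24)       # code(4) + len(4) + oldptr(8) + sdna(4) + nr(4)
            if len(head) < 24:
                break
            code, length = struct.unpack_from("<4si", head)
            if code == b"ENDB":
                break
            yield code.decode(errors="replace"), length
            f.seek(length, 1)       # a bad 'len' here corrupts the whole walk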