
2.72(a) crash on unwrap of subdivided mesh
Closed, Resolved (Public)

Description

The following are reproduction steps for a 2.72(a) crash when unwrapping a highly subdivided mesh. This behavior was observed on both Windows and Ubuntu Linux.

Blender versions with the bug: blender-2.72a-windows64 (hash 73f5a41) and blender-2.72-windows64 (hash 95182d1)

Repro steps:

  1. Open Blender
  2. Delete default cube
  3. Add → Mesh → Plane
  4. Press S to scale the Plane and type 10 as the scaling factor
  5. Press Tab to enter Edit Mode
  6. Subdivide Plane, number of cuts: 1000
  7. Press U and select Unwrap

After processing the Unwrap command for a while, Blender quits with the following error on the console:
Can't expand MemType 0: jcol 1210311

Blender 2.71 (blender-2.71-windows64, hash 9337574) can consistently complete the repro steps above without problems. Blender 2.72 fails consistently with the error above. Blender on Ubuntu exhibits the same behavior, as confirmed by another user; more information here:
http://blenderartists.org/forum/showthread.php?352183-Unwrap-ability-from-2-71-to-2-72a

1000 cuts is obviously large, and it takes a while to create those ~1M faces and a while longer to unwrap the mesh. My hardware is reasonably capable, so I don't believe this is a hardware limitation.
My system specs are found here: http://stirlinggoetz.blogspot.com/2014/01/building-ultimate-cgivfx-workstation.html

Event Timeline

Stirling Goetz (futurehack) raised the priority of this task to 90.
Stirling Goetz (futurehack) updated the task description. (Show Details)
Stirling Goetz (futurehack) edited a custom field.
Bastien Montagne (mont29) lowered the priority of this task from 90 to Normal. Oct 21 2014, 8:02 AM

This error seems related to heavy matrix operations (python/scipy example: http://stackoverflow.com/questions/25652663/scipy-sparse-eigensolver-memoryerror-after-multiple-passes-through-loop-without). Not sure what we use for our LSCM process in UV unwrapping…

Note that I get the same type of error, but my memory is nearly full at that point (16 GB here), so based on the link above I'm pretty much 100% sure this is an out-of-memory error…

I've run another couple of tests with 2.72b, and RAM utilization never rises above 18% (12/64 GB) during the unwrap, including at the moment the crash occurs.

Sergey Sharybin (sergey) changed the task status from Unknown Status to Unknown Status. Nov 4 2014, 2:38 PM
Sergey Sharybin (sergey) claimed this task.

The change in behavior is caused by rB0b12e61, which switched the LU solver to double precision. This makes solutions more accurate, but bumps memory usage.

The thing is, memory usage you can control from the outside world; precision you cannot. So in this case it makes sense to prefer better precision over lower memory consumption.

Thanks for the report, but it's not really considered a bug.

A double-precision LU solver sounds like a good design call.

Memory usage doesn't appear to be a factor: my test system has 64 GB of RAM, and during unwrap neither 2.71 nor 2.72 uses more than 22% of it. 2.71 consistently completes the unwrap and 2.72 consistently crashes, so memory usage changes aside...

I'm not fully understanding how this isn't a bug. Any information explaining the 2.72 crash behavior is appreciated. Thanks.

Hi Sergey. Can you explain how your LU solver comment applies when there are gigabytes of free RAM? Any information is appreciated.

With double precision the LU solver became 2x more memory-hungry, so if previously the solve left half of your RAM free, now it will all be used. The other thing here is that the memory is allocated in one chunk and the operating system might fail to allocate such a huge block if it considers the application to be running out of memory. This is why Blender might crash even though memory usage didn't spike high enough.

Hi Sergey - Others have confirmed that this is not an OS or hardware issue, but indeed appears to be crash behavior introduced in Blender 2.72. Confirmation here:
http://blenderartists.org/forum/showthread.php?352183-Unwrap-ability-from-2-71-to-2-72a&p=2782237&viewfull=1#post2782237
Are there troubleshooting steps that could help us find the root cause?
Blender 2.73 Test Build also exhibits this behavior.

I'm not sure what troubleshooting you're talking about; I've explained exactly which commit made unwrapping more memory-hungry in my comment from Nov 4.

Sorry if I wasn't clear. From your comments earlier:

"LU solver became 2x more memory-hungry" - That's fine. Those of us with 32 GB+ are not experiencing a low-memory condition.
"if previously the solve left half of your RAM free, now it will all be used" - With 32 GB+ of RAM we're not seeing half, or anywhere near full, memory use: ~12 GB max during testing.
"the memory is allocated in one chunk and the operating system might fail to allocate such a huge block" - We are running 64-bit Windows, which can allocate enormous memory chunks for applications, and I've seen Blender use all 64 GB of RAM for simulation operations, so Blender and the OS are demonstrably capable.

The items above seem to indicate that changes in Blender software (LU solver code or other) are causing the crashes. Do you agree? If not, are there troubleshooting steps (e.g. debugging, logging, etc.) I can take to help determine whether the issue is in Blender code or external?

I appreciate your patience, I'm not a developer and I don't often log bugs.

Your understanding of memory consumption here is a bit wrong: if SuperLU requests more than the virtual amount of RAM (to be more precise, more than the free amount of virtual memory) for a single buffer, Blender will crash before you even see a spike in memory usage. This is how the OS works -- it won't give anyone more memory than it has. And this happens when the SuperLU library needs to expand its vectors during the solve, so if one of the vectors is really huge, you're in big trouble.

It is certainly the changes to the SuperLU solver that cause Blender to crash with out-of-memory issues that didn't happen before. But it's not a bug and cannot be fixed; it's simply that all vectors in SuperLU now take 2x the memory. The benefit is much more accurate solver results.

I've just committed some changes which ensure all the memory requests from SuperLU are correct and no integer overflow happens in there. This could solve the issue with your particular mesh, but in general it is still possible to have meshes which could be unwrapped in older Blender versions but not in new ones (because of the 2x memory bump described above).

You can test new builds from https://builder.blender.org/download/

Hooray! The commits you have made here have fixed the issue - http://www.miikahweb.com/en/blender/git-logs/commit/7a04c7f6d02a90388e722bf3a600327b52c744ac
Thank you for the effort, Sergey. This is a win for Blender UV unwrap scalability.

FYI - I tested with BuildBot blender-2.72-7a04c7f-win64.zip (2014-12-19), and memory usage appeared less erratic, with 14.5 GB allocated immediately and staying around 15 GB during the operation.

We drank beers together once on a Friday under the big windmill in Amsterdam with the rest of the Blender crew. I'll be sure to buy you a beer if we cross paths again. :)

Julian Eisel (Severin) changed the task status from Unknown Status to Resolved. Dec 20 2014, 1:51 AM

Glad it's working again! I think we can call this "Resolved" now.