Wed, Jun 6
Depending on your hardware and the scene configuration, you might need to fiddle with the tile size a bit to find the sweet spot.
In my case it's almost always 32x32 or 64x64, but that can vary. With your scene, for example, it's 16x16, although I didn't test 8x8.
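Part of why the sweet spot varies is simply how evenly the tile size divides the output resolution: partially filled border tiles waste work. A small sketch (the `tile_fit` helper is hypothetical, not part of Blender) that compares candidate square tile sizes for a given resolution:

```python
# Hypothetical helper: for a given output resolution, report how many tiles
# each candidate square tile size produces and how many pixels fall into
# partially filled border tiles (wasted coverage).

def tile_fit(width, height, sizes=(8, 16, 32, 64, 128, 256)):
    report = {}
    for s in sizes:
        tiles_x = -(-width // s)   # ceiling division
        tiles_y = -(-height // s)
        covered = tiles_x * tiles_y * s * s
        report[s] = {
            "tiles": tiles_x * tiles_y,
            "wasted_pixels": covered - width * height,
        }
    return report

if __name__ == "__main__":
    for size, stats in tile_fit(1920, 1080).items():
        print(size, stats)
```

This only captures the coverage aspect; GPU occupancy and memory access patterns matter too, which is why benchmarking a few sizes on the actual scene remains the practical approach.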
First question: Are you using "Hybrid" rendering in master?
Mar 21 2018
This is not a bug.
Feb 5 2018
After finishing the job that caused me to post this "bug report", I somehow lost sight of this ticket. But on second thought it might actually be worth re-opening, for the exact reason that ronan mentioned.
Render size and simulation size should both be available, but the render size should never affect the simulation, just as changing the instanced object or group doesn't affect it. You should still be allowed to change render settings after caching a simulation.
Jan 25 2018
I don't know if this helps, but if I open an image editor and manually set the color space of both the normal and the roughness maps to "Linear" instead of "sRGB", the packed version also renders fine.
It seems to ignore the "Non-Color Data" setting of the Image Texture nodes.
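For context, here is a minimal sketch of the standard sRGB decode. If a map that is already linear (like a normal or roughness map) gets tagged as sRGB, the renderer applies this transform once too often, which pulls every mid-range value down:

```python
# Standard piecewise sRGB-to-linear transfer function for a value in [0, 1].
# Applying it to data that is already linear darkens the midtones, which is
# the kind of shading error a wrongly tagged non-color map produces.

def srgb_to_linear(v):
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

mid = 0.5  # e.g. a "flat" channel value in a data map
print(srgb_to_linear(mid))  # noticeably below 0.5: the map is misread as darker
```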
Jan 19 2018
Well, here is what Houdini 16.5 says to the attached .abc file.
Jan 12 2018
Gave it 10 more tries, no crash. It also works reliably with the brand new "Transparent Glass" feature. :)
Last night it crashed for me (Linux Mint 18.3, 12GB RAM, GTX780 3GB), but now with the very current master it works:
Jan 11 2018
OK, I tried to reproduce my problem today with the very latest build (own compile) and couldn't. I can now slowly but successfully render the benchmark scene on my 10+ year old system with only 12GB of RAM and a GTX 780 3GB (on Linux Mint, that is).
I don't know what fixed it, but at least this is now working properly. Great work! :D
Jan 4 2018
In my case it's not so easy, I guess. I just gave it another try on my 10+ year old dual Xeon system with Linux Mint 18.3 and the very latest master of Blender (own compile).
It's just the Gooseberry benchmark scene switched to GPU rendering. I also made sure to use only the GPU (GTX 780 3GB), not the CPU or both.
The result is that it starts rendering even though the scene needs more than twice the available GPU RAM. This is great.
The problem is that the final result is very, very dark. I interrupted the rendering, but here's a screenshot:
Dec 27 2017
I'm not at home at the moment so I can't help with testing stuff, but the TdrDelay issue affects almost any application using CUDA. Allegorithmic Substance Designer and Painter now even display warnings if they find default values in the registry and strongly advise you to change the settings accordingly.
Here's a helpful article on that https://support.allegorithmic.com/documentation/display/SPDOC/GPU+drivers+crash+with+long+computations
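The usual workaround described there is raising the TDR timeout in the Windows registry. A sketch (run from an elevated prompt; the 60-second value is an arbitrary example, a reboot is needed, and editing TDR settings is at your own risk):

```bat
:: Raise the GPU watchdog timeout (TDR) from the 2-second default to 60 seconds
:: so long-running CUDA kernels aren't killed by the driver.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers" /v TdrDelay /t REG_DWORD /d 60 /f
```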
Nov 20 2017
Thanks, I actually had render crashes in this project, but they were too erratic to reproduce easily. Also, I'm currently too swamped with work. Hopefully you can kill all these bugs ;)
Nov 3 2017
@Brecht Van Lommel (brecht) You are a wizard! Thanks a lot!
Oct 24 2017
I used the new CPU + GPU option in this case.
Oct 23 2017
Confirmed here. Win10 x64, GTX1080Ti
Oct 16 2017
This is perfectly normal and expected.
Sep 20 2017
Have you tried the latest Buildbot builds?
I had a similar bug that was fixed some time ago: https://developer.blender.org/T52479
Sep 16 2017
OIC, thanks for your fast response.
Sep 9 2017
Confirmed on my side.
Obviously a duplicate of https://developer.blender.org/T52687
Aug 29 2017
Thanks for confirming this. Yes, it "works" when scaling first, but in my case all the filters were set to a specific look, and I was about to start the playout for the final delivery of several shots in 4K when I suddenly noticed this bug.
In the end I resized all of the rendered pictures with Fusion.
Aug 10 2017
Just tried it and it helps a lot!
Jun 27 2017
Thanks Bastien for instantly fixing the linking bug and updating the "Purge All" button with important information! 😄
Jun 26 2017
Nice you can reproduce it and thanks for your quick reaction.
I was about to post the same report here. :)
I noticed it after I completely cleaned a blend file but it still took ages to load. The Outliner didn't show anything but the Group still existed and caused a linked file to be loaded although it wasn't used anymore.
Jun 22 2017
Thanks for your answer, Vuk. I didn't even know that one could find every single pixel of an image in the Outliner... crazy ;)
I don't know if this helps in any way but if I enable the new dependency graph (--enable-new-depsgraph) the scene load time goes from several minutes to a mere 5 seconds.
The Outliner performance problems persist though.
Thanks Corey for having a look at this. I hope you find something to improve this as it's a major bottleneck here right now.
Jun 9 2017
I just tested whether this new fix also gets rid of these artifacts, but it doesn't: https://developer.blender.org/T51681
Jun 6 2017
Wow! That looks great. I don't have much time to test all kinds of combinations but following the repro steps in this bug report gave me the results I expected:
May 31 2017
Yes, I noticed that too. The bundled EXR environment seems to be a tough one; the tiny sun disk sits at around 50000 in RGB!
On the other hand, I want/need the "power" of this sun to give me bright highlights and sharp shadows, so clamping would be my last resort.
I wonder why Branched PT isn't giving me artifacts.
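A tiny sketch of the trade-off (illustrative values only, not Cycles code): clamping tames fireflies caused by extreme samples, but it also flattens exactly the highlight energy such a sun provides.

```python
# Illustrative clamp as used for sample values in a path tracer:
# a limit of 0 conventionally means "clamping disabled".

def clamp_sample(value, limit):
    return min(value, limit) if limit > 0 else value

sun = 50000.0  # roughly the sun-disk intensity mentioned above
print(clamp_sample(sun, 10.0))  # the bright highlight collapses to 10.0
print(clamp_sample(sun, 0.0))   # clamping disabled keeps the full energy
```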
Confirmed. I was just about to report it here as well ;)
May 30 2017
Sorry, I can't help you because I don't know what kind of modifiers you're trying to apply or what your scene looks like, but I can translate that German term to English for the other 99% of users here:
May 26 2017
I'm afraid something went wrong here. I get an instant crash of Blender as soon as I start tracking, no matter if I use Autotrack or a single manually placed tracker.
The latest Buildbot build that doesn't contain this commit is not crashing.
May 24 2017
All fine here. Latest builds. Win x64.
May 22 2017
At least with the current Buildbot builds, Alembic export always crashes as soon as I use "Simple" or "Interpolated" children, no matter what I set the "Display" amount to.
I can confirm it with the very latest Buildbot build.
May 19 2017
There seems to be a syntax error in line 40 of filter_features_sse.h
May 17 2017
No, sorry, no backtrace. I'm not a coder; I'm just glad I can compile Blender myself ;)
I just tested it again and found that my own VS2015 builds and the official Buildbot VS2015 builds crash, while the VS2013 builds from the Buildbot don't.
The EXR that comes with this scene is known to contain NaNs. Saving it to HDR format "fixes" this, but even saving it as a new EXR (I tried 16-bit half-float with ZIPS compression) got rid of the black boxes.
If you load the original environment EXR into Natron, it displays a yellow warning that it replaced some NaNs with white.
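A minimal sketch of that kind of clean-up pass (the `replace_nans` helper is hypothetical, mirroring what Natron apparently does by substituting white for NaN components):

```python
import math

# Replace any NaN pixel component with a fallback value so downstream
# tools don't produce black boxes or similar artifacts.

def replace_nans(pixels, fallback=1.0):
    """pixels: flat list of float components; returns (cleaned, nan_count)."""
    cleaned, count = [], 0
    for v in pixels:
        if math.isnan(v):
            cleaned.append(fallback)
            count += 1
        else:
            cleaned.append(v)
    return cleaned, count

scan = [0.2, float("nan"), 50000.0, 0.7]
fixed, n = replace_nans(scan)
print(n, fixed)  # one NaN replaced with "white" (1.0)
```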
May 16 2017
According to the build number in the splashscreen, yes.
And the bmesh_bevel.c source also contains the latest changes.
Thanks for taking care of this. I just compiled the very latest master and the Blenderman.blend still crashes Blender here.
May 5 2017
All 2.8 Buildbot builds also crash.
May 4 2017
You're a wizard! Thanks for this very quick fix!
Apr 13 2017
OMG! It crashed in my own latest build and it still crashes in the current Buildbot builds but it looks like Sergey just fixed it while I was typing the bug report!!!
Apr 12 2017
Ah, that's good to know, Sergey.
Mar 31 2017
The Blenderman file is still crashing all the Buildbot builds on opening while official 2.78c can open it. Is this a regression of some sort?
Feb 20 2017
I just compiled the latest version and the fix is working perfectly! :)