- User Since: May 24 2016, 1:39 PM (133 w, 1 d)
Mon, Dec 3
Oh I see. Thanks!
This task is marked "Closed, Resolved" but the functionality is not yet in 2.80. So is this still on the agenda?
Thu, Nov 29
Sorry, I can't confirm your other crash problem (and I think you shouldn't change the status of that other report on your own; that's the developers' realm ;) )
Could be this? https://developer.blender.org/T58183
Sounds a lot like this: https://developer.blender.org/T58183
OK, I think I'm onto something! :)
Mon, Nov 26
Wed, Nov 21
Oct 29 2018
Funny. I just compiled the very latest master (17d91bcb611b) and it doesn't crash this easily any more.
But it still crashes reliably in CPU mode with "Accurate Mode" off.
Blender 2.80 same thing.
Oct 9 2018
Also happens under Linux Mint with a GTX 780 and with a Xeon / GTX 1080ti.
I gave it a closer look and my initial thought was also that the denoiser freaks out. When you disable the denoiser you can see that there's a lot of noise in the rendered image (where it would be a miracle if any denoiser could resolve it properly, BTW). But it's not the denoiser's fault, because the flickering artifacts are already present in the "raw" render!
I could reduce the whole scene to just 3 Lego bricks (around the sink) and still get the error. The artifacts don't appear if I override all materials with a simple Principled shader. So it looks like the (quite complex) materials (or at least one of them) are faulty, or maybe some bump or normal shader doesn't play nicely with the 0.001 scale the whole scene is at. Plus I think that setting the render camera's clip limits to 1mm - 2km doesn't help numerical precision either.
It might also be a combination of things causing this, but the denoiser is innocent.
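For context on why a 0.001 scene scale combined with a 1mm - 2km clip range is precision-hostile: single-precision floats have a fixed number of significand bits, so the gap between representable values grows with magnitude. This is not Blender code, just a generic NumPy sketch of float32 spacing at the two ends of that range:

```python
import numpy as np

# Gap between adjacent float32 values near 2000.0
# (the 2 km far clip limit, in meters):
step_far = np.spacing(np.float32(2000.0))

# Gap near 0.001 (the 1 mm scale the scene geometry lives at):
step_near = np.spacing(np.float32(0.001))

print(step_far)   # ~0.000122 -> about a tenth of a millimeter
print(step_near)  # ~1.16e-10

# Geometry detail at the 0.001 scale is on the same order as the
# rounding step near the far end of the clip range, which is exactly
# the regime where shading/bump math starts producing artifacts.
```

The exact values depend on nothing but IEEE 754 single precision; the point is only that millimeter-scale detail and kilometer-scale clip distances don't mix well in float32.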
Sep 22 2018
Aug 31 2018
Aug 28 2018
Aug 22 2018
Aug 6 2018
I just gave it another try, both with the latest 2.79 master and the latest 2.80, this time under Windows. No artifacts there either. So it might be a Mac thing, maybe OpenCL-related?!
Aug 5 2018
I guess the problem is not a bug in Cycles but rather a small mistake in your material:
Aug 3 2018
Jun 6 2018
Depending on your hardware and the scene configuration you might need to fiddle a bit with the tile size to find the sweet spot.
In my case it's almost always 32x32 or 64x64, but that can vary. With your scene, for example, it's 16x16, although I didn't test 8x8.
First question: Are you using "Hybrid" rendering in master?
May 31 2018
May 24 2018
May 4 2018
Apr 20 2018
Apr 17 2018
Mar 25 2018
Mar 21 2018
This is not a bug.
Mar 14 2018
Feb 19 2018
Feb 5 2018
After finishing the job that caused me to post this "bug report", I somehow lost sight of this ticket. But on second thought it might actually be worth re-opening, for the exact reason that ronan mentioned.
Render size and simulation size should both be available, but the render size should never affect the simulation, just as changing the instanced object or group doesn't affect the simulation. You should still be allowed to change render settings after caching a simulation.
Jan 25 2018
I don't know if this helps, but if I open an image editor and manually set the color space of both the normal and the roughness maps to "Linear" instead of "sRGB", the packed version also renders fine.
It seems to ignore the non-color data setting of the image texture nodes.
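For anyone wondering why a wrong color space wrecks normal and roughness maps: if non-color data gets decoded with the sRGB transfer curve, a stored value of 0.5 (a flat normal component) comes out as roughly 0.21. A standalone sketch of the standard sRGB-to-linear formula (IEC 61966-2-1), just to illustrate the distortion:

```python
def srgb_to_linear(v: float) -> float:
    """Decode an sRGB-encoded value in [0, 1] to linear light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# A neutral normal-map component is stored as 0.5 (i.e. 0.0 after the
# usual 2*v - 1 remapping). Mis-tagging the map as sRGB shifts it:
decoded = srgb_to_linear(0.5)
print(decoded)            # ~0.214 instead of 0.5
print(2 * decoded - 1)    # ~-0.57 instead of 0.0 -> badly tilted normals
```

That's why normal, roughness, and similar data maps have to be treated as linear/non-color: the sRGB decode is far from a no-op in the mid-tones.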
Jan 19 2018
Well, here is what Houdini 16.5 says to the attached .abc file.
Jan 15 2018
Jan 12 2018
Gave it 10 more tries, no crash. It also works reliably with the brand new "Transparent Glass" feature. :)
Last night it crashed for me (Linux Mint 18.3, 12GB RAM, GTX780 3GB), but now with the very current master it works:
Jan 11 2018
OK, I tried to reproduce my problem today with the very latest build (my own compile) and couldn't. I can now slowly but successfully render the benchmark scene on my 10+ year old system with only 12GB of RAM and a GTX780 3GB (on Linux Mint, that is).
I don't know what fixed it, but at least this is now working properly. Great work! :D
Jan 4 2018
In my case it's not so easy, I guess. I just gave it another try on my 10+ year old dual Xeon system with Linux Mint 18.3 and the very latest master of Blender (own compile).
It's just the Gooseberry benchmark scene switched to use GPU only. I also made sure to only use the GPU (GTX780 3GB) not the CPU or both.
The result is that it starts rendering even though the scene needs more than twice the available GPU RAM. This is great.
The problem is that the final result is very, very dark. I interrupted the rendering, but here's a screenshot:
Dec 27 2017
I'm not at home at the moment so I can't help with testing stuff, but the TdrDelay issue affects almost any application using CUDA. Allegorithmic Substance Designer and Painter now even display warnings if they find default values in the registry and strongly advise you to change the settings accordingly.
Here's a helpful article on that https://support.allegorithmic.com/documentation/display/SPDOC/GPU+drivers+crash+with+long+computations
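For reference, the Windows mechanism in question is TDR (Timeout Detection and Recovery): if a GPU kernel runs longer than the timeout, Windows resets the display driver and the CUDA application crashes. Raising `TdrDelay` gives long computations more headroom. A sketch of the registry change (the key path is documented by Microsoft; 60 seconds is just an example value, and a reboot is required for it to take effect):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
; Seconds the GPU may stay busy before Windows resets the driver
; (the default is 2). 0x3c = 60 seconds.
"TdrDelay"=dword:0000003c
```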
Dec 20 2017
Dec 8 2017
Nov 20 2017
Thanks, I actually had render crashes in this project, but it was too weird to reproduce easily. Also I am currently too swamped with work. Hopefully you can kill all these bugs ;)
Nov 18 2017
Nov 13 2017
Nov 3 2017
@Brecht Van Lommel (brecht) You are a wizard! Thanks a lot!
Oct 24 2017
I used the new CPU + GPU option in this case.
Oct 23 2017
Confirmed here. Win10 x64, GTX1080Ti
Oct 16 2017
This is perfectly normal and expected.
Oct 9 2017
Oct 7 2017
Sep 20 2017
Have you tried the latest Buildbot builds?
I had a similar bug that was fixed some time ago: https://developer.blender.org/T52479
Sep 16 2017
OIC, thanks for your fast response.
Sep 9 2017
Confirmed on my side.
Obviously a duplicate of https://developer.blender.org/T52687
Sep 7 2017
Aug 29 2017
Thanks for confirming this. Yes, it "works" when scaling first, but in my case all the filters were set for a specific look, and I was about to start the playout for the final delivery of several shots in 4K when I suddenly noticed this bug.
In the end I resized all of the rendered images with Fusion.
Aug 25 2017
Aug 21 2017
Aug 10 2017
Just tried it and it helps a lot!
Jul 20 2017
Jul 14 2017
Jun 27 2017
Thanks Bastien for instantly fixing the linking bug and updating the "Purge All" button with important information! 😄
Jun 26 2017
Nice you can reproduce it and thanks for your quick reaction.
I was about to post the same report here. :)
I noticed it after I completely cleaned a blend file but it still took ages to load. The Outliner didn't show anything but the Group still existed and caused a linked file to be loaded although it wasn't used anymore.
Jun 22 2017
Thanks for your answer, Vuk. I didn't even know that one could find every single pixel of an image in the Outliner... crazy ;)
I don't know if this helps in any way but if I enable the new dependency graph (--enable-new-depsgraph) the scene load time goes from several minutes to a mere 5 seconds.
The Outliner performance problems persist though.
Thanks Corey for having a look at this. I hope you find something to improve this as it's a major bottleneck here right now.
Jun 21 2017
Jun 9 2017
I just tested whether this new fix also gets rid of these artifacts, but it doesn't: https://developer.blender.org/T51681
Jun 6 2017
Wow! That looks great. I don't have much time to test all kinds of combinations but following the repro steps in this bug report gave me the results I expected:
Jun 1 2017
May 31 2017
Yes, I noticed that too. The bundled EXR environment seems to be a tough one. The tiny sun disk is at something around 50000 RGB!
But on the other hand I want / need the "power" of this sun to give me bright highlights and sharp shadows, clamping would be my last resort.
I wonder why Branched PT isn't giving me artifacts.
Confirmed. I was just about to report it here as well ;)
May 30 2017
Sorry, I can't help you because I don't know what kind of modifiers you're trying to apply or what your scene looks like, but I can translate that German term to English for the other 99% of users here:
May 28 2017
May 26 2017
I'm afraid something went wrong here. I get an instant crash of Blender as soon as I start tracking, no matter if I use Autotrack or a single manually placed tracker.
The latest Buildbot build that doesn't contain this commit is not crashing.
May 24 2017
All fine here. Latest builds. Win x64.
May 22 2017
At least with the current Buildbot builds, Alembic export always crashes as soon as I use "Simple" or "Interpolated" children, no matter what I set the "Display" amount to.
I can confirm it with the very latest Buildbot build.
May 19 2017
There seems to be a syntax error in line 40 of filter_features_sse.h