- User Since: May 24 2016, 1:39 PM
The setting is in the preferences now (where it belongs!):
Tue, Oct 8
Fri, Sep 20
Thu, Sep 19
I'm getting this (on Linux Mint 19.2):
Aug 1 2019
vertexColor_blender_colorSet1.abc is also working fine in Houdini 17.5:
Jul 30 2019
@Sybren A. Stüvel (sybren): I just gave the very latest master (7f29fc7415a4) a quick try and now the normals are perfect for both Alembic import and export. Much appreciated! Thanks!
Jul 18 2019
@Sybren A. Stüvel (sybren) Great to hear! Thanks for your support!
I don't know if this helps, but I prepared some ABC files in Blender and Houdini which show that Blender currently ignores / destroys the custom normals both on import and on export.
I have an original car hood geometry that looks perfect in Blender. I exported it as an FBX and imported it into Houdini. The normals were still OK. Then I exported it as an Alembic from Houdini. The normals were still OK, but they aren't imported correctly in Blender.
If I export the geometry with the correct normals as an Alembic from Blender, the normals get garbled during export. If I import it into Houdini it looks like this:
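If it helps to quantify the damage, here is a minimal, Blender-agnostic sketch for comparing per-vertex normals dumped from the original mesh and the round-tripped one. The function name and the sample data are hypothetical; it assumes both meshes share the same vertex order:

```python
import math

def max_normal_deviation(normals_a, normals_b):
    """Return the largest angle (in degrees) between corresponding
    per-vertex unit normals of two meshes with identical topology."""
    worst = 0.0
    for (ax, ay, az), (bx, by, bz) in zip(normals_a, normals_b):
        dot = ax * bx + ay * by + az * bz
        # Clamp against floating-point drift before acos.
        dot = max(-1.0, min(1.0, dot))
        worst = max(worst, math.degrees(math.acos(dot)))
    return worst

# Hypothetical dumped data: the second normal was rotated by 90 degrees.
orig = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
roundtrip = [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)]
print(round(max_normal_deviation(orig, roundtrip), 1))  # 90.0
```

A deviation of essentially 0° means the normals survived the round trip; anything large means they were recomputed or garbled along the way.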
May 8 2019
May 5 2019
Apr 23 2019
OK, I did a complete rebuild from scratch with the vc16 libs: same crash.
And just a few minutes ago a fresh build from the Buildbot came out (5a144c797a3b), and it also crashes on 2 different workstations (the one in the description above and a Threadripper 1950x machine with a GTX 1080, both running Win10 x64).
Thanks for having a look at it. Just in case it helps, this is what I get if I start and crash Blender from the command line:
In my case it's the default PNG, 8-bits, RGBA with 15% compression, but it reliably crashes with JPEG and no matter what settings I use.
I already tried loading the factory defaults first but to no avail.
Apr 19 2019
Confirmed here. Linux Mint 19.1, GTX 1080ti
Apr 15 2019
I remember reporting something very similar some time ago: https://developer.blender.org/T61607
Apr 7 2019
Apr 5 2019
Feb 16 2019
I can confirm it on my side. Linux Mint 19.1 / GTX 1080ti. Very latest build of 2.80.
Feb 7 2019
OK, just tried it and it works. New dependency graph?
Feb 4 2019
Have you tried loading the factory settings before? Might be an old startup.blend or preferences saved from an older version of 2.80.
It's also strange that your splash screen says "Branch: blender2.7" while a proper splash screen should look like this:
Jan 25 2019
Jan 13 2019
Jan 6 2019
Jan 4 2019
Thanks for the quick help. Working perfectly now with 2.79 and 2.80 :)
Dec 27 2018
Can you attach a PNG showing that problem? I have no problems here (2.79 and 2.80, regardless of the exact build). I suspect it's an issue with your display driver (or its settings) or another piece of software that changes the display of images on your laptop.
Do the images @artem ivanov (ixd) posted look OK for you or do they also show banding?
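To rule the display pipeline in or out, a known-clean test image helps. This sketch (numpy only; saving via Pillow is just a suggestion in the comment) builds an 8-bit gradient that contains every gray level exactly — if this ramp shows banding on your laptop, the problem is your display, not Blender:

```python
import numpy as np

# Build a 256x1024 horizontal 8-bit gradient.
width, height = 1024, 256
ramp = np.linspace(0, 255, width).round().astype(np.uint8)
gradient = np.tile(ramp, (height, 1))

# A clean 8-bit ramp must contain all 256 gray levels.
print(len(np.unique(gradient)))  # 256

# Save with any image library, e.g. Pillow:
# from PIL import Image; Image.fromarray(gradient, "L").save("ramp.png")
```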
Dec 20 2018
OK, next step. It looks like it's not my user settings but my startup file that contains something that the Multiresolution modifier doesn't like.
If I load factory settings, all is OK. If I load factory settings and then manually load my startup.blend the crash is back...
Here's a debug output with the crash:
Hmm... for me it happens on 3 completely different computers, both with my own compile and the buildbot versions of 2.80.
Dec 3 2018
Oh I see. Thanks!
This task is marked "Closed, Resolved" but the functionality is not yet inside 2.80. So is this still on the agenda?
Nov 29 2018
Sorry, can't confirm your other crash problem
Could be this? https://developer.blender.org/T58183
Sounds a lot like this: https://developer.blender.org/T58183
OK, I think I'm onto something! :)
Nov 26 2018
Nov 21 2018
Oct 29 2018
Funny. I just compiled the very latest master (17d91bcb611b) and it doesn't crash this easily any more.
But it still crashes reliably in CPU mode with "Accurate Mode" off.
Blender 2.80: same thing.
Oct 9 2018
Also happens under Linux Mint with a GTX 780 and with a Xeon / GTX 1080ti.
I gave it a closer look and my initial thought was also that the denoiser freaks out. When you disable the denoiser you can see that there's a lot of noise in the rendered image (it would be a miracle if any denoiser could properly resolve that, BTW). But it's not the denoiser's fault, because the flickering artifacts are already present in the "raw" render!
I could reduce the whole scene to only 3 Lego bricks (around the sink), which still gave me the error. The artifacts don't appear if I override all materials with a simple Principled shader. So it looks like the (quite complex) materials (or at least one of them) are faulty, or maybe some bump or normal shader doesn't play nicely with the 0.001 scale the whole scene is at. Plus I think that setting the render camera limits to 1mm - 2km doesn't help numerical precision either.
It might also be a combination of things causing this, but the denoiser is innocent.
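The scale suspicion is easy to demonstrate in isolation: single-precision floats lose sub-millimeter resolution at kilometer magnitudes. A quick numpy check (illustrative only, not Cycles code):

```python
import numpy as np

# np.spacing gives the gap to the next representable float32 (one ulp).
# Near a 2 km coordinate that gap is already larger than 0.1 mm, so
# fine detail in a scene modelled at 0.001 scale can't be represented.
print(np.spacing(np.float32(2000.0)))  # ~1.22e-4
print(np.spacing(np.float32(2.0)))     # ~2.4e-7

# Adding a 0.01 mm offset to a 2 km coordinate is simply lost:
a = np.float32(2000.0)
print(a + np.float32(1e-5) == a)       # True
```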
Sep 22 2018
Aug 31 2018
Aug 28 2018
Aug 22 2018
Aug 6 2018
I just gave it another try, both with the latest 2.79 master and the latest 2.80, this time under Windows. No artefacts there either. So it might be a Mac thing, maybe OpenCL related?!
Aug 5 2018
I guess the problem is not a bug in Cycles but rather a small mistake in your material:
Aug 3 2018
Jun 6 2018
Depending on your hardware and the scene configuration you might need to fiddle a bit with the tile size to find the sweet spot.
In my case it's almost always 32x32 or 64x64, but that can vary. With your scene it's 16x16, although I didn't test 8x8.
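For what it's worth, the sweep I do is nothing fancier than timing the same render at each tile size and keeping the fastest. A generic sketch — the `render` callable here is a hypothetical stand-in; in practice it would wrap a command-line render:

```python
import time

def sweep_tile_sizes(render, sizes=(8, 16, 32, 64, 128, 256)):
    """Time a render callable once per square tile size and return
    the fastest size plus all timings. `render` takes (tile_x, tile_y)."""
    timings = {}
    for s in sizes:
        start = time.perf_counter()
        render(s, s)
        timings[s] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    return best, timings

# Usage with a dummy render stand-in whose cost grows with tile size:
best, timings = sweep_tile_sizes(lambda x, y: time.sleep(0.0005 * x))
print(best)  # 8 for this dummy (the smallest sleep wins)
```

One timing per size is a rough estimate; for a real scene you'd average a few runs per size.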
First question: Are you using "Hybrid" rendering in master?
May 31 2018
May 24 2018
May 4 2018
Apr 20 2018
Apr 17 2018
Mar 25 2018
Mar 21 2018
This is not a bug.
Mar 14 2018
Feb 19 2018
Feb 5 2018
After finishing the job that caused me to post this "bug report", I somehow lost sight of this ticket. But on second thought it might actually be worth re-opening for the exact reason that ronan mentioned.
Render size and simulation size should both be available, but the render size should never affect the simulation, just as changing the instanced object or group doesn't affect the simulation. You should still be allowed to change render settings after caching a simulation.
Jan 25 2018
I don't know if this helps, but if I open an image editor and manually set the color space of both the normal and the roughness maps to "Linear" instead of "sRGB", the packed version also renders fine.
It seems to ignore the non-color data setting of the image texture nodes.
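For context, that is exactly what the "Non-Color Data" flag is supposed to prevent: the sRGB transfer curve being applied to data maps. A sketch of the standard per-channel sRGB-to-linear conversion shows how badly a wrongly tagged map is skewed:

```python
def srgb_to_linear(c):
    """Standard sRGB electro-optical transfer function (per channel, 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# A mid-gray value of 0.5 in a normal or roughness map gets darkened
# to ~0.214 if the texture is wrongly interpreted as sRGB -- which
# visibly skews normals and roughness in the render:
print(round(srgb_to_linear(0.5), 3))  # 0.214
```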
Jan 19 2018
Well, here is what Houdini 16.5 says to the attached .abc file.
Jan 15 2018
Jan 12 2018
Gave it 10 more tries, no crash. It also works reliably with the brand new "Transparent Glass" feature. :)
Last night it crashed for me (Linux Mint 18.3, 12GB RAM, GTX780 3GB), but now with the very current master it works:
Jan 11 2018
OK, I tried to reproduce my problem today with the very latest own build and couldn't. I can now slowly but successfully render the benchmark scene on my 10+ year old system with only 12GB of RAM and a GTX780 3GB (on Linux Mint, that is).
I don't know what fixed it, but at least this is now working properly. Great work! :D
Jan 4 2018
In my case it's not so easy, I guess. I just gave it another try on my 10+ year old dual Xeon system with Linux Mint 18.3 and the very latest master of Blender (own compile).
It's just the Gooseberry benchmark scene switched to use GPU only. I also made sure to only use the GPU (GTX780 3GB) not the CPU or both.
The result is that it starts rendering although the scene would use more than twice the amount of GPU RAM. This is great.
The problem is that the final result is very very dark. I interrupted the rendering but here's a screenshot:
Dec 27 2017
I'm not at home at the moment so I can't help with testing stuff, but the TdrDelay issue affects almost any application using CUDA. Allegorithmic Substance Designer and Painter now even display warnings if they find default values in the registry and strongly advise you to change the settings accordingly.
Here's a helpful article on that: https://support.allegorithmic.com/documentation/display/SPDOC/GPU+drivers+crash+with+long+computations
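For convenience, the registry values the article talks about can be set with a .reg file like the one below. These are the standard Microsoft TDR keys under `GraphicsDrivers`; 60 seconds is just a suggestion (the Windows default is 2 seconds), so adjust to taste:

```
Windows Registry Editor Version 5.00

; Raise the GPU timeout before Windows resets the display driver.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
; 0x3c = 60 seconds
"TdrDelay"=dword:0000003c
"TdrDdiDelay"=dword:0000003c
```

A reboot is needed for the change to take effect.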
Dec 20 2017
Dec 8 2017
Nov 20 2017
Thanks, I actually had render crashes in this project, but it was too weird to reproduce easily, and I'm currently too swamped with work. Hopefully you can kill all these bugs ;)
Nov 18 2017
Nov 13 2017
Nov 3 2017
@Brecht Van Lommel (brecht) You are a wizard! Thanks a lot!
Oct 24 2017
I used the new CPU + GPU option in this case.
Oct 23 2017
Confirmed here. Win10 x64, GTX1080Ti
Oct 16 2017
This is perfectly normal and expected.
Oct 9 2017
Oct 7 2017