The idea is indeed nice, but I don't think this should be an option. Just implement it in a way that it does not cause a slowdown for regular renders ;)
Did some deeper research. The commit mentioned above has nothing to do with this issue. In fact, I don't see an issue here at all. At least there is no change in behavior since Blender version 2.75. Older versions did not have any logging around materials, and all versions starting from 2.75 report 7 shaders synchronized to the shader manager.
The issue is coming from the hardcoded MAX_SOCKET, which is set to 64. I've tried to remove the static limit entirely, but it's not so trivial within the current node design, especially for the texture nodes. I think it is safe to bump this constant to 512. This will only be ~16K of extra stack memory usage per thread during evaluation, which is really small for modern systems.
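To illustrate why the cost is stack-only, here is a minimal sketch (not the actual Blender source; `SocketValue` and its size are assumptions): the evaluation scratch space is a fixed-size per-thread array, so growing the constant only grows per-thread stack usage, not heap allocations.

```cpp
#include <cstdio>

/* Hypothetical stand-ins for the real constant and socket entry type. */
enum { MAX_SOCKET = 512 };  /* bumped from 64 */

struct SocketValue {
  float value[4];  /* assume ~16 bytes per socket entry */
};

static void evaluate_nodes()
{
  /* One scratch array per evaluating thread, allocated on the stack. */
  SocketValue stack[MAX_SOCKET];
  (void)stack;
}

int main()
{
  evaluate_nodes();
  /* With a 16-byte entry this is 8K; with the larger real entries, the
     extra cost of going 64 -> 512 stays in the ~16K per-thread ballpark. */
  std::printf("stack bytes: %zu\n", sizeof(SocketValue) * MAX_SOCKET);
}
```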
Had a look here. So far I've managed to reproduce this on Windows, but didn't see the error on Linux. Most likely it's because of different timeout policies on different platforms (those timeouts are dictated by the driver and OS, and are out of our control). The issue is very similar to the one described in our manual. It is not solvable in the general case for as long as the card is not dedicated to compute (i.e., has a monitor attached to it).
Committed a workaround for now. We are planning to abandon MSVC2013, so I'm not really motivated to spend days nailing down the issue.
@Bastien Montagne (mont29), but that's exactly the thing. All other flags we pass to CMake are hints for find_package(). This is how it is supposed to work.
The change seems fine to me. Wouldn't go down a rabbit hole in this exact patch, but we should indeed clean up some of the CMake around OSL.
Did you do any performance checks, especially with the OpenCL split kernel? This will be an extra loop iteration for CUDA/CPU, and extra registers and if-else chains for the split kernel.
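For context, a minimal sketch of where the cost comes from (names like `NODE_NEW_FEATURE` and the `svm_node_*` handlers are illustrative, not the real kernel code): every shader evaluation walks a dispatch loop, so each added node type is an extra compare per iteration on CPU/CUDA, and extra registers plus if-else chains once the compiler flattens the switch for a split kernel.

```cpp
/* Simplified interpreter-style shader dispatch loop. */
enum NodeType { NODE_END, NODE_BSDF, NODE_TEX, NODE_NEW_FEATURE };

struct ShaderNode { NodeType type; };

static void svm_node_bsdf(const ShaderNode &) {}
static void svm_node_tex(const ShaderNode &) {}
static void svm_node_new_feature(const ShaderNode &) {}

static void eval_shader(const ShaderNode *program)
{
  for (const ShaderNode *node = program; node->type != NODE_END; node++) {
    switch (node->type) {
      case NODE_BSDF:        svm_node_bsdf(*node); break;
      case NODE_TEX:         svm_node_tex(*node); break;
      case NODE_NEW_FEATURE: svm_node_new_feature(*node); break;  /* the new branch */
      default: break;
    }
  }
}

int main()
{
  const ShaderNode program[] = {{NODE_BSDF}, {NODE_TEX}, {NODE_END}};
  eval_shader(program);
}
```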
Tue, Mar 21
Fix typo in scheduling
Please double-check our test suite is still passing; otherwise LGTM after a more careful and complete dig.
Update. I've loaded optimized defaults in the BIOS and now I don't see a measurable difference in performance before/after the patch.
This update includes:
Mon, Mar 20
Managed to reproduce the issue on Windows. Annoyingly, it only happens in a release build. Could be a fault of the optimizer.
I don't see any difference in the rendered result when comparing 2.77a, 2.78a, 2.78c and the latest master.
Just a quick update. The issue was fixed in b91db51 of upstream OIIO.
Fixed this in rB18bf900. Thanks for the report!
@Brecht Van Lommel (brecht), it's just weird to forbid negative colors but allow negative light. It would be better if both were allowed or both were disallowed. But I'm fine keeping things as-is and just using fabs() for now.
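A minimal sketch of the fabs() approach (illustrative only, not the actual Cycles code; `float3` and `fabs_color` are stand-ins): fold a possibly-negative color to positive per channel, while leaving strength signed so negative light remains possible.

```cpp
#include <cmath>

struct float3 { float x, y, z; };

/* Take the absolute value of each channel before the color enters the
   light contribution, so a user-set negative color can't flip the sign. */
static float3 fabs_color(const float3 &c)
{
  return {std::fabs(c.x), std::fabs(c.y), std::fabs(c.z)};
}

int main()
{
  float3 color = {-0.5f, 0.25f, -1.0f};  /* user-set negative color */
  float3 safe = fabs_color(color);
  (void)safe;
}
```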
Don't really think it's a bug. Negative colors and strengths are always going to be problematic. What is the reason you use negative strength on lamps?