- User Since
- Feb 21 2019, 1:26 PM (82 w, 4 d)
Fri, Sep 18
Tue, Sep 15
Wed, Sep 9
Tue, Sep 8
This is caused by rB009971ba7adc9603b90e9bf99b6b6d53eeae6c3a. Enabling OptiX denoising with CPU rendering creates an implicit multi-device in Cycles, which breaks Embree BVH building because the Embree device (BVHEmbree::rtc_device) is not initialized.
@Stefan Werner (swerner) I suppose the simplest solution would be to override Device::bvh_device() in MultiDevice as well and have it return the one from the first CPU device (there should only be one anyway)? We may also want to streamline the Embree/OptiX BVH building at some point and make the two paths more similar, but that's out of scope for this particular issue.
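A minimal sketch of what that override could look like. The names Device, MultiDevice and bvh_device() mirror Cycles, but the surrounding types here are simplified stand-ins for illustration, not the real Cycles API:

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Toy model of the suggested fix; not actual Cycles code.
struct Device {
  virtual ~Device() = default;
  virtual bool is_cpu() const { return false; }
  // By default, a device builds BVHs on itself.
  virtual Device *bvh_device() { return this; }
};

struct CPUDevice : Device {
  bool is_cpu() const override { return true; }
};

struct OptiXDevice : Device {};

// A multi-device wraps several sub-devices (e.g. CPU + OptiX for
// denoising). Without the override, bvh_device() returns the
// multi-device itself, which has no initialized Embree rtc_device.
struct MultiDevice : Device {
  std::vector<std::unique_ptr<Device>> sub_devices;

  Device *bvh_device() override {
    // Delegate to the first CPU sub-device; there should be only one.
    for (auto &sub : sub_devices)
      if (sub->is_cpu())
        return sub->bvh_device();
    return this;  // fall back to self if no CPU device is present
  }
};
```

With this in place, Embree BVH building always lands on the CPU sub-device even when denoising implicitly created a multi-device.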
It's the result that counts ;). Looks good, thanks!
Mon, Sep 7
@Juan Gea (juang3d) This patch does not compile with the current official NanoVDB (it needs some upcoming changes that are not in there yet), see last paragraph in the description. So if it compiles successfully without changes you are missing something and NanoVDB support is not active.
Sat, Sep 5
Fri, Sep 4
Thu, Sep 3
Fri, Aug 28
You need to change a node connection. Blender apparently doesn't update the node graph sent to Cycles if an unconnected node is deleted. So either connect the AO node to something and then delete it, or add/remove a connection somewhere else before switching modes, and it will work.
Thu, Aug 27
2.90 already shows a proper error message for this in the viewport. AO and Bevel nodes are not currently supported with OptiX. This is caught when kernels are loaded, so you need to force Cycles to reinitialize to get rid of the error after removing those nodes. The simplest way to do this is to switch Viewport Shading to Solid and then back to Rendered. I cannot reproduce it being impossible to get back to a working viewport after removing the node this way, so I don't think there is a bug here.
Tried on all the recent driver branches (on Windows):
- R440: does reproduce
- R445: does NOT reproduce
- R450: does NOT reproduce
This leads me to believe it was a compiler bug in the driver that has since been fixed. So you should be fine after updating to a newer driver, at least from the R445 branch, better yet R450. On Windows that is 452.06 aka R450 U3, on Linux 450.66.
Aug 17 2020
Sorry for the late response. I'll check this out as soon as I can!
Jul 29 2020
Tested your new scene and cannot reproduce. Make sure you recompiled the PTX after applying the change, or simply download the latest alpha/beta build from https://builder.blender.org/download/, which already incorporates it.
Jul 28 2020
Jul 27 2020
Jul 24 2020
- Quadro M2000/PCIe/SSE2 NVIDIA Corporation 4.5.0 NVIDIA 451.48
Likely a system/driver issue with this particular card. It would be useful to try another application using OptiX and/or its denoiser to see if similar issues occur.
- GeForce GTX 1070 Ti/PCIe/SSE2 NVIDIA Corporation 4.5.0 NVIDIA 432.00
Driver r432 is way too old; OptiX is only supported starting with r440.
- Quadro K620/PCIe/SSE2 NVIDIA Corporation 4.5.0 NVIDIA 442.19
~Maxwell 1.0 (the architecture of the chip used on the K620 card) is not supported by OptiX, only Maxwell 2.0 is. It shouldn't show up as available on those cards though, so I will push a fix shortly that makes it unavailable in the first place.~ This is fixed now.
Jul 20 2020
Jul 17 2020
- Added "compute_75" kernel target to build configuration
- Fixed PTX discovery affecting architecture version used by adaptive kernel compilation
Is it sufficient to add compute_75 to the CYCLES_CUDA_BINARIES_ARCH list in build_files/cmake/config/blender_release.cmake or does the buildbot use a different config?
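For reference, the change being asked about would be a sketch along these lines. The exact pre-existing entries in the real blender_release.cmake may differ, and whether the buildbot reads this config is exactly the open question:

```cmake
# Sketch: extend the CUDA kernel architecture list in
# build_files/cmake/config/blender_release.cmake with a PTX target.
# "compute_75" emits forward-compatible PTX in addition to the
# prebuilt "sm_XX" binaries; existing entries shown are illustrative.
set(CYCLES_CUDA_BINARIES_ARCH
  sm_30 sm_35 sm_50 sm_52 sm_60 sm_61 sm_70 sm_75 compute_75
  CACHE STRING "CUDA architectures to build binaries for" FORCE)
```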
Jul 14 2020
Jul 13 2020
Jul 10 2020
AFAIK both OptiX and OIDN need noise-free normal/albedo passes too (I know this for sure for OptiX). In this case the best solution I can think of is to use the OptiX denoiser with only the color pass as input (so as not to confuse it with that albedo/normal data). That's currently only selectable for final renders though (in the render layer properties), not for viewport denoising (which always uses color+albedo).
Jul 8 2020
This isn't an issue with the OptiX denoiser itself, but with the denoising passes generated by Cycles (which are passed on to the OptiX denoiser, but are incorrect, so the denoiser cannot produce correct results):
(Image comparison: Noisy | Denoising Albedo | Denoising Normal)
The problem occurs when the material uses both low roughness and base color values, which could indicate an issue with how denoising data is handled for specular closures. I'm guessing that because denoising data updates for specular-like closures in kernel_update_denoising_features are deferred to the next bounce, and in this case the next bounce likely hits the background, the data is never updated. @Lukas Stockner (lukasstockner97) is more familiar with this code I think, maybe he could chime in?
Jul 7 2020
Strange. --debug-cycles should help to get a more detailed error message from OptiX.
Jul 6 2020
That's just the nature of AI denoising. The HDR denoiser that comes with OptiX was trained for a specific use case (AI is only ever as good as the data used to train it): HDR images with color values between 0 and 10,000 that on average are not close to zero (see also https://raytracing-docs.nvidia.com/optix7/guide/index.html#ai_denoiser#nvidia-ai-denoiser).
In this particular case that contract cannot be upheld, since the scene is very dark and the values are therefore very close to zero (it's only later, after tonemapping, that the final image gets scaled up using the exposure value). The AI is therefore unable to produce a good output.
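One conceivable workaround, sketched below under the assumption that pre-exposing the input is acceptable: scale the dark HDR buffer into the range the denoiser was trained on before denoising, then divide the exposure back out afterwards. The function name and the identity "denoise" step are placeholders for illustration, not Cycles or OptiX API:

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch: pre-expose a very dark HDR pixel buffer so its
// values land in the denoiser's trained 0-10,000 range, run the
// denoiser, then undo the scaling. The denoiser call itself is elided.
std::vector<float> denoise_with_pre_exposure(std::vector<float> pixels,
                                             float exposure) {
  for (float &p : pixels)
    p *= exposure;  // bring near-zero radiance into the trained range

  // ... run the actual denoiser on `pixels` here ...

  for (float &p : pixels)
    p /= exposure;  // undo the pre-exposure so the output scale matches
  return pixels;
}
```

Since denoising is not scale-invariant in general, this only helps to the extent the scaled values actually resemble the training data.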
Jun 27 2020
Jun 25 2020
Jun 24 2020
Jun 22 2020
I think we should bump the OptiX target to at least sm_50, if not sm_52 (for the purposes of CUDA 11) in general. It cannot run on earlier cards anyway, and keeping it at sm_30 misses some (albeit small) optimization potential (since most optimization happens in the PTX->binary driver step). I wanted to do that a while ago, but seemingly forgot about it.
Jun 19 2020
Jun 15 2020
Jun 12 2020
Jun 10 2020
Jun 9 2020
Jun 8 2020
Jun 6 2020
Jun 5 2020
Found a couple of bugs in the current CUDA memory management in Cycles when multiple devices are involved.
For example, the code that moves textures from device to host memory failed to actually free the device memory after the move as soon as multiple GPUs were enabled. In addition, the "texture_info" array was always moved to host memory on the first move (which may be a cause of T75955).
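The missing-free pattern described above can be sketched like this. Everything here is invented for illustration (plain malloc/free stand in for the CUDA allocation calls, and the struct is not the real Cycles device_memory type); the point is only that the device-side allocation must be released once the copy lives on the host:

```cpp
#include <cstddef>
#include <cstdlib>

// Illustrative sketch of a "move texture to host" step. The bug
// described above was effectively skipping the device-side free once
// multiple GPUs were enabled, leaking GPU memory after the move.
struct DeviceBuffer {
  void *device_ptr = nullptr;
  void *host_ptr = nullptr;
  size_t size = 0;

  void alloc_on_device(size_t n) {
    device_ptr = std::malloc(n);  // stand-in for a GPU allocation
    size = n;
  }

  void move_to_host() {
    host_ptr = std::malloc(size);  // stand-in for a host allocation
    // ... copy device_ptr -> host_ptr (cudaMemcpy in real code) ...
    std::free(device_ptr);         // the step that must not be skipped:
    device_ptr = nullptr;          // actually release the device memory
  }
};
```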