Fri, Aug 16
Fixed the modified branched-path connect-to-light functions
Wed, Aug 14
Fixed a performance regression that caused the BMW benchmark scene to be only 1.1x faster instead of 1.6x
Mon, Aug 12
- Fixed a compile error when building with Embree (thanks @Alex Fuller (mistaed))
- Fixed an OptiX warning during pipeline creation
- Reduced number of used attributes in OptiX pipeline to two
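As background on the attribute count: in OptiX 7 the number of 32-bit attribute registers is declared up front in `OptixPipelineCompileOptions::numAttributeValues`, and two is the minimum (enough for triangle barycentrics), so keeping it at two leaves more registers free. A minimal sketch, assuming illustrative values for the other fields (not taken from the patch):

```cpp
// Hypothetical configuration sketch, not the patch's actual code.
// numAttributeValues sets how many 32-bit attributes intersection
// programs can pass to hit programs; 2 is the API minimum and covers
// built-in triangle barycentrics.
OptixPipelineCompileOptions compile_options = {};
compile_options.numPayloadValues = 6;    // assumed payload count, for illustration
compile_options.numAttributeValues = 2;  // reduced to the minimum of two
compile_options.pipelineLaunchParamsVariableName = "__params";  // assumed name
```

Lowering this value does not change behavior for built-in triangle intersection; it only matters once custom intersection programs need more attribute slots.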
Thu, Aug 8
Fixes some of the formatting issues that were brought up. The #ifdefs in geom_subd_triangle.h were missing by accident.
Tue, Aug 6
Laid groundwork for baking support
Mon, Aug 5
Just a note regarding the CUDA backend: I don't think OptiX can fully replace it, since OptiX only runs on Maxwell and newer GPUs, with full performance only on Turing, so the CUDA backend is still necessary to support older cards.
Rebase against master and fix merge conflicts
Fri, Aug 2
CPU + GPU doesn't currently work with this because OptiX manages its own BVH format, while Cycles only builds a single BVH for everything (with CPU + CUDA this is BVH2, for example). The CPU clearly cannot make use of the OptiX BVH, since that one is designed for RT Cores and not usable from the outside. So Cycles would have to be extended to allow building and using multiple BVHs simultaneously, which was out of scope for this initial implementation (it is something to keep in mind for the future, though).
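The extension described above could be sketched as follows: each device reports which BVH layout it can traverse, and the scene builds one BVH per distinct requested layout instead of a single shared one. This is a minimal illustration with made-up names, not Cycles' real device API:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch (illustrative names, not Cycles' actual classes):
// each device requests the BVH layout it can traverse.
enum class BVHLayout { BVH2, OptiX };

struct Device {
  bool has_rt_cores;
  // The CPU cannot traverse the hardware-oriented OptiX BVH, so a device
  // without RT-core support always requests the software BVH2 layout.
  BVHLayout requested_layout() const
  {
    return has_rt_cores ? BVHLayout::OptiX : BVHLayout::BVH2;
  }
};

// Collect the distinct layouts needed, so the scene builds each BVH once
// and shares it among all devices that requested the same layout.
std::vector<BVHLayout> layouts_to_build(const std::vector<Device> &devices)
{
  std::vector<BVHLayout> layouts;
  for (const Device &d : devices) {
    const BVHLayout layout = d.requested_layout();
    bool seen = false;
    for (BVHLayout existing : layouts) {
      if (existing == layout) {
        seen = true;
      }
    }
    if (!seen) {
      layouts.push_back(layout);
    }
  }
  return layouts;
}
```

With a CPU + OptiX GPU combination this yields two BVH builds (BVH2 and OptiX), while a pure multi-GPU setup still builds only one.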
Mon, Jul 29
It doesn't currently. Multi-GPU is implemented via the existing MULTI-device in Cycles and just broadcasts the BVH to all GPUs (so each gets a copy).
Adding proper NVLink support would be better suited to a separate change, independent of this one, I think.
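The broadcast behavior described above can be sketched in a few lines; the names here are illustrative, not the real MULTI-device code. Broadcasting means every sub-device receives its own full copy of the BVH, so total memory use is N copies rather than one shared allocation (which is what NVLink-aware pooling would avoid):

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of MULTI-device style broadcasting (not Cycles' code).
struct SubDevice {
  // Stands in for a per-GPU memory allocation.
  std::vector<unsigned char> bvh_memory;
};

// Copy the host-side BVH to every sub-device: each GPU gets an
// independent, identical copy.
void broadcast_bvh(const std::vector<unsigned char> &host_bvh,
                   std::vector<SubDevice> &devices)
{
  for (SubDevice &device : devices) {
    device.bvh_memory = host_bvh;  // one full copy per GPU
  }
}
```

An NVLink-aware version would instead allocate the BVH once and let peer GPUs access it remotely, which is why it is a meaningfully different (and separate) piece of work.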