network rendering does not handle well Cycles' compute devices #46071

Closed
opened 2015-09-10 23:12:26 +02:00 by Reinis Adovics · 7 comments

System Information

Computer 1
Client - OS X
CPU: 3.7 GHz Quad-Core Intel Xeon E5
RAM: 16 GB, DDR3, 1866MHz, ECC
GFX: AMD FirePro D 700 2x6 GB
OS: OS X Yosemite 10.10.5
Drivers: OS managed, EFI 01.00.687, ROM 113-C3861J-687, gMux 4.0.11 [3.2.8]
Blender: 2.75a, stable, July 8, 2015

Computer 2
Master and Slave - MSW
CPU: INTEL CORE I7-4790 3.6GHZ
RAM: 16GB, DDR3L, 1600MHz, Non-ECC
GFX1: Asus NVIDIA GTX980 STRIX-GTX980-DC2OC-4GD5
GFX2: Asus NVIDIA GTX980 STRIX-GTX980-DC2OC-4GD5
OS: MSW 8.1
Drivers: NVIDIA 355.82 WHQL (10.18.13.5582)
Blender: 2.75a, stable, July 8, 2015

Blender Version
2.75a, stable, July 8, 2015

Short description of error
Blender on OS X (as well as on other computers without an NVIDIA card) has no GPU compute option.
When a job is sent from a Client that does not have GPU compute (and therefore has no Device dropdown in the Cycles Render panel) to a Slave that has GPU compute configured, the job on the Slave is executed on the CPU.
When a job is sent from a Client that does have GPU compute (and does have the Device dropdown in the Cycles Render panel) to a Slave that has GPU compute configured, the job on the Slave is executed on the GPU.
The expected behaviour would be that the Slave defines the compute type, not the Client.
The Client could only hint at the preferred device. When the Slave is GPU ready, one could then choose GPU/CPU rendering in the Device dropdown.
As of now the Slave defines the compute type, but that introduces the issue discussed in this report - for CPU-only Clients there is no Device dropdown in the Render panel, so one cannot even specify how the Slave should render the job.
A proposed solution would be an extra field/checkbox in the Network settings Client tab (in the Job Settings subpanel), shown when the Engine chosen on the Client is CYCLES. This dropdown/checkbox would specify (override) the Device setting (hidden on non-GPU devices), e.g. "Use GPU compute on Slave(s) if possible".
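To make the proposed behaviour concrete, here is a minimal sketch of the device-resolution logic, not netrender's actual code; the function name `resolve_slave_device` and its parameters are hypothetical:

```python
def resolve_slave_device(slave_has_gpu, prefer_gpu):
    """Return the device the Slave should render on.

    slave_has_gpu: whether the Slave has a configured CUDA/OpenCL device.
    prefer_gpu:    the Client's hint; None means "no preference"
                   (e.g. a CPU-only Client with no Device dropdown).
    """
    if not slave_has_gpu:
        return "CPU"                  # Slave capability always wins
    if prefer_gpu is None or prefer_gpu:
        return "GPU"                  # default to GPU when available
    return "CPU"                      # Client explicitly asked for CPU

# A CPU-only Client (no hint) would no longer force a GPU-capable
# Slave onto the CPU:
print(resolve_slave_device(slave_has_gpu=True, prefer_gpu=None))
```

The key design point is that the Slave's own capability is authoritative, and the Client's setting is demoted to a hint that can only narrow, never expand, what the Slave does.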

Exact steps for others to reproduce the error

Section 1)
Set up a test file (attached .blend file should work).
Set renderer to Cycles.
Configure rendering parameters. Set sampling to "final-ish" values, so that the render takes ~4 minutes on the CPU.
Close file.

Section 2)
Open the test file on OS X (or any other Blender instance where GPU compute is unavailable due to host hardware) for network rendering as the Client. [I was using Computer 1 here]
Set renderer to Network Renderer.
Start a network render Master on the network. [I was using Computer 2 as Master here]
Start a network render Slave on the network. [I was using Computer 2 as Slave here]
The Compute Device in Blender User Preferences must be set to CUDA for the Slave instance.
Render the job from the Client. [it took 4:45 over local gigabit ethernet]
Close file.

Section 3)
Open the test file on a GPU-compute-capable device for network rendering as the Client. [I was using Computer 2 as Client here]
The Compute Device in Blender User Preferences must be set to CUDA for the Client instance.
Set Device to GPU Compute in the Render panel (the option missing in Section 2 due to the design issue described above).
Set renderer to Network Renderer.
Start the Master and Slave if they were stopped after Section 2.
Render the job from the Client. [it took 2:05; here Client, Slave, and Master were all running on Computer 2]

Section 4)
As a test, render the file on the Slave device in Cycles, but locally [I was using Computer 2 here, which served as the Slave in previous steps], both on CPU and GPU.
It took 4:37 locally on CPU and 1:54 on GPU. This supports the conclusion that the Slave in Section 2 rendered on CPU and in Section 3 on GPU.
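The reported timings can be compared directly; the small gap between the network and local runs on each device backs the conclusion above (only `to_seconds` is a helper introduced here):

```python
def to_seconds(mmss):
    """Convert an 'm:ss' timing string to seconds."""
    m, s = mmss.split(":")
    return int(m) * 60 + int(s)

network_cpu = to_seconds("4:45")  # Section 2: CPU-only Client -> Slave
network_gpu = to_seconds("2:05")  # Section 3: GPU Client -> Slave
local_cpu   = to_seconds("4:37")  # Section 4: Slave locally on CPU
local_gpu   = to_seconds("1:54")  # Section 4: Slave locally on GPU

# Each network render tracks the local render of the matching device
# to within ~10%, consistent with Section 2 running on CPU and
# Section 3 on GPU.
print(network_cpu / local_cpu)  # ~1.03
print(network_gpu / local_gpu)  # ~1.10
```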

[network-render-example.blend](https://archive.blender.org/developer/F232047/network-render-example.blend)

Author

Changed status to: 'Open'

Author

Added subscriber: @kroko-1

Bastien Montagne changed title from In network rendering (Cycles) Slave should define the compute type. CPU-only-Clients cannot define compute type. Network render job sent from CPU-only-Client renders on CPU although Slave is GPU compatible and configured. to network rendering does not handle well Cycles' compute devices 2015-09-10 23:30:58 +02:00
Author

The line should actually read:
... As of now the CLIENT defines the compute type, but that introduces the issue discussed in this report - for CPU-only Clients there is no Device dropdown in the Render panel, so one cannot even specify how the Slave should render the job (or more specifically, CPU-only Clients under the hood explicitly specify that Slave(s) should render on CPU, ignoring that the Slave is GPU ready and willing). ...


Added subscriber: @mont29


Changed status from 'Open' to: 'Archived'

Bastien Montagne self-assigned this 2015-09-11 12:06:24 +02:00

Thanks for the report, but this is not really a bug - more of a TODO/feature request to support something that did not exist at the time the addon was written. Unfortunately, its author is no longer active, so only minimal maintenance (to fix critical crashes and such) is performed on this code currently.

Author

I see. So let us make it a TODO!? :)

Woke up and spent some time on this today. Here you go: https://github.com/WARP-LAB/Blender-Network-Render-Additions
While going through the code, some other TODO features came to mind (e.g., handling/rendering of specific scene(s) from multi-scene files, a headless Master (why pollute user space?), fast switching of compute_device_type and compute_device for the Slave, among others).

What should be the "routine" for this to be pushed to an "official" release?
My everyday work uses entirely different naming conventions, and I spent time reading only this paragraph http://www.blender.org/api/blender_python_api_2_60a_release/info_best_practice.html#style-conventions to meet PEP 8 criteria.

Reference: blender/blender-addons#46071