When using NetworkRender mode, postprocessing with a Depth of Field (DOF) node does not update the focus length (i.e. when driven by an animated object). The same scene rendered locally works as expected.
Rendering goes fine, but compositing has to be turned off on the client, otherwise it reapplies the compositing that was already done on the slaves. I'm going to have to figure out something to turn it off automatically.
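The automatic switch-off could be as simple as clearing the post-processing flags on the client's scene before the result is loaded. A minimal sketch of that toggle, using a stand-in object in place of a real `bpy.context.scene` (the attribute names mirror Blender's `RenderSettings`, but treat the exact names as assumptions here):

```python
from types import SimpleNamespace

def disable_client_postprocessing(scene):
    """Turn off compositing/sequencer post-processing, returning the
    old values so the client can restore them afterwards."""
    render = scene.render
    old = (render.use_compositing, render.use_sequencer)
    # The slaves already ran the compositor; re-running it on the
    # client would apply the node tree a second time.
    render.use_compositing = False
    render.use_sequencer = False
    return old

# Stand-in for bpy.context.scene, for illustration only.
scene = SimpleNamespace(render=SimpleNamespace(use_compositing=True,
                                               use_sequencer=True))
saved = disable_client_postprocessing(scene)
```

The saved tuple would let the client restore the user's settings once the networked frames have arrived.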
There are also these warnings when loading the result (a multilayer EXR) back into Blender through the render engine API; loading the EXR manually works fine.
warning, channel with no rect set Composite.Combined.A
warning, channel with no rect set Composite.Combined.B
warning, channel with no rect set Composite.Combined.G
warning, channel with no rect set Composite.Combined.R
Assigning to brecht for testing (I think he's responsible for the render engine).
Adding a demo file.
Unzip, open blender from this directory (it will load results.exr from cwd), load result.blend, execute opened script (it should change the render engine to Test Render, do so if it didn't), render. It should display exr warnings in console, the layers mentioned in the warnings will be missing from the result buffer (but would be visible if loading the exr in the image editor manually).
Looked into this bug again; the problem is that network render expects all layers in the .exr file to be loaded, while only the ones the renderer expects are actually loaded. This is working as designed, but in this case it would be useful to support loading the other layers too. Not sure yet how to solve this without making major changes.
I see that "[#28049] Blender "Network Render" does not render effects (nodes) from the Compositor." has been marked as a duplicate of this.
It would seem to me that this is an extremely serious bug as it is impossible to use netrender to farm out any scenes that use compositing nodes.
The slave nodes render the scene correctly, with both the renderlayer and the composite output in the EXR file. The master monitor web page correctly shows a thumbnail of the composited scene. However, the client only expects, and will only display, the renderlayer before compositing. It completely ignores the composite layer, as if it doesn't exist, even though Compositing is checked in the Post Processing settings in the blend file.
This bug can be very easily verified, using the default cube. Simply open the nodes editor, enable nodes and set up a simple compositing pipeline (e.g. connect alpha from the renderlayer to the compositing node). Rendering locally will display the expected output, whilst a network render will only display the renderlayer before compositing.
This bug effectively renders (sorry!) netrender useless except for very simple work. I am extremely puzzled that there appears to be no progress on this bug, since it is a year and a half old.
Please note that the work around written in the other report still applies.
That is: the result EXR files on the master can be fed into Blender's compositor as multilayer images, and that will work correctly (as if you'd rendered them locally).
But it sure would be nice to get it fixed.
...problem persists. (Blender 2.58 r5365)
So, the current workflow to correct this is to render with the results appearing un-composited, then wire them back into the compositor as a "Movie Clip" (image sequence) and set up your nodes again. Did all those Open Movies work this way?
It works, I think. A bit of a headache to explain to a class of 30 or so students, but since I'm assessing them on their ability to discuss efficiencies in a pipeline, it's one more thing they can get marks for when they apply the workaround, as long as they say what they did. I'm wondering if Loki has the same issues (although our school network hates Loki because it's Java, but that's another story).
Still having issues with this in Blender 2.68.1 r58706.
For those who are frustrated with this bug, or use the compositor for a lot of their work, here is the best workaround I have been able to find so far.
When you are ready to use Network Render, send it as a job and let it render. Once the render is finished, you will need to find the EXR files sitting on the Master (the file path is shown under the Master dialogue in Blender). At this point, open a new instance of Blender and bring up the compositor node editor. Delete the scene/render layer node, then add an Image node and attach it to the Composite node. Navigate to and add the first .EXR image of your rendered animation on the Master. Switch it from single image to image sequence and set the correct values for your animation. Make sure the render settings for the scene match the blend file you originally sent to the render farm, then render out to the desired format.
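To save some clicking when setting the sequence values on that Image node, they can be derived from the files sitting on the Master. A small stdlib-only sketch (the `name_0001.exr` naming pattern is an assumption; adjust the regex to your job's actual naming):

```python
import re
from pathlib import Path

def sequence_settings(exr_dir):
    """Scan the Master's output directory and derive the values the
    Image node's image-sequence mode asks for: the first file, the
    start frame, and the frame count.  Assumes filenames ending in a
    frame number, e.g. 'job_0001.exr' (hypothetical naming)."""
    frames = []
    for path in Path(exr_dir).glob("*.exr"):
        match = re.search(r"(\d+)\.exr$", path.name)
        if match:
            frames.append((int(match.group(1)), path.name))
    frames.sort()
    if not frames:
        raise FileNotFoundError("no numbered .exr files found")
    start = frames[0][0]
    return {"first_file": frames[0][1],
            "frame_start": start,
            "frame_count": frames[-1][0] - start + 1}
```

The returned values map onto the Image node's sequence settings by hand; the sketch doesn't touch bpy at all.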
It seems to me that this bug considerably reduces the effectiveness of the whole idea of a render farm. Compositing usually means more render power is needed, and therefore more computing power to get the final result faster (aka a render farm). Compositing seems like a priority to me, but my priorities don't always match someone else's.
I appreciate all the hard work that has been put into this add-on, it has helped me with my work for over 3 years now. It's just been frustrating that I have been running into this problem, along with not being able to do full-sample motion-blur/anti-aliasing through Network Render (both processor intensive tasks), over the last 3 years.
Hi; this is my first patch on Blender. Please bear with me, esp. if it's not the right place to do code reviews (IRC ?).
Despite what has been said elsewhere, all EXR layers are correctly loaded. They are just not linked to the RenderResult, because when it's created, it creates one layer (a RenderResult layer, not OpenEXR layer) per render layer, but doesn't create a layer for the composite. So when output.exr arrives from the network, Composite.Combined.A/R/G/B have nowhere to go and are simply put aside.
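The matching step described here can be sketched in a few lines: the RenderResult was created with one layer per Scene render layer only, so EXR channels under any other top-level name (like "Composite") have no destination. A stdlib-only illustration (function name and data shapes are mine, not Blender's code):

```python
def assign_channels(result_layers, exr_channels):
    """Mimic the matching step: each EXR channel 'Layer.Pass.Chan' is
    attached to an existing RenderResult layer; channels whose layer
    was never created are set aside -- these are exactly the ones the
    'channel with no rect set' warnings complain about."""
    assigned = {name: [] for name in result_layers}
    set_aside = []
    for chan in exr_channels:
        layer = chan.split(".", 1)[0]
        if layer in assigned:
            assigned[layer].append(chan)
        else:
            set_aside.append(chan)
    return assigned, set_aside

channels = ["RenderLayer.Combined.R", "RenderLayer.Combined.G",
            "RenderLayer.Combined.B", "RenderLayer.Combined.A",
            "Composite.Combined.R", "Composite.Combined.G",
            "Composite.Combined.B", "Composite.Combined.A"]
# Only "RenderLayer" exists in the RenderResult, so the four
# Composite.Combined.* channels end up set aside.
assigned, set_aside = assign_channels(["RenderLayer"], channels)
```

The patch, as I understand it, amounts to also creating a layer for the composite so those channels have somewhere to go.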
In my simple tests, this patch makes the following cases work:
- network single frame rendering, Multilayer EXR output: the image editor shows the different layers, including the composited image.
- network single frame rendering, JPEG output: same thing. With network rendering everything goes through EXR, so the layers are present in the image editor.
- network animation rendering, Multilayer EXR output: works (probably as before).
- network animation rendering, JPEG output: all the images contain the composited image.
Comments welcome !
Thanks for the patch, some of us have been waiting for a long time for something like this. I am planning on working with it soon.
I'm no expert, but it seems that this patch isn't format specific. Will it output PNGs correctly, according to the scenarios listed? This format is a bit more conducive to quick VFX workflows, as it supports transparency.
Hi johantri, I'm not sure what your question is. If you're uncomfortable using diff/patch you can just apply it by hand, editing render_result.c, and hope it didn't change too much in 1.5 years. Then re-compile Blender (which is too complicated to explain here).
Re-reading the code, it should work with any format (except maybe some EXR variants?). Please let me know if PNG doesn't work.
Still present in 2.78a, I guess this is not going away? We've encountered it whilst developing a network render add-on similar to Netrender, except we do bucket rendering or frame splitting. For us, the ability to read in a multilayer EXR is rather critical, since the workaround by Arnaud above doesn't work for us.
With each machine doing a section of a single frame, the ability to use the feature of a render result to stitch images back together is key, but we can only do this with single exr files or other flatter image formats like png, jpg.
Can one of the devs give us at least an indication as to whether it is planned to provide support for blender to allow a render result to read a multilayer open exr in such a way as to provide access to the passes in the compositor? It seems rather odd that this can be done using the image node and opening the same file that cannot be read in properly using a RenderResult.
Any suggestions for our particular situation would be most helpful too ;)
Not an actual answer, but what my patch does is (if I recall correctly, it's been 2 years) make it possible for each computer to composite its own render.
If you're doing frame splitting, this doesn't make sense; you can't composite just a part of an image (think Gaussian blur), you need the full frame. So you'll need to do the following on the master (or one of the clients):
- Wait for all renders to be complete
- Merge them into a single EXR
- Composite this final EXR.
No idea if Python has such low-level access on EXR files.
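For the frame-splitting case, the merge step above is plain buffer arithmetic once the EXR channels are decoded (actual EXR I/O would need something like the OpenEXR Python bindings, which I'm not showing). A sketch that stitches per-channel tile buffers into a full frame:

```python
def paste_tile(full, full_w, tile, tile_w, tile_h, off_x, off_y):
    """Copy a row-major tile buffer into the full-frame buffer at the
    given pixel offset.  'full' and 'tile' are flat lists holding one
    float per pixel for a single channel; repeat per R/G/B/A channel."""
    for y in range(tile_h):
        dst = (off_y + y) * full_w + off_x
        src = y * tile_w
        full[dst:dst + tile_w] = tile[src:src + tile_w]
    return full

# Stitch two 2x2 tiles into a 4x2 frame (one channel shown).
frame = [0.0] * (4 * 2)
paste_tile(frame, 4, [1.0] * 4, 2, 2, 0, 0)   # left half
paste_tile(frame, 4, [2.0] * 4, 2, 2, 2, 0)   # right half
```

Once every channel of every layer has been pasted this way, the merged buffers can be written back out as a single EXR for the final composite.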
Hi Arnaud, :D
Thanks for your comment. That is an important point. We've been wondering for a while how to get around the issue, here is what we're facing right now, it is as you described.
What we are missing here is the ability to import the merged EXR, as you say above. Blender allows us to merge results by using multiple render results and writing the pixels with the correct offsets. But we get the same error Martin was getting (according to his posts above) when we try to use a multilayer EXR file. This is the problem: without being able to import a multilayer EXR, we can't access the passes of the render that are part of the EXR file.
From what I can see, the problem arises due to the naming of the layers inside the exr file. I currently have no problem loading a single layer exr file using multiple render results and merging the split frames back together, easy!
The multilayer file has different names and a different hierarchy for the various 'channels'. In the single EXR file (which works when loading using a render result, but only has the RGBA channels of the combined pass) there are only 4 channels, called 'R', 'G', 'B' and 'A'. These are no problem for use with a render result; all works as documented.
However in the multilayer file, which has the separate passes (and which you can load into the compositor and see all your passes as mentioned above), there are the following 'channels'
RenderLayer.Combined.A RenderLayer.Combined.R RenderLayer.Combined.G RenderLayer.Combined.B RenderLayer.Depth.Z
Judging by the posts above, this isn't what the RenderResult's layer-loading methods expect. I think it expects just one layer per file, and probably not with this naming convention either. Seems like an easy fix.
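Splitting those dotted names back into (layer, pass, channel) really is only a couple of lines. A sketch, splitting from the right so dots inside a layer name survive (the fallback rules for short names are my assumption, not Blender's actual parsing):

```python
def split_channel(name):
    """Split an EXR channel name written as 'Layer.Pass.Chan' into its
    parts.  Splitting from the right keeps dots inside the layer name
    intact; a bare 'R'/'G'/'B'/'A' (single-layer file) is treated as
    the combined pass of a default, unnamed layer."""
    parts = name.rsplit(".", 2)
    if len(parts) == 1:                 # plain 'R', 'G', 'B', 'A'
        return ("", "Combined", parts[0])
    if len(parts) == 2:                 # 'Pass.Chan', no layer prefix
        return ("", parts[0], parts[1])
    return tuple(parts)
```

With this, both the flat 'R'/'G'/'B'/'A' files and the 'RenderLayer.Combined.A' style names fall out of the same code path.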
Anyway, we're working on a way around this in our project. It shouldn't take us too long, and then network rendering will be able to feed directly into the compositor just like the native engines do. If you're interested in learning more about what we're doing, just let me know and I'll pass you the details :D
Thanks again for reaching out, appreciate your time that you took in posting here for me!
a RenderResult contains a list of RenderLayer's (rr->layers)
a RenderLayer contains a list of RenderPass'es (rl->passes)
The naming is exactly what EXR wants. See https://github.com/martijnberger/blender/blob/c359343f8dae6689c955dc1fa700cb26f6cd2e95/source/blender/render/intern/source/render_result.c#L176
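That structure and naming can be mirrored in a simplified Python model (nothing like Blender's actual C structs, just the shape of the data and the dotted naming from the linked code):

```python
from dataclasses import dataclass, field

@dataclass
class RenderPass:
    name: str        # e.g. "Combined", "Depth"
    channels: str    # channel ids, e.g. "RGBA" or "Z"

@dataclass
class RenderLayer:
    name: str
    passes: list = field(default_factory=list)

    def exr_channel_names(self):
        # Mirrors the "<layer>.<pass>.<channel>" scheme used when
        # render_result.c writes the multilayer file.
        return [f"{self.name}.{p.name}.{c}"
                for p in self.passes for c in p.channels]

layer = RenderLayer("RenderLayer",
                    [RenderPass("Combined", "RGBA"),
                     RenderPass("Depth", "Z")])
```

Calling `layer.exr_channel_names()` on this example reproduces exactly the channel list quoted earlier in the thread.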
Hi Arnaud, I have been looking through this code, but I'm not sure what you meant by your last comment. I am new to the Blender C code. I did find this
which refers to loading data into a render result from an EXR file. I am guessing the issue we're talking about lies somewhere in this code rather than the code you mention, since the code at L176 just seems to create the names used in the EXR file.
I agree the naming is consistent with the EXR format; it just seems odd to me that Blender writes the multilayer EXR file but then cannot load that very same file back as a render result. I am guessing this is because the external-renderer use case isn't fully supported all the way up to using multiple passes the way the internal render engines do. Again, all guesses.
In any case, thanks again for commenting. Gotta get back to it! So much coding to do!