User Details
- User Since
- Nov 27 2019, 8:46 AM (129 w, 6 d)
Jan 5 2022
Thank you, Sergey - it's understandable. May I ask how the viewport gets the picture during GUI rendering? Maybe there is a way to get it out of the GPU on demand, for example by calling _cycles.draw()?
Jan 4 2022
Hmmm, I understand that it's not supported officially (as a declared feature), but the API allows doing that and it worked just fine before v3. If you run it with a previous version of Blender (like I did with 2.93.1, for example), you will see that the PNG is actually there:
```
BLENDER_USER_CONFIG=. BLENDER_USER_SCRIPTS=. ~/local/blender-2.93.1-linux-x64/blender -b -noaudio --python-expr "
import bpy

bpy.context.scene.render.engine = 'CYCLES'
bpy.context.scene.render.image_settings.file_format = 'PNG'
bpy.context.scene.cycles.use_adaptive_sampling = False
bpy.context.scene.cycles.samples = 2048
bpy.context.scene.cycles.use_progressive_refine = True

counter = 0

def my_handler(scene):
    global counter
    counter += 1
    if counter < 60:
        return
    print('\ntaking screenshot\n')
    bpy.data.images['Render Result'].save_render('_preview.png')
    print('screenshot taken\n')
    import sys
    sys.exit(0)

bpy.app.handlers.render_stats.append(my_handler)
```
Dec 28 2021
OK, I just found a slightly better way to reproduce it. I don't know a way to get and parse the render status from built-in Python, but a simple `if counter < 38` makes it happen right at the moment the second sample status is printed:
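For reference, a minimal sketch of that variant - the same handler as in the full reproduction script quoted in this report, with only the threshold changed (the value 38 is specific to these scene settings on my machine):

```
import bpy
import sys

# Same scene setup as in the full reproduction script; only the handler
# threshold differs. With 2048 samples and progressive refine, the 38th
# render_stats callback lands right when the second sample status is printed.
counter = 0

def my_handler(scene):
    global counter
    counter += 1
    if counter < 38:
        return
    print('\ntaking screenshot\n')
    bpy.data.images['Render Result'].save_render('_preview.png')
    print('screenshot taken\n')
    sys.exit(0)

bpy.app.handlers.render_stats.append(my_handler)
```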
Dec 22 2021
Oh, sorry about that. I have a relatively powerful workstation (48 threads, 128 GB RAM), so I can't be sure it can be reproduced this way anymore, even though I'm running my distributed agents in Docker containers with 4 threads and 4 GB of RAM and there I can reproduce it consistently - maybe it depends on CPU features or something... For example, at a much lower speed it might not hit the memory corruption if the EXR is saved before the buffers change. You can try to run save_render manually without this sleep, or let me test it again with a handler like bpy.app.handlers.render_stats, because my preview saving is actually triggered during that event.
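If it helps, here is a rough sketch of how one could mimic the constrained 4-thread environment on a big workstation by forcing a fixed render thread count (just an idea, I haven't verified that this alone reproduces the corruption):

```
import bpy

# Force a low, fixed CPU thread count so the render behaves more like the
# 4-thread Docker containers where the corruption reproduces reliably.
scene = bpy.context.scene
scene.render.threads_mode = 'FIXED'
scene.render.threads = 4
```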
Dec 21 2021
Apr 4 2021
Any updates? Do you think it's possible to include this in the 2.93 LTS release?
Feb 16 2021
Hi @Philipp Oeser (lichtwerk), no, this issue is about the Cycles merger (the built-in Cycles functionality: https://github.com/blender/blender/blob/master/intern/cycles/blender/addon/operators.py#L160), I just use a simple script from my addon to run and test it.
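Roughly, the test script just calls that built-in operator directly, something like the sketch below. The property names here are my assumption - please check them against the operators.py linked above for your Blender version; the file paths are placeholders:

```
import bpy

# Merge two partial EXR renders of the same frame (rendered with different
# sample ranges) into one result using the built-in Cycles merge operator.
# Paths are placeholders; property names are assumed from operators.py.
bpy.ops.cycles.merge_images(
    input_filepath1="/tmp/render_agent1.exr",
    input_filepath2="/tmp/render_agent2.exr",
    output_filepath="/tmp/render_merged.exr",
)
```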
Jan 23 2021
Dec 3 2020
@Robert Guetzkow (rjg) I understand the complexity of catching every possible issue - it's not great, for sure. But when the error produces a message that mentions "BlendNet-v0" instead of the module directory "BlendNet-v0.3.3", it looks weird. Sure, it's a limitation of Python, but who needs to be aware of that and figure out what's wrong - Blender or the user?
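To illustrate why the message ends up truncated (plain Python, outside of Blender): the dot in the directory name is interpreted as a package separator, so the import machinery only reports the part before the first dot:

```
import importlib

# A directory named "BlendNet-v0.3.3" is looked up as package "BlendNet-v0"
# with nested submodules, because dots are package separators - so the error
# only mentions the part before the first dot.
try:
    importlib.import_module("BlendNet-v0.3.3")
except ModuleNotFoundError as err:
    print(err)  # No module named 'BlendNet-v0'
```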
Dec 2 2020
@Robert Guetzkow (rjg) I totally agree that addon developers should test their code and make sure it works well - that's why BlendNet runs a daily CI to ensure the addon works properly on 2.80, LTS and the latest Blender. And right now there is no issue in the addon itself - it's just that the GitHub interface posts additional files on the release page:
So, for example, in my particular case I did everything correctly and packed my addon properly, but GitHub also provides a way to download the sources of the release, and those archives attach the tag to the repo name, like "BlendNet-v0.5" (https://github.com/state-of-the-art/BlendNet/issues/123#issuecomment-734960407). That's a separate conversation with the GitHub folks here: https://github.community/t/control-prefix-of-the-release-source-code/145808 , but overall there is not much I can do if a user clicks that link by mistake and downloads the wrong artifact. Or simply puts a dot in the name by accident because they like dots - it doesn't matter.
Dec 1 2020
Unfortunately no - the user installs the addon archive, sees the exception and doesn't understand why it's not found. It's not obvious: the addon is right there (folder and everything), it just contains a dot in the name.
Nov 28 2020
Oct 21 2020
OK, while preparing the example bundle I found what the actual issue was - it seems that switching the current scene's render engine away from the engine in use is what causes the crash.
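A rough sketch of the step that seems to trigger it (the engine names here are just examples, not necessarily the ones from my scene):

```
import bpy

# The crash appears when the scene's render engine is switched while the
# previous engine is still in use; engine names below are only examples.
bpy.context.scene.render.engine = 'CYCLES'
# ... do something that uses the engine here (preview, render, bake) ...
bpy.context.scene.render.engine = 'BLENDER_EEVEE'  # switching away triggers it
```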
Hi @Richard Antalik (ISS), sure, I will prepare a simple test suite to check the behavior.
Oct 20 2020
Sep 24 2020
Sep 22 2020
Sep 11 2020
Just found another thing with baking on 2.90: if I change the initial fuel level (the 0-1.3 bezier), for example so it doesn't start from 1.3, the fire dissolves before reaching the top of the domain and then it probably won't fail. Not sure how it's related, but maybe it's a workaround for now - just make sure the fire particles don't reach the domain limits.
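Something like this rough sketch (object and modifier names are placeholders, and it's untested as a fix - it just expresses the idea of keeping the fuel below its maximum so the flames fade before hitting the domain bounds):

```
import bpy

# Animate the flow object's fuel amount downwards so the fire dissolves
# before the particles reach the domain limits. "FireFlow" and "Fluid" are
# placeholders for whatever names the scene actually uses.
flow = bpy.data.objects["FireFlow"]
flow_settings = flow.modifiers["Fluid"].flow_settings
flow_settings.fuel_amount = 1.0
flow_settings.keyframe_insert(data_path="fuel_amount", frame=1)
flow_settings.fuel_amount = 0.0
flow_settings.keyframe_insert(data_path="fuel_amount", frame=100)
```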
Sep 4 2020
Hello guys, I just hit pretty much the same thing while working on the BlendNet addon (https://github.com/state-of-the-art/BlendNet/issues/57#issuecomment-687360342). It's not critical, but it would definitely be great to have it fixed.
Sep 3 2020
I just tested the latest blender-2.91.0-ba188e721899-linux64 build:
Sep 1 2020
So, I checked the recently released 2.90 - even worse, it crashed on the 2nd "baking" step (5th domain bake, frame 215):
```
$ blender
found bundled python: /home/user/local/blender-2.90.0-linux64/2.90/python
Read blend: /home/user/tmp/mantaflow/mantaflow-render-test.blend
terminate called after throwing an instance of 'Manta::Error'
  what():  can't clean grid cache, some grids are still in use
Error raised in /home/sources/buildbot-worker-linux_centos7/linux_290/blender.git/extern/mantaflow/preprocessed/fluidsolver.cpp:33
zsh: abort (core dumped)  blender
```
Aug 31 2020
Hello guys, could anyone reproduce this? @Raimund Klink (Raimund58)
Aug 29 2020
Aug 28 2020
Simple test blend file:
OK, I just reproduced the issue with a simple blend file and default Blender 2.83.5. I will reproduce it one more time from scratch and attach the steps to reproduce along with the file.
Aug 27 2020
Looks like it happens during animation rendering roughly once per ~150 frames; still trying to figure out the minimal configuration.
Jul 10 2020
So the change will be added only to 2.9? Do you have plans to integrate it into 2.83 LTS?
Jul 5 2020
Yep, it's affecting me too. I think it's somehow related to the prefetch-frames mechanism - when it's working on the strip that is being erased, there's a ~100% chance of a crash.
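In case it helps others hitting this, a possible mitigation sketch (an untested guess on my side, not a fix) is to turn prefetching off for the scene before removing strips:

```
import bpy

# Disable sequencer frame prefetching so no background thread is reading the
# strip while it is being erased - a guess at a mitigation, not a fix.
bpy.context.scene.sequence_editor.use_prefetch = False
```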