- User Since
- Aug 13 2010, 4:07 PM (478 w, 4 d)
Wed, Oct 9
This still occurs in the official 2.8 release - rendering a test with FFmpeg produces a color-shifted image compared to the source PNG.
Aug 27 2019
I'll try to replicate it, but afaik the 'Delete' command was added in 'Blender File' view for cases like this - to force-remove the data if something goes wrong and it is considered used by something.
Thank you. That indeed worked.
Thanks, I'll probably find a work-around, but just to give an example - in my case there are 160 objects with linked data. Although it's probably possible to do some check on whether the data is compatible, I actually wasn't expecting that due to potential nuances. But what also wasn't expected is skipping the modifier altogether, since it could skip just the hidden vertex binding list and still remove the majority of the clicking work.
Aug 26 2019
Aug 20 2019
I mean - if the explanation behind it is that Ctrl+Tab should always call a mode-choice menu, then there is an inconsistency, because in Object mode Ctrl+Tab leads directly to Pose mode instead. If not - why is it there for bone Edit mode? It's not like switching to Pose from Object mode is somehow less needed than from Edit mode.
Sorry for the awkward explanation.
Wait... I'm not sure I explained myself right - the complaint is that a new, unnecessary menu was added that does nothing but slow the user down, and that it is inconsistent with the other mode.
Meaning: from Pose mode, pressing the Edit hotkey (Tab) goes directly to that mode, but from Edit mode, the Pose hotkey (Ctrl+Tab) does something else - it calls a menu.
Aug 19 2019
Aug 15 2019
Also - this is even less of a bug, but the Graph Editor frame strip eats away from its viewport when making the Graph Editor region full screen and back. Possibly it should be excluded from the viewport height together with the scrollbar, although the strip's bg_color could just be more transparent by default, as what's behind it is actually still clickable, despite being hardly visible.
Regarding being a bug or not - you can't select a node after running the function that is supposed to make it available for selection (visible), so...
Sorry - I'm not using the 3D machine for internet, so it's not really easier, but I'll try if time allows next time.
Anyway - if it's already fixed, it should probably be closed. Thanks!
Apr 16 2019
Apr 15 2019
Feb 26 2019
Feb 2 2019
@Troy Sobotka (sobotka) Could you elaborate a little, or maybe refer to some resource, on why it's not feasible to apply the view transform to the current scene input strip/node separately and then pass it into the VSE/compositor, which would have its own view transform ("None", for example), to be able to combine with existing non-HDR material?
Jan 31 2019
With the latest build I can't reproduce it either.
Assuming there is no reason to believe the release will behave any differently, I should've checked that first before posting, sorry.
Ok, not a bug then.
As for design - then how about having separate View Transform profiles for the 3D render result and for the compositing result, so that a scene can be rendered giving output with the Filmic transform and combined with material that already has that transform (especially LDR video footage, which doesn't exist without "Filmic")?
Or - a bit less convenient, but probably more flexible and simpler - having an option to apply the inverse transform to input strips/nodes, so that when the View Transform is applied, it hits only the result of the 3D render.
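The second idea can be sanity-checked with a toy model. Below, a plain 2.2 gamma stands in for the view transform (the real Filmic transform is more complex, so this only illustrates the cancellation and is not Blender code):

```python
def view_transform(x):
    """Stand-in display transform: simple 2.2 gamma, not Filmic."""
    return x ** (1 / 2.2)

def inverse_view_transform(x):
    """Exact inverse of the stand-in transform."""
    return x ** 2.2

# A footage pixel value that already looks correct on screen:
footage = 0.5

# Pre-applying the inverse to the input strip means the final
# view transform cancels out, leaving the footage unchanged,
# while linear 3D-render data would still get the full transform:
restored = view_transform(inverse_view_transform(footage))
print(round(restored, 9))  # 0.5
```

The cancellation is exact up to floating-point error, which is why the option would let external footage pass through untouched.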
Huh - I thought "incomplete" means it's not yet worth trying to reproduce, and "Open" means it's being investigated.
No, those are different machines, however rendering in the file is set to CPU (gtx580 isn't supported as CUDA device in 2.8).
Btw - what's incomplete about the report?
Jan 30 2019
Sorry, I should've worded that clearer.
Color type differentiation sounds intriguing :)
- It doesn't reach 255 in the sRGB display transform.
- There is no slider to go beyond 1.0, if setting an HDR color is the point. Which it probably is not, given 1 is "absolute" reflectance, which is how the majority of material color settings are used.
- A normal monitor isn't capable of displaying HDR. Pushing the value down limits the already very limited resource of color gamut.
- A normal monitor is not able to display sensible HDR even if the color is pushed down like this. Meaning - it gives 1, at most 2, additional stops with a highly trained eye. One can't tell what color it actually is, or at what luminance, beyond that.
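On the first point, the arithmetic can be sketched with the textbook sRGB transfer function (this is the standard formula, not Blender's code): only a scene value of 1.0 encodes to display 255, so pushing the value down necessarily keeps the 8-bit result below full white.

```python
def srgb_encode(linear):
    """Standard sRGB OETF: scene-linear 0..1 -> display-referred 0..1."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

print(round(srgb_encode(1.0) * 255))  # 255 - only 1.0 reaches full white
print(round(srgb_encode(0.8) * 255))  # 231 - already visibly below white
```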
Ok, so this:
-color_primaries 1 -color_trc 1 -colorspace 1
does the conversion for other programs to read colors identical to the PNG, and makes Blender recognize the converted colors properly.
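For completeness, a full FFmpeg invocation using those flags might look like the sketch below; the file names, frame rate, codec, and pixel format are placeholders, and `1` is FFmpeg's enum value for bt709 in all three options:

```shell
# Tag the encoded stream as BT.709 so players and Blender
# interpret the colors the same way as the source PNG frames.
ffmpeg -framerate 24 -i frame_%04d.png \
  -color_primaries 1 -color_trc 1 -colorspace 1 \
  -c:v libx264 -pix_fmt yuv420p output.mp4
```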
Found it! Adding this option to FFmpeg seems to produce a result with no color shifting:
It does happen in 2.79 on Windows/Linux too; however, I was wrong - converting in HandBrake (FFmpeg) directly from PNG produces the color shift too. Converting from a MOV+h264 that was created from the same PNG in After Effects, however, does not. Which suggests that there is a problem and that it is solvable, but it's unlikely to be a Blender bug.
Jan 29 2019
Tried rendering 3D scene, not VSE.
On an unrelated side note, since Filmic was mentioned - applying the Filmic color space transform to scene rendering by default may be a good strategy for many cases, but it will probably be a source of headaches and complaints for anything involving video editing: any imported file already has color transforms applied to it, so the default would damage external footage, and the transform would accumulate on re-rendering even footage made in Blender, unless one pays attention and turns it off.
It is already set to sRGB/Default in the attached file. I was talking about the color space used by the FFmpeg encoder inside Blender (FFmpeg in converters like HandBrake seems not to have this issue). The difference is less perceivable than that between Filmic and sRGB, but it is quite strong in terms of color correction.
Jan 28 2019
Jan 23 2019
If I may add -
Alt+scroll doesn't work for scrolling time (2.8 beta), which was its global function in 2.79. I wouldn't say anything, but it is the main navigation method for animation; not having it makes things harder and slower.
Dec 12 2018
Dec 10 2018
Tested - looks fixed. But now another problem (maybe unrelated) showed up (it isn't there in 2.79):
- Subsurfaced edges maintain vertex radius only in the old vertices. New (interpolated) vertices have their vertex radius set to 0, which can be visualized by applying a Skin modifier to a subsurfaced edge.
Apr 16 2018
Thanks. It indeed is the reason for both the particular value and clipping.
Apr 12 2018
Jan 17 2018
Jan 6 2018
Nov 13 2017
Thanks for patient replies!
Nov 12 2017
BSDF transparency is another can of worms I wouldn't touch. I was referring to an input value of rgba(1, 0.5, 0.25, 0.25) fed directly into the compositing Combine RGBA node, resulting in a grey (transparent white) output (at least in the output file), which I thought, from your comment, was due to clipping from premultiplication. It seems that shouldn't happen.
Thanks for the link - I understand the problems with background color/premultiplication, but it seems taken a bit to the extreme.
Whether the render layer is passed into compositing premultiplied or not is a matter of preference, as long as it's consistent, but unless I'm failing to see something else, explicit values and channel transforms shouldn't be affected by any post-processing "behind the scenes" before final output.
That is - nodes like "Combine RGBA" (maybe Color Input) shouldn't do any post-processing on values passed to them. In contrast, a "Set Alpha" node can be expected to do premultiplication, as the user just expects it to work within the given color system (which, as you said, is premultiplied), but nodes like "Separate/Combine RGBA" imply, and exist so that, manual transforms can be made. And, I mean - it's not exactly manual if it's being automatically transformed on the spot :)
Sure, it'd be great to have a fancy "unpremultiplied" checkbox here and there, but given there are options to do something akin to math on channels, there should be no obstacle to outputting anything imaginable without the checkbox, by doing it manually at least.
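To make the straight-vs-premultiplied distinction concrete, here is the math in plain Python (a sketch of the general concept, not Blender's compositor code), using the rgba(1, 0.5, 0.25, 0.25) value from above:

```python
def premultiply(r, g, b, a):
    """Straight (unassociated) -> premultiplied (associated) alpha."""
    return (r * a, g * a, b * a, a)

def unpremultiply(r, g, b, a):
    """Premultiplied -> straight alpha; undefined where a == 0."""
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)

straight = (1.0, 0.5, 0.25, 0.25)
associated = premultiply(*straight)
print(associated)                  # (0.25, 0.125, 0.0625, 0.25)
print(unpremultiply(*associated))  # (1.0, 0.5, 0.25, 0.25)
```

For these values the round trip is exact, so any value change in the output has to come from an extra processing step, not from premultiplication itself.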
Nov 7 2017
Nov 4 2017
Well, as a work-around that sounds better than what I did - re-parenting the objects being worked on at every keyframe :)
The particle cache, at least from how it looks, isn't exactly relevant here, as the main problem is the inability to move the animated objects themselves.
However, if the issue is tedious to fix and, as some claim, the particle system will be rewritten in the foreseeable future, the work-around of temporarily disabling subframes could be acceptable in most cases.
Nov 3 2017
"I think it is expected behaviour in order to manage baking in files containing multiples simulations."
Not really, because it's the Bake button of a specific object, not the scene. That is - it is there to explicitly bake a specific object, and it can't do anything else.
Nov 2 2017
Parent-child is not being used for fine-tuning. The problem described above is moving objects individually, not the parent.
Nov 1 2017
Oct 31 2017
It does work indeed. Thanks!
Happens in preview, render F12, render image or animation.
Oct 30 2017
It seems that just using more than one point density node gives the same result.
Oct 27 2017
Thanks, I'm aware of that - the diversity itself is achievable quite easily. The goal was interpolation of that diversity (like "clumping" in hair), and at the moment that takes considerable amounts of magic, which the instance modifier being compatible with the skin modifier would alleviate in some cases. Sure, it may not be worth the time investment if the PS system will be rebuilt, but it's out of my competence to judge.
Changing the order eliminates the purpose, which is to create the impression of particles being bent differently, with a controlled degree of cross-neighbor relatedness (think flower petals, mosses - lots of small-scale repetitive structures). Because if the deformation texture scale is large enough for that to work (affecting the particle mesh as a whole, not just shifting separate vertices randomly), it also begins affecting particles as a whole, not individually.
A tested alternative for such cases is creating individual objects instead of particles, with noise*location-based driver deformation modifiers. However, that has to deal with mass application of drivers, them using "self", etc., grinding the performance of anything but unique cases to a halt.
A theoretical work-around would be having a custom-object hair PS, but objects seem to replace the hair, not follow its shape.
Oct 26 2017
Oct 4 2017
It is possible that I didn't fully understand what you meant, so I'll describe my thought in detail:
- The N/T-panel can be scaled up considerably (in full-screen region mode or on a bigger monitor, for example).
- Then the region gets reduced back to its normal size, or the file gets opened on a smaller monitor, and the N/T-panel collapses due to lack of space.
- The user presses the N/T shortcut, but nothing happens, as the current N/T-panel width is larger than can be displayed in the current region, or even on the monitor.
Oct 3 2017
@Vuk Gardašević (lijenstina) "If the area is made wider (by dragging the border) than the default width of T and N regions in pixels - they will be displayed." At least in 2.79 and before, this seems not to be the case. If the N-panel in the viewport is scaled up and the viewport is reduced in size afterwards, it won't show up regardless of whether there is space for the default size.
It could be called a minor headache, but this sometimes leads to confusion and thinking that the keyboard button isn't working or something like that (I had been banging it a few times :) ). "Instinctively" one often assumes that the panel will get reduced in size to the available space, or will at least be displayed in some form, rather than not show up at all. It's especially problematic when opening a project saved on a big monitor on a smaller computer - makes one wonder why suddenly all the panels "stopped working".
Sep 22 2017
Sep 11 2017
Thanks, but the reason for using "self" is that the object has many thousands of copies (things like petals of opening flowers, etc.). Referring each one to itself individually is not really possible. If it will be fixed eventually, there's no use wasting time, of course.
Sep 6 2017
Sep 1 2017
Aug 30 2017
Aug 23 2017
"If you press F12, you always render according to renderlayer set-up."
Nope - try it.
Ok, it probably depends on priorities. But there is still the problem of keyframes being inserted with -1..1 clipping, which was the main point.
@Ronan Zeegers (ronan) ducluzeau
Yes, I was using it for modelling, and thank you for the insight and explanation of the reasons, but that is not the same as being logical :)
- "Only Render" display is disabled, thus it is logical to expect the view not to be influenced by what will be rendered.
- Viewport layers are detached from render layers, thus the view should only be affected by layers that are set locally.
Aug 19 2017
Aug 14 2017
Aug 12 2017
Sorry, I'm not a developer :)
What I meant is it doesn't look like a bug, but something with geometry.
Also the general rule for reporting is to post an example .blend file. In this case that would avoid guessing.
The Task Manager graph shows CPU utilization, not speed. It is logical that there's less computation while fetching transparent tiles, hence the lower utilization. Also, the model seems to have a relatively simple shader, hence not a big difference in render speed between geometry and background.
From how it looks - you may just have double edges there that don't overlap perfectly, hence you see some pixels of them.
That's normal. OBJ files often don't contain materials, so changing the viewport shading mode (that "button" is a viewport shading mode; it doesn't put materials on anything) shows them shaded white.
Aug 10 2017
Jul 23 2017
Daily build (e 982ebd) crashes for me.