- User Since
- Aug 13 2010, 4:07 PM (405 w, 5 d)
Apr 16 2018
Thanks. It indeed is the reason for both the particular value and clipping.
Apr 12 2018
Jan 17 2018
Jan 6 2018
Nov 13 2017
Thanks for patient replies!
Nov 12 2017
BSDF transparency is another can of worms I wouldn't touch. I was referring to an input value of rgba(1, 0.5, 0.25, 0.25) fed directly into the compositing Combine RGBA node, resulting in a grey (transparent white) output (at least in the output file), which I thought, from your comment, is due to clipping during premultiplication. Which, it seems, shouldn't happen.
Thanks for the link, and I understand the problems with background color and premultiplication, but it seems taken a bit to the extreme.
Whether the render layer is passed into compositing premultiplied or not is a matter of preference, as long as it's consistent, but unless I fail to see something else, explicit values and channel transforms shouldn't be affected by any post-processing "behind the scenes" before the final output.
That is - nodes like "Combine RGBA" (and maybe Color Input) shouldn't do any post-processing on the values passed to them. In contrast, a "Set Alpha" node can be expected to premultiply, as the user just expects it to work within the given color system (which, as you said, is premultiplied), but nodes like "Separate/Combine RGBA" exist precisely so that manual transforms can be made. And, I mean - it's not exactly manual if it's being automatically transformed on the spot :)
Sure, it'd be great to have a fancy "unpremultiplied" checkbox here and there, but given there are options to do something akin to math on channels, there should be no obstacle to outputting anything imaginable even without the checkbox, at least by doing it manually.
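For reference, a minimal sketch of the math in question (plain Python, not Blender's actual code, and only my reading of what happens): if the values entered into Combine RGBA are interpreted as premultiplied, converting them to straight alpha for the output file divides by the small alpha, overshoots 1.0, and clips - yielding exactly the "transparent white" described above.

```python
def premul_to_straight(r, g, b, a, clip=True):
    """Convert premultiplied RGBA to straight (unassociated) alpha.

    Hypothetical illustration: dividing by a small alpha pushes the
    color channels above 1.0, and clipping them produces white with
    the original alpha, i.e. "transparent white".
    """
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    out = [r / a, g / a, b / a]
    if clip:
        out = [min(max(c, 0.0), 1.0) for c in out]
    return (out[0], out[1], out[2], a)

# rgba(1, 0.5, 0.25, 0.25) read as premultiplied:
print(premul_to_straight(1.0, 0.5, 0.25, 0.25))  # (1.0, 1.0, 1.0, 0.25)
```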
Nov 7 2017
Nov 4 2017
Well, as a workaround that sounds better than what I did - re-parenting the objects being worked on at every keyframe :)
The particle cache, at least from how it looks, isn't exactly relevant here, as the main problem is the inability to move the animated objects themselves.
However, if the issue is tedious to fix and, as some claim, the particle system will be rewritten in the foreseeable future, the workaround of temporarily disabling subframes could be acceptable in most cases.
Nov 3 2017
"I think it is expected behaviour in order to manage baking in files containing multiples simulations."
Not really, because it's the Bake button of a specific object, not the scene. That is - it is there to explicitly bake that object, and it can't do anything else.
Nov 2 2017
Parent-child is not being used for fine-tuning. The problem described above is moving the objects individually, not the parent.
Nov 1 2017
Oct 31 2017
It does work indeed. Thanks!
Happens in preview, render F12, render image or animation.
Oct 30 2017
It seems that just using more than one point density node gives the same result.
Oct 27 2017
Thanks, I'm aware of that; the diversity itself is achievable quite easily. The goal was the interpolation of that diversity (like "clumping" in hair), and at the moment that takes considerable amounts of magic, which making the instance modifier compatible with the skin modifier would alleviate in some cases. Sure, it may not be worth the time investment if the PS system will be rebuilt, but it's outside my competence to judge.
Changing the order defeats the purpose, which is to create the impression of particles being bent differently with a controlled degree of cross-neighbor relatedness (think flower petals, mosses - lots of small-scale repetitive structures). Because if the deformation texture scale is large enough for that to work (affecting the particle mesh as a whole, not just shifting separate vertices randomly), it also begins affecting particles as a whole, not individually.
A tested alternative for such cases is creating individual objects instead of particles, with noise*location-based driver deformation modifiers; however, that means mass application of drivers, their use of "self", etc., grinding performance to a halt in anything but unique cases.
A theoretical workaround would be a custom-object hair PS, but the objects seem to replace the hair strands rather than follow their shape.
Oct 26 2017
Oct 4 2017
It is possible that I didn't fully understand what you mean, so I'll describe my thought in detail:
- The N/T-panel can be scaled up considerably (in full-screen region mode or on a bigger monitor, for example).
- Then the region gets reduced back to its normal size, or the file gets opened on a smaller monitor, and the N/T-panel collapses due to lack of space.
- The user presses the N/T shortcut, but nothing happens, as the current N/T-panel width is larger than can be displayed in the current region or even on the monitor.
Oct 3 2017
@Vuk Gardašević (lijenstina) "If the area is made wider (by dragging the border) than the default width of T and N regions in pixels - they will be displayed." At least in 2.79 and before, this seems not to be the case. If the N-panel in the viewport is scaled up and the viewport is reduced in size afterwards, it won't show up regardless of whether there is space for the default size.
It could be called a minor headache, but this sometimes leads to confusion and thinking that the keyboard button isn't working or something like that (I'd been banging it a few times :) ). "Instinctively", one often assumes that the panel will be reduced to the available space, or would even be displayed in any way possible, rather than not show up at all. It is especially problematic when opening a project saved on a big monitor on a smaller computer - it makes one wonder why suddenly all the panels "stopped working".
Sep 22 2017
Sep 11 2017
Thanks, but the reason for using "self" is the object having many thousands of copies (things like petals of opening flowers etc.). Referring each one to itself explicitly is not really feasible. If it will be fixed eventually, there's no use wasting time on it, of course.
Sep 6 2017
Sep 1 2017
Aug 30 2017
Aug 23 2017
"If you press F12, you always render according to renderlayer set-up."
Nope - try it.
OK, it probably depends on priorities. But there is still the problem of keyframes being inserted with -1..1 clipping, which was the main point.
@Ronan Zeegers (ronan) ducluzeau
Yes, I was using it for modelling, and thank you for the insight and the explanation of the reasons, but that is not the same as being logical :)
- "Only Render" display is disabled, thus it is logical to expect the view not to be influenced by what will be rendered.
- Viewport layers are detached from render layers, thus the view should only be affected by layers that are set locally.
Aug 19 2017
Aug 14 2017
Aug 12 2017
Sorry, I'm not a developer :)
What I meant is that it doesn't look like a bug, but like something with the geometry.
Also the general rule for reporting is to post an example .blend file. In this case that would avoid guessing.
The Task Manager graph shows CPU utilization, not speed. It is logical that there's less computation while fetching transparent tiles, hence the lower utilization. Also, the model seems to have a relatively simple shader, hence not a big difference in render speed between geometry and background.
From how it looks, you may just have double edges there that don't overlap perfectly, hence you see some pixels of them.
That's normal. OBJ files often don't contain materials, so changing the viewport shading mode (that "button" is a viewport shading mode; it doesn't put materials on anything) shows them shaded white.
Aug 10 2017
Jul 23 2017
Daily build (e 982ebd) crashes for me.
Jul 22 2017
Jul 19 2017
Jul 18 2017
Sorry - Yes, I meant viewport Local Camera.
Jul 17 2017
Jul 15 2017
May 29 2017
Thanks. Not sure I get the "delete for real" in the Outliner part though, as in Outliner->Blender File mode, Right click->Delete on objects acts the same as a normal delete. That is - nothing happens if an object has shader node users.
May 25 2017
May 24 2017
- "When a texture coordinate shader node refers to an object and the scene is made a full copy of, the material itself gets duplicated, but the nodes still refer to an object in a previous scene, not the duplicate."
Isn't this a bug?
Maybe got it -
- When a texture coordinate shader node refers to an object and the scene is made a full copy of, the material itself gets duplicated, but the nodes still refer to an object in a previous scene, not the duplicate.
May 23 2017
Open the attached file and re-save it. The file has only one object present (as seen in the viewport and the Outliner); the rest were deleted. But a bunch of objects remain saved in the file, as can be seen when trying to append or when browsing data blocks.
Apr 26 2017
Apr 14 2017
Indeed. On Linux, the daily build works fine; 2.78c crashes.
Still, though - the daily build keeps non-linked objects present in the file after saving.
Mar 14 2017
- Change the file path for the File Output node in the node editor.
- Check "use curves" in scene properties.
- Press render again.
- The resulting output files are different.
Mar 13 2017
Just ran into this. So basically, the viewport preview doesn't work for semi-transparent objects, as what you see is not what you get. I understand that the output could theoretically be changed into what is shown in the viewport, but I wasn't able to do that in the compositor. I can get the same image as if the viewport were rendered over some color, but not while keeping the alpha (adding un/premultiply seems to fix the alpha issue, but brightens the colors, which doesn't match the viewport again).
Mar 10 2017
Ideally a "find missing files" function, could have in additional to "Find All" option to replace the packed files too, so that only known not to exist files could be unpacked after. But that's definitely a feature request...
Nov 18 2016
Nov 11 2016
As I said, I understand that; that's why I asked if it is possible to let the user know in the status bar when the auto-save is being executed, or at least when the file writing fails.
Nov 9 2016
Oct 3 2016
Sorry, a proper one
Aug 1 2016
May 25 2016
May 24 2016
May 22 2016
So it is usable and the actual value is retrievable. Not the end of the world, then. Thanks.
Though is there a reason for rounding it this way? I mean, it's a direct input value, not a generated one, thus any weird number gets entered exactly as intended.
Strangely, that works. However, try entering 0.012, 0.017, 0.01111 - it returns 0.01, 0.02, etc.
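For what it's worth, the displayed values above are consistent with plain rounding to two decimal places - a guess at the behavior, illustrated in ordinary Python rather than Blender's actual UI code:

```python
# Hypothetical illustration: rounding each entered value to two decimal
# places reproduces exactly the results described above.
entered = [0.012, 0.017, 0.01111]
displayed = [round(v, 2) for v in entered]
print(displayed)  # [0.01, 0.02, 0.01]
```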
May 21 2016
May 18 2016
Feb 6 2016
Feb 4 2016
That's the point - an object's visibility-state icon is not the only criterion for it actually being visible. Local viewport layers affect that too, hence it's not enough to iterate through the icons to tell whether the user is actually aware of the object. I didn't find a proper way to do that in an hour of googling.
Thanks, that clarifies the reason it's not showing. I had to deal with the same problem regarding which objects Python considers visible. However, that's not exactly the right behavior. Though I agree that it's hard to figure out a quick fix without crippling something else...
Maybe it's a far-fetched idea in terms of coding, but how about considering all currently displayed viewports inclusively, instead of the scene, to define the visibility state? That would fix both this issue and accidental changes to actually-invisible objects via scripts that depend on the visibility list too.
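The idea could be sketched like this, in plain Python with 20-element boolean layer masks (the function and variable names here are hypothetical; in 2.7x the real data would come from bpy, roughly the object's layer mask, its hide flag, and each open 3D view's local layer mask):

```python
def visible_in_any_viewport(obj_layers, obj_hidden, viewport_layer_masks):
    """Return True if the object is not hidden and sits on a layer
    that is enabled in at least one currently displayed viewport.

    obj_layers and each entry of viewport_layer_masks are 20-element
    boolean lists, mimicking Blender 2.7x layer masks.
    """
    if obj_hidden:
        return False
    return any(
        any(o and v for o, v in zip(obj_layers, mask))
        for mask in viewport_layer_masks
    )

# Object on layer 2 only; one viewport shows layer 1, another shows layer 2.
layers = [i == 1 for i in range(20)]
vp1 = [i == 0 for i in range(20)]
vp2 = [i == 1 for i in range(20)]
print(visible_in_any_viewport(layers, False, [vp1]))       # False
print(visible_in_any_viewport(layers, False, [vp1, vp2]))  # True
```

Taking the union over all open viewports, rather than just the scene's layer mask, is what would make a script agree with what the user can actually see.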