- User Since
- Jun 16 2017, 1:41 AM (140 w, 5 d)
Dec 19 2019
Dec 16 2019
Dec 15 2019
Great to see this being tackled! One small note:
Looking good as far as I can tell. A few things:
Dec 14 2019
Nov 24 2019
Nov 21 2019
I believe we're mixing terms here? I've used the concept of light groups in Arnold in the past, as (I think) a precursor of custom AOVs, where the lights' influence gets output to a separate render pass per group. In which case this seems better covered by D4837.
@Dalai Felinto (dfelinto)'s task description is closer to what @Gavin Scott (Zoot) writes:
Applications that support Light Linking generally let you associate either individual lights or groups of lights with one or more objects to illuminate.
Nov 20 2019
Nov 19 2019
Ironically, that makes me very happy :D
- When rendering and compositing, the linear scene-referred space (default Rec.709) must be used. This means that the overlay/mode engines will be drawing in Rec.709, and only as a last step will the view transform be applied.
- Note that this will also impact how Grease Pencil blends colors, and there will be rendering differences. These differences are currently also noticeable during final rendering of GP projects.
- Images that are rendered in the background are currently fixed to Rec.709 or sRGB; other colorspaces are ignored. We should transform the image buffer from its color space to one of those (depends a bit on use).
Nov 17 2019
Without having a deeper look (I haven't checked how the names actually end up when Blender writes them), I'm reading this as meaning Sets.Glass.Combined would be a valid name for layer/sub-layer? (Unless deeper nesting is not allowed? Maybe layer.sublayer.sublayer.channel isn't allowed?)
Nov 15 2019
I don't get it. Why don't you simply NOT use . in your view layer names?
That's obviously my workaround now. I'm simply pointing out that Blender isn't following a common convention, and its own convention is to use periods in *all* datablock naming, including view layers. Which I happen to prefer over anything else for a number of reasons.
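To make the ambiguity concrete, here is a small sketch (the channel name is hypothetical, following the common layer.pass.channel convention for multilayer EXR): once the view layer name itself contains periods, a naive split can no longer tell where the layer name ends.

```python
channel_name = "Sets.Glass.Combined.R"
parts = channel_name.split(".")
# parts == ['Sets', 'Glass', 'Combined', 'R']
# Without extra rules, the layer could be "Sets", "Sets.Glass",
# or "Sets.Glass.Combined"; the period is no longer an unambiguous separator.
```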
Nov 14 2019
Apparently it does work if the glass object is not inside the volume object. This defeats the purpose in many cases though.
Nov 9 2019
Nov 8 2019
Nov 7 2019
I think your expectations are incorrect. Your sRGB image is *always* linearised, the second you indicated that its file encoding is indeed sRGB. Behind the scenes of the compositor, this removes the sRGB OETF and turns the image into a display linear sRGB buffer.
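What happens behind the scenes can be sketched as the standard sRGB decode (the piecewise curve from IEC 61966-2-1); this is an illustration of the principle, not Blender's actual code:

```python
def srgb_to_linear(v):
    """Remove the sRGB transfer function from an encoded 0..1 value."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# An encoded mid-grey of 0.5 becomes roughly 0.214 in display-linear:
mid = srgb_to_linear(0.5)
```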
Many many kudos for the write-up. This would tackle a huge amount of the current issues.
When doing viewport rendering, the color management is done on the CPU, which is typically slow.
But doing it on the GPU would be less accurate, you once told me. How inaccurate are we talking? Is it worth the trade-off?
I think it's more superficial than that. Lookdev Mode to me seems like a temporary World that Blender creates and tells Eevee to use while the Viewport is in that mode. In that case this World uses an environment texture like any other World does, only without the amount of controls we normally would be presented with.
Nov 6 2019
when using a different OCIO config, the Rec.709 colour space is almost never listed as just 'Linear', so Blender picks a different space from the config
This is probably best avoided by looking up roles instead of literal colour space names.
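A sketch of the idea, using plain dicts to stand in for parsed configs (the role and colour space names here are illustrative; in PyOpenColorIO the equivalent would be resolving a role such as scene_linear via the config object):

```python
# Two hypothetical configs naming their scene-linear space differently:
default_config = {"roles": {"scene_linear": "Linear"}}
aces_config = {"roles": {"scene_linear": "ACEScg"}}

def working_space(config):
    # Resolve the role rather than hard-coding the literal name "Linear",
    # so any config that defines the role keeps working.
    return config["roles"]["scene_linear"]
```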
I've attempted summarising the DevTalk post. Of course the original article and its reactions contain much more information - and a bit of sidetracking - but I've tried to condense the gist of it as best I could. With just a hint of personal opinion here and there.
Nov 1 2019
Oct 15 2019
Also linking D2547 here. In my opinion not absolutely essential, but the discussion is lengthy and should not be disregarded.
Sep 27 2019
The core issue is that the HSV code is legacy, and was based on the nonlinear assumptions behind sRGB.
Would this be the cause of the HSV colour picker returning crazy RGB ratios when using an alternative OCIO config?
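That seems plausible: classical HSV maths assumes encoded (nonlinear) sRGB values. A quick sketch with Python's colorsys shows how much the saturation drifts when the same colour is fed in as scene-linear data instead (the sample colour is arbitrary):

```python
import colorsys

def srgb_to_linear(v):
    # Standard sRGB decode (IEC 61966-2-1).
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

encoded = (0.8, 0.4, 0.2)                           # encoded sRGB values
linear = tuple(srgb_to_linear(c) for c in encoded)  # same colour, linearised

sat_encoded = colorsys.rgb_to_hsv(*encoded)[1]  # 0.75
sat_linear = colorsys.rgb_to_hsv(*linear)[1]    # ~0.95
# HSV code written for encoded values returns very different numbers
# once linear data is passed in, hence the "crazy ratios".
```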
Sep 22 2019
Any news on this?
Sep 16 2019
Sep 13 2019
Sep 12 2019
Simply create a new text object, the issue is already clearly visible in the placeholder text, between the letters 'T' and 'e'.
Seems a bit pointless to attach a sample file for issues that are clear as day with default values. But perhaps this screenshot shows what I mean.
Generally, graphics software provides the option to either enable or disable kerning. Which one is the default differs per package, but it should never be ignored.
Note that this simply constitutes taking into account a table of data that is already present in the font file (or not, depending on the quality of the typeface). That said, it's possible that Bfont might not have kerning data present in its file at all, but I have tested with typefaces that do, and the same issue shows.
Sep 11 2019
All non-master commits have been "unpublished" for a while now
@William Reynish (billreynish) Ah, 'non-master'. Didn't spot that before. I haven't said anything!
In the Transform Preferences, Rendering Space should be labeled Working Space, since it also involves the Compositor and Sequencer.
The Image Editor needs a Viewing Transform option clearly visible since going back and forth to the Color Management panel is counterproductive.
Also correct. Just like the Save as Render flag, View as Render should make way for a more granular UI (Color Space is already there, but View Transform is not).
Note that the settings inside the Color Management panel don't even influence the view inside the Image Editor, as they only pertain to rendering and output.
View as Render is not only useless and badly named, it's also incorrect to use it. An image used as input should not have a transform used for viewing output slapped onto it. The checkbox should be replaced by a View Transform enum, also stored per image datablock.
LiftGammaGain should stay, since many videos are composited, edited and graded in Rec.709, and many workflows involve this space. Lift should be fixed though; it behaves as Lift+Power.
Just because your input material is Rec.709 doesn't mean that it is display-referred. On the contrary: the Blender compositor operates linearly, which is where Lift/Gamma/Gain breaks. The algorithm works on shadows, midtones and highlights, which are impossible to separate when using scene-referred data.
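A sketch of why the separation breaks. The formula below is one common display-referred formulation of Lift/Gamma/Gain, used here as an assumption for illustration (exact formulas differ per package). Lift is anchored so it raises the value at 0 and leaves 1 untouched, which only makes sense when the data lives in 0..1; on scene-referred values above 1, the same control darkens instead.

```python
def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    # Lift pivots the curve around x = 1; gamma bends the mid range;
    # gain scales the whole signal. All three assume x is in 0..1.
    return gain * (x + lift * (1.0 - x)) ** (1.0 / gamma)

lift_gamma_gain(0.0, lift=0.1)   # 0.1  -> shadows raised, as intended
lift_gamma_gain(12.0, lift=0.1)  # 10.9 -> an HDR highlight gets *darkened*
```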
Also, log-based controls should be included, like a Contrast with a Pivot option (add Pivot to the Contrast node?)
Shadow/Midtone/Highlight control wheels should be included for log-based secondaries.
Again, compositing is a linear affair. The log controls you mention (including a shadows/midtones/highlights separation) are geared towards a grading workflow, which should not happen on linear data unless the tools use a log shaper while both input and output are stored linearly, as in an ACES workflow. But I digress.
It's beyond the scope of this task, but @Troy Sobotka (sobotka) did suggest a dedicated Grading Editor with correct controls for a log-based workflow. This editor would for example allow the colourist to work on NodeTrees linked to VSE strips, which is similar to e.g. DaVinci Resolve.
That Pivot option, however, is something that could very well be added to colour manipulation nodes already present in Blender, like Brightness/Contrast, but also Slope/Offset/Power.
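For reference, pivoted contrast is a one-liner; the 0.18 default (scene middle grey) is my own illustrative choice, not an existing Blender parameter:

```python
def pivoted_contrast(x, contrast=1.0, pivot=0.18):
    # Values at the pivot are unchanged; values above it are pushed up,
    # values below it are pushed down. Works on scene-linear data because
    # the anchor point is explicit instead of an implied 0..1 range.
    return (x / pivot) ** contrast * pivot

pivoted_contrast(0.18, contrast=2.0)  # 0.18 (the pivot is untouched)
pivoted_contrast(0.36, contrast=2.0)  # 0.72 (brighter values pushed up)
```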
A LUT node to apply different luts not only for artistic but also technical transforms, tonemappers, etc.
LUT nodes, enums for every node, .. all ways to prep data more adequately for the nodes they're about to go through.
Is there a reason for ea94cade2991 to be an unpublished commit?
Sep 5 2019
Found a few more related tasks that might be worth listing here.
Some of them are quite old, but the fact that they are still marked as open, at least warrants a check for whether they should be. Especially the ones containing lengthy discussions that I'm not sure how valid they are today.
Sep 4 2019
Sep 3 2019
Aug 28 2019
Previous status was probably the best way to never make it into a developer's field of attention.
Updating projects and priority so the devs can decide where this belongs.
Aug 21 2019
Great to see that the wiki document made it into a task. I would like to suggest this task to be made a parent task of all other open tasks regarding the subject. It's great that this task is finally here - it would be even better if we make it an overview of everything else - stuff like T46860, T34684, T68242, T67832, etc. Then at least we have a bird's eye view on the scope of what needs to be done.
Aug 19 2019
Aug 15 2019
Seemed to be related to X Mirror being enabled.
Jul 20 2019
Apr 18 2019
Apr 15 2019
Apr 10 2019
After performing a test just now at the studio, it seems that the Random distribution method is now render farm safe, which I hear it hasn't been before. Is this correct?
If that behaviour remains consistent, I frankly don't need the Jittered method anymore - since it was only used here to circumvent that problem.
Apr 2 2019
On Windows I'm still having the problem. Here's a blend file that shows what I mean. In my experience the problem manifests itself in different patterns, but the repetition is noticeable nonetheless. Additionally, the system generates multiple particles in the same spot. Changing the jittering does nothing but change the angle of the lines in the pattern, and the seed value doesn't resolve it either.
Apr 1 2019
You can make it totally random by choosing the Random distribution method, or semi-random by changing the jittering.
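For context, the difference between the two methods can be sketched in 1D; this is a simplification of what the particle system does, and the function names are mine:

```python
import random

def pure_random(n, seed=0):
    # Fully independent samples: no repeating pattern,
    # but clumps and gaps are expected.
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

def jittered(n, seed=0):
    # Stratified sampling: one sample per 1/n cell, offset randomly
    # inside its cell. Even coverage, but the underlying cell grid
    # can show up as a visible repeating pattern.
    rng = random.Random(seed)
    return [(i + rng.random()) / n for i in range(n)]
```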
Mar 31 2019
Mar 25 2019
Feb 28 2019
I didn't specifically mean moving the cursor. That as a concept doesn't sit with me very well in all cases. Keystrokes are fine being tied to specific functionality in specific editors, and therefore require Blender to keep the focus in the intended area. But I don't like that to be tied to where the mouse *is*. While typing in the text editor or Python console, for example, this can have me pressing 237689 keystrokes in the viewport before I realise it.
Any news on this?
Feb 27 2019
Just discussed this with @Julien Kaspar (JulienKaspar). Seems like it's not a bug per se, only confusing behaviour emphasized by another bug.
The fact that add-ons with panels set to '.objectmode' disappear when changing modes is of course correct behaviour - the add-ons themselves are at fault for declaring they are related to Object Mode, while that's not necessarily true.
The confusing behaviour is caused by the fact that the top bar *looks* application-wide, but should have contents directly linked to the UI area and the tool that are currently active.
The actual bug is that the top bar doesn't update when it should, namely when focus shifts from one area to the other. According to @Julien Kaspar (JulienKaspar), this has been discussed before, so I assume there's already a task for that?
Feb 25 2019
Feb 15 2019
Feb 14 2019
Feb 11 2019
Feb 7 2019
Feb 1 2019
I'd like to mention one that's been confusing me forever.
Randomize Transform has meters as the unit for location and scale. For location this makes sense, but for scale I feel it doesn't.
Also, if I understand it correctly, setting this number to 1.15m would set the boundaries for the randomisation to 85% - 115%, which is not very clear in this UI.
I would like to propose two fixes for this:
- Lose the meters unit for the random scale part of the tool, and display the input as the factor it actually is.
- Work with actual, defined boundaries in the UI - minimum and maximum. I feel this offers more control and predictability than using one number that the user has to imagine going in both directions of the axis.
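The two input styles can be sketched as follows (the mapping of the current single value to a range is my reading of the behaviour described above, not verified against the source):

```python
import random

def scale_from_single_value(value, seed=None):
    # Current-style input: one number, implicitly meaning (2 - N)..N,
    # e.g. 1.15 -> 0.85..1.15. The user has to do this mapping mentally.
    return random.Random(seed).uniform(2.0 - value, value)

def scale_from_bounds(minimum, maximum, seed=None):
    # Proposed input: explicit, unitless minimum and maximum factors.
    return random.Random(seed).uniform(minimum, maximum)
```

With the same seed both produce the same result; the second version just makes the boundaries visible in the UI.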
Dec 21 2018
Display the UVs very clearly at all times for all 3D Viewport Modes when in the UV Editor
I totally agree with @Julien Kaspar (JulienKaspar) here. I've always found it super annoying that I had to go into edit mode just to see the UVs on a mesh. In object mode, it could even just display them, and only make them editable in edit mode - this would be very intuitive across Blender, and similar to the way Maya works as well, for example.
Dec 18 2018
I am really dying to know the new Tools and interface changes that must be planned to make the UV Editor not just better than it currently is
Whether or not that's related to splitting editors. We're just wondering.
@William Reynish (billreynish) To be honest, that's not really answering @Julien Kaspar (JulienKaspar)'s question. And I must admit I've asked the same before. What are the plans for the editors after they've been separated? Any new tools planned, UI revamp, ..?
Aaaaand my bad, little oversight that was completely my own fault.
In my case, both the background colour and the shading are pure black.
Attached is a screenshot of rendered mode with film transparency and the problem.
This is at the studio in Linux build 08e6948da50, so it does have the same problem as my 9149e894210 build on Windows at home.
It can't really be a driver thing afaik, since this is not using GPU rendering.
Fair enough, makes sense.
Is it normal though for undo steps to take up more memory when the complexity of said objects increases? Even if the datablocks are linked from another library?
A cube, for example, adds kilobytes. A forest of linked trees adds tens of megabytes each time.
@Brecht Van Lommel (brecht) What if I wanted to paint on the World texture? Good idea about separating mesh-related data and independent images. But I still don't see why UV editing and image painting have to happen in the same editor, especially if they're going to limit data access to what is selected in the viewport.
I would suggest this:
- UV Editor: data depends on the selected mesh, allows you to work in a dedicated editor
- Image Editor: independently selected image datablock (you want to be able to paint on anything really), optionally show the selected mesh's UVs. Image Viewer doesn't really need to be something separate. I mean, if you can look at an image, you might as well be able to paint on it without having to open up a different editor.
Dec 11 2018
@Brecht Van Lommel (brecht) I would expect the same from the RGB values, no? Only involve the view transform in displaying the colours, not the numbers? I just assumed all of the values were always linear, and wherever I'd see an actual colour displayed, that would have been transformed. Or am I seeing things wrong?