- User Since: Oct 16 2008, 6:09 PM
Mon, Oct 23
Aug 28 2017
Updating to the latest driver fixed the delay here. This can surely be optimized further, but it's not a blocker any more and it Works On My Machine (tm) now, so it will be up to you guys to improve this if you want.
May 22 2017
May 13 2017
Apr 13 2017
Jan 9 2017
Sep 10 2016
Sep 9 2016
Sep 4 2016
Aug 30 2016
Jul 22 2016
Put some backwards compatibility in the mix, and this becomes a nice headache ;)
Made some changes to my comment because even my own brain was printing parsing errors after reading it.
Hey Brecht, the problem here is converting from scene units (the depth-buffer values we bring to world space in the shader are in scene units) to the units used internally by the camera. More specifically, we are interested in "what percentage of the camera sensor is covered by the circle of confusion".
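As a hedged sketch of the conversion described above (invented function and parameter names, not the actual shader code): bring the scene-unit distances and the mm camera parameters into one common unit, evaluate the thin-lens circle of confusion, and divide by the sensor width to get the sensor fraction.

```python
# Hypothetical sketch, NOT Blender's actual code: convert a world-space
# depth (scene units) to "fraction of the sensor covered by the circle
# of confusion". Camera parameters are conventionally in millimetres,
# while depths arrive in scene units, hence the explicit conversions.

def coc_sensor_fraction(depth, focus_dist, focal_len_mm, fstop,
                        sensor_width_mm, scene_unit_to_m=1.0):
    """Thin-lens circle of confusion as a fraction of sensor width.

    depth, focus_dist             -- in scene units
    focal_len_mm, sensor_width_mm -- camera-internal units (mm)
    scene_unit_to_m               -- size of one scene unit in metres
    """
    # Bring everything into metres so the thin-lens formula is consistent.
    f = focal_len_mm / 1000.0
    sensor = sensor_width_mm / 1000.0
    s = focus_dist * scene_unit_to_m
    d = depth * scene_unit_to_m

    aperture = f / fstop                              # aperture diameter
    coc = aperture * f * abs(d - s) / (d * (s - f))   # CoC on the sensor
    return coc / sensor                               # fraction of sensor width
```

At the focus distance the fraction is exactly zero, and it grows as the sample moves away from the focal plane.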
Jun 19 2016
Jun 2 2016
Well, you can also go to the Blender installation folder, open a command line in that directory (shift + right click and pick the relevant entry), and then enter blender -R in the console.
May 21 2016
May 20 2016
One thing to keep in mind: derivatives are not well defined if there are conditionals in the shader execution leading up to the evaluation of the derivatives (for instance, see http://hacksoflife.blogspot.de/2011/01/derivatives-ii-conditional-texture.html). The Cycles node system allows arbitrary code to run in order to evaluate UVs, so you might run into issues with that. You may be able to hardcode this into some node (for instance, the UV node itself), which would work much better.
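A toy illustration of the problem (plain Python, not GPU code): hardware dFdx is effectively the difference between neighbouring pixels in a 2x2 quad, so if a conditional changes the value in only one pixel of that quad, the derivative blows up.

```python
# Toy model of screen-space derivatives: dFdx amounts to differencing
# neighbouring pixels in a quad. A branch that fires for only one of the
# two pixels produces a huge, bogus gradient.

def dfdx(quad):
    """Hardware-style derivative: right pixel value minus left pixel value."""
    return quad[1] - quad[0]

def shade(x, with_branch):
    uv = x * 0.1              # smoothly varying UV
    if with_branch and x >= 2:
        uv += 100.0           # conditional offset, e.g. jumping to another UV region
    return uv

# Pixels x=1 and x=2 sit in the same quad.
smooth = [shade(1, False), shade(2, False)]
branchy = [shade(1, True), shade(2, True)]

print(dfdx(smooth))   # small, correct gradient (about 0.1)
print(dfdx(branchy))  # huge bogus gradient caused by the branch
```

On a real GPU this garbage derivative then feeds mipmap selection, producing the texture artifacts the linked article describes.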
May 16 2016
I'm afraid I'll have to agree that this is indeed not a bug - it works as intended so I will be closing this.
May 6 2016
Generally, if the units were right everywhere (and from what I remember they are not...), scene scale would be applied to all lengths in the scene; then the geometry would not change and the focus would stay the same. The crappy thing here is that some camera parameters are always assumed to be in one unit (mm), which probably should not change with scene scale. On the other hand, the whole point of focus is to keep, well, things in focus, so this should work regardless of the units of the scene.
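A quick numeric check of the issue above (a hedged sketch with made-up numbers, not Blender's code path): if every length in the scene is scaled by the same factor but the focal length stays fixed in mm, the defocus blur changes, i.e. depth of field is not invariant under scene scaling.

```python
# Thin-lens circle of confusion on the sensor, in mm. Scene distances
# are in metres; the focal length deliberately stays in camera units (mm)
# to mimic the mixed-units situation described in the comment.

def coc_mm(depth_m, focus_m, focal_mm, fstop):
    f = focal_mm / 1000.0                             # camera units (mm) -> m
    a = f / fstop                                     # aperture diameter
    c = a * f * abs(depth_m - focus_m) / (depth_m * (focus_m - f))
    return c * 1000.0                                 # back to mm

base   = coc_mm(10.0, 5.0, 50.0, 2.8)                 # original scene
scaled = coc_mm(100.0, 50.0, 50.0, 2.8)               # same scene scaled 10x

print(base, scaled)  # they differ: the blur is not scale-invariant
```

Scaling the distances tenfold while keeping the camera's mm parameters fixed shrinks the blur by roughly a factor of ten, which is exactly why a fixed-unit camera fights the scene-scale setting.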
Mar 22 2016
Found the bug report:
Hey @Jon Denning (gfxcoder), check if upgrading your Mesa package resolves this. There was a bug in old Mesa which was exactly this (VBO + selection = crash), and it was fixed two years after we reported it :). I think it was sometime in 2014/2015, which is when your implementation came out, so there's a chance the latest versions are fine.
Mar 21 2016
Mar 13 2016
Committed in 861616bf693b78b070ada6cbc6aa79eb807fdde8, thanks for the patch!
Mar 12 2016
Can you rebase this against master? I'm getting some errors here.
Mar 7 2016
Feb 15 2016
Aaaah, indeed you're right. Thanks for fixing, Brecht.
Feb 11 2016
Feb 5 2016
As always, for best results we would need a render target to store screen-space normals.
It's probably caused by different implementations of the dfdx/dfdy functions. You might be able to get rid of some of the artifacts by introducing some more bias into the AO effect, but you'll have to test it on the affected GPU to make sure it works.
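A minimal sketch of the bias trick mentioned above (hypothetical values and function, not the actual shader): an SSAO-style occlusion test only counts a sample as an occluder if it is closer than the centre depth by more than a bias, which masks small per-GPU differences in derivative-reconstructed data at the cost of slightly weaker AO.

```python
# Hedged illustration of depth bias in an SSAO-style occlusion test.
# Larger bias -> near-equal depths (self-occlusion noise) stop counting.

def occlusion(center_depth, sample_depths, bias=0.01):
    occluders = sum(1 for d in sample_depths if d < center_depth - bias)
    return occluders / len(sample_depths)

samples = [0.500, 0.505, 0.495, 0.300]        # depths sampled around the pixel
print(occlusion(0.5, samples, bias=0.0))      # noisy: counts near-equal depths
print(occlusion(0.5, samples, bias=0.01))     # only the real occluder counts
```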
Jan 18 2016
Jan 13 2016
Note - if you comment this out, you'll probably still be able to sculpt normally, but your normals will be wrong in non-optimal drawing. The change is very subtle; you might miss the error unless you expect it. Try GLSL mode, where it should be apparent.
cdDM_update_normals_from_pbvh makes sure normals are updated even when we don't use optimal drawing from the PBVH. It basically flushes the changes from the PBVH to the DerivedMesh itself. It's mostly useful in non-solid draw mode or when we can't do optimized drawing with the PBVH (check the can_pbvh_draw function).
Jan 11 2016
Jan 9 2016
Jan 8 2016
Jan 7 2016
Jan 6 2016
Jan 3 2016
Jan 2 2016
Actually, I agree with @Brecht Van Lommel (brecht), using BLI_polyfill_calc might be the way to go here.
It just needs to be called after every edit. Maybe we should test how well that works first?
If artists find the old behaviour still useful then I guess it's fine to keep. Added some inline comments.
Ah, I see this is done with inversion...forget my last comment then.
Also, for better performance you can use two-sided stencil; it should reduce the algorithm to only two passes.
A couple of notes:
Jan 1 2016
Dec 28 2015
LGTM, fire away!
OK, one more thing that's missing is read/write support on file save. As far as I can see, writing should be explicit, but make sure to load the scene back in lib_link_sequence_modifiers.
Byte images also need to be converted to linear space (in addition to the premultiply).
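A hedged sketch of the conversion order implied above (the standard IEC 61966-2-1 sRGB decode, not Blender's actual code path): byte pixels are decoded from sRGB to linear first, and only then alpha-premultiplied, since premultiplication is only correct in linear space.

```python
# Decode an 8-bit sRGB RGBA pixel to premultiplied linear floats.
# Standard sRGB electro-optical transfer function (IEC 61966-2-1).

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def byte_pixel_to_premul_linear(r, g, b, a):
    """Convert one 8-bit RGBA pixel to premultiplied linear floats."""
    af = a / 255.0
    # Decode each channel to linear first, THEN premultiply by alpha.
    return tuple(srgb_to_linear(ch / 255.0) * af for ch in (r, g, b)) + (af,)

print(byte_pixel_to_premul_linear(255, 128, 0, 128))
```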
Dec 27 2015
Dec 25 2015
Dec 22 2015
I'll try. No promises, the system there is pretty broken.
Dec 17 2015
I don't think it's about depth only. The cursor being bound to the action button makes it very prone to mishaps (for both position and depth). Combined with the fact that the operation is not undoable, such a mishap is pretty frustrating, which makes this a useful feature to have. Some things we could maybe change to improve the situation:
Dec 13 2015
I've added your driver to our list of erratic drivers so it should be fine in the next release.
Dec 11 2015
You might want to check with @Thomas Dinges (dingto) on lowering the minimum requirements again; iirc there were some other issues, and that's the reason they were raised. 128 MB is -very- low nowadays.
Dec 10 2015
Smooth only helps in sculpt mode, regular display memory footprint does not depend on smooth/flat attributes.