- User Since
- Mar 4 2009, 3:28 AM
Apr 17 2018
I'd recommend having a look at ZBrush's tool palette. There are potentially dozens (or even hundreds) of useful brushes that could exist in an artist's toolset. I can't imagine them collapsing nicely into anything resembling a quick toolbar. In fact, one of my biggest gripes with the current interface is how difficult it becomes to load and then navigate a full arsenal of alphas and custom brushes.
Mar 27 2018
Can you provide an example of where that's an issue? In a quick test it seems to work as expected: the average roughness of the two materials, mixed by weight, is returned to the baked texture.
Made requested changes
Indeed, this won't work nicely for view-dependent effects like Fresnel or Facing values, but those aren't really effects that should be baked in the first place IMO, and they can't be used for PBR baking anyway, which is where I envision roughness baking having the most utility. What is the behavior today if you bake diffuse/glossy when mixing this way? In any case, I think the benefits of being able to bake all roughness values outweigh the potential confusion over the corner cases.
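For anyone following along, the weight-averaged bake behavior being discussed can be sketched like this (a minimal illustration only, not Cycles code; the function and parameter names are made up):

```python
def baked_roughness(rough_a: float, rough_b: float, fac: float) -> float:
    """Blend two materials' roughness by the mix factor, the same way a
    Mix Shader weights its two closures, so the baked texture stores the
    weighted average."""
    return (1.0 - fac) * rough_a + fac * rough_b

# Roughness 0.2 and 0.8 mixed 50/50 bakes out to 0.5.
print(baked_roughness(0.2, 0.8, 0.5))
```

This is exactly the "mixed by weight" result from my quick test; view-dependent factors obviously can't be folded into a single scalar like this, which is the corner case mentioned above.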
Jan 15 2018
Nov 1 2017
Aug 21 2017
I think there is a big opportunity here with this bevel, and more importantly a design change allowing for at least the option of real pointiness and "usable" AO, which would take Blender's painting and baking workflow to the next level. As far as I'm aware, there currently isn't an open-source, or even free, painting solution with a suite of powerful procedurals/render-time effects like that. While it may be slower for final renders, it opens up a lot of doors for anyone authoring content for baking in a PBR workflow. With the addition of a few paintable masks and a local material repository, alongside the changes proposed in this discussion, Blender suddenly begins approaching feature parity with Quixel or Substance Painter/Designer. Worst case, we can retain the old behavior for pointiness, for example, with a user option for a higher quality sampled version.
Jun 30 2017
Apr 15 2016
Having the preview setting work as a percentage might be a good idea, since it's probably common to want more dicing for some objects than others while still simplifying the whole scene. E.g. 50% would dice every object in the scene at 2x the pixel size of its object setting. It would probably require some extra int casting or floor/ceil handling, but I think it would be much more useful than a flat simplification number for everything in the scene.
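The percentage idea could be as simple as the following (a rough sketch; the function and setting names are hypothetical, not actual Cycles properties):

```python
import math

def effective_dicing_rate(object_rate: float, scene_percent: float) -> int:
    """Scale an object's dicing rate (pixels per micropolygon) by a
    scene-wide simplify percentage: 100% keeps each object's own
    setting, 50% dices everything at 2x the pixel size, and so on,
    preserving the relative differences between objects."""
    scaled = object_rate * (100.0 / scene_percent)
    # Dicing rates end up as whole pixel counts, hence the ceil.
    return math.ceil(scaled)

print(effective_dicing_rate(1.0, 50.0))  # an object set to 1px dices at 2px
```

The ceil is the "int casting or floor/ceil stuff" I mentioned; rounding up errs on the side of coarser dicing, which is what you want from a simplify control.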
Apr 6 2016
Feb 15 2016
I'm sure it's already been covered above, but this would be really nice for keeping contiguous shading on characters that are split into multiple objects (arms, legs, torso, head, etc.) for the sake of external texturing tools that don't support Blender's UV sets.
Feb 11 2016
Note that this seems to be constrained only to using Object Vertices. Particle System mode appears to render as expected.
Jan 26 2016
Another note, even after applying multires, if you attempt to sculpt you get the same result where the smooth brush eats the mesh, so it would appear that this is related to the mesh data of this object itself rather than just the multires.
On a related note, if I apply the multires modifier it looks like this.
Jan 16 2016
This is mostly a dead experiment I think. There are substantially better adaptive methods out there at this point (that Lukas has actively been researching) along with things like Gradient Domain that offer solutions that are much more production-safe. I wouldn't hold my breath for this particular adaptive branch, but if I'm wrong Lukas can correct me.
Oct 18 2015
Sep 25 2015
I'd be curious to hear your thoughts on .tx support as well, Brecht (and Thomas and Sergey). The render time savings in Arnold and Renderman are pretty enormous in my own tests.
Adding to the TODO tackling, I think time should be spent finishing some of the more half-baked features present before jumping into new things. Hair, hair shaders, volumes, and deformation blur still add much more to render time than should be acceptable, and proper displacements+autobump are a big part of a production pipeline that have been sitting on the table for years. Support for industry standards like UDIM and Ptex should also be high priority, even if Blender itself can't properly create them yet, especially considering that Cycles is now stretching its legs in other 3d packages. UDIM especially would likely take almost no time to implement, as everything we would need is already in the code.
Sep 16 2015
Sep 8 2015
Sep 7 2015
Aug 29 2015
Can anyone confirm that bskin.patch from the previous post strips all of the internal python UI elements from Blender? Building without the patch gives no issues here.
Aug 25 2015
Can't test the newest revision because the diff won't apply correctly, but the revision from the 19th still produces the same crashes.
This is my debug output from the moment I add an object up to the crash.
Not worth attaching a file, it would just be an empty blend. The crash happens as soon as I try to add the new skin modifier to anything.
Aug 21 2015
Blender crashes on addition of modifier/verts here. Win7 x64.
Aug 18 2015
I'm sure you've seen these already, but they might be interesting for other followers to take a look at. Both offer impressive speed ups and appear to be fairly trivial to add to existing path tracers. Cheers!
Aug 17 2015
Aug 16 2015
Looks very promising! Will patch and test tomorrow.
Aug 8 2015
Just a heads up because it might be related: baking seems to be checking for the GPU even when it's not the chosen render device. Not usually an issue, but if your build doesn't support your GPU's shader model, this crashes Blender even with CPU selected.
May 18 2015
Mar 21 2015
Pinning works exactly as expected. Speed is really nice too. From an artistic point of view this seems good to go in my opinion; it's already changed how I'll approach skinning in Blender in the future.
Mar 19 2015
Smoothing seems to be broken now. Sections still aren't smoothed even after hundreds of iterations. It could be a problem with either the deltas or the new smoothing algorithm, but the new smoothing looks like the more likely culprit.
Mar 16 2015
Here's the file I've been testing on, if you're interested. It has lots of surface details that need to be preserved. The modifier as it exists actually handles this mesh fairly well. It still showcases some of the volume shrinkage though. No non-manifold points to test with on this one unfortunately, but I'll gladly whip up some other test cases if you need them.
On further testing it would appear that the issue is somewhere in the reprojection of points, as the smooth modifier by itself doesn't have the same issue with maintaining volume.
MUCH better since the switch to standard smoothing, but still some glaring issues that I'm having trouble pinpointing. Pinning of boundary verts is a big one. I don't think this needs to be as complicated as setting up vertex groups, just a checkbox should work. The two other main issues involve the smoothing and reprojection. Smoothing seems to be happening universally, which causes mesh shrinkage. This is especially evident on the ears/fingers/toes/etc. on human models. The original implementation avoids this, I believe, by weighting the smoothing based on distance between verts compared to their rest positions. I haven't looked at the code closely to see if anything like this is happening yet, but currently the modifier seems to have trouble maintaining volume. The final issue is related to the smoothing algorithm itself. The mesh explodes at higher values. I'd suggest putting a "soft" clamp at 0 and 1 for the Lambda, while still allowing manual values greater than that (although I don't know why you'd ever want negative smoothing).
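The rest-distance weighting I'm describing might look roughly like this (a sketch only, written from my understanding of the original implementation; I have not checked it against this branch's actual code):

```python
import math

def weighted_smooth(verts, edges, rest_lengths, lam=0.5):
    """One iteration of neighbor-average smoothing where each neighbor's
    pull is damped as its edge stretches away from the rest length,
    which is (I believe) what keeps volume from collapsing."""
    neighbors = {i: [] for i in range(len(verts))}
    for a, b in edges:
        neighbors[a].append(b)
        neighbors[b].append(a)
    out = []
    for i, v in enumerate(verts):
        if not neighbors[i]:
            out.append(v)
            continue
        offset = [0.0, 0.0, 0.0]
        total_w = 0.0
        for j in neighbors[i]:
            d = [verts[j][k] - v[k] for k in range(3)]
            length = math.sqrt(sum(c * c for c in d))
            rest = rest_lengths[(min(i, j), max(i, j))]
            # Stretched edges contribute less, so thin features like
            # fingers and ears resist being averaged away.
            w = 1.0 / (1.0 + abs(length - rest))
            total_w += w
            for k in range(3):
                offset[k] += w * d[k]
        out.append(tuple(v[k] + lam * offset[k] / total_w for k in range(3)))
    return out
```

On a chain of three collinear verts already at rest length, the middle vertex stays put while the endpoints relax inward, which is the behavior I'd expect from a volume-preserving smooth.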
I know this isn't an area for design comments, but I thought I'd bring up the fact that "Laplacian smoothing" as it's mentioned in the original R&H paper and video is almost completely unrelated to what we have known as the Laplacian smooth modifier in Blender. The original paper calls for a smoothing function much closer to Blender's standard smoothing modifier, and indeed after getting this branch built the modifier as it stands doesn't work at all as expected in its smoothing functions. Additionally (and I may be mistaken here) I believe that all of the info needed for the delta mush function is accessible without the need to bind to UV coordinates.
Feb 11 2015
Jan 10 2015
You can't hide parts of an inactive mesh for retopo any other way. Thanks though.
Dec 14 2014
Dec 6 2014
I found where the disconnect is. The standard path tracer cannot sample all lights per sample, which causes the discontinuity in the results. So I guess this falls more into a feature request than a bug. Sorry for the confusion!
I'm going to argue against that. In theory (as I understand it) the branched tracer taking one sample of all ray types should provide the same exact result as the regular path tracer taking a single sample. Changes should only occur (again, at least in theory) when more samples are given to individual ray types. As you can see, a single sample in the BPT does not equal a single sample in the standard tracer, and is in fact apparently much more efficient, which would imply that there are some improvements to be done to the sampling in the progressive sampler. I'm happy to be corrected by someone familiar with the actual cause though.
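As a toy illustration of the equivalence-in-expectation I'm describing (purely illustrative; the numbers and functions have nothing to do with the actual Cycles kernels):

```python
import random

def branched_sample(contribs):
    # Branched: take one sample of every ray type and sum them.
    return sum(contribs)

def path_sample(contribs):
    # Standard: pick one ray type uniformly at random and divide by its
    # selection probability (i.e. multiply by the count), which keeps
    # the estimator unbiased.
    i = random.randrange(len(contribs))
    return contribs[i] * len(contribs)

random.seed(0)
contribs = [0.3, 0.1, 0.6]  # stand-in diffuse/glossy/transmission terms
n = 200_000
avg_path = sum(path_sample(contribs) for _ in range(n)) / n
# Both estimators converge to the same total; the branched one just
# gets there with far less variance per sample.
print(branched_sample(contribs), round(avg_path, 3))
```

So a single BPT sample and a single PT sample should agree on average, but per-sample they can differ a lot, which is consistent with what I'm seeing in the renders.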
Dec 4 2014
Adding link for PasteAll since the file (only 13MB) is apparently too large to upload directly.
Nov 21 2014
Nov 16 2014
Nov 4 2014
I had already spoken with Antony about the issue, and he'd already confirmed it when I made the report. Just made a quick report as a formality.
Nov 3 2014
Sep 28 2014
If you haven't already, I HIGHLY recommend looking at the docs for Softimage's GATOR module.
Sep 15 2014
Sep 9 2014
Sep 1 2014
Did a few test renders of my own. Glad to see this functionality added so quickly after I mentioned it! These were done on 6 of the 8 cores on my 4790k. You can see the big time savings in scenes where you need reflective caustics, but where you can stand to not have/fake refractive ones.
Aug 11 2014
After some further testing, it would appear that this option just doesn't really work in general.
May 11 2014
May 5 2014
Another couple of test renders, this time at 720p resolution. Still only using 6 of 8 cores.
May 3 2014
That was with only 6 of my 8 cores, btw. Tolerated error set to 3, update rate at 5, warmup at 10. Let me know if you think there are even better settings.
WOW. I tested the scene I had spoken about previously. What a savings! The image quality difference is negligible, but render times are a bit over 75% faster. Excellent work!
May 2 2014
Such confusing naming for VS/VC. I am indeed using 2013.
Apr 27 2014
No luck compiling against master here. Lots of errors thrown from util_color.h about ambiguous calls to overloaded functions. VC2012 on Win7 64-bit.
Apr 25 2014
It's a dipole based method, useless for us. This isn't the place to discuss that anyway.
Apr 16 2014
I'd be interested to see this on a scene without lots of glossy noise, something where shadows or indirect noise take up a good chunk of the render time with master Cycles. Something like a half lit face should benefit quite a bit from adaptive sampling like this.
Apr 9 2014
Here is some more discussion from Dade and others on halt conditions/adaptive sampling. Lots of nice papers linked, lots of good discussion of pros and cons from people who have implemented some of them.
Apr 4 2014
Lukas: Sounds like the best way to do things. Halt the render if either criterion is reached. For certain scenes this would be an ENORMOUS time saver (one that comes to mind: a half-lit face where almost a full half of the samples are wasted on the side that is almost completely black).
Apr 3 2014
To be clear, what's shown in that video isn't available in Blender at all. It's a demonstration of the new method available in LuxRender that I linked to above. Should be relatively painless to integrate into Cycles, though.
Apr 2 2014
A video of the new adaptive sampling at work. I'd need to see some more complex scenes to judge definitively, but so far this is the first adaptive sampling method I've ever seen that appears to "just work".
Mar 31 2014
Interesting piece about adaptive sampling that popped up on BA today. Dade claims it's incredibly easy to implement over an existing tiled renderer.
Mar 29 2014
Mar 7 2014
I had another really strange issue that I'm having trouble recreating at the moment. It was in the same session where I did the transparent screen grab. After working for a few minutes and doing a couple of f12 renders, I turned viewport rendering back on and got extremely blown out lights and strange colors not previously present in the scene. Switching back to path tracing caused different, but similar results.
Mar 6 2014
There appears to be something weird going on in the MinGW build in the viewport. Looks to be some kind of issue with the Alpha value? Using just path tracing, transparency is turned off. Sorry if this is a known issue.
Mar 5 2014
Indeed, no more cmake issues, but I get the same exact linker error as Thomas with VS2013, 64-bit.
I'm getting an error from cmake telling me there's no util_metropolis.cpp file. Indeed, upon searching, there isn't. Diff applied against master on Win7 64.
Mar 2 2014
Anyone using TortoiseGit will need to use the git Apply command. The GUI apply only works for diffs made with TG.
In TortoiseGit I'm getting "Path Format Detection: Fail" and it refuses to patch. Any ideas?
Feb 16 2014
Additionally, with all three selected, only vertex path select works.
Jan 30 2014
Another heads up: Using Filter Glossy seems to break rendering, not sure what's going on. I'm going to try to build on my Windows desktop now, been limited to a single core on my laptop so far.
After patching, preview seems to be broken. Looks like only a couple of samples are updated into the viewport. F12 render works just fine.
Nov 16 2013
Light Paths: I agree that 4 is high. It adds a lot of noise and render time with very little change in scene quality. The cases where 4 bounces would be necessary (indoor scenes with heavy indirect lighting) are far outweighed by other common scenes.