- User Since
- Nov 11 2018, 11:46 AM (73 w, 2 d)
Nov 26 2019
I'd agree that it looks very much like the "known limitation" mentioned by @Omar Emara (OmarSquircleArt). But the documentation says: "This issue mostly affects Cycles but not Eevee...". I suggest updating the documentation to reflect that it happens in both Cycles and EEVEE.
Nov 25 2019
Doing some less systematic testing by inserting a Vector Mapping node before the Multiply node and adjusting Location.X in steps of 1/10, I started to suspect it is a z-fighting-like precision issue. Or more precisely: the Floor node sometimes floors to 0.0 and sometimes to 1.0 along the same cube face.
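The flipping can be reproduced outside Blender with plain floating-point arithmetic; a minimal sketch (plain Python, not Blender code) of how accumulated rounding error makes floor() land on either side of an integer boundary:

```python
import math

# Ten additions of 0.1 should give exactly 1.0, but floating-point
# rounding leaves the sum just below the integer boundary.
a = 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1
b = 1.0

print(a)               # 0.9999999999999999 -- not quite 1.0
print(math.floor(a))   # 0 -> one checker color
print(math.floor(b))   # 1 -> the other checker color
```

Texture coordinates computed per-pixel along a cube face sit exactly on such a boundary, so tiny differences in how each pixel's coordinate is accumulated decide which side floor() picks.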
I'm sorry, I have only one computer (capable of) running Blender.
I just found that, when the camera projection is set to 'Orthographic' instead of 'Perspective', the issue does not show up (so it only happens with perspective projection).
Oct 15 2019
@Paul Kotelevets (1D_Inc): Blender already features some abstractions like this: after creating a new UV Sphere, parameters like location and rotation are presented and can be tweaked before the shape is committed to a real mesh. I can imagine that this commit could be postponed to allow the user to change parameters at a later stage (I've seen proposals for such support before). But that would also have to postpone any mesh-based editing; those shapes would 'live' as ghosts among the real meshes. A disadvantage is that the algorithm that generated the shape in the first place needs to stay linked to the shape and remain available for later adjustments. That is not so difficult for the UV Sphere, but it may be a tough complication for shapes provided by add-ons. Would a user understand the distinction between the two if Blender did not allow 'ghosting' of add-on-based shapes?
Aug 22 2019
Instead of using the Cycles panoramic algorithms, wouldn't EEVEE be better off by (1) rendering onto a skybox, then (2) applying DoF into another skybox, whose faces are only then (3) mapped onto an equirectangular, fish-eye, or whatever panorama? An implementation following that process seems so much simpler yet still fairly accurate.
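As a rough illustration of steps (1) and (3), a sketch (plain Python, not EEVEE code) of classifying a unit view direction by the cube face it hits and mapping it to equirectangular coordinates; the axis conventions here are my own assumptions, not Blender's actual ones:

```python
import math

def dir_to_cube_face(x, y, z):
    """Which skybox face a unit direction hits (dominant-axis test)."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'

def dir_to_equirect(x, y, z):
    """Map a unit view direction to equirectangular (u, v) in [0, 1]."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # longitude
    v = 0.5 + math.asin(y) / math.pi               # latitude
    return u, v

print(dir_to_cube_face(0.0, 0.0, -1.0))  # '-z': straight ahead (assumed convention)
print(dir_to_equirect(0.0, 0.0, -1.0))   # (0.5, 0.5): center of the panorama
```

The final pass would iterate over output pixels, invert `dir_to_equirect` to get a direction, and sample the (DoF-filtered) skybox face that `dir_to_cube_face` selects.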
Aug 21 2019
I'm sorry for disturbing your discussion here (please keep it going). Taking my first steps in mesh modeling, I already find it difficult to get my head around triangles, normals, n-gons, degenerate geometry, and co-planarity. If suddenly a hole appears in my mesh faces because some tool or modifier decided that it should and can do that, I would be hopelessly lost on a mental picture of what a non-coplanar (?) hole in my n-gon even means and how it will behave under further operations; let alone if it is somehow degenerate (?). Could I still enjoy playing with geometry with such beasts around?
Aug 7 2019
With recent advances in Monte Carlo ray tracing (and specifically the VCM family of renderers), I suggest using a bidirectional path tracer (with MIS) instead of the currently proposed forward path tracer. Or at the very least, implement a reverse path tracer where rays start at the source images (the emitter), where important samples (highlights) are easily identified before tracing commences (also enabling Metropolis sampling). This is all the more efficient since users will typically blur the input a lot more than they will want to sharpen the output.
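To illustrate the "highlights are easily identified" point: a hedged sketch (plain Python, not renderer code) of luminance-proportional importance sampling over the source image. A CDF built from pixel luminance makes most samples land on the brightest pixels, which is where reverse rays would start:

```python
import bisect
import random

def build_cdf(luminance):
    """Cumulative distribution over pixel luminance."""
    cdf, total = [], 0.0
    for l in luminance:
        total += l
        cdf.append(total)
    return cdf, total

def sample_pixel(cdf, total, u):
    """Pick a pixel index with probability proportional to its luminance,
    given a uniform random number u in [0, 1)."""
    return bisect.bisect_left(cdf, u * total)

luma = [0.1, 0.1, 8.0, 0.1]   # one bright highlight at index 2
cdf, total = build_cdf(luma)
hits = sum(sample_pixel(cdf, total, random.random()) == 2
           for _ in range(10000))
# the vast majority of the 10000 samples pick the highlight
```

In a real renderer the luminance array would be the source image's pixel luminances (often a 2-D marginal/conditional CDF), but the principle is the same.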
Jul 10 2019
Okay, that explains it... I got the file from a friend and didn't know it was possible to hide stuff in Edit Mode while still having it visible in Object Mode. Should there be a visible indication in the UI that some faces/vertices/edges are hidden?
Feb 13 2019
Dear Brecht, I have been experimenting with generating and using cubemaps in Blender in SVM, OSL, and GLSL, only to later find this old thread. I ran into the same issues with (bicubic) filtering and used a different strategy: forcing nearest-neighbor filtering near the cube edges. My solution avoids the filtering issue at higher mipmap levels (e.g. during fast EEVEE interactivity), and without the trick from The Witness, but it doesn't cover 'baking' (afaik). I don't mind that my code changes overlap with this thread (mine was just an exercise in coding), but I mainly wonder whether the cycles_panorama_experiments branch (and the work in this now two-year-old thread) is still of interest to current Blender, and whether I should try adding to Sergey's work. I understand the current workload you have with 2.80, so postponing is not a problem :). Then again, maybe camera mapping is something to be solved with "everything nodes"?
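For illustration only, a simplified single-face sketch of the nearest-neighbor-near-edges idea (plain Python, not my actual GLSL/OSL code; the `margin` parameter and helpers are hypothetical stand-ins). Within `margin` texels of a face edge, where bilinear filtering would otherwise pull in texels from a neighboring cube face, the sampler falls back to nearest-neighbor:

```python
import math

def nearest(face, u, v):
    """Nearest-neighbor lookup on a square face, (u, v) in [0, 1]."""
    n = len(face)
    i = min(int(u * n), n - 1)
    j = min(int(v * n), n - 1)
    return face[j][i]

def bilinear(face, u, v):
    """Bilinear lookup with edge clamping (wrong across cube-face seams)."""
    n = len(face)
    x, y = u * n - 0.5, v * n - 0.5
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    def at(j, i):
        return face[max(0, min(n - 1, j))][max(0, min(n - 1, i))]
    top = at(y0, x0) * (1 - fx) + at(y0, x0 + 1) * fx
    bot = at(y0 + 1, x0) * (1 - fx) + at(y0 + 1, x0 + 1) * fx
    return top * (1 - fy) + bot * fy

def sample(face, u, v, margin=1.0):
    """Fall back to nearest-neighbor within `margin` texels of a face edge."""
    n = len(face)
    edge_dist = min(u, 1.0 - u, v, 1.0 - v) * n
    if edge_dist < margin:
        return nearest(face, u, v)   # no cross-face bleeding
    return bilinear(face, u, v)
```

The trade-off is visible blockiness in a thin band along the seams instead of color bleeding from the adjacent face; at higher mipmap levels the band covers relatively more texels, which is why it helped during fast interactive viewport updates.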