Update to fix more crashes found after testing with repro cases.
Well, normally bugs only. Though this could have been a bug; it's hard to know these things without being familiar with the underlying algorithms. There's no harm in reporting things just in case.
The denoiser can always be improved, but there are no bugs here.
Did you try installing the latest drivers from the NVIDIA website?
It is not clear to me if you are trying to use this card for OpenGL or for Cycles OpenCL rendering. Either way, we need more info:
Indeed, that's the point I was trying to make in T52924#469725. In my opinion there is nothing we can do to fundamentally improve .blend file loading security against these types of (difficult and somewhat unlikely) attacks in the short term. Ironically, if we had worse security and allowed executing embedded scripts on load by default like other software, it wouldn't even matter.
Add unit tests.
So if I understand this correctly, the idea is to handle one sample at a time, using Monte Carlo sampling for operations like blur. The advantage of that is that it simplifies the implementation, as every sample can be handled individually, in a single kernel execution. As a result it may also be easier to optimize than a more complicated implementation that theoretically could be faster but isn't. It also naturally provides a progressive preview. "Monte Carlo compositing" is a very interesting idea and I hadn't realised this was the plan.
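To make the idea concrete, here is a minimal sketch of what a Monte Carlo blur could look like (my own illustration with made-up helper names, not the actual compositor design): each sample draws one random offset inside the filter window instead of looping over every pixel under the kernel, and averaging many samples converges to the full blur.

```c
/* Minimal sketch of Monte Carlo blur; illustrative only, not actual
 * compositor code. Each sample picks one random offset inside the blur
 * window, so every sample can be handled in a single kernel execution. */
#include <stdlib.h>

typedef struct { float r, g, b; } Color;

static float random_uniform(void) /* uniform in [0, 1) */
{
  return (float)rand() / ((float)RAND_MAX + 1.0f);
}

static int clampi(int v, int lo, int hi)
{
  return v < lo ? lo : (v > hi ? hi : v);
}

/* One sample for one pixel: a single random tap in the filter window. */
static Color blur_one_sample(const Color *image, int w, int h,
                             int x, int y, float radius)
{
  int sx = clampi(x + (int)((random_uniform() * 2.0f - 1.0f) * radius), 0, w - 1);
  int sy = clampi(y + (int)((random_uniform() * 2.0f - 1.0f) * radius), 0, h - 1);
  return image[sy * w + sx];
}

/* Add one sample per pixel to an accumulation buffer; dividing by the
 * number of samples taken so far gives the progressive preview. */
static void blur_accumulate_sample(const Color *image, Color *accum,
                                   int w, int h, float radius)
{
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      Color s = blur_one_sample(image, w, h, x, y, radius);
      accum[y * w + x].r += s.r;
      accum[y * w + x].g += s.g;
      accum[y * w + x].b += s.b;
    }
  }
}
```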
Mon, Jan 15
I discussed this with @Sergey Sharybin (sergey) in IRC, and the conclusion was that malloc() should be ok to use here. The only real difference from calloc() appears to be the zeroing, and we are not trying to protect against reading uninitialized memory here.
I'm not sure why using calloc() would be safer for the kind of vulnerability we are trying to protect against? malloc() lets you allocate more memory than you can actually use, but the allocations still should not overlap, and you should not be able to overwrite other memory that is in use as long as you don't write outside the bounds of each memory allocation? The program may crash when out of memory, but not in a way that can be exploited?
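To illustrate the class of bug these reports actually describe (a hypothetical sketch, with made-up names like `Item` and `load_items`, not code from Blender): the danger is an attacker-controlled count overflowing the size computation, so the allocation ends up much smaller than the copy loop assumes. Zeroing the buffer with calloc() would not change that.

```c
/* Illustrative sketch of the vulnerability class, not actual Blender
 * code. If `count` comes from an untrusted .blend file, the 32-bit size
 * computation below can wrap around, so the allocation is far smaller
 * than the loop assumes. calloc() would zero those few bytes, but the
 * out-of-bounds writes happen either way. */
#include <stdlib.h>

typedef struct { int id; float value[4]; } Item;

void load_items(unsigned int count, const Item *src)
{
  unsigned int size = count * (unsigned int)sizeof(Item); /* can wrap */
  Item *items = malloc(size);
  if (items == NULL) {
    return;
  }
  for (unsigned int i = 0; i < count; i++) {
    items[i] = src[i]; /* writes past the real end of the allocation */
  }
  /* ... */
  free(items);
}
```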
- Add fixes for the remaining vulnerabilities, far from elegant though.
- Abort in case of integer overflow in MEM_*alloc_arrayN (sketched below), since we lack proper checks for NULL returns in a lot of places. The idea being that the overflow means we have invalid data anyway that would most likely crash later, so better to do it immediately than later when it could lead to vulnerabilities.
Indeed, the reported vulnerabilities are all based on cases where there is some overflow in computing the size of a memory allocation.
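Roughly, the overflow check could look like this (a hedged sketch of the idea; the real MEM_*alloc_arrayN implementation in guardedalloc differs):

```c
/* Sketch of an overflow-checked array allocation in the spirit of
 * MEM_*alloc_arrayN; illustrative only. Aborting is preferred over
 * returning NULL because many callers do not check for NULL, and the
 * overflow means the data is invalid anyway. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static void *alloc_array_checked(size_t len, size_t size, const char *name)
{
  /* If len * size would overflow size_t, abort right away rather than
   * allocating a wrapped-around, too-small buffer. */
  if (size != 0 && len > SIZE_MAX / size) {
    fprintf(stderr, "Integer overflow in array allocation: %s\n", name);
    abort();
  }
  return malloc(len * size);
}
```

With that, the load_items() sketch above would call alloc_array_checked(count, sizeof(Item), "load_items") and never receive a wrapped-around size.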
Sun, Jan 14
Please use a user community forum for questions on how to use Blender, this tracker is strictly for bug reports, and you're more likely to get useful help elsewhere.
Thanks for offering help, but Blender Internal is planned to be removed in Blender 2.80 so there is not much point in working on this.
Please try the latest build from here, we did some fixes related to this:
And to be a bit more explicit: if I had to guess, I'd estimate that seriously securing Blender against these types of attacks would take at least 4 developers working full time for 2 years. That is not an argument against addressing vulnerabilities, just to give an idea of the cost.
Right, what I mean is that invalid data that slips through .blend file loading could cause crashes later on, because the code generally assumes the data is valid, and what happens when it isn't is unpredictable. And even technically valid data can cause problems.
I agree it would be good to validate .blend file contents; in the generic ways that you mention, it probably wouldn't add too much code complexity. The tricky part is the more complex data relations. Mesh is the primary one, and for it we already have a validation function, but there are others that could be corrupted too (particles, curves, node trees, ...).
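As an example of the generic kind of validation meant here (a hypothetical sketch; Blender's real mesh validation in BKE_mesh_validate is far more involved): after loading, check that index data stays within the arrays it refers to, and clamp or reject anything out of range.

```c
/* Hypothetical post-load validation sketch, much simpler than Blender's
 * actual BKE_mesh_validate: make sure indices loaded from a .blend file
 * stay within the arrays they refer to. */
#include <stdbool.h>

typedef struct { int v1, v2; } Edge; /* vertex indices */

typedef struct {
  int totvert;
  int totedge;
  Edge *edges;
} MeshData;

static bool mesh_validate_indices(MeshData *mesh)
{
  bool valid = true;
  for (int i = 0; i < mesh->totedge; i++) {
    Edge *e = &mesh->edges[i];
    if (e->v1 < 0 || e->v1 >= mesh->totvert ||
        e->v2 < 0 || e->v2 >= mesh->totvert) {
      /* Clamp to a safe value; real code might mark the edge for removal. */
      e->v1 = 0;
      e->v2 = 0;
      valid = false;
    }
  }
  return valid;
}
```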
Sat, Jan 13
Right, I am not speaking for the Blender Foundation. Nor am I saying vulnerabilities should not be taken seriously, but rather that if anyone is serious about making loading arbitrary .blend files in Blender secure, fixing these issues reported by TALOS will not get us much closer to that. Users should understand that loading untrusted scene files in Blender and similar CG software is not secure, and should not get the false impression that it is secure just because developers address the occasional reported issue.
There's no background image in this file, and the 3D view is not in one of the orthographic side views.
I'm doing a few fixes to make this work in latest master, and committing.
Fri, Jan 12
This bug was fixed after the 2.79 release, so you would need to test the daily build or the upcoming 2.79a to confirm if it's fixed:
- Rename to Offscreen Dicing Scale, and set default to 4.0.
- Remove ANIMATABLE from dicing camera; pointer properties do not currently support animation.
Thu, Jan 11
I couldn't reproduce a crash on Linux with a GTX 1080, will test on Windows later.
Without a .blend file attached, we can't tell exactly what issue you are seeing, but this is probably not a bug.
For me the difference was much bigger than just noise changes. This version passes the regression tests, except for one that has motion blur with scale, so that's fine.
I wasn't able to reproduce this bug with the incorrect tiles in the Victor scene on either Windows or Linux, tested with a Titan Xp (no host memory), a GTX 1080 (a little bit of host memory), and a GTX 580 (almost entirely in host memory).
I committed a fix for this issue, but since I couldn't reproduce the bug I'm not sure if it works. My guess is that for whatever reason the driver is not giving us unique IDs for each device.