- User Since
- May 18 2008, 9:12 PM
Apr 9 2016
I'm sorry it took so long to finally run the performance tests on my problem machine. I made my own spreadsheet based on the linked spreadsheet above, and added a column for multi-GPU results, and an extra column for the GPU tests at the "Magical 160x120 tile size" as mentioned in the 980 Ti bug thread. (https://developer.blender.org/T45093)
Mar 28 2016
@Arnis Vaivars (Arnissan) Huge pain indeed! I used to run Blender on Linux, but every time I ran the NVidia-provided shell script installer to update the NVidia drivers, hoping for a performance boost in Cycles, the install would corrupt my Xorg config file about every other attempt! Then I'd have to spend about half a day just getting the system back into a state where the Xorg server would display a UI again. Sadly, this is probably exactly why you would -not- be able to test the issue with a live Linux CD - you'd need to install the proprietary NVidia drivers, which no distro would ship with their image due to license incompatibility issues.
Feb 23 2016
@Sergey Sharybin (sergey) I apologize for diminishing the focus on the topic in my last post. I don't know if I'm a "technical" enough artist to build Blender myself (at least not without a little help with the configuration), but I would be more than happy to volunteer as much time with remote access to my system as you like, to configure and test in any way you need. There are many hours when I am asleep or at work and the system is not in use. What method of contact do you prefer for working out the details?
I suppose that I'd like to at least partially retract my previous statement that 2.77 test build 2 has resolved my render performance issues on the 980 Ti cards - last night's performance boosts seem to have been highly circumstantial.
Feb 22 2016
This issue seems to have been solved in the new Blender 2.77 test 1, and test 2 is out now - give it a try!
Jul 10 2015
Okay, quick status update on what I have found with these cards so far. My other cards seem to perform well with render tile sizes ranging from 240x160 to 240x240, 320x160, and even 320x240 and 320x320. The GTX 980 Ti cards perform extremely poorly at all of those sizes. However, when I set the tile size to 160x120, they behave about as expected - nice and fast. Any tile size larger or smaller than that, though, and their performance tanks again. Their performance in the interactive viewport is painfully slow as well. I wonder why shrinking the tile size to 160x120 gets me such a performance boost on the GTX 980 Ti cards? Anyone have any ideas?
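One thing worth noting (a hypothetical sketch, assuming a 1920x1080 frame - my actual render resolution isn't stated here): 160x120 happens to divide 1920x1080 exactly, while the larger sizes above leave partial tiles at the edges. A quick Python check:

```python
import math

def tile_stats(width, height, tile_w, tile_h):
    """Return (tile count, True if the frame divides evenly into tiles)."""
    cols = math.ceil(width / tile_w)
    rows = math.ceil(height / tile_h)
    even = (width % tile_w == 0) and (height % tile_h == 0)
    return cols * rows, even

# Compare a few of the tile sizes mentioned above on a 1920x1080 frame.
for tw, th in [(240, 160), (240, 240), (320, 240), (160, 120)]:
    tiles, even = tile_stats(1920, 1080, tw, th)
    print(f"{tw}x{th}: {tiles} tiles, divides frame evenly: {even}")
```

Whether even division explains the speedup on these particular cards is pure speculation on my part, of course.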
Jun 17 2015
@Sergey Sharybin (sergey): I appreciate you looking into this issue. These are all things which I learned the hard way the first time I built a multi-card system. To verify the configuration of this workstation though:
@Wolfgang Faehnle (mib2berlin): I ran the tests you requested and ranked within the top results on the Octane benchmark page, and while the result set is about 1/3rd as large, my results are looking pretty good on the LuxMark page as well. Unfortunately, it looks like this is just a Cycles issue at the moment. :/
Jun 16 2015
In case the image with the performance stats doesn't load or display, I've uploaded the same animation, which can be scrubbed through, to Gfycat:
Oct 5 2014
Sadly, when I opened the blend file that I attached to this bug report with the newly released 2.72, I still had a similar but slightly different issue when animating. It now looks like the modifier is trying to find the very first vertex in the mesh and merge all of the radially merged verts with that one, but only on every other frame of animation. Would you please take a look at the previously attached blend file once more and press Alt-A to start the animation? That's where the problem persists.