
Eevee out of GPU memory on large render
Open, Needs Triage by Developer · Public

Description

We ran into a weird issue with Eevee and a large render. It seems to run out of memory at around 480 MB on a video card with 12 GB of memory. This seems to happen when allocating the memory for the render result image.

Not sure if this is about the texture slot memory limit, but I'll paste the log below.

If it's a texture slot memory issue, then there is also a bug with border render: even when a render border is set, Blender still allocates the full image size for the final render. The file crashes with exactly the same error with the border enabled.

So two questions here:

  1. Why does it run out of memory?
  2. Why is it trying to allocate the entire image when there's a border + crop set?

Here's the log:

GPUTexture: create : TEXTURE_2D, DEPTH24_STENCIL8, w : 7200, h : 18000, d : 0, comp : 1, size : 494.38 MiB
GPUTexture: texture alloc failed. Likely not enough Video Memory.
Current texture memory usage : 494.38 MiB.
Writing: /tmp/Victoria_Art.crash.txt
Segmentation fault (core dumped)
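
For reference, a quick back-of-the-envelope check of the numbers in that log (plain Python; the 25% border below is only a hypothetical example, not the actual border stored in the file):

# Size of the full-resolution depth/stencil buffer from the log above.
width, height = 7200, 18000
bytes_per_pixel = 4  # DEPTH24_STENCIL8 is 32 bits per pixel
full_mib = width * height * bytes_per_pixel / (1024 * 1024)
print(f"full buffer: {full_mib:.2f} MiB")  # ~494.38 MiB, matching the log

# Hypothetical border covering 25% of each dimension, with the crop option enabled.
# If only the cropped region were allocated, the buffer would be far smaller.
border = 0.25
cropped_mib = (width * border) * (height * border) * bytes_per_pixel / (1024 * 1024)
print(f"cropped buffer: {cropped_mib:.2f} MiB")  # ~30.90 MiB

So the 494 MiB figure is exactly this one buffer at the full 7200x18000 size, which is why it looks like the border + crop settings are being ignored when the final render is allocated.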

Details

Type
Bug

Event Timeline

Forgot to mention, this is affecting the Blender 2.80 official release.

We're rendering on Ubuntu 18.04 LTS, using a command-line render.

Sorin Vinatoru (sorinv) renamed this task from "Eevee out og GPU memory on large render" to "Eevee out of GPU memory on large render". Fri, Sep 27, 8:30 PM

Without a reference file, a video, or something else to go on, how do you expect them to solve your problem?

I've created a blend file with the issue.

@Zachary Russ (KlariceV) are you sure this file is related to the problem shown?
I opened the blend file; Blender reports only 34 MB of memory used and Windows reports 214 MB. Eevee is slow for some reason, but I don't see any memory leaks...
My GPU is a Radeon HD 7600, running with Blender's workarounds, on Windows 10.

edit:

I realized that the cube with the volumetric material does not refresh (a volumetric problem on my GPU, see T70091).
Maybe it's related to this? Could the volumetric samples be going out of control and leaking memory? (It doesn't happen on my machine, so I hadn't noticed this problem.)

I can fully reproduce this both on my Windows computer (Nvidia GTX 980) and on our Linux servers (Nvidia K80). It may be an Nvidia-only issue.

You may also want to hit F12. Using viewport rendering does not reproduce the error on my machine. Hitting F12 will crash Blender instantly.
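
For reference, the same final-render path that F12 uses can also be triggered from a script when rendering headless; a minimal sketch, assuming the uploaded .blend is already loaded:

import bpy

scene = bpy.context.scene
# Print the settings relevant to this report before rendering.
print(scene.render.resolution_x, scene.render.resolution_y, scene.render.resolution_percentage)
print("border:", scene.render.use_border, "crop:", scene.render.use_crop_to_border)

# This invokes the final render (the same path as F12), not viewport rendering.
bpy.ops.render.render(write_still=True)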

@Sorin Vinatoru (sorinv) do you think it's related to the volumetrics in that file?

OK, I pressed F12. Blender devoured the memory; it didn't crash, but it froze and then I had to kill it myself.

  1. Why it doesn't use the 30% resolution percentage you have set in the render properties might be the primary bug here. Manually shrinking the initial dimensions to 2100x5400 works.
  2. It's odd that Blender eats up resources and typically needs to be killed manually in this case... that still needs to be investigated, I suppose.
  3. It may never be able to render at that 18000-pixel dimension (100%) in any case. The OpenGL GL_MAX_TEXTURE_SIZE limit on my AMD FirePro W2100 card is 16384, and I'm not aware of any implementation that allows more than that. (A quick way to check your own limit is in the snippet at the end of this comment.)

It seems odd that it will still crash if using a custom border on the full resolution, though. Like Sorin asked, is Blender still trying to allocate that massive 18000 dimension even with the custom border in place? If it is, I feel like that would be a bug. Is it not?
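
For what it's worth, here is a minimal sketch for checking the driver's limit from Blender's Python console (it uses the bgl module available in 2.80 and needs to run inside Blender so an OpenGL context exists):

import bgl

# Ask the OpenGL driver for its maximum 2D texture dimension.
buf = bgl.Buffer(bgl.GL_INT, 1)
bgl.glGetIntegerv(bgl.GL_MAX_TEXTURE_SIZE, buf)
print("GL_MAX_TEXTURE_SIZE:", buf[0])  # 16384 here; an 18000-pixel dimension exceeds it

If the reported value is below 18000, the full-resolution allocation can never succeed on that GPU regardless of how much memory is free.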