
Cycles - Rendering on a Nvidia A100 crashes/fails on Google Colab
Closed, Resolved (Public)


System Information
Operating system:

VERSION="18.04.5 LTS (Bionic Beaver)"
PRETTY_NAME="Ubuntu 18.04.5 LTS"

Graphics card: Nvidia A100-SXM4-40GB

| NVIDIA-SMI 470.63.01    Driver Version: 460.32.03    CUDA Version: 11.2     |
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|   0  A100-SXM4-40GB      Off  | 00000000:00:04.0 Off |                    0 |
| N/A   34C    P0    42W / 400W |      0MiB / 40536MiB |      0%      Default |
|                               |                      |             Disabled |

Blender Version
Blender 2.93.0-2.93.4, Blender 3.0.0
Worked: Blender 2.92.0

Short description of error
Blender crashes and aborts the render when using Cycles.

Exact steps for others to reproduce the error
Open Google Colab.
Try to get an A100.
Start a Cycles render; it will crash.

CC @Brecht Van Lommel (brecht) @Patrick Mours (pmoursnv)

Event Timeline

Raimund Klink (Raimund58) renamed this task from Cycles - Rendering on a Nvidia A100 fails on Google Colab to Cycles - Rendering on a Nvidia A100 crashes/fails on Google Colab.Oct 1 2021, 3:00 PM

Since Blender does not ship with pre-compiled kernels for A100, it falls back to the PTX one it ships with. But the PTX files that ship with Blender 2.93.4 and 3.0.0 were compiled with CUDA 11.4 (PTX version 7.4).
Interpreting that PTX version requires an r470 driver or newer; older drivers cannot read it and will fail to compile the kernels. You are running r460, which only supports up to PTX version 7.2, so kernel compilation fails (and Blender apparently does not handle the error correctly and crashes shortly afterwards).
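For context, the compatibility rule above can be sketched as a small check. The mapping is an assumption drawn only from the versions mentioned in this report (CUDA 11.2 / PTX 7.2 / r460, CUDA 11.4 / PTX 7.4 / r470); the function name and structure are hypothetical, not Blender code.

```cpp
// Sketch of the PTX JIT compatibility rule described above. Assumption
// from this report: an r460 driver can JIT PTX ISA up to 7.2 (CUDA 11.2),
// while PTX ISA 7.4 (CUDA 11.4) needs an r470 driver or newer.
// Returns true if the driver branch can JIT-compile the given PTX version.
bool driver_supports_ptx(int driver_branch, int ptx_major, int ptx_minor)
{
  if (ptx_major == 7 && ptx_minor <= 2)
    return driver_branch >= 460;
  if (ptx_major == 7 && ptx_minor <= 4)
    return driver_branch >= 470;
  return false; /* unknown or newer ISA: assume unsupported */
}
```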

Brecht Van Lommel (brecht) changed the task status from Needs Triage to Needs Information from User.Oct 4 2021, 11:47 AM

Ah, I was not aware that upgrading the CUDA toolkit would affect driver compatibility this way. I'm not sure it's worth reverting to an older CUDA version for this case; mentioning it in the 2.93 release notes should be enough.

@Patrick Mours (pmoursnv), can we improve the test here to give a better error message, and then backport that to 2.93?

@Raimund Klink (Raimund58), can you verify if upgrading the driver to 470 fixes the problem?

Patrick Mours (pmoursnv) claimed this task.EditedOct 4 2021, 1:09 PM

Sure, I can post a fix later today.
The driver reports CUDA_ERROR_UNSUPPORTED_PTX_VERSION (222), which is a recent addition (added in CUDA 11.1, as far as I can tell) and not currently in CUEW, hence the "Unknown CUDA error value" in the log.
The crash then happens because Cycles calls "cuOccupancyMaxPotentialBlockSize" with a NULL kernel function in "CUDADevice::load_functions" (since the module failed to load).
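The failure mode can be illustrated with a minimal stand-alone sketch. The type and constants below are stand-ins for the CUDA driver API, and load_functions only mimics the shape of CUDADevice::load_functions; this is not the actual patch.

```cpp
#include <cstdio>

// Stand-ins for the CUDA driver API so the guard logic runs anywhere.
using CUfunction = void *;
enum { CUDA_SUCCESS = 0, CUDA_ERROR_UNSUPPORTED_PTX_VERSION = 222 };

// Hypothetical sketch of the guard: if the module failed to load, the
// kernel function handle is NULL and must not be handed to
// cuOccupancyMaxPotentialBlockSize (doing so is what crashed Blender).
int load_functions(CUfunction kernel)
{
  if (kernel == nullptr) {
    std::fprintf(stderr, "CUDA kernel failed to load, skipping occupancy query\n");
    return CUDA_ERROR_UNSUPPORTED_PTX_VERSION;
  }
  /* ... cuOccupancyMaxPotentialBlockSize(...) and launch setup here ... */
  return CUDA_SUCCESS;
}
```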

I am quite confused, and I don't know whether it is because of Colab or whether I am doing something wrong.
The system says 470 is already installed, so why does nothing change?

First try to update the driver:

!cat /etc/os-release
!sudo apt install nvidia-driver-470

(Driver already up to date)

Second try to update the driver:

I followed this site:

!cat /etc/os-release
!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
!sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
!sudo dpkg -i cuda-repo-ubuntu1804-11-4-local_11.4.2-470.57.02-1_amd64.deb
!sudo apt-key add /var/cuda-repo-ubuntu1804-11-4-local/
!sudo apt-get update
!sudo apt-get -y install cuda

(Still nothing changed)

What am I not seeing?

I'm not really familiar enough with Linux driver installation to be much help with that. Something is weird on the system, though: the nvidia-smi tool that was called was installed by an r470 driver, but the driver actually running (as reported by nvidia-smi) is an r460 one. It is as if both drivers are installed in parallel, with the r460 one active. I wonder if one can force-uninstall the old driver and then cleanly install r470 over it again.
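The mismatch is visible right in the nvidia-smi banner quoted in the report (tool version 470.63.01, running driver 460.32.03). As a small diagnostic sketch, one could pull both numbers out of that line and compare the branches; parse_banner is a hypothetical helper, not part of any existing tool.

```cpp
#include <regex>
#include <string>
#include <utility>

// Hypothetical diagnostic: extract the nvidia-smi tool version and the
// running driver version from the first banner line. If the branches
// differ (e.g. 470 vs 460), two driver installs are likely present.
std::pair<std::string, std::string> parse_banner(const std::string &line)
{
  static const std::regex re("NVIDIA-SMI ([0-9.]+) +Driver Version: ([0-9.]+)");
  std::smatch m;
  if (std::regex_search(line, m, re))
    return {m[1].str(), m[2].str()};
  return {"", ""};
}
```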

Fix for the crash in 2.93 is here:
Super simple, but I'm not sure about the process for backporting/committing stuff to 2.93 (or 2.83 for that matter, which suffers from the same bug)?

@Raimund Klink (Raimund58): installing the CUDA toolkit should not be needed, just the driver. Not sure what is going on with that driver, though. Maybe it needs a system restart or a kernel module reload. If investigating that takes a lot of time, it's probably not worth it just to confirm that this bug is fixed.

@Patrick Mours (pmoursnv): you can add it to the list in T88449: Blender LTS: Maintenance Task 2.93 and T77348: Blender LTS: Maintenance Task 2.83. Linking to the patch from there is fine; Jeroen should be able to commit it.

@Brecht Van Lommel (brecht) The Google Colab session cannot be rebooted, AFAIK. So, no fix for me :/
I guess I have to wait for Google to update the driver.
I will come back in case I find a workaround.

@Raimund Klink (Raimund58) You could build Blender yourself with CUDA toolkit 11.2; that should give you PTX files that are compatible. Or build a cubin for sm_80 directly (via the CYCLES_CUDA_BINARIES_ARCH CMake variable); then the CUDA toolkit version should not matter.
Blender doesn't ship with a cubin for sm_80 by default, since there are no consumer GPUs using that architecture.
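As a rough sketch of the second option (a build-configuration fragment, not a complete recipe: it assumes a standard Blender source checkout with dependencies already installed, and shows only the relevant flags):

```shell
# Hypothetical build sketch: compile the Cycles CUDA kernels as a native
# sm_80 cubin so the driver never has to JIT-compile PTX. Paths are assumptions.
cd blender && mkdir -p build && cd build
cmake -DWITH_CYCLES_CUDA_BINARIES=ON \
      -DCYCLES_CUDA_BINARIES_ARCH=sm_80 \
      ..
make -j"$(nproc)"
```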

@Brecht Van Lommel (brecht) I don't have permission to edit those lists, could you add this?:

Report: T91879
Commit in master: rBc11585a82f97e51c01c4f4f309b85bdf7602ca08 (use P2478 instead, which has an additional crash fix)

@Patrick Mours (pmoursnv) I added you to the right groups; permissions should be OK now.

I'll consider this resolved then.