import bpy initializes CUDA drivers and crashes forked processes that transfer data to the GPU #57813

Closed
opened 2018-11-13 13:47:56 +01:00 by Amir · 11 comments

System Information
OS: All Unix-based OSes
Python version: 3.x
Graphics card: All NVIDIA graphics cards

Blender Version
I guess this applies to any version of Blender built with CUDA support.

Short description of error
I recently ran into an issue when trying to move my data to the GPU using [PyTorch](https://github.com/pytorch/pytorch)'s Python API. After reading a few threads ([pytorch#2517](https://github.com/pytorch/pytorch/issues/2517), [pytorch#2811](https://github.com/pytorch/pytorch/pull/2811), [pytorch#13883](https://github.com/pytorch/pytorch/issues/13883)) I noticed that CUDA is unfortunately not fork-safe. The only way to avoid the problem is to not make any `cuInit()` driver call before forking a process (it looks like you can do whatever you want inside the forked process itself without causing this issue).
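Purely as an illustration (this is not from Blender itself): the usual way around this restriction in plain Python is to start worker processes with `multiprocessing`'s `spawn` start method, which launches a fresh interpreter (fork followed by exec) instead of sharing the parent's CUDA state. A minimal sketch, assuming PyTorch and a CUDA-capable GPU are available:

```python
# Hedged sketch (not part of this report): with the 'spawn' start method the
# worker is a brand-new process, so it stays safe even if the parent has
# already initialized CUDA (e.g. via an earlier `import bpy`).
import multiprocessing as mp

import torch


def move_to_gpu(proc_id):
    # CUDA is initialized here, inside the spawned worker, which is fine.
    data = torch.rand(5, 5).cuda()
    print('procID', proc_id, 'moved data to', data.device)


if __name__ == '__main__':
    ctx = mp.get_context('spawn')  # fork + exec semantics instead of a bare fork
    p = ctx.Process(target=move_to_gpu, args=(0,))
    p.start()
    p.join()
```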

After some trial and error I realized that `import bpy` is what triggers this for me: the import apparently calls `cuInit()` to initialize the CUDA driver somewhere while the module is being loaded. PyTorch avoids any call that initializes CUDA until such a call is actually needed, and they call this fix "Lazy Init". I don't know exactly what happens during `import bpy`, but I guess the problem could be resolved if `cuInit()` were only called when the user does something that clearly requires the GPU, for example changing a GPU-related setting, explicitly selecting GPU rendering in Cycles, or starting real-time rendering with EEVEE.
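To illustrate what "Lazy Init" means here (a sketch of the pattern only, not Blender's or PyTorch's actual code; `cu_init()` below is a made-up stand-in for the real driver call):

```python
# Hypothetical lazy-initialization sketch; cu_init() is a placeholder for the
# real, fork-unsafe cuInit() driver call.
_cuda_initialized = False


def cu_init():
    print('cuInit() called')  # stand-in for the actual driver initialization


def _ensure_cuda_initialized():
    global _cuda_initialized
    if not _cuda_initialized:
        cu_init()
        _cuda_initialized = True


def enumerate_cuda_devices():
    # The driver is touched only when someone actually asks for GPU devices,
    # so merely importing this module never initializes CUDA and forking
    # beforehand stays safe.
    _ensure_cuda_initialized()
    return []  # placeholder device list
```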

Here is a potentially helpful comment from someone in PyTorch's Slack channel who has a better idea of what's happening under the hood:

> CUDA, as a complex, multithreaded set of libraries, is totally and permanently incompatible with a `fork()` not immediately followed by `exec()`. That means the `multiprocessing` method `fork` cannot work, unless the fork is done before CUDA is initialized (by a direct or indirect call to `cuInit()`). Once a process goes multithreaded or initializes CUDA, it's usually too late. Second, `torch.cuda.manual_seed()` is lazy, meaning it will not initialize CUDA if it hasn't been done already. That's a good thing, for the reasons above.

P.S. I'm not entirely sure, but this might also be related to [this bug](https://developer.blender.org/T54461) or [this bug](https://developer.blender.org/T54561) that I reported earlier this year.

Exact steps for others to reproduce the error

python3 -m pip install torch
or
conda install pytorch

Compile Blender (master branch) as a Python module with the following CMake flags (a combined command line is shown after the list):

  • -DCMAKE_INSTALL_PREFIX=/usr/local/lib/python3.6/dist-packages
  • -DWITH_PYTHON_INSTALL=OFF
  • -DWITH_PYTHON_MODULE=ON
  • -DPYTHON_ROOT_DIR=/usr/local
  • -DPYTHON_SITE_PACKAGES=/usr/local/lib/python3.6/dist-packages
  • -DPYTHON_INCLUDE=/usr/include/python3.6/
  • -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m
  • -DPYTHON_LIBRARY=/usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6.so
  • -DPYTHON_VERSION=3.6
  • -DWITH_OPENAL=OFF
  • -DWITH_OPENCOLORIO=ON
  • -DWITH_GAMEENGINE=OFF
  • -DWITH_PLAYER=OFF
  • -DWITH_INTERNATIONAL=OFF
  • -DCMAKE_BUILD_TYPE:STRING=Release
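For convenience, the flags above assemble into a single configure step roughly like the following (the source path `../blender` is illustrative and depends on where your build directory sits relative to the checkout):

```bash
# Illustrative configure step combining the flags listed above.
cmake ../blender \
  -DCMAKE_INSTALL_PREFIX=/usr/local/lib/python3.6/dist-packages \
  -DWITH_PYTHON_INSTALL=OFF \
  -DWITH_PYTHON_MODULE=ON \
  -DPYTHON_ROOT_DIR=/usr/local \
  -DPYTHON_SITE_PACKAGES=/usr/local/lib/python3.6/dist-packages \
  -DPYTHON_INCLUDE=/usr/include/python3.6/ \
  -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m \
  -DPYTHON_LIBRARY=/usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6.so \
  -DPYTHON_VERSION=3.6 \
  -DWITH_OPENAL=OFF \
  -DWITH_OPENCOLORIO=ON \
  -DWITH_GAMEENGINE=OFF \
  -DWITH_PLAYER=OFF \
  -DWITH_INTERNATIONAL=OFF \
  -DCMAKE_BUILD_TYPE:STRING=Release
```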

Note that I manually changed `PYTHON_VERSION_MIN="3.7"` in `install_deps.sh` to `PYTHON_VERSION_MIN="3.6"`.
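If it helps, that one-line edit can be made with something like the following, run from the directory that contains `install_deps.sh` (a sketch, not an official step):

```bash
# Lower the minimum Python version expected by the dependency installer.
sed -i 's/PYTHON_VERSION_MIN="3.7"/PYTHON_VERSION_MIN="3.6"/' install_deps.sh
```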

Then in Python:

# main.py
from multiprocessing import Process

import numpy as np
import torch


def moveDataToGPU(procID, importBpy=False):
    if importBpy:
        # Doing `import bpy` inside the forked process, or outside of it
        # (before moveDataToGPU is forked off), still causes the crash.
        import bpy

    # Doing the next two lines before forking moveDataToGPU would cause this
    # function to crash, but lazy initialization of the CUDA driver makes them
    # fork-safe. See the following for a better idea of what's going on:
    # https://github.com/pytorch/pytorch/blob/master/torch/cuda/random.py
    # https://github.com/pytorch/pytorch/blob/master/torch/cuda/__init__.py
    torch.cuda.manual_seed(1)
    print(torch.cuda.get_rng_state().sum())

    data = np.random.uniform(0, 1, (5, 5))
    print('data created for procID ' + str(procID))
    torchData = torch.from_numpy(data)
    torchData = torchData.cuda()
    print('successfully moved the data to GPU for procID ' + str(procID))
    print('')


# Set importBpy to True (or `import bpy` at the top of this file) to trigger
# the crash; with importBpy=False both forked processes finish fine.
forkedProcess = Process(target=moveDataToGPU, kwargs={'procID': 0, 'importBpy': True})
forkedProcess.start()
forkedProcess.join()

forkedProcess = Process(target=moveDataToGPU, kwargs={'procID': 1, 'importBpy': True})
forkedProcess.start()
forkedProcess.join()
Author

Added subscriber: @AmirS

blender/blender-addons#54561 was marked as duplicate of this issue

Added subscriber: @brecht

Changed status from 'Open' to: 'Archived'
Brecht Van Lommel self-assigned this 2018-11-13 14:36:46 +01:00

You can compile with `WITH_CYCLES_DEVICE_CUDA=OFF` or `WITH_CYCLES=OFF`.
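For anyone who wants that spelled out (an illustrative invocation, assuming the same configure command as in the reproduction steps above), add one of these options when configuring:

```bash
# Disable only the CUDA device backend in Cycles...
cmake ../blender -DWITH_PYTHON_MODULE=ON -DWITH_CYCLES_DEVICE_CUDA=OFF
# ...or disable Cycles entirely.
cmake ../blender -DWITH_PYTHON_MODULE=ON -DWITH_CYCLES=OFF
```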
Author

@brecht Sorry, I forgot to remove the line where I say that compiling Blender without CUDA would also be okay. However, that is not the solution I was looking for here (I was looking for that kind of solution on devtalk). Could you please reopen this and look into it?

The bpy module is not an officially supported feature, and you can compile it in such a way to disable CUDA. So I do not consider this a bug to be handled in the tracker.
Author

@brecht But I don't think it's necessarily about the bpy module. As far as I know you can also compile Blender so that it uses the system's Python internally. If there is any goal to make Blender usable by researchers, I think this is definitely something many researchers doing AI work need.
Author

@brecht Also, I don't think this issue and the import-order issue are duplicates. I would appreciate it if the developers could be more patient with issues containing sentences like "I compiled Blender as a Python module" and not close/ignore them immediately. For researchers, using Blender as a Python module is an amazing feature.

This issue is the same as the other one: it's a conflict caused by both Cycles and PyTorch using CUDA.

If you want to use the unsupported Python module, or if you want to use Blender for AI research, then you are free to do so. But Blender is primarily a tool for artists, and that means we set priorities and can consider issues like this outside of what we spend time supporting.
Author

It looks like the changes [here](https://developer.blender.org/rB001414fb2f7346d2ff332bf851373522d87659d7) resolve this issue, and Blender now has lazy initialization for CUDA.