Python Tests

Located in: tests/python

Execute them directly or via Blender; each Python file contains information in its header on how to run it.

Example

You can run Python tests directly from the command line as long as you have Python and Blender installed:

Test file: tests/python/bl_pyapi_mathutils.py

Source located at: source/blender/python/mathutils

Execute:

Linux:

./blender.bin --background --python tests/python/bl_pyapi_mathutils.py  -- --verbose

MacOS:

The blender app bundle needs to be in the executable PATH. See Launching from the Command Line.

blender --background --python tests/python/bl_pyapi_mathutils.py  -- --verbose

Windows:

blender must be available via your Path environment variable; add the directory that contains your blender.exe to Path.

Open the command prompt and enter the following command.

Note: the path to the Python test file given here is relative, so first navigate with the cd command into blender-git/blender.

blender --background --python tests/python/bl_pyapi_mathutils.py  -- --verbose

Output:

test_item_access (__main__.MatrixTesting) ... ok
test_item_assignment (__main__.MatrixTesting) ... ok
test_mat4x4_vec3D_mult (__main__.MatrixTesting) ... ok
test_mat_vec_mult (__main__.MatrixTesting) ... ok
test_matrix_column_access (__main__.MatrixTesting) ... ok
test_matrix_inverse (__main__.MatrixTesting) ... ok
test_matrix_mult (__main__.MatrixTesting) ... ok
test_matrix_to_3x3 (__main__.MatrixTesting) ... ok
test_matrix_to_translation (__main__.MatrixTesting) ... ok
test_matrix_translation (__main__.MatrixTesting) ... ok
test_non_square_mult (__main__.MatrixTesting) ... ok

----------------------------------------------------------------------
Ran 11 tests in 0.002s

OK
test_orthogonal (__main__.VectorTesting) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.001s

OK
test_kdtree_empty (__main__.KDTreeTesting) ... ok
test_kdtree_grid (__main__.KDTreeTesting) ... ok
test_kdtree_invalid_balance (__main__.KDTreeTesting) ... ok
test_kdtree_invalid_size (__main__.KDTreeTesting) ... ok
test_kdtree_line (__main__.KDTreeTesting) ... ok
test_kdtree_single (__main__.KDTreeTesting) ... ok

----------------------------------------------------------------------
Ran 6 tests in 0.005s

Getting Started (Prepare for automated testing)

To get started with the Blender test environment, we need to dig a bit deeper.

I assume you have the Blender sources in a folder named blender, that you have built Blender into a folder named build, and that you have run make update to get the test data:

/dev
/dev/blender
/dev/blender/tests/data
/dev/build

On Windows it will look something like this, where blender-git is the folder you cloned into on your local PC:

/blender-git
/blender-git/blender
/blender-git/build

Get cmake to work

First make sure that cmake is in your execution path and that it actually works. You can check it as follows:

$ cd build
$ cmake ../blender
-- Selecting Windows SDK version 10.0.16299.0 to target Windows 6.1.7601.
-- 64 bit compiler detected.
-- Visual Studio 2017 detected.
-- Blender Skipping: (bf_alembic;bf_intern_ctr;bf_intern_opencl;bf_intern_opensubdiv;extern_sdlew)
-- Configuring done
-- Generating done
-- Build files have been written to: D:/dev/cmake-build
$

If you see any errors or warnings, figure out what went wrong and fix them before you go ahead.

Check if ctest is running

ctest is the tool that we use from now on for executing the tests. Important: these tests are not available by default; you have to configure your build environment (cmake) with WITH_GTESTS enabled.
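
For example, if you configure from the command line, enabling the tests might look like this (a sketch assuming a plain command-line CMake build; adapt the paths and configuration to your setup):

$ cd /dev/build
$ cmake ../blender -DWITH_GTESTS=ON
$ cmake --build . --config Release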

Assuming everything is prepared, let's begin by getting a list of available tests:

$ ctest -N

Test project D:/dev/cmake-build
  Test  #1: libmv_predict_tracks_test
  Test  #2: libmv_tracks_test
...
  Test #88: guardedalloc_alignment_test
  Test #89: guardedalloc_overflow_test
  Test #90: bmesh_core_test

Total Tests: 90

Again, if you see any errors here, check what is wrong before going on.

Run a single test

To run a single test, use the ctest option -R followed by the name of the test. You must also add the configuration that you want to use; depending on how you built your system, this is one of [Release|Debug|RelWithDebInfo|MinSizeRel]. In the example output below I have added a few line breaks for readability.

$ ctest -R export_obj_cube -C Release

UpdateCTestConfiguration  from :D:/dev/cmake-build/DartConfiguration.tcl
UpdateCTestConfiguration  from :D:/dev/cmake-build/DartConfiguration.tcl
Test project D:/dev/cmake-build
Constructing a list of tests
Done constructing a list of tests
Updating test list for fixtures
Added 0 tests to meet fixture requirements
Checking test dependency graph...
Checking test dependency graph end
test 49
    Start 49: export_obj_cube

49: Test command: D:\dev\cmake-build\bin\Release\blender.exe "--background" \
    "--factory-startup" "--env-system-scripts" \
    "D:/dev/blender/release/scripts" \
    "D:/dev/blender/tests/data/io_tests/blend_geometry/all_quads.blend" \
    "--python" "D:/dev/blender/tests/python/bl_test.py" "--" "--run={'FINISHED'}& \ 
    bpy.ops.export_scene.obj\
    (filepath='D:/dev/cmake-build/tests/export_obj_cube.obj',use_selection=False)" \
    "--md5_source=D:/dev/cmake-build/tests/export_obj_cube.obj" \
    "--md5_source=D:/dev/cmake-build/tests/export_obj_cube.mtl" \
    "--md5=e80660437ad9bfe082849641c361a233" "--md5_method=FILE"
49: Test timeout computed to be: 9.99988e+06
49:   args: D:/dev/cmake-build/bin/Release/blender.exe --background \
    --factory-startup --env-system-scripts \
    D:/dev/blender/release/scripts D:/dev/blender/tests/data/io_tests/blend_geometry/all_quads.blend \
    --python D:/dev/blender/tests/python/bl_test.py -- --run={'FINISHED'}& \ 
    bpy.ops.export_scene.obj\
    (filepath='D:/dev/cmake-build/tests/export_obj_cube.obj',use_selection=False) \
    --md5_source=D:/dev/cmake-build/tests/export_obj_cube.obj \
    --md5_source=D:/dev/cmake-build/tests/export_obj_cube.mtl \
    --md5=e80660437ad9bfe082849641c361a233 --md5_method=FILE
49:   Running: '{'FINISHED'}&bpy.ops.export_scene.obj\
    (filepath='D:/dev/cmake-build/tests/export_obj_cube.obj',use_selection=False)'
49:   MD5: 'e80660437ad9bfe082849641c361a233'!
    (  0.0000 sec |   0.0000 sec) OBJ Export path: 'D:/dev/cmake-build/tests/export_obj_cube.obj'
    (  0.0010 sec |   0.0010 sec) Finished writing geometry of 'Cube'.
    (  0.0020 sec |   0.0020 sec) Finished exporting geometry, now exporting materials
    (  0.0020 sec |   0.0020 sec) OBJ Export Finished
Progress: 100.00%.00%
49:
49:   Result: '{'FINISHED'}'
49:   Success: {'FINISHED'}&bpy.ops.export_scene.obj(filepath='D:/dev/cmake-build/tests/export_obj_cube.obj',use_selection=False)
49: found bundled python: D:\dev\cmake-build\bin\Release\2.79\python
49: Read blend: D:/dev/blender/tests/data/io_tests/blend_geometry/all_quads.blend
49:
49: Blender quit
1/1 Test #49: export_obj_cube ..................   Passed    0.42 sec

The following tests passed:
        export_obj_cube

100% tests passed, 0 tests failed out of 1

Total Test time (real) =   0.49 sec
$

The -R option actually takes a regular expression, so you can run a set of tests in one go. You can even run all tests by calling:

$ ctest -C Release

If you add the option -VV to the mix, you get even more information displayed.
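
For example, to run all tests whose names start with collada_ (assuming such tests exist in your build) with verbose output:

$ ctest -R "^collada_" -C Release -VV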

Set up your own tests

This is actually not very complicated once you know where to look. Here is a brief overview that should help you get a better understanding. I will only write about Python tests here. Also note that in this text I used the Collada test environment as an example, so wherever you see the term "collada" you will want to replace it with your own name! The organization of tests into subfolders is not mandatory either; I just expect to have a lot of test files and do not want to put everything into one folder. You may not need this, so feel free to keep it simpler.

Where are the Resources

You find the test resources here:

/dev/blender/tests/data    (git submodule with data files, e.g. .blend, .png, .json, .dae, ...)
/dev/blender/tests/python  (git checkout of Python scripts)

The Python scripts are part of the Blender git repository and are fetched automatically when you clone it. Nothing to do here for now.

The test data checkout happens as part of make test and make update.

Note: This can take a while. There are a LOT of data files in the test folders.
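
For example, fetching or refreshing the test data from the source folder looks like this, using the convenience make targets mentioned above:

$ cd /dev/blender
$ make update    # updates the sources and the tests/data checkout
$ make test      # runs the test suite; the test data checkout happens here too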

Creating the environment for your own tests

This is for development, and I address people who want to add tests for the Blender core. If you want to make tests for your own add-on or any other unofficial tests, you can still check out here how it is done, but you will probably want to put your resources in other locations.

In this example I show you how I added the test resources for the Blender Collada module. You can take this as a blueprint for your own tests or just look at the other tests; that is how I did it in the first place, and it took me a while to get things to work. As noted above, please replace the term collada with whatever matches your needs:

Adding a collada folder to the test data

$ cd /dev/blender/tests/data
$ mkdir -p collada

This directory contains subfolders with blend files and reference dae files for checking whether the collada module exports/imports as designed. You will eventually want to check your new files in to the test repository. Ask the module owners before you do this :)

Adding test script environment

This is a tiny bit more complicated because you will also have to add/edit a few CMakeLists.txt files...

$ cd /dev/blender/tests
$ mkdir -p collada

Now edit the CMakeLists.txt file in the tests folder:

$ cd /dev/blender/tests
$ vi CMakeLists.txt

Then add one line at the very end of the file:

add_subdirectory(collada)

This entry adds your own folder (collada in this case) to the test environment. Now enter your folder and create a new CMakeLists.txt file; you can just copy it from an existing test folder (for example the collada folder). The upper part of the file is generic, don't change it. What you will change is the macro at the bottom and the list of defined tests:

# ------------------------------------------------------------------------------
# GENERAL PYTHON CORRECTNESS TESTS
macro (COLLADA_TEST module test_name)
  add_test(
    NAME collada_${test_name}
    COMMAND
      "$<TARGET_FILE:blender>" ${TEST_BLENDER_EXE_PARAMS}
      ${TEST_SRC_DIR}/collada/${module}/${test_name}.blend
      --python ${CMAKE_CURRENT_LIST_DIR}/${module}/test_${test_name}.py
      --
      --testdir ${TEST_SRC_DIR}/collada/${module}
  )
endmacro()

COLLADA_TEST(mesh mesh_simple)

The macro takes 2 parameters in this case: I wanted to separate the tests into subfolders (modules), and for each module I wanted to have one or more test scripts. In the above example you end up with this:

  • Blender will be asked to open the blend file mesh_simple.blend from the data folder collada/mesh/mesh_simple.blend
  • Blender is also instructed to execute the Python script test_mesh_simple.py from the test folder, i.e. collada/mesh/test_mesh_simple.py
  • Finally, the Python script gets an additional parameter indicating where the test data is located (--testdir). But note: this comes with a pitfall. You have to take special care in the Python script to avoid getting confused by misleading error messages (see "The unittest pitfall" below). A rough expansion of the resulting test command is sketched right after this list.
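
For illustration, the COLLADA_TEST(mesh mesh_simple) call roughly registers a test named collada_mesh_simple that runs a command like this (the paths follow the example layout used in this text; ${TEST_BLENDER_EXE_PARAMS} contains the usual --background and --factory-startup style flags shown in the log above):

blender ${TEST_BLENDER_EXE_PARAMS} \
    /dev/blender/tests/data/collada/mesh/mesh_simple.blend \
    --python /dev/blender/tests/collada/mesh/test_mesh_simple.py \
    -- --testdir /dev/blender/tests/data/collada/mesh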

Adding a test script

I believe it is best if you just look up the various test scripts to see how it is done. My scripts are super easy, so they might be good for getting a first idea of how to do it. To be honest, I took my script structure from the Alembic test cases and modified it until it worked for my needs. Here are a few remarks so you can avoid pitfalls:

The unittest pitfall

If you are using the unittest environment, then please take care to add this at the end of your scripts (with the required modules imported at the top of the file):

import argparse
import sys
import unittest

# ... your test cases go here ...

if __name__ == "__main__":
    # Drop everything up to and including "--" so that only the script's own arguments remain.
    sys.argv = [__file__] + (sys.argv[sys.argv.index("--") + 1:] if "--" in sys.argv else [])
    # The parser is only needed when you pass command-line options (like --testdir) to your script.
    parser = argparse.ArgumentParser()
    parser.add_argument("--testdir", required=True)
    args, remaining = parser.parse_known_args()
    # If you do not need a parser, skip the three lines above (and pass sys.argv instead of
    # remaining). The next line is important:
    unittest.main(argv=sys.argv[0:1] + remaining)

This separates the parameters given to the script from the parameters needed by unittest. If you do not do it as shown above, you will likely end up pulling out your hair wondering why things do not work. It took me a day to figure out what is going on here.

The problem is that unittest does not understand the parameters meant for the Python script and produces misleading error messages. Please check in the Python script how the args actually enter the script environment. Also note that you only need the parser when you actually want to provide a command-line option (like --testdir in this case).
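
To make this concrete, here is a minimal sketch of a test case that uses the --testdir value. The class name and reference file name are made up for illustration and are not taken from the actual Collada tests:

import os
import unittest

class MeshSimpleTest(unittest.TestCase):
    def test_reference_file_exists(self):
        # `args` is the module-level namespace created by the argparse snippet above;
        # it is assigned before unittest.main() runs, so it is visible here.
        ref = os.path.join(args.testdir, "mesh_simple.dae")
        self.assertTrue(os.path.exists(ref), "missing reference file: " + ref)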

Prepare the test environment for your additions

  1. It is important that you edit your CMakeLists.txt whenever you have added a new test! Otherwise the test will not be executed.
  2. After you have edited the CMakeLists.txt, you have to run cmake to update the test environment, see below:
$ cd /dev/build
$ cmake ../blender

Now you can run your new test; for example, this is my very first test case:

$ ctest -R collada_mesh_simple -C Debug -VV

btw: -VV means "Very Verbose"

Python tests vs Blender Tests

You can set up tests in 2 ways:

  1. You configure the test to call a Python script that prepares everything for calling Blender; the script then calls Blender and lets some generated Python code get executed (a rough sketch follows after this list).
  2. You configure the test to call Blender with a blend file and a test script. The test script is then processed inside the running Blender, operating on the loaded blend file.
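
A rough, hypothetical sketch of the first scenario: ctest invokes a standalone driver script with the system Python, and that script is responsible for launching Blender itself. The test name and driver script below are invented for illustration only:

# Hypothetical: run_my_driver.py is an invented driver script that launches Blender itself.
add_test(
  NAME my_external_driver_test
  COMMAND ${PYTHON_EXECUTABLE}
          ${CMAKE_CURRENT_LIST_DIR}/run_my_driver.py
          --blender "$<TARGET_FILE:blender>"
)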

Which of the 2 scenarios is best for your purpose is up to you. I found the second scenario to match my purposes much better, so I used that.

Please feel free to add/modify/reorganize this document if you believe it can be improved. I do not feel responsible for it; I just provided my insight so far, hopefully for your convenience.

Have fun with testing.

Further Reading

See Blender's Automated Testing project: http://developer.blender.org/tag/automated_testing