
Compositor automated testing
Changes Planned · Public

Authored by Habib Gahbiche (zazizizou) on Sat, Nov 30, 1:30 PM.

Details

Summary

A framework to test compositor nodes. The general idea is to create a node tree, render an image and compare the resulting render to a reference (i.e. expected) image.
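For illustration, a minimal version of this flow could look roughly like the following (the node choice and file paths are only placeholders, not part of the framework):

  import bpy

  scene = bpy.context.scene
  scene.use_nodes = True
  tree = scene.node_tree
  tree.nodes.clear()

  # Build a small node tree: input image -> blur -> composite output.
  image_node = tree.nodes.new("CompositorNodeImage")
  image_node.image = bpy.data.images.load("/path/to/input.png")  # placeholder path
  blur_node = tree.nodes.new("CompositorNodeBlur")
  composite_node = tree.nodes.new("CompositorNodeComposite")
  tree.links.new(image_node.outputs["Image"], blur_node.inputs["Image"])
  tree.links.new(blur_node.outputs["Image"], composite_node.inputs["Image"])

  # Render the composited result to disk so it can be compared against a reference image.
  scene.render.filepath = "/tmp/compositor_result.png"  # placeholder path
  bpy.ops.render.render(write_still=True)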

The test can be called using
BLENDER_VERBOSE=1 ctest -R compositor --verbose

If a test fails, the user can open Blender and inspect the nodes of the failing test. The class OperatorNTests will print a corresponding command that the user can use to open Blender with the failing test.

The tests can be updated using
BLENDER_TEST_UPDATE=1 ctest -R compositor
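For illustration, the update mode could behave roughly like this (the environment-variable handling shown here is an assumption, not copied from the patch):

  import os
  import shutil

  result = "/tmp/compositor_result.png"   # placeholder paths
  reference = "/path/to/reference.png"

  if os.environ.get("BLENDER_TEST_UPDATE"):
      # Update mode: overwrite the reference image with the freshly rendered result.
      shutil.copyfile(result, reference)
  else:
      # Normal mode: fail the test if the render differs from the reference.
      compare_images(result, reference)  # hypothetical comparison helper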

Diff Detail

Repository
rB Blender
Branch
compositor_test (branched from master)
Build Status
Buildable 5869
Build 5869: arc lint + arc unit

Event Timeline

The patch is almost done, but I would appreciate some help regarding these issues:

  1. Blank render when Blender is run in background mode: nodes are created and linked correctly, but Blender always renders a blank image. Saving the blend file (after running in the background) shows no background image. Selecting the same input image (again!) seems to refresh the compositor and give correct results again.
  2. Beautify tree not working: this issue is less important (see comment on line 279).
Inline comment on tests/python/modules/compositor_test.py, line 280:

This is always 0 for all nodes, I'm not sure why...

  1. Blank render when Blender is run in background mode:

Never mind, I just found out this is intentional: https://archive.blender.org/wiki/index.php/Dev:Ref/Release_Notes/2.67/Compositing_Nodes/
I will change the test to write the evaluated image to disk and then load it again to compare with the expected result.

Habib Gahbiche (zazizizou) planned changes to this revision.Sun, Dec 1, 11:24 AM

Rendered images are now written to disk and read back again to compare them with the expected result.
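For illustration, the write-and-compare step could look roughly like this (the paths and tolerance are placeholders, not values from the patch):

  import bpy

  rendered = bpy.data.images.load("/tmp/compositor_result.png")  # placeholder paths
  expected = bpy.data.images.load("/path/to/reference.png")

  if len(rendered.pixels) != len(expected.pixels):
      raise AssertionError("rendered and reference image sizes differ")

  # Allow a small per-channel tolerance for float rounding differences.
  tolerance = 1e-4  # assumed value
  for got, want in zip(rendered.pixels, expected.pixels):
      if abs(got - want) > tolerance:
          raise AssertionError("rendered image differs from reference")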

The class CompositorTest does not check whether the passed node tree is valid; this check is done when CompositorTest.run_test() is called.

Correct me if I'm wrong, but it seems you are assembling the node setup in the code. This sounds like unnecessary overhead, and also doesn't allow catching possible issues in versioning code when node behaviour changes.

I would imagine such tests would be done similarly to Cycles, where there is a set of pre-defined .blend files which are rendered at the minimal required resolution and number of samples.
The script can be used to make an initial set of files. But having the files helps with maintaining the tests, opening them when needed to investigate what's going on, and things like that.

Thanks for the feedback.

Correct me if I'm wrong, but it seems you are assembling the node setup in the code. This sounds like unnecessary overhead, [...]

It was intentional to use code to assemble a node tree. The idea behind it is to make it possible to generate a test case using one line of code (e.g. compositor.py, line 35). I tried to explain the idea behind the framework in T71834.
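For illustration only (the helper name and signature are hypothetical; the actual API in compositor.py may differ), a one-node test case could be declared roughly like:

  from modules.compositor_test import CompositorTest  # import path guessed from the file location

  test = CompositorTest()  # constructor arguments omitted
  # One line per test case: the node type plus the parameters to set on it.
  test.add_node_test("CompositorNodeBlur", {"size_x": 5, "size_y": 5})  # hypothetical helper
  test.run_test()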

and also doesn't allow catching possible issues in versioning code when node behaviour changes.

These are meant to be regression tests, so if a node's behavior changes (e.g. its output or API), the test will fail, since it always compares the rendered image with a reference image.

I would imagine such tests would be done similarly to Cycles, where there is a set of pre-defined .blend files which are rendered at the minimal required resolution and number of samples.
The script can be used to make an initial set of files. But having the files helps with maintaining the tests, opening them when needed to investigate what's going on, and things like that.

I thought it would be harder to maintain .blend files (which are versioned with SVN) than a script (versioned with git). Either way, the script that generates the tests has to be maintained, which comes down to the same effort.
Also, here, no blend files are needed at all to run the test. The blend file ${TEST_SRC_DIR}/compositing/compositor.blend is only used to display the test's results if the user wishes to inspect a test.

In any case, I'm willing to change the approach to use blend files if that aligns better with other Blender tests. Is the rest of the approach otherwise alright?

I thought it would be harder to maintain .blend files (which are versioned with SVN) than a script (versioned with git). Either way, the script that generates the tests has to be maintained, which comes down to the same effort.

In the approach I was describing there is no need to maintain the script: it just takes care of the one-time effort of creating a lot of similar .blend files. This is what I did to create the initial set of Cycles regression tests.

Also, here, no blend files are needed at all to run the test. The blend file ${TEST_SRC_DIR}/compositing/compositor.blend is only used to display the test's results if the user wishes to inspect a test.

That isn't a great approach to automated tests indeed.
Did you see the way we do regression tests for Cycles? It was also used for Eevee (although, due to the specifics of rendering on the GPU, those are not enabled by default yet). I would imagine the compositor can use the same exact approach.

To me it seems easier to understand what's going on when you can actually open a .blend file and inspect it, rather than trying to visualize the graph in your mind when reading code.

Did you see the way we do regression tests for Cycles? It was also used for Eevee (although, due to the specifics of rendering on the GPU, those are not enabled by default yet). I would imagine the compositor can use the same exact approach.

No, but I will have a closer look at those tests and see what I can reuse for the compositor. Thanks for the hint.

To me it seems easier to understand what's going on when you can actually open a .blend file and inspect it, rather than trying to visualize the graph in your mind when reading code.

That might be true for complex node trees (then you could just execute the test with the option to open Blender for inspection), but for simple one-node tests you only have to read the node's name and its parameters (so one line of "code").

But first I will look at the Cycles tests :)

Habib Gahbiche (zazizizou) planned changes to this revision.Mon, Dec 2, 8:01 PM