
GTest perf regressions
Closed, Resolved, Public


Recently we restructured the infrastructure around gtest a little bit, mostly because a handful of tests linked "all of blender", leading to long link times for initial builds and to complaints from developers.

In an effort to increase developer productivity, we solved this by putting all of those tests into blender_test, so we only had to pay those link costs once.

Which was great! And things would still be great if we had stopped there, but we didn't.

I have been somewhat unavailable lately, so I may have missed the reasoning behind some of these choices, but the following things have happened:

1 - The cheaper tests, which did not link "all of blender", have been moved into blender_test as well. If you previously had a standalone test that tested a bit shuffle (or something else independent) and linked quickly into its own executable, you now have to pay the "all of blender" link costs, reducing productivity for most developers.

1a - Now that all tests live in this mega binary, the startup time in the debugger while it resolves symbols has put a damper on things as well (@Howard Trickey (howardt) reported ~10 seconds of debugger startup for what used to be instant). Once more, not great for developer productivity.

2 - Rather than a single invocation of the test binary to run all tests, we now call the test binary once per test. With > 1000 tests, even small costs add up quickly. Starting a process is an inherently expensive activity, and starting a process that has a large number of static initializers (Cycles/USD/Alembic/Mantaflow being the main offenders here; I validated in a profiler that static initializers during CRT startup are one of the main contributors, so this is not "a hunch") just makes it worse. A randomly selected example:

68: Test command: K:\BlenderGit\test_2017_debug\bin\tests\Debug\blender_test.exe "--gtest_filter=guardedalloc.LockfreeAlignedAlloc2" "--gtest_also_run_disabled_tests" "--test-assets-dir" "K:/BlenderGit/blender/../lib/tests" "--test-release-dir" "K:/BlenderGit/test_2017_debug/bin//2.91"

68: Test timeout computed to be: 10000000
68: Note: Google Test filter = guardedalloc.LockfreeAlignedAlloc2
68: [==========] Running 1 test from 1 test suite.
68: [----------] Global test environment set-up.
68: [----------] 1 test from guardedalloc
68: [ RUN      ] guardedalloc.LockfreeAlignedAlloc2
68: [       OK ] guardedalloc.LockfreeAlignedAlloc2 (0 ms)
68: [----------] 1 test from guardedalloc (0 ms total)
68: [----------] Global test environment tear-down
68: [==========] 1 test from 1 test suite ran. (2 ms total)
68: [  PASSED  ] 1 test.
  68/1021 Test   #68: guardedalloc.LockfreeAlignedAlloc2 ...........................   Passed    0.42 sec

This test took 2 ms to actually run, but required 420 ms if you include process startup/teardown time (for debug; in release it is about ~120 ms). Multiply that by 1000+ tests and you get roughly seven minutes of pure process overhead per debug test run. Not good.

So changes we made with good intentions to improve developer productivity have had a tiny bit of the exact opposite effect.

I don't have an easy solution here, but things cannot stay the way they are right now.