There are multiple performance-related projects this year (T73359, T68908, T73360, Cycles). For this we need tools to measure performance for individual changes, and to automatically track performance in master over time.
I suggest building on Open Data, while taking functionality and ideas from the Cycles benchmarking scripts (which are a bit hacky).
The first step is for developers to be able to run tests locally.
- Create a Python API and utilities for implementing tests given a Blender executable, returning JSON data in the Open Data format
- Implement a script to easily run tests locally and present the resulting data
- Implement tests for:
  - Cycles rendering
  - Animation playback
  - Object and mesh editing operators
- Put benchmark .blend files in lib/benchmarks
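As a starting point, the test API above could be sketched roughly as follows. This is a minimal, hypothetical sketch: the function names (`run_benchmark`, `to_json`) and the result keys are assumptions, not the actual Open Data schema.

```python
import json
import subprocess
import time


def run_benchmark(executable, args, test_name):
    """Run an executable with the given arguments, time it, and return
    the result as a dict that could later be mapped onto the Open Data
    JSON format (the keys here are placeholders, not the real schema)."""
    start = time.perf_counter()
    subprocess.run(
        [executable, *args],
        check=True,
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    elapsed = time.perf_counter() - start
    return {
        "test": test_name,
        "time_seconds": elapsed,
    }


def to_json(result):
    """Serialize a benchmark result to JSON for storage or upload."""
    return json.dumps(result, indent=2)


# Hypothetical usage for a Cycles render test, assuming a .blend file
# from lib/benchmarks and a render script driven in background mode:
#
#   result = run_benchmark(
#       "blender",
#       ["--background", "lib/benchmarks/scene.blend", "--render-frame", "1"],
#       "cycles_render_scene",
#   )
#   print(to_json(result))
```

In practice the timing would likely come from inside Blender (e.g. printed by a `--python` script and parsed from the output) rather than from wall-clock time around the whole process, since startup and file loading would otherwise dominate short tests.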
Once that infrastructure is in place, we can automate this further:
- Set up the buildbot to let developers run performance tests on their changes
  - By pushing a branch to a repository (or some other simple mechanism)
  - For a code review
  - Generate a graph comparing performance before and after
- Set up the buildbot to run performance tests every night on master
  - Generate a graph of performance over time
  - Have a way to mark certain commits or benchmark changes as compatibility breaking, so it's clear when numbers are not comparable
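For the before/after comparison, the core computation is simple enough to sketch. This is a hypothetical helper, assuming results are stored as a mapping from test name to time in seconds; negative percentages mean the new build is faster.

```python
def compare_results(before, after):
    """Compute the relative timing change per test between two runs.

    `before` and `after` map test names to times in seconds (an assumed
    storage format). Returns a dict of percentage changes, where a
    negative value means the 'after' build is faster. Tests missing
    from either run are skipped, since they are not comparable.
    """
    changes = {}
    for name, t_before in before.items():
        if name in after and t_before > 0:
            changes[name] = (after[name] - t_before) / t_before * 100.0
    return changes
```

A graph for the review or the nightly tracker could then plot these percentages per test, with compatibility-breaking commits marked as gaps in the series.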
- Make this part of Open Data
  - Where would the code for running this live? The Open Data repository? The tests/ folder in the Blender repository?
  - How do we easily compare multiple revisions? Would the script be responsible for building Blender too (like the Cycles benchmarking scripts)?
  - Who implements this?