See this page for motivation and a description of the concepts:
See this video for a UI explanation and a demonstration of usage.
This proposal aims to improve the usability of Blender's image stabilisation feature
for real-world footage, especially shots with a moving or panning camera. It builds upon
feature tracking to obtain a measurement of 2D image movement. The key improvements are:
- use a weighted average of movement contributions (instead of a median)
- add rotation compensation and zoom (image scale) compensation
- allow picking a different set of tracks for translation than for rotation/zoom
- treat translation, rotation, and zoom contributions uniformly and systematically
- improve handling of partial tracking data with gaps and varying start and end points
- provide a user-definable anchor frame and interpolate/extrapolate data to avoid jumping back to the "neutral" position when no tracking data is available
- support travelling and panning shots by including an intended position/rotation/zoom ("target position"). These parameters are meant to be animated by the user in order to supply a smooth, intended camera movement. This way, the image content stays roughly in frame even when the camera moves completely away from the initial view.
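The first point above can be illustrated with a minimal sketch. This is not Blender's actual implementation; the function name and data layout are hypothetical. It shows how per-track 2D offsets could be combined with a weighted average, so that a track's influence can be faded in and out (e.g. near the start or end of its tracked segment) instead of switching on and off abruptly, as a median over a changing track set tends to do.

```python
# Hedged sketch: combine per-track 2D offsets into one translation
# using a weighted average instead of a median. All names here are
# illustrative, not part of Blender's API.

def weighted_translation(contributions):
    """contributions: list of ((dx, dy), weight), one per enabled track.

    Returns the weighted average offset, or (0.0, 0.0) when the total
    weight is zero (e.g. no usable tracking data at this frame).
    """
    total_w = sum(w for _, w in contributions)
    if total_w == 0.0:
        return (0.0, 0.0)
    sx = sum(dx * w for (dx, _), w in contributions)
    sy = sum(dy * w for (_, dy), w in contributions)
    return (sx / total_w, sy / total_w)

# Example: three tracks; the third is weighted down, e.g. because it
# is fading out near the end of its tracked segment.
offsets = [((1.0, 0.0), 1.0), ((3.0, 0.0), 1.0), ((10.0, 0.0), 0.5)]
print(weighted_translation(offsets))  # (3.6, 0.0)
```

With a median, the third track would either dominate the result or be ignored entirely depending on the frame; the weighted average lets its contribution ramp smoothly.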
A known shortcoming is that the pivot point for rotation compensation is fixed at the translation-compensated image centre. This can produce spurious rotation on travelling shots, which then needs to be compensated manually (by animating the target rotation parameter). There are several possible ways to address this problem, but all of them are considered beyond the scope of this improvement proposal for now.
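To see why the pivot placement matters, consider the basic operation of rotation compensation: rotating every pixel around a pivot point. The sketch below is generic 2D geometry, not Blender code. The pivot is the only fixed point of the transform; every other point moves. If the chosen pivot (here, the translation-compensated image centre) does not coincide with the actual rotation centre of the camera movement, the residual motion shows up as the spurious rotation described above.

```python
import math

# Hedged sketch of rotation compensation about a pivot point.
# Function and parameter names are illustrative only.

def rotate_about_pivot(point, pivot, angle):
    """Rotate a 2D point by `angle` radians around `pivot`."""
    px, py = pivot
    x, y = point[0] - px, point[1] - py
    c, s = math.cos(angle), math.sin(angle)
    return (px + c * x - s * y, py + s * x + c * y)

# The pivot itself does not move under the compensation...
centre = (0.5, 0.5)
print(rotate_about_pivot(centre, centre, math.pi / 2))

# ...but every other point does, so a mis-placed pivot turns a pure
# rotation correction into rotation plus unwanted translation.
print(rotate_about_pivot((1.0, 0.5), centre, math.pi / 2))
```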