
Rework 2D stabilizator
ClosedPublic

Authored by Hermann Voßeler (Ichthyostega) on Jun 6 2014, 4:43 AM.

Details

Summary

See this page for motivation and description of concepts:
https://github.com/Ichthyostega/blender/wiki

See this video for UI explanation and demonstration of usage
http://vimeo.com/blenderHack/stabilizerdemo

This proposal attempts to improve the usability of Blender's image stabilisation feature
for real-world footage, especially with moving and panning cameras. It builds upon feature tracking
to obtain a measurement of 2D image movement.

  • use a weighted average of movement contributions (instead of a median)
  • allow for rotation compensation and zoom (image scale) compensation
  • allow to pick a different set of tracks for translation and for rotation/zoom
  • treat translation / rotation / zoom contributions systematically in a similar way
  • improve handling of partial tracking data with gaps and varying start / end points
  • have a user definable anchor frame and interpolate / extrapolate data to avoid jumping back to "neutral" position when no tracking data is available
  • support for travelling and panning shots by including an intended position/rotation/zoom ("target position"). The idea is for these parameters to be animated by the user, in order to supply a smooth, intended camera movement. This way, we can keep the image content roughly in frame even when moving completely away from the initial view.
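To illustrate the first bullet above, the weighted-average idea can be sketched as follows. This is a simplified Python sketch with hypothetical names, not the actual C code from the patch: each selected track contributes its measured 2D offset, scaled by a per-track weight, instead of taking the median contribution.

```python
# Simplified sketch of weighted averaging of per-track 2D offsets.
# Names and structure are illustrative only, not Blender's actual code.

def weighted_average_offset(offsets, weights):
    """offsets: list of (dx, dy) per track; weights: matching list of floats."""
    total = sum(weights)
    if total == 0.0:
        return (0.0, 0.0)  # no usable tracks: stay at neutral position
    dx = sum(w * o[0] for o, w in zip(offsets, weights)) / total
    dy = sum(w * o[1] for o, w in zip(offsets, weights)) / total
    return (dx, dy)

# A track with weight 3 pulls the result three times as strongly as weight 1:
print(weighted_average_offset([(1.0, 0.0), (5.0, 0.0)], [3.0, 1.0]))  # (2.0, 0.0)
```

Unlike a median, every contributing track influences the result, which is what lets opposing spurious movements cancel out.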

A known shortcoming is that the pivot point for rotation compensation is set to the translation compensated image centre. This can produce spurious rotation on travelling shots, which needs to be compensated manually (by animating the target rotation parameter). There are several possible ways to address that problem, yet all of them are considered beyond the scope of this improvement proposal for now.

Diff Detail

Repository
rB Blender
Branch
stabilizer_rework
Build Status
Buildable 100
Build 100: arc lint + arc unit

Event Timeline

source/blender/makesrna/intern/rna_tracking.c:1760

try to treat a shot with moving camera. Then you'll know why it's needed

Got lost under the pile of work. Will check again within a few days..

no problem. Basically the same situation for me....

Meanwhile I did some testing, which resulted in some bugfixes and additions. I will add them to the patch this evening, and I'm also preparing a screencast to show the reworked version in action.

Cheers!
Herman

Hermann Voßeler (Ichthyostega) updated this revision. Jul 19 2014, 5:51 PM

Bugfixes and round-out

This submission basically completes my proposal -- as a first version.
It is usable now (at least for me) and doesn't lack essential features.
Most notably, we now use animated values properly,
even when calculating frames other than the "current frame".

  • Switch target position from absolute to relative coordinates
  • Bugfix: need to handle pivot point similar for measurement and compensation
  • Bugfix: don't duplicate stabilisation data in track map
  • Bugfix: really use closest enabled marker for track initialisation
  • Animation: access all possibly animated values through getter functions
  • Animation: allocate private working data (API CHANGE)
  • Animation: retrieve values from F-curves
  • use animated F-curve values only when necessary
NOTE: there is an API change in BKE_tracking_data_get() and friends. We need the clip data to remap the frame number for access to animation data.
This revision now requires changes to proceed. Aug 9 2014, 4:11 AM

Question to the reviewers:

shall I rebase my branch against current blender master, or do you prefer to do the review at the point where it is?

(I don't know arcanist well enough: will all the detailed questions and comments be retained when I rebase my branch and add the new rebased branch as a new version?)

@Ichthyostega, that's a good question about the patch. Drop in to #blendercoders on freenode.net IRC if you can and ask Nazg-gul/Sergey personally. I would like to see this project keep moving forward if possible. It may take some persistence and patience on your part, so don't give up. The Sunday dev meetings are a good time to drop in if you can't get hold of someone during the week.

What is the basis for your algorithms for this enhancement code? Is it custom code or are you following some type of stabilization standard? If you are developing original algorithms do you have experience in video engineering? If you are following a standard, do you have reference links?

Core devs always mention maintainability. I am just now looking at your code. When I spend more time reviewing this patch I will post some more comments. I will be looking for any inline comments that will help Sergey maintain the code in the future in case you are not available to do so. This is a commonly mentioned concern by core devs and part of their decision whether to include a big patch like this or not. And I am constantly reminding all devs to do better inline docs for maintainability. : )

Good luck and I look forward to monitoring your progress on this.

Hi,
I have just written a small plugin that extracts the higher frequency components of a track's movement for proper 2d stabilization.
Here

If I understand the part about the expected target position correctly, my addon tries to automatically do something similar to what the user would do by keyframing the target position.
So maybe you could use that idea to add an automatic target position feature that takes the inverted values of a temporally smoothed version of the calculated image shift?
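For illustration, the "temporally smoothed version of the calculated image shift" mentioned here could be as simple as a per-frame moving average. This is a hypothetical one-dimensional sketch, not the addon's actual code:

```python
def smoothed_shift(shifts, radius):
    """Moving average over a window of 2*radius+1 frames (clamped at the ends).
    Inverting the result would yield an automatic 'target position' curve."""
    out = []
    for i in range(len(shifts)):
        lo, hi = max(0, i - radius), min(len(shifts), i + radius + 1)
        window = shifts[lo:hi]
        out.append(sum(window) / len(window))
    return out

# A single-frame jitter spike gets spread out and damped:
print(smoothed_shift([0.0, 0.0, 3.0, 0.0, 0.0], 1))  # [0.0, 1.0, 1.0, 1.0, 0.0]
```

The window radius is exactly the damping trade-off discussed below: a larger radius smooths more jitter but lags further behind intentional camera moves.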

What is the basis for your algorithms for this enhancement code? Is it custom code or are you following some type of stabilization standard? If you are developing original algorithms do you have experience in video engineering? If you are following a standard, do you have reference links?

What I did is not a "textbook solution", but I consider the basic technique of stabilisation so simple that I'd hesitate to call it an "algorithm". It is just a bit of simple vector math. (Of course, we need to be careful to get the sometimes insidious details correct, e.g. the non-square pixels, or the wrap-around with trigonometric functions.)
Also, I'd like to stress that I didn't write something entirely new here. Rather, I dissected the existing code, re-organised and generalised it. From there, I tried to base it on a clearer understanding of what we're doing here: in my understanding, we actually assume a somewhat simplified model of movement, and then we "fit" this model to the measurement data. My main focus is to use a model which is as simple as possible, while still yielding usable results in practice. (Regarding background: yes, I am not a beginner. I have experience as a user of media software, with editing sound and image, and know the production process "from the inside". And of course I have experience in the engineering of editing and video software.)
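One example of the "insidious details" mentioned above is the wrap-around when averaging angles: naively averaging +179° and -179° yields 0° instead of ±180°. A common safeguard, sketched here in Python (the patch itself is C code and may handle this differently), is to average unit vectors and convert back with atan2:

```python
import math

def average_angle(angles, weights):
    """Weighted angle average that is safe across the +/-pi wrap-around:
    sum weighted unit vectors, then convert back with atan2."""
    s = sum(w * math.sin(a) for a, w in zip(angles, weights))
    c = sum(w * math.cos(a) for a, w in zip(angles, weights))
    return math.atan2(s, c)

# Two angles just either side of +/-180 degrees average to +/-180, not to 0:
result = average_angle([math.pi - 0.1, -math.pi + 0.1], [1.0, 1.0])
print(abs(result))  # ~pi
```

A plain arithmetic mean of the raw angle values would return roughly 0 here, pointing in exactly the opposite direction.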

Core devs always mention maintainability. I am just now looking at your code. When I spend more time reviewing this patch I will post some more comments.

Yes, please do.
Especially, I'd like to hear which code you consider hard to read (or easy to read); IMHO naming of variables, using well named helper functions and this kind of stylistic matters is a big factor in maintainability.

I have just written a small plugin that extracts the higher frequency components of a track's movement for proper 2d stabilization.
Here

Tnx for sharing and pointing that out, I'll have a look into that later...

If I understand the part about the expected target position correctly, my addon tries to automatically do something similiar to what the user will do by keyframing the target position.

Yes and no.
Yes, because the goal is the same: we want to support a mobile camera and thus want to integrate the intended movements of the cam. But conceptually, my approach is different. I chose to give the user a tool with full control, since my argument is that the user has full knowledge of the intentions and can just trivially "do the right thing". The tricky part with an automatic approach is how to discern the unwanted from the wanted movements. Inevitably, you need some damping, and thus you'll always be too late with the onset of the movement; you get a tricky balance between triggering on unwanted movements and missing the proper timing of the wanted ones, especially the very subtle and the very aggressive movements.

This is somewhat similar to handling a (mechanical) steadycam, where the key point is to get a feel for the damping and delay of the stabilisation mechanism, and to start the movement the right amount in advance, so the effective movement starts at the right point and to the right degree. So to some degree you spend additional effort and training just to defeat the shortcomings of the automatism. But I do not want to sound dismissive...

So maybe you could use that idea to add an automatic target position feature that takes the inverted values of a temporally smoothed version of the calculated image shift?

...and I think this is indeed a clever approach, but I consider it an optional add-on, which will work really well in some cases and could get in the way in other cases. In addition, I can think of another likewise interesting approach: an improved version of the "auto-scale" feature, which determines the scale factor not based on the whole sequence, but only on a limited number of neighbouring frames. This idea could also be extended to "push the image around", so as to keep the image in frame within a certain limited scale factor.

Actually, as a user, I'd like to have all three methods at my disposal, and I'd also like a feature to fill in the black borders using previous (or later) frames, similar to what VirtualDub does. Thus my intention would be to add these kinds of improvements later as additional features. But first, I'd like to concentrate on the basic stuff and try to reach agreement on the basic feature set needed to get this accepted into the official version.

Hi Ichthyostega, are you confident that your changes answer Sergey's concerns? If so, you should poke him on the bf-committers mailing list and in #blendercoders on freenode.net, especially on Sundays before or after the developer's meeting.

This is very interesting work and it would be a shame to see it not move forward.
Thanks,
JT

Incidentally, I have now been waiting for more than half a year for the continuation of the review. IMHO, I have addressed all the (valid) issues pointed out by Sergey in the initial review. Moreover, I've used this extension a lot in production work, so I think it is more than ready to be put into the hands of the users.

I must add that I myself hadn't cared to remind anyone to continue with the review, since I have more than enough other stuff to care about. I am well aware that Sergey is also overloaded with other responsibilities, so I definitely do not want to blame Sergey.

Such changes are not just about code review. The code still needs work, but that wouldn't be a huge problem to deal with.

What is really lacking here is artist feedback. There was some discussion in bf-vfx, but I never saw feedback from the users-area owners such as @Sebastian Koenig (sebastian_k) and @Sean Kennedy (hype). This is crucial in order to proceed further.

Yes, I should definitely do a review for this. The functionality is much needed! :)

Hello Sergey,

over the last days I got the latest release 2.75a to build, and then rebased my changeset on top of it.

I updated it with arcanist, but seemingly arcanist created a "new differential revision"
https://developer.blender.org/D1402

Hopefully I didn't mess things up, otherwise please give me a hint how to proceed

you can also find this branch at https://github.com/Ichthyostega/blender.git

on branch stabilizer_rework

You've not only created a new revision for the change, but also included loads of changes which are not related to the change at all (it includes all the commits from me, Campbell and other developers).

Make sure you're diffing against the proper branch and that you're updating the revision, not creating a new one.

Hermann Voßeler (Ichthyostega) edited edge metadata.
Hermann Voßeler (Ichthyostega) updated this object.

This is essentially the same patchset as submitted last year,
with the addition of two bugfixes (pixel_aspect handling
and fix for a possible double free).

I have used this feature a lot on production stuff this year
and it's IMHO ready to get tested by a wider audience

This patchset applies against v2.75a and master (as of 14.7.15)

PPA for Ubuntu Vivid (15.04) -- might be helpful for some people to try it out
http://ppa.launchpad.net/ichthyo/blender

What can we do to move this review forward?

The patch set is now rebased on recent Blender 2.77 -- I'll post the update in the next days.

For the Linux users interested in trying out the new feature: recent packages for Ubuntu Vivid (15.04), Wily (15.10) and Xenial (16.04, upcoming LTS) are again published in my PPA http://ppa.launchpad.net/ichthyo/blender

@Sergey Sharybin (sergey) could (nothing guaranteed!) look into this patch for Blender 2.78, if @Sebastian Koenig (sebastian_k) and @Sean Kennedy (hype) could look at it and give a feedback. @Sean Kennedy (hype) needs a Windows build. Can someone help here out with a build? Thanks!

thanks...

regarding the patchset: I've ported it to 2.77, but then decided to wait for 2.77a, and will publish an update rebased on top of current master in the next days

Hermann Voßeler (Ichthyostega) edited edge metadata.

Updated my proposal, now rebased onto current master (2.77a)

Did a bit of style clean-up, but beyond that no code changes.
I still consider this submission ready for review and basically complete.
It is usable (at least for me) and does not lack essential features.

NOTE: there is an API change in BKE_tracking_data_get() and friends. We need the clip data to remap the frame number for access to animation data.
  1. I can't apply your patch, because I get some errors.
error: source/scripts/startup/bl_ui/space_clip.py: No such file or directory
error: patch failed: source/blender/editors/space_clip/tracking_ops.c:3316
error: source/blender/editors/space_clip/tracking_ops.c: patch does not apply
source/blender/makesrna/intern/rna_tracking.c:2125: new blank line at EOF.
+

Also the format of the patch has changed

old patch format:
diff --git a/release/scripts/startup/bl_ui/space_clip.py b/release/scripts/startup/bl_ui/space_clip.py
--- a/release/scripts/startup/bl_ui/space_clip.py
+++ b/release/scripts/startup/bl_ui/space_clip.py

new patch format, which needs a path change to the repository:
Index: release/scripts/startup/bl_ui/space_clip.py
===================================================================
--- release/scripts/startup/bl_ui/space_clip.py
+++ release/scripts/startup/bl_ui/space_clip.py
  1. You should probably build the patch against the latest master. That makes building and testing easier.

Not sure where you got the "patch" from and what the problem with the format is all about. On Unix-like systems, you'd typically use the --strip=1 option for the patch command.

My latest submission is based on master from last Monday (c42a796), which should be sufficiently recent for all practical purposes. I wouldn't be surprised if, right after a release is out, some rework and reshuffling happens and master becomes unstable. For that reason, I based my published packages on the official release tag and used a point on master very close to that (just after the obvious fixes went in). I'd recommend we stick to that anchor point for now; otherwise we'll be chasing a moving target.

If in doubt, please refer to my git repo https://github.com/Ichthyostega/blender.git, branch stabilizer_rework,
which I always take care to keep strictly in sync with what I publish and submit here.

Ok, I tried to apply it again. This is what I do and get in response:

[...]/blender-git/source ((c42a796...))
$ git apply ../../blender-git-patches/D583.id6485.diff
error: source/scripts/startup/bl_ui/space_clip.py: No such file or directory
source/blender/makesrna/intern/rna_tracking.c:1865: new blank line at EOF.
+

The file space_clip.py can't be found because it's in the "release" folder and not the "source" folder.
And the patch is built from the source folder, not the "git root" folder like the first patch, I think.

This is the diff file I use

I was able to apply the previous patch for 2.75 v4645 (rev 2de497e) with no issues, but I am also having similar issues with this new diff. However, when I was able to use it on 2.75, it was fantastic! In the next week I will be uploading some image stabilizations I did from an airplane flight (I put a GoPro under the wing) that I have been working on.

Ah, I see.
You are using the patch created automatically by the arcanist tool we use here for review (which as such is fine).
I must have updated this tool some months ago (I guess when I upgraded my system to Debian/Jessie); arcanist comes from its own Git repo...

Anyway, while that diff is indeed drawn against the Git root, many patch tools (like the original patch command from Unix) by default strip the first directory component up to and including the first slash. I.e.:

  • when the diff contains the path a/release/scripts/startup/bl_ui/space_clip.py this becomes 'release/scripts/startup/bl_ui/space_clip.py'
  • but when the diff contains the path release/scripts/startup/bl_ui/space_clip.py this becomes 'scripts/startup/bl_ui/space_clip.py'

Obviously the latter cannot be applied.
Typically, patch tools offer a parameter to control this behaviour. For the original Unix patch command, this parameter is named --strip -- thus we need to tell patch to strip zero (no) directory components. I just tried this with git master from today (note: everything done from the Git root directory):

git fetch upstream
git checkout -b tmp upstream/master
patch --strip=0 < D583.id6485.diff

@Hermann Voßeler (Ichthyostega)
Patched this against the latest master and everything went smoothly. This was the key:

patch --strip=0 < D583.id6485.diff

Also, I did a first test and compared it with the current 2D stabiliser. It gives a much better result: less jitter and more stability. The additional functionality is very promising. I will test it further.

This is how the GUI changes:

Blender 2.77 (master)
Blender 2.77 (2D stabiliser patch)

For those who are interested, I can provide a Windows 64-bit build (zip, no CUDA).

An update / nag:
Sean Kennedy @Sean Kennedy (hype) did a functionality review some time ago:

https://www.dropbox.com/s/wdzs97xdpjs37rq/stabilizer_problems.mp4?dl=0

I deem this review favourable with respect to UI and handling, but Sean also saw some imperfect stabilisation behaviour due to shortcomings I am well aware of. See my response on the ML (I can repeat the explanation here if you ask me to).

My point is now: shall we aim at getting this behaviour perfect, which means extending this patch and in particular changing compositing nodes and cache keys? Or shall we move this technical review ahead, with the goal of getting this version into people's hands?

Personally I'd favour the latter approach (but I am willing to go for the former if necessary).
The reason is also that this patch has been waiting for so long, years now. It is a significant feature; it makes a problematic function of Blender way more usable. And I believe software is better developed in increments than by aiming at the perfect solution right away.

So I'd really appreciate it if there is a way to get the technical review discussion going.

It's not required to have perfect or ideal behavior as long as it's more usable than the current approach and doesn't cause regressions or bugs.

Perhaps I've missed replies from Sean on the ML, but if he's happy with the new approach I'll give it a code review shortly.

I watched the video from Sean Kennedy @Sean Kennedy (hype), and he uses one track for position and the other for rotation stabilization.
But both tracks (and/or more tracks) should be used for location and rotation. I recorded a short video, where I tested this
with Sean's footage (from https://openvisualfx.wordpress.com). It works as expected.

Maybe one minor point to consider would be the "Expected Rotation" slider. It has a threshold of 0.100, but it can be set to 0,
and this gives unexpected results.

Video (7.5 MB):

Ah, okay, I see. That definitely makes a difference. Indeed it does work. There's still a slight problem that when Zoom is enabled, in my 2 point example, the image drifts upwards. I made a quick video to demonstrate.

https://www.dropbox.com/s/y2vm2ytbqq0e656/stabilizer_problems_02.mp4?dl=0

This is small, though, and perhaps if I test it further on different types of footage, what's happening will become clear to me, allowing me to figure out a way around it.

@Sean Kennedy (hype) I have watched the second video, and you're right. Something seems to be not right...

I asked myself: "Could it be that I had used the stabilization the wrong way?"
So I've tested it further:

Current stabilization (master):

  • it depends on the selected markers for location and rotation (obvious)
  • scale can be set and then the scale influence can be adjusted (and animated!)
  • i.e. I set the scale to 0.800 (shrink) and can adjust it with the influence slider between 0.000 (= actual scale) and 1.000 (= scaled down to 0.800)
  • the same for location and rotation influence
  • but it is very time-consuming to adjust this by hand; also grease pencil strokes are needed, but it is possible (!)

Blender file (no footage):


Video file:

Patch Stabilization (patch):

  • it seems to me that the best solution with the least adjustment comes from tracks that are (more or less) at the same y-position, in the middle of the frame.
  • tracks that are further apart in the y-position also give a less stable solution
  • but I only touched the Expected Rotation, took down the Location Influence, and used no animation
  • it works best with 2 tracks for location and for rotation/scale
  • fewer or more tracks than 2 give unpredictable behaviour
  • and the grease pencil strokes are needed too

Blender file (no footage):


Video file:

So the big question is how to help the artist select the best tracks. An automatic guess of the best settings would help: Influence, Expected Rotation, Expected Position, etc.

It would also help a lot if we could get grease pencil attached to the "canvas" in the movie clip editor while "Display 2D Stabilization" is on. Then the workaround movie clip editor > compositor > image editor would not be necessary.

Ah, okay, I see. That definitely makes a difference. Indeed it does work. There's still a slight problem that when Zoom is enabled, in my 2 point example, the image drifts upwards.

@Sean Kennedy (hype) : your observation is correct. This is the manifestation of the "shortcoming" I was talking about.

To reiterate: the current version uses the image centre as "pivot point" for rotation / zoom. This is a compromise.
There would be several alternatives, probably yielding better results -- however, all of these would require passing the coordinates of that "pivot point" from the stabiliser to the correction or to the compositor nodes. And it would require including that information in the cache key. All of which would massively increase the footprint of the patch.

Thus, the explanation of the weird behaviour is simple: when you enable zoom, we also measure the averaged distance of your probe points from the image centre ("pivot point"). We then apply the inverse scale around the image centre ("pivot point"), which causes the upshift you observed.

In your example, the perfect solution would be to use the weighted centre of the location markers as pivot point, and to apply the rotation/scale around that. But, as said, then we'd have to feed in the coordinates of that point for every frame, which we can't (since the existing API does not allow for that).
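To make the pivot issue concrete, here is a toy sketch (Python, illustrative names only, not the patch's C code) of applying the inverse rotation/scale around a fixed pivot. With the pivot fixed at the image centre, any feature not centred on the pivot gets displaced by the scale compensation, which is exactly the drift observed:

```python
import math

def compensate_around_pivot(point, pivot, angle, scale):
    """Apply the inverse of a measured rotation+scale around a fixed pivot."""
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    inv = 1.0 / scale
    cos_a, sin_a = math.cos(-angle), math.sin(-angle)
    return (pivot[0] + inv * (cos_a * x - sin_a * y),
            pivot[1] + inv * (sin_a * x + cos_a * y))

# Compensating a 2x zoom around the image centre (0.5, 0.5) moves a point
# at the top edge downwards -- an apparent vertical drift:
print(compensate_around_pivot((0.5, 1.0), (0.5, 0.5), 0.0, 2.0))  # (0.5, 0.75)
```

Moving the pivot to the weighted centre of the location markers would remove this drift for those markers, at the cost of passing the per-frame pivot through the API, as described above.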

UI fixes as indicated by @Michael P. (forest-house)

range definition for target rotation and target scale

So the big question is how to help the artist selecting the best tracks.

Agreed. Placement and selection of the right tracks is crucial, and right now it requires some understanding of how the stabilisation works.

Patch Stabilization (patch):

  • it seems to me that the best solution with the least adjustment comes from tracks that are (more or less) at the same y-position, in the middle of the frame.
  • tracks that are further apart in the y-position also give a less stable solution
  • but I only touched the Expected Rotation, took down the Location Influence, and used no animation
  • it works best with 2 tracks for location and for rotation/scale
  • fewer or more tracks than 2 give unpredictable behaviour
  • and the grease pencil strokes are needed too

Some remarks.
Regarding the current version in trunk, you are correct:

  • it is pointless to use more than two tracks
  • the only way to control the result is to animate the "influence" parameters

But my patched version gives you a lot more options here...

  • please note that the patch version really uses the information from all defined tracks. Actually, it uses the (weighted) average. So, if you use more tracks, it's best to have them in roughly the same distance plane. Also, the tracked features should be somehow related to the object you want to keep "in focus". It is a good idea to choose the points in such a way that the spurious, perspective-induced relative movement of these points cancels out through the averaging.
  • the same holds for rotation: you may use the same or completely different tracks, dedicated only to rotation. And again, the rotational contribution of all those tracks is averaged. Thus, it is a good idea to pick the rotation tracks so as to smooth out any spurious rotation component created by perspective movement.
  • the shortcoming of the patch version is that we use a fixed reference point (pivot point) for measurement and compensation of rotation / scale. This explains why you saw the best results when the tracks had roughly the same y-position, but were located on opposite sides of the image.
  • the "expected position" / "expected rotation" / "scale" parameters give you extended control over the result. These values are applied as a "baseline", and the compensation is layered on top. Please note that these can be animated. This allows you to go the "last mile" and manually compensate any remaining residual movement.
  • usually, there is no need to use the "influence" parameters anymore.

I understand the issues now, I think. It seems odd to me that there's no way to stabilize the scale without a pivot point, but I'm not a coder, so I'm not sure how much advice I can give on how best to make this work.

Last year some time, or maybe the year before, Sergey sent me a blend file using only 2 tracks and a bunch of math nodes and was able to stabilize out everything perfectly using just those 2 tracks. I believe he was using the first frame of the track as reference for the entire sequence, getting back a segment length, and using those values to remove the scale perfectly. If it might be helpful, I can upload that project. Wouldn't it make sense that if it can be set up with math nodes, it can be set up in code?

I understand the issues now, I think. Seems odd to me that there's no way to stabilize the scale without a pivot point...

Mathematically, scaling and rotation are two parts of the same thing (a linear transform).

Last year some time, or maybe the year before, Sergey had sent me a blend file using only 2 tracks and a bunch of math nodes and was able to stabilize out everything perfectly using just those 2 tracks.

Of course this works, but note that in this case we only use one piece of information: the difference between those two points. One of the points acts as pivot, and we measure and compensate the distance of the second point to the first.

In the approach taken here (in this patch), if you use two rotation tracks, you use two pieces of information: for each (rotation) track we measure the difference to the (implicit) pivot point. Thus, if those two tracks contain some counter movement (e.g. as created by perspective), it gets cancelled out through the averaging, and only the net effect is evaluated and compensated. Obviously, all of this is much more relevant for rotation; scale/zoom is just a nice takeaway.
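The per-track contribution described here can be sketched as follows (Python, hypothetical names, not the patch's C code): each rotation track reports the angle it has turned around the pivot since the reference frame, wrapped into (-pi, pi], and the contributions are then averaged, so perspective-induced counter movements cancel:

```python
import math

def rotation_contribution(ref_pos, cur_pos, pivot):
    """Angle by which one track has rotated around the pivot since the reference frame."""
    ref = math.atan2(ref_pos[1] - pivot[1], ref_pos[0] - pivot[0])
    cur = math.atan2(cur_pos[1] - pivot[1], cur_pos[0] - pivot[0])
    d = cur - ref
    return math.atan2(math.sin(d), math.cos(d))  # wrap into (-pi, pi]

# A track moving from (1, 0) to (0, 1) has turned 90 degrees around the origin:
print(rotation_contribution((1.0, 0.0), (0.0, 1.0), (0.0, 0.0)))  # ~pi/2

# Two tracks with opposite (perspective-induced) counter movement cancel out:
a = rotation_contribution((1.0, 0.0), (1.0, 0.1), (0.0, 0.0))
b = rotation_contribution((-1.0, 0.0), (-1.0, 0.1), (0.0, 0.0))
print(abs(round((a + b) / 2.0, 6)))  # 0.0
```

The two-track math-node setup, by contrast, uses only the single angle of the vector between the two points, so there is nothing to average and no cancellation of counter movement.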

From the video this seems like nice functionality!

Some preliminary feedback:

  • Seems you still have some runtime-only data in DNA; that is to go away
  • You deprecated some variables in DNA, but did not update all uses of them. If you compile with -Wdeprecated-declarations you'll see quite a reasonable amount of use of deprecated stuff. All those areas are to be switched to the new system.
  • Am I right that you're using the same Weight as used by the camera solver? This might give issues, since you might want a strong weight for the solver but a weak weight for stabilization. Or you might want to animate them differently.

Are there any expected regressions for existing files?

  • Seems you still have some runtime-only data in DNA; that is to go away

As you requested in the initial review, I'm managing my own runtime memory now. But of course, we somehow need to attach that additional data to the stock data in DNA. Thus, we have the fields:

  • void * MovieTrackingMarker::stabilizationBase (per track baseline initialization)
  • void * MovieTrackingStabilization::animated_params (globals for stabilization)
  • additionally, the old implementation used the flag int MovieTrackingStabilization::ok to control the initialization sequence; my reworked version does the same (of course, this could be pushed into a global static variable)

Beyond that, as far as I can see, all other fields added to DNA are indeed intended to be persistent. Or did I miss something?

Can you advise on an alternative / preferred way to attach my private runtime allocation to the persistent records?

  • You deprecated some variables in DNA, but did not update all uses of them. If you compile with -Wdeprecated-declarations you'll see quite a reasonable amount of use of deprecated stuff. All those areas are to be switched to the new system.

Thanks for the hint. Looks like I've missed some more operators. Will fix that!

Question: may I just remove operators which are no longer needed, e.g. MovieTrackingStabilization_rotation_track_get?
Or would it be better to leave such operators in the code, just with a NOP implementation, at least as long as the deprecated rot_track is still in the code?

  • Am I right that you're using the same Weight as used by the camera solver? This might give issues, since you might want a strong weight for the solver but a weak weight for stabilization. Or you might want to animate them differently.

Yes, correct, good point: I re-used the existing per-track weight field.
I'll add a dedicated new field to DNA...

Are there any expected regressions for an existing files?

Do you mean .blend files? They will be migrated to the new settings, AFAIK without loss of functionality. Of course, when opening a new .blend file with an old Blender version, stabilization will not work.

Python scripts: when we remove the deprecated operators (see above), some scripts might be broken.

Beyond that, as far as I can see, all other fields added to DNA are indeed intended to be persistent. Or did I miss something?
Can you advise for an alternative / preferred way how to attach my private runtime allocation to the persistent records?

Basically it should only contain data which is intended to be saved to disk and read back. Keeping runtime data in DNA only caused pain.

Is your runtime data only needed to calculate stabilization parameters? Or is it needed for the whole Blender session lifetime?

Question: may I just remove operators no longer needed? E.g. MovieTrackingStabilization_rotation_track_get.

Or would it be better to leave such operators in the code, just with a NOP implementation? At least as long as the deprecated rot_track is still in the code?

Yes, operations which were relying on something we deprecate and don't want to use anymore should be deleted.

Do you mean *.blend files? They will be migrated to the new settings, AFAIK without loss of functionality. Of course, when opening a new *.blend file with an old Blender version, stabilization will not work.

Mainly asking about backward compat, as in opening old files in new Blender. As long as this gives the same results I am not concerned here.

Breaking forward compat (as in opening a new file in old Blender) is kind of expected to not work.

Python scripts: when we remove the deprecated operators (see above), some scripts might be broken.

That is something we'll just live with. Can't keep improving things and keep full compatibility.

Basically it should only contain data which is intended to be saved to disk and read back. Keeping runtime data in DNA only caused pain.

Is your runtime data only needed to calculate stabilization parameters? Or is it needed for the whole Blender session lifetime?

As said, currently I have only added two pointers to private data in the DNA structs. These are never persisted and are always read back as NULL.

Thus my question: can you advise a better way to attach this private working data?

Do you mean *.blend files? They will be migrated to the new settings, AFAIK without loss of functionality. Of course, when opening a new *.blend file with an old Blender version, stabilization will not work.

Mainly asking about backward compat, as in opening old files in new Blender. As long as this gives the same results I am not concerned here.

My answer yesterday was a bit superficial. We migrate existing settings so they are "logically equivalent". Yet the new stabiliser works differently than the old one. In all cases I saw myself, the stabilisation result after migration was better.

But in theory, you could imagine a situation where you'd need to adjust parameters or even throw out some tracks after migration, in order to get a good result. So yes, this could be counted as a possible regression!

Let me explain in detail...

  • the old stabiliser used a set of tracks only to establish a median. This means the old stabiliser in fact throws away most of the information in additional tracks. You can add a track with weird movements -- as long as the track is in the middle of the flock, this won't matter, since the old stabiliser just ignores the actual track data. By contrast, the new stabiliser calculates an average, and thus every track now has a tangible effect.
  • the old stabiliser used the median as pivot point for rotation measurement, but then applied the rotation around the image centre. The new stabiliser also uses the image centre as pivot point. Thus a migrated rotation track will be interpreted slightly differently, usually more precisely.
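To make the median-vs-average difference concrete, here is a minimal, self-contained sketch of a weighted average of per-track 2D offsets. All names here are purely illustrative stand-ins, not the actual tracking_stabilize.c code:

```c
#include <stddef.h>

/* Hypothetical sketch: a weighted average of per-track 2D offsets.
 * Unlike a median, every track with a non-zero weight has a tangible
 * effect on the result. */
typedef struct TrackSample {
  float offset[2]; /* measured 2D movement contributed by this track */
  float weight;    /* per-track stabilization weight */
} TrackSample;

void weighted_average(const TrackSample *samples, int count, float r_result[2])
{
  float sum[2] = {0.0f, 0.0f};
  float weight_total = 0.0f;
  for (int i = 0; i < count; i++) {
    sum[0] += samples[i].weight * samples[i].offset[0];
    sum[1] += samples[i].weight * samples[i].offset[1];
    weight_total += samples[i].weight;
  }
  /* fall back to a neutral offset when no track contributes */
  r_result[0] = (weight_total > 0.0f) ? sum[0] / weight_total : 0.0f;
  r_result[1] = (weight_total > 0.0f) ? sum[1] / weight_total : 0.0f;
}
```

Note how an outlying track still pulls the result proportionally to its weight, whereas a median would ignore it entirely as long as it stays off-centre of the flock.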
  • rebased changeset against current master
  • addressed the obvious points from recent review (@Sergey Sharybin (sergey))
    • remove any access to deprecated rot_track
    • introduce dedicated field MovieTrackingTrack::weight_stab

No working data in DNA - attach private data via GHash

Bugfix and clean-up

  • resolve problems with clipboard copy/paste of tracks
  • clean-up: use private header

Question @Sergey Sharybin (sergey) :

We now use a GHash in a static variable in tracking_stabilize.c.
In theory, there is a possible race on initialisation. I can't judge how relevant this could be in practice. It would probably require concurrent calls to BKE_tracking_stabilize_frame for two different movie clips.

Do you consider this relevant?

Obviously, syncing such an init is tricky. Is there something like a global init hook in Blender?
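For illustration, the race in question is the classic unsynchronized lazy initialization. A simplified stand-in (all names and types made up for this sketch, not the actual Blender code) would look like:

```c
#include <stdlib.h>

/* Illustrative stand-in for a lazily initialized static hash. */
typedef struct PrivateHash { int entries; } PrivateHash;

static PrivateHash *g_private_hash = NULL;

PrivateHash *private_hash_ensure(void)
{
  /* Unsynchronized check-then-act: two threads calling this concurrently
   * (e.g. stabilizing two different movie clips) could both observe NULL
   * here, both allocate, and one allocation would leak while the callers
   * end up holding different hashes. */
  if (g_private_hash == NULL) {
    g_private_hash = calloc(1, sizeof(PrivateHash));
  }
  return g_private_hash;
}
```

In a single-threaded run this is harmless, which is why the bug only surfaces under concurrent access.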

Eeeh, a global static variable is just as bad as storing runtime data in DNA.

Let's try to make it clear what would be a good design:

  • All stuff which is related to state is allowed to be in DNA.

So, for example, if you're scrubbing across the frame range to calculate the maximum required scale factor, you can (and should) store this in DNA, since it defines state. This state will then be used later when scrubbing the timeline and not re-evaluated (it's only re-evaluated when you move a marker).

  • All stuff which is only required when calculating stabilization parameters should be inside its own context structure which is passed to the stabilization functions.

So, basically, when you start calculating stabilization parameters you create a new context, within which you store all working data. You access/modify this data during evaluation. Once you're done with evaluation you free this context.

Everything which you consider persistent is state and belongs in DNA.

It seems the patch lacks separation between working (runtime) data and state, and tries to use state data as runtime data and vice versa (since your hash is a static global).

For example, storing pointers to FCurve in a static global hash is a bad idea, because such a pointer might easily be invalidated by deleting the corresponding FCurve in the interface. Other pointers in the storage can also easily be invalidated.


In the original implementation I remember having parts which only ran when there were modifications to the markers, to recalculate various things like the auto-scale factor. And there were parts which ran on every frame change to give the actual stabilization parameters at the new frame.

If that's still the case, then separation should be something like this:

  • If you're recalculating any factor when a stabilization parameter is changed or when a marker is moved, we should consider this state and store it in DNA.
  • If that process requires access to various things like animation curves which make sense to be cached somewhere for faster access, this cache should be constructed when you start evaluating stabilization and destroyed once you're done with evaluation. Meaning, the lifetime of this context is really short and not persistent during the application run.
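A rough sketch of that two-way separation, with entirely made-up struct and field names (not the actual Blender DNA or API):

```c
#include <stdbool.h>

/* Persistent state (would live in DNA): recalculated only when a
 * stabilization parameter or a marker changes, saved with the file. */
typedef struct StabState {
  bool ok;         /* precalculated values are up to date */
  float autoscale; /* e.g. the maximum required scale factor */
} StabState;

/* Transient cache: borrowed pointers resolved for one evaluation only,
 * never stored persistently. */
typedef struct StabCache {
  const float *anim_curve; /* stand-in for a resolved animation curve */
} StabCache;

void stab_state_ensure(StabState *state)
{
  if (!state->ok) {
    state->autoscale = 1.25f; /* placeholder for the real recalculation */
    state->ok = true;
  }
}

float stab_evaluate(StabState *state, const float *curve)
{
  stab_state_ensure(state);
  StabCache cache = {curve}; /* cache exists only inside this call */
  return state->autoscale * cache.anim_curve[0];
}
```

The point of the split is that nothing in StabCache ever has to be kept valid across undo, redo, or FCurve edits, because it never outlives a single evaluation.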
source/blender/blenkernel/intern/tracking.c
71

Any reason why we can't share a single private file for the new stabilization functions?

Eeeh, a global static variable is just as bad as storing runtime data in DNA.

... separation should be something like this:

  • If you're recalculating any factor when a stabilization parameter is changed or when a marker is moved, we should consider this state and store it in DNA.
  • If that process requires access to various things like animation curves which make sense to be cached somewhere for faster access, this cache should be constructed when you start evaluating stabilization and destroyed once you're done with evaluation. Meaning, the lifetime of this context is really short and not persistent during the application run.

The data in question does not fall into either of your categories:

  • it is not persistent state, since it is discarded and re-built on initialisation and re-initialisation.
  • it is not context state, because it is not per frame, but initialised once for the whole stabilisation instance

Rather, it is per stabilisation instance and shared over all calculations of that instance.

Please note the following:

  • precalculating and caching only makes sense if we re-use those values on several frames (which we do)
  • persisting and using persisted values would be wrong, because we can not be sure of consistency.

Thus we never use any values from persistent DNA; rather, we do an initialisation pass at startup and whenever an update is triggered. Mostly, updates are triggered via UI redrawing, via the clip_listener. Additionally, the relevant operators set stab->ok = false and thus force an initialisation pass.

NOTE: the original version did the same: it never used persistent values loaded from DNA.

This setup looks sensible, because we can never be really sure about the consistency of state we load from persistent DNA. For one, animation could have been changed. Moreover, anyone can add anything to DNA, and we can never be sure that anyone doing relevant changes to persistent DNA also sets all relevant flags correctly.

I have found and fixed several instances where it was simply forgotten to do the necessary updates. I can't blame anyone for that, since these are non-obvious internals.


Now, how to proceed? We need a decision

  • in my opinion, the previous solution with void pointers was the most sane one. We could reach our per-instance working data via DNA and access it from everywhere without overhead, but never wrote those values persistently to storage.
  • the current solution is just slightly more crappy in my opinion, due to the race when initializing the hash in a static variable. Also, each access to working data has a tiny overhead.
  • instead of a static var, we could write a GHash* into the MovieTrackingStabilization struct in DNA. But frankly, how would this be different from writing a void* at the same place? For that reason, I'd disregard that option.
  • finally, we could revert to the initial submission, which you initially rejected: put all per-instance working data into DNA, yet still discard and re-calculate all of it on the initialisation pass.

Any of these solutions would be easy for me to establish from the changesets, so please let's come to a decision!

source/blender/blenkernel/intern/tracking.c
71

yes, existing code structure is the reason.

Handling of tracks and markers is scattered across various files, especially due to the operators.
We need to clean up and set some flags and adjust counters on various track-related operations. In fact, this has caused bugs in the past; it is easy to overlook for someone not familiar with the internals (I've fixed several of these bugs!).

The old solution placed temporary working data in the DNA, and thus could just mess around with it from everywhere.

Now, how to proceed? We need a decision

Short story

The decision is simple: all the data which can not be re-used across frames should have the lifetime of BKE_tracking_stabilization_data_get(). As in, created before you do the actual evaluation and destroyed right afterwards.

All the persistent data, like the flag whether stabilization is up-to-date or the auto-scale factor (and other things which define the actual *state*), stays in DNA.

Long story

in my opinion, the previous solution with void pointers was the most sane one. We could reach our per-instance working data via DNA and access it from everywhere without overhead, but never wrote those values persistently to storage.

Regardless of where you store the data, there are parts of it which can not and must not be made more persistent than just the evaluation lifetime.

For example, you can not store any pointers. And you do it for FCurves and MovieClip at least. The issue here is that these pointers might (and with high probability will) change in between calls to BKE_tracking_stabilization_data_get().

My guess is that you store such pointers for faster/easier access to things when doing evaluation. This is fair enough, but you must be careful here: if you want this data to persist for the lifetime of the stabilization, you'll end up with nasty code which will need to keep those pointers valid. Undo, redo, fcurve manipulation and so on would all need to take care of synchronizing such pointers.

The easier approach is just not to bother with any of that and limit the lifetime of such data. I don't see how that would be a bottleneck.

the current solution is just slightly more crappy in my opinion, due to the race when initializing the hash in a static variable. Also, each access to working data has a tiny overhead.

It's not just that. It makes it really easy to run into a situation where pointers in the data become invalid. If you simply undo after changing any parameters, Blender will currently crash.

instead of a static var, we could write a GHash* into the MovieTrackingStabilization struct in DNA. But frankly, how would this be different from writing a void* at the same place? For that reason, I'd disregard that option.

You could, but again: you need to decide which data can be, and is worth being, written to any more or less persistent storage, and which is just easier to reconstruct and destroy from BKE_tracking_stabilization_data_get().

Review: introduce per call private working data context

As discussed today in IRC, we resolve the question by treating
all initialisation working data as per call local, i.e. we
perform the init sequence on every frame.

This looks like the better solution in hindsight: management of private
allocations is straightforward now, and tracking_stabilize.c
becomes self-contained again.
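The adopted per-call scheme boils down to a create/evaluate/free pattern. A rough sketch with illustrative names only (not the actual Blender API):

```c
#include <stdlib.h>

/* Sketch of a per-call working data context: created at the start of each
 * stabilization evaluation, used for all lookups within that call, and
 * freed right afterwards. Field names are stand-ins for the real data. */
typedef struct StabContext {
  int track_count;
  float *track_weight; /* example of cached per-track working data */
} StabContext;

StabContext *stab_context_new(int track_count)
{
  StabContext *ctx = calloc(1, sizeof(StabContext));
  ctx->track_count = track_count;
  ctx->track_weight = calloc((size_t)track_count, sizeof(float));
  return ctx;
}

void stab_context_free(StabContext *ctx)
{
  free(ctx->track_weight);
  free(ctx);
}

/* Caller pattern: the context never outlives a single evaluation, so no
 * pointer inside it can be invalidated by undo/redo or FCurve edits. */
float stabilize_frame(int track_count)
{
  StabContext *ctx = stab_context_new(track_count);
  float result = 0.0f;
  for (int i = 0; i < ctx->track_count; i++) {
    ctx->track_weight[i] = 1.0f; /* placeholder for the real init pass */
    result += ctx->track_weight[i];
  }
  stab_context_free(ctx);
  return result;
}
```

Repeating the init sequence on every frame trades a little redundant work for the guarantee that no stale working data can ever survive between calls.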

Rebased against current master

Seems indeed much better. Some crucial feedback before we can commit this and keep cleaning things up in master.

Here is a patch which needs to be applied for proper versioning: http://hastebin.com/kigizovaco. I think it covers all the concerns from the comments.

source/blender/blenkernel/intern/tracking.c
1264

You would crash anyway, in other areas. It shouldn't happen that a track has no markers.

Perhaps we should replace return NULL with BLI_assert() in BKE_tracking_marker_get(), but that we can do later after committing the patch.

source/blender/blenloader/intern/readfile.c
7481

You can not zero it out because it's required for the versioning.

7483

Is this a part of versioning to make existing files work?

source/blender/blenloader/intern/versioning_270.c
1346

No need to check that there is an existing DNA field in the struct. We almost never remove anything from DNA because of versioning.

1368

Same as above, but here it is actually harmful: the versioning code was never run because of misuse of the function.

Hermann Voßeler (Ichthyostega) marked 5 inline comments as done.Aug 15 2016, 2:15 PM

Thanks! I hadn't retested migration after the DNA cleanup; it was a bug to set this field to zero.

Also applied your patch. At the time I started the extension, Tracking support was rather new to Blender, and thus I thought I might guard against versions prior to the addition of Tracking support; hence the second additional clause.

Probably superfluous today, and removing it keeps the code simpler... we might want to discuss this question in IRC though. For now, I've applied the patch and removed those additional lower-bound guards.

Also, I've investigated why I init stabilization.scale.
It is really a migration concern, thus I've moved that code into versioning_270.c.
At that time it was also a display issue, but meanwhile we protect via RNA, so it is no longer possible to set the scale to zero via the UI. Thus it should be sufficient to do this init once on migration (for the cases where the user previously did not use autoscale).

source/blender/blenkernel/intern/tracking.c
1264

Makes sense.
Removed that additional check.

source/blender/blenloader/intern/readfile.c
7481

Thanks! My dumbness...

7483

Kind of versioning-related. Thus I've moved it into the migration code.

Long story: the old version used stabilization.scale only as a value cache for the Autoscale feature. The new version treats Autoscale as a special case of setting an intended "target scale". Thus this setting is now a first-class, persistent value (and is typically even animated). Setting scale == 0.0 can give weird display artefacts; meanwhile we've also blocked it via RNA, so you can't set the scale to zero anymore via the UI.

Moreover, the stabilization code is guarded against zero scale, thus it is now really just a migration concern.

source/blender/blenloader/intern/versioning_270.c
1346

The reason why I added these clauses initially was to guard a lower version bound, i.e. we do not attempt to migrate from Blender versions prior to the addition of Tracking support.

At the time I started my patch, Tracking support was quite new. Moreover, I saw those lower bounds in other example code (can't find it anymore).

Meanwhile Tracking has been in for a long time, so it is debatable whether to just ignore that corner case and keep the code simpler. Thus I've applied your patch.

We should discuss this point in IRC on occasion, though.

Hermann Voßeler (Ichthyostega) marked 4 inline comments as done.

Bugfixes and clean-up in migration code

  • rebased onto current master
  • applied the patch from sergey
  • Bugfixes
This revision was automatically updated to reflect the committed changes.