
hard limits of 120 for fps/fps_base do not limit frame-rate to 120. In practice these can be much higher
Closed, Invalid (Public)


System Information
GNU/Linux (Fedora) 64 bit, Intel CPU, NVIDIA GPU

Blender Version
Broken: 2.78c
Worked: (optional)

Short description of error
scene.render.fps and scene.render.fps_base both have an upper limit of 120, presumably an attempt to limit the frame rate to that number. However, the actual frame rate is a ratio, rate = fps / fps_base, so clamping each value separately does not achieve that: 100/0.1 = 1000, which does not trigger the limit, while 24000/1000 should give 24 fps (no problem), but Blender clamps both values to 120 and actually computes 120/120 = 1 fps.
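The clamping behaviour can be sketched in plain Python (no bpy needed; the clamp ranges below are assumptions based on this report, not Blender's actual RNA definitions):

```python
def clamp(value, lo, hi):
    """Clamp value into [lo, hi], as a hard RNA limit would."""
    return max(lo, min(hi, value))

def effective_fps(fps, fps_base):
    # Each value is clamped independently to an assumed upper bound of 120,
    # so the ratio itself is never checked.
    fps = clamp(fps, 1, 120)                 # integer numerator
    fps_base = clamp(fps_base, 1e-5, 120.0)  # float denominator
    return fps / fps_base

print(effective_fps(100, 0.1))     # 1000.0 - far above the intended 120 cap
print(effective_fps(24000, 1001))  # 1.0    - both values clamped to 120
```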

Exact steps for others to reproduce the error
Here is an example video file: video example
On a linux system, use the following command to get the frame rate settings for the file:

ffprobe -loglevel error -select_streams v:0 -show_entries stream=r_frame_rate -of default=nw=1:nk=1 drinception.mp4

this will give the result 24000/1001.

Now try in a blender console (or the UI):

bpy.context.scene.render.fps, bpy.context.scene.render.fps_base = 24000, 1001

the result will be 120/120, i.e. 1 fps.
I found this while doing a batch script using blender and some input videos.

ugly workaround

# scale both values down by powers of ten until they fit
# under the 120 limit, preserving the ratio
while fps > 120 or fps_base > 120:
    fps /= 10
    fps_base /= 10
bpy.context.scene.render.fps = int(fps)  # fps is an integer property
bpy.context.scene.render.fps_base = fps_base



Event Timeline

Thanks for the report, but there is no bug here; our fps_base is a float value for a reason - it is supposed to remain close to 1.0. Among other advantages, it allows keeping the integer fps value close to its real floating-point value.

Your 'ugly' workaround is hence what's supposed to be done here, though you can express it in a much nicer way, e.g.:

from math import log10

div = pow(10, int(log10(fps_base)))  # power of ten matching fps_base's magnitude
fps /= div
fps_base /= div
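For reference, the same normalisation can be written as a standalone function outside of bpy (a sketch; it assumes both values are positive, and note the divisor needs a base-10 logarithm, since math.log alone would give the natural log):

```python
from math import log10

def normalise(fps, fps_base):
    """Scale fps and fps_base by the same power of ten so that
    fps_base falls below 10, preserving the ratio exactly."""
    div = 10 ** int(log10(fps_base))  # magnitude of fps_base
    return fps / div, fps_base / div

fps, fps_base = normalise(24000, 1001)
print(fps, fps_base)  # 24.0 1.001
```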

Please look at this short thread: I didn't post this for no reason. First I asked on IRC and was told "ask on the ML"; then I posted on the ML and was told "post on the bug tracker, it's a bug". Sergey specifically asked me to assign it to him.
The bug isn't that something is or isn't an int, but that a hard limit is set which does not do what is intended. Clearly, if someone enters e.g. an fps of 100 (legal: an int and under 120) it will work, and then enters an fps_base of 0.001 (also under 120, so it will work too), and they will get an effective frame rate of 100000 - way over the 120 number.

So what is the max limit of 120 doing? Please tell me what it is for. Your answer with the "better workaround" is very intelligent, more efficient code and good math, but it misses the actual point. Should we run this formula every time we enter numbers by hand here too?

So, to reiterate, since I wasn't clear enough before it seems:

When C = a/b and you want C < 120, putting a limit of a < 120 and b < 120 does not do what you want it to do. That is the bug. The fact that you can defend bad math by doing better math to work around it is frankly bewildering.

I don't see the issue here. If Sergey thinks this is a bug, he is free to reopen and fix it the way he likes; changing hard limits of RNA values is beyond trivial, the longest part will be writing the commit message.

To me there is no bug here; we do not necessarily have to cope with all possible ways to represent non-integer framerates, and the current one covers all reasonable values (as you can see from the presets, we represent that crazy NTSC framerate as 24/1.001), hence I close the report.

The issue is that the limit does not work. It tries to limit the final frame rate (fps/fps_base) to 120 by limiting both fps and fps_base to 120. This is mathematically wrong: it allows users to get astronomically high frame rates simply by choosing a tiny value for fps_base (which is a float), and it prevents 1-to-1 conversion of the values commonly reported by actual videos without a workaround (those videos appear to use ints for both numerator and denominator).

I understand that the crazy framerate is crazy; I'm not arguing against the *way* Blender implements it. I'm just saying that this 120 limit is not doing what it is supposed to do, and it is making life harder.

All I suggest is to remove it - however, if you want to implement limit logic that takes the final calculation into account, I would not be averse.
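A limit that takes the final calculation into account would validate the ratio itself rather than its components. A hypothetical sketch of such a check (validate_frame_rate is not a Blender API, just an illustration):

```python
def validate_frame_rate(fps, fps_base, max_rate=120.0):
    """Reject a setting only when the *effective* rate exceeds the cap."""
    rate = fps / fps_base
    if rate > max_rate:
        raise ValueError(
            f"effective frame rate {rate:.3f} exceeds the {max_rate} cap")
    return rate

print(round(validate_frame_rate(24000, 1001), 3))  # 23.976 - accepted
# validate_frame_rate(100, 0.1) would raise: the effective rate is 1000.0
```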

Sure, it does not cause a crash, but it is technically wrong - fails in corner cases. Your response is "it works most of the time, corner cases don't matter"... I'm mainly annoyed because I asked multiple times on IRC and ML "should I report this" and every coder said "REPORT IT" and now here we are. I don't really care, but starting to feel this is turning into a pattern.