Video import/export via FFmpeg: Poor-quality color conversion causes banding
Closed, Archived (Public)

Description

System Information
openSUSE Tumbleweed

Blender Version
Git, 11a9434c2de

Blender’s video import/export through FFmpeg suffers from swscale’s poor-quality color conversion. Any conversion between YUV and RGB results in ugly banding artifacts because the conversion error is large. The only way to mitigate this within FFmpeg is to insert the zscale filter before the format conversion and to choose an appropriate dithering algorithm with it. However, this relies on a separate library (libzimg) that is neither compiled into FFmpeg by default nor packaged by many Linux distributions.
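
For reference, this is roughly what I mean at the command-line level, assuming an untagged HD source that should be treated as limited-range BT.709 (file names are placeholders; options as in an FFmpeg build with libzimg enabled):

```
ffmpeg -i input.mkv \
       -vf "zscale=matrixin=709:rangein=limited:dither=error_diffusion,format=gbrp" \
       frames_%05d.png
```

The dithering happens inside zscale before the drop to 8-bit planar RGB; the final gbrp-to-rgb24 repack for PNG output is lossless.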

Since most output formats only support YUV color spaces, something needs to be done on the export side as well. For conversion from RGB to YUV, zscale additionally requires at least the color matrix to be set explicitly (an explicit setting would be a good thing in general). I’m not even sure how Blender handles this right now. FFmpeg might default to Rec. 601, which would be quite bad: if the stream isn’t tagged appropriately, playback software falls back to resolution-based heuristics (Rec. 709 at HD resolutions), so the colors come out outright wrong.
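
The export direction would look something like the following; the leading format=gbrp is just a lossless repack so zscale gets planar input, and the tagging flags at the end are my best guess at the CLI equivalent of what Blender would have to do through the API (file names and frame rate are placeholders):

```
ffmpeg -framerate 24 -i frames_%05d.png \
       -vf "format=gbrp,zscale=matrix=709:range=limited:dither=error_diffusion,format=yuv420p" \
       -c:v libx264 -colorspace bt709 -color_primaries bt709 -color_trc bt709 \
       out.mkv
```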

Blender could of course also use libzimg internally to do the conversion for FFmpeg. Maybe that’s easier to support. I’m not sure whether additional pixel formats become available that way, but it could perhaps convert directly to something useful to Blender (e.g. high-precision float RGB) to reduce the error.
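
To make that idea more concrete, here is a rough sketch of what the zimg side could look like, converting one 8-bit 4:2:0 BT.709 frame to full-range planar float RGB. The function name is made up, error handling is omitted, and the struct/enum names are from my reading of the zimg 2.x headers, so they may need adjusting:

```
#include <stddef.h>
#include <stdlib.h>
#include <zimg.h>

static int yuv420p_to_float_rgb(const void *src_planes[3], const ptrdiff_t src_strides[3],
                                void *dst_planes[3], const ptrdiff_t dst_strides[3],
                                unsigned width, unsigned height)
{
	zimg_image_format src_fmt, dst_fmt;
	zimg_graph_builder_params params;

	zimg_image_format_default(&src_fmt, ZIMG_API_VERSION);
	zimg_image_format_default(&dst_fmt, ZIMG_API_VERSION);
	zimg_graph_builder_params_default(&params, ZIMG_API_VERSION);

	/* Source: 8-bit limited-range YUV 4:2:0 with the BT.709 matrix. */
	src_fmt.width = width;
	src_fmt.height = height;
	src_fmt.pixel_type = ZIMG_PIXEL_BYTE;
	src_fmt.subsample_w = 1;
	src_fmt.subsample_h = 1;
	src_fmt.color_family = ZIMG_COLOR_YUV;
	src_fmt.matrix_coefficients = ZIMG_MATRIX_709;
	src_fmt.pixel_range = ZIMG_RANGE_LIMITED;
	src_fmt.depth = 8;

	/* Destination: planar 32-bit float RGB, full range. */
	dst_fmt.width = width;
	dst_fmt.height = height;
	dst_fmt.pixel_type = ZIMG_PIXEL_FLOAT;
	dst_fmt.color_family = ZIMG_COLOR_RGB;
	dst_fmt.matrix_coefficients = ZIMG_MATRIX_RGB;
	dst_fmt.pixel_range = ZIMG_RANGE_FULL;
	dst_fmt.depth = 32;

	/* Dithering only matters when the output depth is reduced (e.g. float to
	 * 8-bit YUV on export); it is a no-op for the float destination above. */
	params.dither_type = ZIMG_DITHER_ERROR_DIFFUSION;

	zimg_filter_graph *graph = zimg_filter_graph_build(&src_fmt, &dst_fmt, &params);

	size_t tmp_size;
	zimg_filter_graph_get_tmp_size(graph, &tmp_size);
	void *tmp = malloc(tmp_size); /* zimg prefers 64-byte aligned buffers */

	zimg_image_buffer_const src_buf = { ZIMG_API_VERSION };
	zimg_image_buffer dst_buf = { ZIMG_API_VERSION };
	for (int p = 0; p < 3; p++) {
		src_buf.plane[p].data = src_planes[p];
		src_buf.plane[p].stride = src_strides[p];
		src_buf.plane[p].mask = ZIMG_BUFFER_MAX; /* whole frame is in memory */
		dst_buf.plane[p].data = dst_planes[p];
		dst_buf.plane[p].stride = dst_strides[p];
		dst_buf.plane[p].mask = ZIMG_BUFFER_MAX;
	}

	int ret = zimg_filter_graph_process(graph, &src_buf, &dst_buf, tmp,
	                                    NULL, NULL, NULL, NULL);

	free(tmp);
	zimg_filter_graph_free(graph);
	return ret;
}
```

As far as I can tell, the filter graph can be reused for every frame of the same format, so the setup cost would only be paid once per strip.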

Details

Type
Bug

Just confirmed that FFmpeg defaults to Rec. 601. Bad. This needs a selection on import (defaulting to “Auto”, which should apply the same resolution-based heuristics video players use for untagged files, i.e. Rec. 709 at HD resolutions) and on export (which should auto-select based on the render resolution and also tag the colorspace appropriately in the resulting file; no idea what the API is for that on lavf’s side).
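
For my own reference, this appears to be the relevant part of the API (the fields live on AVCodecContext in libavcodec rather than in libavformat proper, as far as I can tell); the helper name and the exact HD cut-off are mine, not Blender’s:

```
#include <libavcodec/avcodec.h>

static void tag_output_colorspace(AVCodecContext *c, int render_width, int render_height)
{
	/* Same resolution heuristic video players apply to untagged files:
	 * SD is assumed Rec. 601 (SMPTE 170M), HD and above Rec. 709. */
	int is_hd = render_width >= 1280 || render_height >= 720;

	c->colorspace      = is_hd ? AVCOL_SPC_BT709 : AVCOL_SPC_SMPTE170M;
	c->color_primaries = is_hd ? AVCOL_PRI_BT709 : AVCOL_PRI_SMPTE170M;
	c->color_trc       = is_hd ? AVCOL_TRC_BT709 : AVCOL_TRC_SMPTE170M;
	c->color_range     = AVCOL_RANGE_MPEG; /* limited range, the usual default for YUV delivery */
}
```

On the import side the decoded AVFrame carries the same enums (frame->colorspace, frame->color_range, …), with AVCOL_SPC_UNSPECIFIED for untagged streams, which is exactly where the “Auto” heuristic would kick in.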

FYI: For now I’m preprocessing the input by converting it to 24-bit RGB before importing into Blender, but that clearly isn’t enough most of the time. Even 8-bit (per channel) YUV can only be used with heavy dithering, and e.g. 10-bit input (which is getting increasingly common) loses way too much precision. Given FFmpeg’s rather awkward pixfmt support (owing to swscale legacy), using libzimg directly seems like the best option.
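
For the record, the 10-bit variant of my preprocessing step looks something like this (again via zscale; file names are placeholders, and no dithering is needed because the bit depth only goes up; the trailing pack to rgb48 for 16-bit PNG is lossless):

```
ffmpeg -i input_10bit.mkv \
       -vf "zscale=matrixin=709:rangein=limited,format=gbrp16le" \
       -pix_fmt rgb48be frames_%05d.png
```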

Makes me wish mpv’s OpenGL renderer were its own library, could do arbitrary conversions (not just to RGB), and supported offscreen rendering. Maybe in some distant future (the topic has been brought up in the devel channel a few times)…

Sergey Sharybin (sergey) closed this task as Archived. Sep 13 2017, 1:42 PM
Sergey Sharybin (sergey) claimed this task.

Thanks for the report, but it doesn't sound like a bug at all, more like a feature request or an enhancement request, which we do not accept in the bug tracker. There are lots of areas in Blender which could behave better, but which are behaving fully according to the design and are hence not considered bugs.

Such requests for improvements are better addressed to the bf-funboard mailing list, or (what is even better ;) submitted as patches here :)

Alright. My bad. Wasn’t aware. I’ll see if I can come up with a patch or two to address the issues, though, because this is pretty important to me.

Thanks.