Blender’s video import/export through FFmpeg suffers from swscale’s poor-quality color conversion. Any conversion between YUV and RGB introduces large rounding error, which shows up as ugly banding artifacts. The only way to mitigate this within FFmpeg is to insert the zscale filter before the format conversion and choose an appropriate dithering algorithm with it. However, this relies on a separate library (libzimg) that is neither compiled into FFmpeg by default nor packaged by many Linux distributions.
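To illustrate what I mean, a filter chain along these lines (assuming an FFmpeg build with libzimg enabled; option names as in the zscale filter documentation) would dither the conversion down to 8-bit YUV instead of just truncating:

```
# sketch: dithered RGB -> 8-bit YUV 4:2:0 via zscale instead of swscale
-vf "zscale=matrix=709:range=limited:dither=error_diffusion,format=yuv420p"
```

The `dither=error_diffusion` part is what suppresses the banding; `ordered` would be a cheaper alternative.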
Since most output formats only support YUV color spaces, conversion from RGB is unavoidable and something needs to be done about it as well. For that direction, zscale additionally requires at least the color matrix to be set explicitly (an explicit setting would be a good thing in general). I’m not even sure how Blender handles this right now. FFmpeg might default to Rec. 601, which would be quite bad: if the stream isn’t tagged appropriately, playback software falls back to resolution-based heuristics (typically assuming Rec. 709 for HD content), resulting in outright wrong colors.
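For reference, on the command line both problems can be addressed at once by converting with an explicit matrix and tagging the output stream; something like the following (a sketch with placeholder file names, not what Blender currently emits):

```
ffmpeg -i input.mov \
       -vf "zscale=matrix=709:dither=error_diffusion,format=yuv420p" \
       -colorspace bt709 -color_primaries bt709 -color_trc bt709 \
       -c:v libx264 output.mp4
```

With the tags present, players no longer have to guess the matrix from the resolution.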
Blender could of course also use libzimg internally and do the conversion itself before handing frames to FFmpeg. Maybe that’s easier to support. I’m not sure whether additional formats become available that way, but it might be able to convert directly to something useful to Blender (e.g. high-precision float RGB) to reduce the error.