Current handling of sizes (in bytes) in the writer code is rather messy, to say the least: different parts of the code use int, uint and size_t. BHead itself stores the size as an int...
While int should be enough in most cases (it limits a chunk to 2GB at most), we are now hitting some rare issues in practice, see e.g. T78529: Blend file corrupted during save, caused by a high Cubemap Size.
I think we should at the very least use size_t everywhere in those functions, and assert on (or try to handle gracefully) sizes that exceed BHead's int capacity?
Ultimately it might be nice to allow bigger chunks (using int64_t in BHead)? But I am not sure how we could handle that in a way that stays compatible with existing .blend files and older Blender versions?