It works by sampling the image (the number of samples is configurable) and drawing a point cloud of the sampled data. An option to use full resolution is available, but it is slow with a high-res image and both the waveform and vectorscope panels expanded.
Saving the statistics data would be too much data and too slow if using subsampled data is not OK for the preview. Calculating an image of the scope and only displaying it as an OpenGL texture seems to be the best way to go (more like the sequencer scopes). For what it is worth: After Effects seems to use a similar subsampling method.
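The subsampling idea described above can be sketched roughly like this (a minimal illustration only, not Blender's actual code; the `subsample_pixels` helper and the flat pixel-list representation are assumptions):

```python
import random

def subsample_pixels(pixels, width, height, n_samples, seed=0):
    """Pick a fixed number of pixel samples from an image (given here as a
    flat row-major list of (r, g, b) tuples) instead of scanning every pixel.
    The scope then draws a point cloud of just these samples."""
    rng = random.Random(seed)  # fixed seed so the cloud is stable per frame
    total = width * height
    n = min(n_samples, total)
    indices = rng.sample(range(total), n)
    # Each sample keeps its x position so a waveform can plot (x, value).
    return [(i % width, pixels[i]) for i in indices]

# A tiny 4x2 dummy image: 8 grey pixels.
img = [(i * 32, i * 32, i * 32) for i in range(8)]
samples = subsample_pixels(img, 4, 2, 4)
print(len(samples))  # 4 samples, regardless of image size
```

The point is that the cost is bounded by the sample count, not the image resolution, which is why the full-resolution option is the slow path.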
Further (if the design is OK):
-some speed optimisation
-zoom in/out with out-of-range float support
-linear and sRGB colorspaces (for the histogram too)
Here is another patch with some cleanup and graphical changes.
Note that it uses Blender's rgb_to_ycc, which seems to only handle BT.601, and with rounding errors.
Will try to correct the YCC conversion tomorrow.
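For reference, the difference between the two conversions under discussion can be sketched like this (a hypothetical illustration of the standard formulas, not Blender's rgb_to_ycc; the `rgb_to_luma` name and mode strings are made up):

```python
def rgb_to_luma(r, g, b, mode="bt601"):
    """Compute Y' from 8-bit R'G'B' using the Rec.601 luma weights.

    - "jfif": full-range JPEG/JFIF conversion; Y' spans 0-255, no offset.
    - "bt601": studio-range ITU-R BT.601; Y' spans 16-235, i.e. the
      result is scaled into 219 steps and offset by +16. Mixing the +16
      offset into the JFIF path is exactly the kind of bug mentioned
      later in this thread."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    if mode == "bt601":
        return 16.0 + y * (219.0 / 255.0)  # studio range with +16 offset
    return y  # JFIF: full range, no offset

print(round(rgb_to_luma(255, 255, 255, "jfif")))   # 255
print(round(rgb_to_luma(255, 255, 255, "bt601")))  # 235
```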
I don't want to sound negative, this is meant to be constructive, but is the N-key panel really the place for the scopes? They are pretty unusable at the moment, after 3 diffs. :-(
I feel they would be far more usable if they were detachable, scalable, resizable and available horizontally. They are very restricted in the N-key panel at the moment.
A video frame is wider than it is tall, so it makes more sense to have the scopes laid out horizontally; then you stand a chance of seeing them all at a decent size whilst scrubbing through a shot.
Can the scopes represent the aspect ratio of the feed? You can't easily visualise the luma waveform over the video frame if they don't have the same aspect ratio.
Can they update with frame changes of a video? Currently they don't, which kind of defeats some of the purpose of having them; the luma waveform only updates if you click into the histogram, for example.
Can they have graticules? Do they show a full 0 - 255 range whether YCbCr or RGB?
I'm using two 21" CRTs at a desktop resolution of 2560x1024 and I can't see any detail in the scopes; they are far too small.
Can the histogram show Min and Max values, with separate R, G, B and Luma?
Can all the scopes windows have a bit more space around their edge to clearly show the values to the edges before the panel frame?
It's great to see progress on these though all the same.
Thanks for the feedback.
First, my last patch corrected the update problems.
Second, besides the update bug, my patch didn't touch the histogram; the scopes are based on it and work like it. I have already received feature requests for adding luma and separate RGB modes to the histogram, but I guess that is for another patch.
I did not spend a lot of time on grid drawing, especially for the waveform, I agree, but all of this could be changed easily. And I will do it with pleasure if the patch gets a positive review.
I also planned to add resizing and zooming in/out, but again I am not really into spending time before knowing whether the whole thing has a chance of being accepted or not.
As for their position in the P panel (not the N), I think this follows a convention decided at Wintercamp. But I think making this P panel horizontal, with a transparent background, drawing on top of the main area instead of resizing it, was part of the plan.
I will look into making other patches for the histogram, the YCbCr conversion and the panel soon.
Ok, great, looking forward to it. I understand small steps: only do so much, and the code keeps changing too. :-)
I couldn't build patch 3 from svn last night; it failed on a couple of hunks. I'll try again.
Also, Filter Stack failed against latest svn, of course. I wish I'd kept a build with that working; I had one but built over it, and it wouldn't patch after further svn changes. :-(
Is that something you are working on at all? Could you find time to update the filter stack patch for current svn?
Patch fails still. :-(
patching file source/blender/editors/space_image/space_image.c
Hunk #7 FAILED at 613.
Hunk #8 FAILED at 633.
Hunk #9 succeeded at 856 (offset 6 lines).
2 out of 9 hunks FAILED -- saving rejects to file source/blender/editors/space_image/space_image.c.rej
patching file source/blender/editors/interface/interface_draw.c
patching file source/blender/editors/interface/interface_templates.c
patching file source/blender/editors/interface/interface_intern.h
patching file source/blender/editors/interface/interface_widgets.c
Sorry again, but it fails since the rgb_to_ycc patch was added, I think.
Compiling ==> 'interface_draw.c'
source/blender/editors/interface/interface_draw.c: In function ‘ui_draw_but_WAVEFORM’:
source/blender/editors/interface/interface_draw.c:863: error: too few arguments to function ‘rgb_to_ycc’
source/blender/editors/interface/interface_draw.c: In function ‘ui_draw_but_VECTORSCOPE’:
source/blender/editors/interface/interface_draw.c:945: error: too few arguments to function ‘rgb_to_ycc’
source/blender/editors/interface/interface_draw.c:949: error: too few arguments to function ‘rgb_to_ycc’
source/blender/editors/interface/interface_draw.c:968: error: too few arguments to function ‘rgb_to_ycc’
scons: *** [/home/.../Apps/252/build/linux2/source/blender/editors/interface/interface_draw.o] Error 1
scons: building terminated because of errors.
Fantastic stuff Xat.
We should likely set up a wiki page with all of the relevant sequencer patches.
I'd love to test all of these recent ones, but since our tracker is suffering currently, we need some manual legwork to find them all.
I reworked the patch a lot; here are the changes:
- Merged the update function with the histogram for better performance
- Graticules with IRE scale for waveform and vectorscope
- Support for 32-bit float/channel images, without clamping, so we can see out-of-range values
- Waveform can be scaled in/out
- Waveform and vectorscope can be resized
- Histogram, waveform and vectorscope support color management (only sRGB for now)
- Waveform permits viewing YCbCr values according to the JPEG, REC601 or REC709 conversion
- Waveform shows min/max values on the side
- Histogram supports Luma and individual R, G or B modes
- Corrected a small bug in the rgb->ycc conversion (a +16 was left in the JPEG conversion)
- Added luma sampling to the line sample tool.
Great development in scopes Xat.
I hope you don't mind a few comments.
Graticules, I can't seem to alter their colour or brightness in User Preferences.
Resizing the panels vertically is great.
A Min and Max value for each channel and luma would be useful, in numerals underneath or alongside the histogram.
The waveform view remains centered when stretching the panel vertically, but horizontal gaps appear between the samples, along with an increasing gap at the top and bottom of the waveform, so the user then needs to zoom to remove the lines. Seeing lines between samples could suggest that there are gaps in the luma or chroma range, the sort of thing seen when a 16 - 235 to 0 - 255 mapping is done. Could the waveform stretch vertically with the panel? This would also maintain a uniform space above and below the waveform graticules.
The graticules appear to show 0 to 100; if this is IRE, then 100% is 235 white. Could the waveform show the full 0 - 255, which would be a max of 110% IRE? Could the graticules have 70% and 100% dashed lines?
The waveform appears to expand to the full width of the panel box, so the graticules are nearly hidden from view depending on the opacity of the scopes and the density of samples; could the graticule values be kept clear of the waveform, on the left-hand side?
A zoom window to view just a section of the waveform could be useful.
Separate R, G, B and Luma view choices.
I can't seem to zoom this scope so it's difficult to read.
A colour band around the perimeter of the scope, similar to the old VSE one, would be helpful, and the text for the colours appears to be missing.
Now I have to rebuild to see Matt's tweaks.
New patch to use vertex arrays for the vectorscope too.
Some update speedup and code cleaning too.
Thanks for your feedback.
I added a line at 7.5 IRE in the luma waveform, as some asked.
Regarding the whole 235 white point I think there is some confusion here, so let me clear it up:
In luma mode, the 0-100 scale is IRE. In the other modes it is not; the scale represents more the percentage of the whole range (0-255 for those who speak 8-bit).
The 16-235/240 limited range comes from the norms that regulate the digital encoding of analog video (ITU-R BT.601 and 709). These norms also specify the YCbCr colorspace used for encoding. When the waveform is in a mode that uses the 16-235/240 range, the limits are shown by light red lines.
In luma mode, it tries to simulate an actual waveform monitor displaying an analog NTSC/PAL signal in the YUV color space (or YIQ).
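The mapping between the studio-range 8-bit levels and the IRE scale discussed in this thread can be written out explicitly (a hypothetical sketch of the standard BT.601 levels, not code from the patch; the `luma_to_ire` name is made up):

```python
def luma_to_ire(y):
    """Map studio-range 8-bit luma to IRE, assuming BT.601 levels:
    Y' = 16 maps to 0 IRE (black), Y' = 235 maps to 100 IRE (legal white).
    Values outside 16-235 land below 0 or above 100 IRE rather than
    being clipped, which is how out-of-range excursions stay visible."""
    return (y - 16.0) / (235.0 - 16.0) * 100.0

print(round(luma_to_ire(16)))       # 0
print(round(luma_to_ire(235)))      # 100
print(round(luma_to_ire(255), 1))   # 109.1, the "110% IRE" ceiling
```

This is also where the later "255 is 109% IRE" arithmetic in the thread comes from: (255 - 16) / 219 is about 1.091.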
@Xat: "In the luma mode, the scale 0-100 is IRE. In the other mode no, the scale represent more the % in the whole range (0-255 for those who speak 8bit)."
Ok so the inevitable question is:
I recently added a setColorspace parameter to force full-range RGB in the Sequencer / Compositor for YUV-based MPEGs.
The issue I am curious about is how the Sequencer can deal with this or notice it. Do we have internal flags in Blender's anim struct? Does Matt's work on color space (linearizing right now, obviously) have any of the structures in place for dealing with space manipulations?
If so, then the patch likely needs to toggle them (unless it relies on further calls to ffmpeg getColorspace to determine the nature of the space) so that we can tell whether or not we are in 601 (16-235/240/etc) scaling mode.
Fantastic work Xat.
A couple more comments sorry:
In the User Preferences I set the scope windows to black. I can't see the graticule numeric scale or the 7.5% line at all clearly, even scaling/zooming the waveform box to a third of the height of my 21" CRT display.
The 7.5% red line is too dark; is it possible to add a user preference for graticule colour? I think a bright orange would be clearer than mid grey. I think user-preference theming should be considered for some of these displays; it's not always going to be the default mid-grey theme that 2.5 has now.
Also, is it possible to have 70%, 100% and 110% lines?
Waveform Luma mode is still called Luminance. http://poynton.com/papers/YUV_and_luminance_harmful.html
I differ in opinion as to clearing up the white point. The maximum white that can be captured in YCbCr is 255, i.e. 109% IRE. 100% IRE is just the maximum legal white for broadcast, Y 235.
I think that there should be no 16 - 235 to 0 - 255 scaling in Blender, and that YCbCr luma values should be converted to RGB without scaling. The user then sets the black and white points to suit their needs.
For that reason I feel the Waveform should show 110% max not 100%.
Video Demystified, under YCbCr Equations: HDTV, talks of 'Studio RGB, that is a nominal range of 16 - 235 for YCbCr' and 'possible occasional excursions into 0 - 15 and 236 - 255 values.' I'm no video engineer, but I think this is misleading, as it discusses strictly broadcast-legal range only. Cameras regularly, rather than occasionally, capture outside the 'legal' range; in fact many do so deliberately, to maximise dynamic range in a crippled space like 8-bit.
Although I struggle to capture black levels below about Y 10, I can easily, and actually aim to, get white values right up to 255, i.e. as the DV Rebel Guide calls it, the 'Top 10%', those values between 100 IRE and the clipping point at 109%: 'Exposing to the Right'. From personal experience, the whole Canon HVxx range, Canon 5D, 7D and T2i capture full-range luma. Even my old consumer JVC DV cam captures luma to 255.
The blue box in the link below discusses hypergamma 3:
This quote, "Hypergammas allow for knee and ped control, and a skilled DIT/video engineer can manipulate those to fit most shots into the 0-109% “bit bucket” and fill the entire dynamic range. The idea is to fill the bit bucket as fully as possible to preserve the most scene information. (The extra 9% above 100% offers at least an extra stop of highlight latitude, so if you aren’t shooting for broadcast,...."
Shoot for maximum dynamic range, capture as much data off the video camera chip as possible, and pull it within the 16 - 235 range for broadcast in post, should anyone actually be going to broadcast.
From this link, which also includes discussion of the camera sensor being linear, signal-to-noise ratio, etc.
16 - 235 to 0 - 255 RGB clipping.
Blender can import RGB and YCbCr sources with differing levels and composite them together. In the UV Image Editor, if we create a new image, make it white at 255 RGB, and then view the waveform, a white line can clearly be seen at 100% IRE; but RGB 255 is 110% IRE, which is inconsistent.
Sorry Xat I feel I'm splitting hairs but I really value the work you're doing.
Maybe it is too transparent, but note this is only in the waveform in "luminance" mode (which should be named luma, I agree). I suppose theming could be added; I have not looked at it yet.
The waveform in luma mode displays the luma Y' as it is displayed by a real waveform monitor using a composite (non-component) video signal in the real YUV (or Y'UV) color space, not Y'CbCr. This is analog, with no 8-bit coding, so clearly no 255 and 235 ;) You can't actually verify that your colors are safe in the waveform (just the black and white), as the chrominance is modulated over the luminance signal; you would have to verify that the sum/subtraction of both terms stays below 100 IRE. A full blank image shows a line at 100 IRE because it is 100 IRE.
In the YCbCr modes and RGB mode, the scale is not IRE, it is %. When in a mode that uses the 16-235/240 range, a red line shows those limits (and these would represent 0 and 100 IRE on a component signal).
The maximum Y' value possible in Y'CbCr space is 235 for ITU-R BT.601 and 709, while it is 255 for JFIF-JPEG. However, in both cases, when converted to R'G'B' it corresponds to 255 255 255 (in 8 bits/channel). Sometimes the values are slightly over 235 (or under 16), but this is due to rounding or interference, and they must be clipped (not in the JFIF conversion, of course).
Another colorspace is xvYCC, which basically uses the conversions specified by ITU-R BT.601 and 709, except that the footroom (0-16) and headroom (235 or 240 to 255) are all used, to permit a wider gamut. When converting to R'G'B' in this case, the Y'CbCr must not be clipped, and the resulting R'G'B' values can be greater than 255 or less than 0 (so you need to convert to float directly), providing extra-white, super-black and oversaturated pixels.
That said, there is no YCbCr conversion being done by Blender (except for displaying the scopes). For encoding/decoding digital video, this is handled by libswscale, which does not support float, so no xvYCC for now. Regarding the colorspace conversion done by libswscale and the auto-detection (or ignoring) of the source/destination colorspace, I think we must add a way in Blender to choose (force) one, while still letting libswscale handle it (and an info telling which one is auto-detected). But if you encode for broadcasting/hardware playback, you really always use one of ITU-R 601 or 709. What comes out of consumer cameras, or files from the internet, is a big mess in this regard.
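The clip-versus-don't-clip distinction above can be made concrete with a small sketch (hypothetical illustration of the standard BT.601 inverse conversion; the function name and float convention are assumptions, and this is not libswscale or Blender code):

```python
def ycbcr601_to_rgb_float(y, cb, cr, clip=True):
    """Studio-range BT.601 Y'CbCr (8-bit) -> R'G'B' as 0.0-1.0 floats.
    With clip=False, xvYCC-style headroom/footroom excursions produce
    values outside 0.0-1.0 instead of being clamped, preserving the
    super-white / super-black information discussed above."""
    yf = (y - 16.0) / 219.0      # luma: 16-235 -> 0.0-1.0
    cbf = (cb - 128.0) / 224.0   # chroma: 16-240 -> -0.5-+0.5
    crf = (cr - 128.0) / 224.0
    r = yf + 1.402 * crf
    g = yf - 0.344136 * cbf - 0.714136 * crf
    b = yf + 1.772 * cbf
    if clip:
        r, g, b = (min(1.0, max(0.0, c)) for c in (r, g, b))
    return r, g, b

# Y' above 235 stays super-white only if we don't clip:
print(ycbcr601_to_rgb_float(250, 128, 128, clip=False)[0])  # > 1.0
print(ycbcr601_to_rgb_float(250, 128, 128, clip=True)[0])   # 1.0
```

This is also why float support matters: an unclipped result simply cannot be represented in an 8-bit RGB buffer.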
I am an engineer (with 4 years of video experience), but I can still make mistakes.
Your problem is that your camera uses the xvYCC color space (or another one, but without caring about clipping, and that's good), so you want to get this ultra-white data back. As libswscale can't handle it, or clips it, you want to force the JFIF transformation (which uses the whole 0-255 range). It would work OK if Blender allowed you to choose the conversion, except that your black and white points would be a little off (but I guess that is what you want: more room to set them manually later).
You also want to encode using the JFIF conversion, so white 255-255-255 is converted to Y' = 255, to then be decoded by an apparatus that uses the ITU 601/709 conversion, so Y' 255 would be 110 IRE (or clipped).
I did not test your patch yet, but I will soon. I have had this exact same piece of code pasted in my notes from the ffmpeg svn for a few weeks. I suppose it tries to use the YCC conversion specified in the codec, and without your patch it always assumes ITU-R 601.
Don't feel sorry for asking. I am glad if I could clear things up, and happy to get feedback, especially from someone who knows his subject.
In brief, you want what we all want: to use xvYCC for all video (as it is safe to use for ITU 601/709 compatibility), and as Blender/libswscale does not support its wide gamut, you are trying to get this info back by forcing the wrong conversion, which is often the recommended method.
I am thinking of rescaling the YCbCr waveform so that the scale would correspond to the IRE of a component signal. In that case, I suppose I can remove the YCbCr JFIF mode.
Thanks for your patience.
I spotted other user prefs for the scopes, so I thought a choice of graticule colours could be added easily?
Using the waveform in RGB mode especially, the horizontal lines that appear when stretching the panel vertically are quite off-putting, suggesting gaps in the range.
Although the old VSE scope didn't allow vertical resizing, only zoom, it didn't exhibit horizontal gaps unless the source was scaled. :-)
Using the waveform more brings another comment, which is that I struggle with the waveform opacity. Opacity is good for fine detail, but with it comes a lack of brightness, and more samples saturate the waveform. I think it would be useful to be able to set the opacity for fine detail, but then boost the brightness to see that fine detail clearly; hope that makes sense.
Did you see Troy's patch, now in svn? We now have 0 - 255 luma import into Blender for progressive video currently, and hopefully interlaced and psf soon.
Previously I would use, and still may use, AVISynth ConvertToRGB() PC.709 to get full range into Blender as .png image sequences or lossless RGB .avi files. The quality is far better than using FFMPEG. I have yet to test it, as most of my source material is HDV psf or DV. My T2i body comes soon. :-)
I blogged about full range at www.yellowsblog.wordpress.com, where it mentions my original bug report for clipping. As you are a video engineer, I'd value your comments.
Do you think, with Troy's excellent patch, that xvYCC is a reality in Blender now?
Update on the patch using the job manager (it was crashing when opening a blend file with an image displayed in the image editor).
I also changed the waveform mode label from Luminance to Luma, to use the recommended vocabulary.
Added a new patch to port the scope update function to a background job.
Complete rewrite, because the previous method was causing OpenGL to crash randomly (modifying a vertex array in another thread while drawing causes crashes in code like glScissor() or glColor(), even when the vertex array allocation is done outside the thread; I suppose this is due to the marshaling of system calls between the GL lib and the driver).
So now I duplicate the buffers and swap them at the end of the job.
Still, the drawing function is called twice (once when the image has changed but the scope update thread is not finished yet, and again when the thread finishes).
I introduced a hack to avoid flickering on frame changes, but a better solution would be to not redraw the scopes' ARegion when it is not needed; I never managed to get that working.
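The duplicate-and-swap approach described above can be sketched like this (a minimal Python illustration of the pattern, not the actual C/OpenGL patch; the `ScopeBuffers` class and the placeholder "scope" calculation are made up):

```python
import threading

class ScopeBuffers:
    """Double-buffering sketch: the job thread fills a private back
    buffer while the UI reads the front one; only the pointer swap
    happens under a lock, so drawing never sees a half-written array."""
    def __init__(self):
        self.front = []          # what the drawing code reads
        self._back = []          # what the job thread writes
        self._lock = threading.Lock()

    def update_job(self, pixels):
        # Heavy scope computation happens on the private buffer,
        # with no lock held (placeholder calculation here).
        self._back = [p * 2 for p in pixels]
        with self._lock:         # swap at the end of the job
            self.front, self._back = self._back, []

    def draw(self):
        with self._lock:
            return list(self.front)

buffers = ScopeBuffers()
t = threading.Thread(target=buffers.update_job, args=([1, 2, 3],))
t.start()
t.join()
print(buffers.draw())  # [2, 4, 6]
```

The key property is that the drawing side never touches the buffer the job thread is writing, which is what avoids the mid-draw modification crashes described above.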
I am an open-source enthusiast, currently working on some camera-related image processing, i.e. calculating a waveform and vectorscope for an image (pixel data is given in packed RGGB form, 4096x3072 pixels). I am not able to work out what my approach to calculating these tools should be; please help me figure it out.
Please just suggest an approach for how I should calculate the waveform and vectorscope.
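One possible starting point (a rough sketch under stated assumptions, not a tested pipeline): collapse each 2x2 RGGB Bayer cell into one RGB value, then a waveform is just a scatter of (column, luma) points, and a vectorscope would plot the chroma components per pixel. The function names, the naive 2x2 demosaic, and the Rec.601 luma weights are all assumptions here:

```python
def rggb_to_rgb(bayer, width, height):
    """Naive demosaic sketch: collapse each 2x2 RGGB cell into a single
    RGB pixel, averaging the two green samples. 'bayer' is a flat
    row-major list of raw sensor values."""
    out = []
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            r = bayer[y * width + x]
            g1 = bayer[y * width + x + 1]
            g2 = bayer[(y + 1) * width + x]
            b = bayer[(y + 1) * width + x + 1]
            out.append((r, (g1 + g2) / 2.0, b))
    return out

def waveform_points(rgb, width):
    """Waveform = scatter of (column, luma) per pixel, using Rec.601
    weights. A vectorscope would instead plot (Cb, Cr) per pixel."""
    return [(i % width, 0.299 * r + 0.587 * g + 0.114 * b)
            for i, (r, g, b) in enumerate(rgb)]

px = rggb_to_rgb([10, 20, 20, 30], 2, 2)  # one 2x2 RGGB cell
print(waveform_points(px, 1))
```

For a 4096x3072 sensor, you would almost certainly subsample the pixels (as discussed at the top of this thread) rather than plot every point.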