
This branch contains a set of features aiming at mixing video stream with BGE scene as efficiently as possible. 1) Video capture with DeckLink cards 2) Video keying with DeckLink cards 3) Offscreen render...
Needs Review · Public

Authored by Benoit Bolsee (ben2610) on Nov 15 2015, 6:01 PM.

Details

Summary

It covers 3 types of video interactions:

  1. Video capture with DeckLink cards
  2. Video keying with DeckLink cards
  3. Offscreen render and frame buffer extraction.

The last set of features is hardware agnostic: it provides the means to
send BGE custom renders to external video devices (e.g. Oculus Rift) with the
highest possible efficiency.

Read the wiki to know more about the branch:
http://wiki.blender.org/index.php/Dev:Source/GameEngine/Decklink

The Python API documentation is up to date in the branch.

Diff Detail

Repository
rB Blender
Branch
decklink

Event Timeline

There are a very large number of changes, so older changes are hidden.
  • BGE: Set render flag true on game start.
  • BGE: Fix offscreen render on texture.
  • BGE: Display shadow when default render is off.
  • BGE: Add GL synchronization on ImageRender.refresh() without buffer.

Updating D1618: This branch contains a set of features aiming at mixing video stream with BGE scene as efficiently as possible.

  1. Video capture with DeckLink cards
  2. Video keying with DeckLink cards
  3. Offscreen render...

The offscreen buffer pipeline is much better now, thanks. It runs smoothly in Oculus in my tests here.

Another bug: if I change the scale of the camera (e.g., from 1.0 to 5.0) the lighting of the scene changes drastically. As an example take the file, remove all the logic bricks and play with the main camera scale. (Note: the bug affects the main drawing; I didn't even test this situation for the offscreen rendering.)

@Dalai Felinto (dfelinto): It seems to be a side effect of the Blender GLSL material. I'll try to figure out where it's coming from, but fixing this will in any case require changing the GLSL node engine, which I'm not keen to do. A few remarks first:

  • I implemented scaling on the camera mostly (and only) to support negative scale for vertical flip at zero CPU cost. When using a Y scale of -1 there is no lighting modification, hence it's a perfect flip.
  • One should only scale X and Y; there is no point in scaling all 3 axes at the same time: a matrix side effect makes this ineffective.
  • When the X and/or Y scale is greater than 1 in magnitude, the luminosity decreases. Physically it makes sense though: zooming on an object in the real world also decreases the luminosity of the image.
  • To get a zoom effect in the BGE, the traditional and best way is to change the camera sensor size.

For these reasons I would not call this a bug and would leave it like that. The documentation should say that scaling on the camera object is only supported in the BGE for a -1 scale to obtain a flip effect; a minimal sketch follows.
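
A minimal Python sketch of the supported use (untested here; it assumes a running scene with an active camera):

	import bge

	# Vertical flip at zero CPU cost: give the active camera a Y scale of -1.
	# Any other scale value is unsupported and affects lighting as noted above.
	cam = bge.logic.getCurrentScene().active_camera
	cam.localScale = [1.0, -1.0, 1.0]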

Good evening. I've been trying this branch, and I think I found a bug (or maybe I'm doing something wrong). When I try to stream video to a texture from my DeckLink card, I get only black on my texture (or white if using texture mode)... The status indicates video is playing, but when I test the framerate it shows 26 (I'm using PAL, and it should read 25). Could this have anything to do with video not showing up on my texture?

I'm using tex.source = vt.VideoDeckLink("pal/v210", 0) to define the input as PAL, but I have failed to get it to work...

Is there a way I can upload my test file to see if it is my mistake?
Thanks in advance...

@Nuno Estanqueiro (nuno.estanqueiro): Hi Nuno, thanks for testing the branch. As I explained in the documentation, I only tested it with the Decklink 4K Extreme, with the hope that it works with all the DeckLink cards as they share the same API. You can upload files by simply dragging and dropping them in the comment text area as you type (the cloud icon in the menu bar will remind you). Send your blend; I'll be happy to check it out.

A few things to note:

  • Make sure that you are using Desktop Video version 10.4 or above
  • Did you check that v210 is the pixel format that the card produces? You must find which pixel format the card is using and use it, otherwise the texture is void.
  • You need to activate a shader to convert the video frame to RGB on the GPU, as shown in doc/python_api/examples/bge.texture.2.py (see the sketch after this list)
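
For reference, a minimal capture setup could look like the sketch below (untested; it assumes a material that uses an image named video.png and an input that really delivers PAL in 8-bit YUV):

	import bge
	from bge import texture as vt

	cont = bge.logic.getCurrentController()
	obj = cont.owner

	if not hasattr(bge.logic, "video"):
	    # Attach a Texture object to the material slot showing video.png.
	    matID = vt.materialID(obj, "IMvideo.png")
	    bge.logic.video = vt.Texture(obj, matID)
	    # Card 0, PAL mode, 8-bit YUV ('2vuy'); must match the actual input.
	    bge.logic.video.source = vt.VideoDeckLink("pal/2vuy", 0)
	    bge.logic.video.source.play()

	# Upload the latest captured frame to the texture every logic tick.
	bge.logic.video.refresh(True)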

Good Evening Benoit,

I'm using a Decklink Extreme 2, a bit old, but quite capable of using v210, as stated by the sample application DeviceList (Supported video input display modes and pixel formats: PAL 720 x 576 25 FPS 8-bitYUV 10-bitYUV). I am using Desktop Video 10.4.1 on a Linux machine. First I tried with your sample py, with no success, and then I changed it to try to get simple 8bitYUV, also with no luck... I get the shader to compile and link, but see no texture on it. The shader is taken from the DeckLink sample LoopThroughWithOpenGLCompositing. Also, when streaming the game engine to the output of the card, the framerate just collapses (0.4 FPS at a 720x576 frame size). This is my sample Blend


I would really appreciate it if you could try it out and see if it works for you, or if I'm doing something obviously wrong...

Tomorrow I will test it on a Decklink Quad to see if the problem is in any way related to this card/computer.

Thank you very much for your time and effort on this, which is very useful to me... and I hope at least the transparency on the game engine and the offscreen render make it to trunk.
Nuno Estanqueiro

@Nuno Estanqueiro (nuno.estanqueiro): I started to look at your file. I didn't test it yet but I noticed one error in the shader: the texture should not be sampled as RGBA but as RED_INTEGER, as stated in the documentation: "Only the '8BitARGB' and '8BitBGRA' pixel formats have an equivalent in OpenGL. Other formats are sent to the GPU as a 'GL_RED_INTEGER' texture (i.e. a texture with only the red channel coded as an unsigned 32 bit integer)".
This means that the shader should actually be the following (I didn't test it but you get the idea):

		#version 130
		uniform sampler2D tex; 
		void main(void) 
		{
			vec4 color;
			float tx, ty, true_width, Y, Cb, Cr; 
			unsigned int r;
			tx = gl_TexCoord[0].x; 
			ty = gl_TexCoord[0].y; 
			r = texture(tex, vec2(tx,ty)).r;
			true_width = float(textureSize(tex, 0).x) * 2.0;
			if (fract(floor(tx*true_width+0.5) * 0.5) > 0.0)
				Y = float((r >> 24U) & 0xFFU);
			else
				Y = float((r >> 8U) & 0xFFU);
			Cb = float(r & 0xFFU);
			Cr = float((r >> 16U) & 0xFFU);
			Y = (Y - 16.0) / 219.0; 
			Cb = (Cb - 16.0) / 224.0 - 0.5; 
			Cr = (Cr - 16.0) / 224.0 - 0.5; 
			color.r = Y + 1.5748 * Cr; 
			color.g = Y - 0.1873 * Cb - 0.4681 * Cr;
			color.b = Y + 1.8556 * Cb;
			color.a = 0.7
			gl_FragColor = color; 
		}

Apart from that your file looks good. I will now try to produce a PAL video stream and test your blend for real.

@Nuno Estanqueiro (nuno.estanqueiro): I managed to configure a PS3 to produce PAL output and to get it into the Decklink 4K Extreme with a bit of cabling. I confirm that it works with PAL/8bitYUV because I get the video in 'mediaexpress', but in the BGE the texture is just green. I'll investigate and let you know.

Hello Benoit,

Same here, and with the Decklink Quad as well, although I had to change one line in the corrected shader to get it to compile... it was complaining about

error C7011: implicit cast from "float" to "uint"

so I changed line 31 to

r = uint(texture(tex, vec2(tx,ty)).r);

That got the shader to compile, but it gives me green as well (and MediaExpress reads video on that input), so the next thing for me to try is testing with HD video to see if I get a picture.

I'll post feedback on that later today if I can...

Thank you for your time!

Please note that there were actually 2 errors in the shader (note the 'usampler' that must be used for an integer texture). Yet that doesn't solve the issue. Next step is to start debugging.

		#version 130
		uniform usampler2D tex; 
		void main(void) 
		{
			vec4 color;
			float tx, ty, true_width, Y, Cb, Cr; 
			uint r;
			tx = gl_TexCoord[0].x; 
			ty = gl_TexCoord[0].y; 
			r = texture(tex, vec2(tx,ty)).r;
			true_width = float(textureSize(tex, 0).x) * 2.0;
			if (fract(floor(tx*true_width+0.5) * 0.5) > 0.0)
				Y = float((r >> 24U) & 0xFFU);
			else
				Y = float((r >> 8U) & 0xFFU);
			Cb = float(r & 0xFFU);
			Cr = float((r >> 16U) & 0xFFU);
			Y = (Y - 16.0) / 219.0; 
			Cb = (Cb - 16.0) / 224.0 - 0.5; 
			Cr = (Cr - 16.0) / 224.0 - 0.5; 
			color.r = Y + 1.5748 * Cr; 
			color.g = Y - 0.1873 * Cb - 0.4681 * Cr;
			color.b = Y + 1.8556 * Cb;
			color.a = 0.7;
			gl_FragColor = color; 
		}

@Nuno Estanqueiro (nuno.estanqueiro): I found two issues:

  1. There is an error in the vertex shader; you must also pass the texture coordinates if you want them available in the fragment shader:
#version 130
void main()
{
   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
   gl_TexCoord[0] = gl_MultiTexCoord0;
}
  2. The Intel GPU that I am testing with has a problem with the GL_R32UI internal texture format. I am using this format in VideoTexture to pass non-RGBA video frames to the GPU because it makes it easy to extract color components with bitwise operations, but it seems incompatible with Intel GMA. I switched to GL_RGBA (as YUV is somewhat compatible) and then it worked. However, this doesn't solve the problem for the more complex pixel formats such as r210 and v210. I'll see if I can find a method that works for Intel and nVidia.

Note 1: GL_R32UI has been tested with nVidia previously and I know it works. So if you are using an nVidia graphics card, the fix in the vertex shader is all that you need because, other than that, the frames are correctly sent to the GPU.

Note 2: VideoDeckLink reports framerate=26 for PAL. This is due to a formula that adds 1 to the frame rate reported by the Decklink SDK. I wrote this formula, so there must be a reason, but I don't remember it. I'll check it.

Hi @Benoit Bolsee (ben2610), some comments after talking with @campbell: it would definitely help the review if we split the code into at least a few chunks/patches:

Small fixes (atomic ops, camera scale, ...) could each be sent as a separate review, and moved to master as soon as they are tested and peer reviewed.

Big features (DECKLINK, GPU_DIRECT, X11_ALPHA) could also be split and tackled separately. Their WITH_... could even be removed from cmake, but for now this will help split the code.

Offscreen related features (meaning all the remaining code in the branch that is not under a WITH_...) could be separated too. I personally feel that this part of the code is more finished and tested, and will go to master sooner than the rest.

What do you think? Is it something you can handle? Need help with this code splitting?

@Benoit Bolsee (ben2610): After correcting the shader, I still don't have an image showing on my texture... I'm trying this on two different machines, both on Linux, since my Windows build fails at link time, complaining about DVP... On one machine (at work) I have a Decklink Quad with an NVIDIA GTX 970... at home I have an older Decklink Extreme 2 with an NVIDIA 8400 GS...

I've tried changing the test file, combining your original one with the corrected 8BitYUV version, to see if I could get some picture in other pixel format combinations... but still nothing. Only green.
I'll append it here so you can try it... Are you using Windows or Linux?

If this file works for you, then maybe the problem is my build, or my setup...

Thank You

I did extensive video capture tests with the Decklink 4K Extreme. I tested all combinations of:

  • GPU: nVidia Quadro K4000 and Intel HD2000
  • Video format: PAL/HD720p
  • Video input: Composite/HDMI/SDI
  • OS: Linux Mint 17 (Ubuntu 14.04)/Windows 8
  • GPU Direct: with and without

I managed to get the video capture to work in all cases provided that:

  1. The video and pixel format match exactly the video stream. I used MediaExpress to detect the video and pixel format (RGB or YUV); see the sketch after this list. In this example, although the capture is set to store RGB in the capture file, the actual input is YUV (and thus YUV must be used in the BGE), as visible in the Edit->Preference panel:
  2. HDCP (i.e. encryption) must be off on HDMI input because the Decklink cards are not HDCP compatible. I used a GeoBox G-303 device to remove HDCP on the HDMI output of my PS3.
  3. The video frame must be mapped to a GL float texture. This is because the Intel GPU will not support integer textures, although it says it does (the GL_texture_rg extension is reported available). Only a limited number of pixel formats can be mapped to a float texture without host bit manipulations, and it depends on the endianness of the CPU. Here is the exhaustive list for little-endian CPUs (e.g. Intel): 8BitYUV (2vuy), 10BitYUV (v210), 8BitBGRA, 8BitARGB, 10BitRGBXLE (R10l). All the other pixel formats are sent as unsigned int to a GL_R32UI texture and bit ops on the red channel are needed to extract the original color channels. Hopefully this will not be needed, as the list is normally sufficient for all practical applications.
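
A small sketch of point 1 (hypothetical values; tex is an existing bge.texture.Texture as in the scripts above):

	from bge import texture as vt

	# Both halves of the "<mode>/<pixel>" string must match what MediaExpress
	# detects, e.g. PAL in 8-bit YUV; otherwise VideoDeckLink throws an exception.
	try:
	    tex.source = vt.VideoDeckLink("pal/2vuy", 0)  # card 0
	    tex.source.play()
	except Exception as e:
	    print("DeckLink capture failed:", e)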

Notes:

  • If the source is YUV (RGB), the capture is likely to work with any of the YUV (RGB) pixel formats. This depends on the model of the Decklink card though; some cards may not be able to convert.
  • The source code had to be modified to use float texture format. Update the branch and recompile.
  • The pixel shaders in the documentation are no longer valid. Here is a blend that covers all the above cases and works with the current branch revision:
  • Before testing in the BGE, always use MediaExpress to detect the video and pixel format. If you can see the video in the 'log and capture' window, it will also work in the BGE.

@Nuno Estanqueiro (nuno.estanqueiro): I didn't test your blend because it won't be valid anymore after the change to float textures. Please test the blend provided herein and follow all the advice. Let me know if it works for you. I can provide you with a Windows 64 build if you want.

  • Decklink: use floating point texture format when possible.
  • Decklink: fix framerate calculation.

Updating D1618: This branch contains a set of features aiming at mixing video stream with BGE scene as efficiently as possible.

  1. Video capture with DeckLink cards
  2. Video keying with DeckLink cards
  3. Offscreen render...

Updated my source and rebuilt, tried your new file, still without success. I did double-check the input formats, and I receive video in MediaExpress, but it still only shows green in the BGE... I see the card output on the video monitor while the BGE is running, but not in Blender.
I would like to try your Windows build on my machine, to see if it is related to my hardware... It will take a couple of days because I'll have to install a fresh Windows on that machine, but it is worth trying.
Thank you.

@Nuno Estanqueiro (nuno.estanqueiro): Here is my Windows 64 build
And my Linux 64 build (Mint 17/Ubuntu 14.04) just in case. Note that it depends on the external libraries OIIO and OpenEXR, which must be compiled separately with 'install_deps.sh'.

I see in your earlier comments that you are using 10.4.1. I'm still on 10.4 and Blender is compiled with the 10.4 SDK.
I will install Desktop Video 10.4.1 to see if this is the issue.

@Benoit Bolsee (ben2610) In the meantime I updated to Desktop Video 10.5.2 to see if it would make a difference, and no... all the same.
I will now try your versions. If I recall, OIIO and OpenEXR failed to build on my distro, and I'm building without them... could that be the cause? I'll post soon about my progress.
Thank you so much for your advice, and files.

@Nuno Estanqueiro (nuno.estanqueiro): I tested with 10.4.1 and 10.5.2 under Windows and Linux (PAL composite input): it all works in the BGE without recompilation (hence using the 10.4 API). I was expecting this, but still it's nice to see that it works. Note that I had to upgrade the firmware of the card after the installation of 10.5.2; going back to 10.4 might be difficult, but since it works as is I might just stick to it.

The lack of OIIO and OpenEXR should not have any impact on the BGE: the DeckLink API is very low level, it accesses the DeckLink dll directly, and the transfer of the frame to the card is plain OGL. You might want to verify the fundamentals: does VideoTexture work with still images (using ImageFFmpeg as source)? See the sketch below.
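
A sanity check along those lines could be as small as this sketch (the image path is hypothetical and tex is the usual bge.texture.Texture):

	import bge
	from bge import texture as vt

	# Verify the VideoTexture plumbing with a still image before using DeckLink.
	tex.source = vt.ImageFFmpeg(bge.logic.expandPath("//test.png"))
	tex.refresh(True)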

Another thing you might want to try: the DeckLink SDK comes with source code examples. There is a 'Capture' utility that uses just the same API as the BGE. You might want to recompile it (or use the provided binary) to see if it works. Don't forget to test with the DeckLink SDK 10.4 to match the BGE, even if you have installed DesktopVideo 10.5.2.

@Nuno Estanqueiro (nuno.estanqueiro): any success with your tests? I would really love to see the branch working with your card.

@Dalai Felinto (dfelinto): splitting the code is all very good but I fear that if the decklink stuff is put aside, it will stay in that state for an indefinite amount of time. I would instead like to see the branch merged into trunk as soon as possible for the following reasons:

  • All features in this branch have been well tested, by myself and by the customer using them. Given my track record, this should mean something.
  • The decklink part was less tested but Nuno's experiments forced me to go through an in-depth review and make some important changes. I consider this part now stable.
  • Despite this, Nuno had problems making it work with his card. It is quite possible that it is a compilation problem. If the decklink part is in trunk, certified builds will automatically be available and more people will be able to test.
  • The Decklink part is completely detached from the rest of the code: it does nothing if you don't use it.
  • Same is true for the gpu direct stuff that is currently only used by the Decklink objects.
  • None of the features in the branch affect Blender, everything is in the BGE and it is commonly agreed that updates to the BGE are less critical.
  • I've done all the coding with backward compatibility in mind so that the risk of impact on existing blends is nil, even in the BGE. For example, I implemented the X11 alpha stuff such that it only works in the player and only via a command line option. A more integrated implementation would be in the user preferences or in the game settings panel, but I preferred not to do so because it would affect Blender.

In your earlier comment you said that you would review the code after a few missing features in the offscreen render were implemented. The features are in and you successfully tested them. Will you start the review?

@Benoit Bolsee (ben2610) Hello Benoit, sorry for the delay in posting progress, but I was away for the holidays...
Unfortunately, not even the Windows build worked for me... and that had me concerned... I had really high hopes that the problem was in my Linux setup... but I'm starting to think it is not... I'm still waiting to test this on a different machine at work, with a Decklink 4K, to see if it is card related... In the meantime (and that was the main reason it took me so long to post) I've been trying to debug where the problem may be... I've been looking at source/gameengine/VideoTexture/VideoDeckLink.cpp and added some debugging verbosity to try to figure it out, and it seems that the condition at line 1026 always evaluates to false (I added an else statement to print something to the screen, and it always prints... it enters the calcImage function, but never completes whatever is inside that if)...
So far that is all I've been able to figure out...

I don't think that it is related to my hardware, because I have a different application, adapted from the capture sample provided in the DeckLink SDK, that captures video to a block of shared RAM, and I'm able to capture, view and retrieve that memory block in another app... The way I've been going about this is capturing the Blender game engine render to another block of shared memory, and mixing both in a separate OpenGL app with a GLSL chromakey... doing everything inside Blender would greatly simplify the process...

I'm on a machine that has a regular GeForce card, not a Quadro, and not AMD either, therefore I think that I have no pinned memory extensions, nor DVP... only plain OpenGL for me.

Any suggestions on where to proceed from here? How can I help debug this any further?

Thank you, and a happy new year.
Nuno Estanqueiro

@Nuno Estanqueiro (nuno.estanqueiro) Thanks for the feedback. The check on line 1026 always being false means that no usable frame reaches the BGE base thread. The more interesting place to look is line 666: this is the entry point for the frames coming directly from the driver. If this function is not called at all, then the driver is not sending any frames. The other possibility is that the function is called, but one of the two if statements on lines 668 or 673 fails. If you put print statements there you will be able to tell which of these conditions is happening.

I suggest you try out the 'Capture' application provided in the SDK. If you want, I can provide you with a Windows or a Linux binary. This application uses the same API as the BGE but with a lot of debug information.

@Benoit Bolsee (ben2610): I've been testing further. I compiled the sample Capture app from the SDK, and it works fine: I can capture and read the raw video with mplayer, as usual... I can also use the card in other apps, like Shotcut, so I know the card and the driver are working fine... that is out of the way... my actual problem is within Blender, and I've been adding more debug prints to see if I can figure it out...

I've added this around line 950:

	mpCaptureDelegate = new CaptureDelegate(this);	
	std::cout << "defining delegate callback..." << std::endl;
	if (mDLInput->SetCallback(mpCaptureDelegate) != S_OK)
		THRWEXCP(DeckLinkInternalError, S_OK);
	else
		std::cout << "callback activated successfully..." << std::endl;
	// open base class
	VideoBase::openCam(format, camIdx);
	std::cout << "opening video..." << std::endl;

and it correctly defines the delegate callback and does not complain, then prints "opening video...", yet the callback is never called... I was trying to compare your code with the capture sample, but my programming skills are too low to figure it out.

I added some prints in the CaptureDelegate, as you suggested, to see if any of the conditions was failing, and added a print to indicate the arrival of a frame, but it seems frames don't arrive at all, although it does not complain about any failure setting the CaptureDelegate callback...

HRESULT	CaptureDelegate::VideoInputFrameArrived(IDeckLinkVideoInputFrame* inputFrame, IDeckLinkAudioInputPacket* /*audioPacket*/)
{
	if (!inputFrame)
	{
		std::cout << "null video input, maybe audio only... ignoring anyway" << std::endl;
		return S_OK;
	}
	if ((inputFrame->GetFlags() & bmdFrameHasNoInputSource) == bmdFrameHasNoInputSource)
	{
		std::cout << "empty video frame data... don't transfer..." << std::endl;
		return S_OK;
	}
	mpOwner->VideoFrameArrived(inputFrame);
	std::cout << "frame recieved!" << std::endl;
	return S_OK;
}

It seems VideoDeckLink::calcImage gets called at apparently 25 frames per second, I read a 25 fps framerate, and everything indicates video is actually playing... so, any ideas where to go from here? Thank you

  • Merge remote-tracking branch 'origin/master' into decklink
  • Merge remote-tracking branch 'origin/master' into decklink

Updating D1618: This branch contains a set of features aiming at mixing video stream with BGE scene as efficiently as possible.

  1. Video capture with DeckLink cards
  2. Video keying with DeckLink cards
  3. Offscreen render...

I updated the branch to the latest trunk and used the experimental buildbot to generate self-contained builds. You can download them here:

Linux x86 64
Win64

I verified that they work out of the box on my Decklink 4K (no dependency on special libraries).
Note that if you have an nVidia Quadro and you intend to capture with Decklink on Windows, you will also have to copy dvp.dll to the bin directory of the Windows build. This file is part of the Decklink 10.4 SDK; you will find it in the Win/Samples/LoopThroughWithOpenGLCompositing/NVIDIA_GPUDirect/bin/x64 subdirectory.
In the future I'll make sure that, without dvp.dll, the VideoDeckLink object simply falls back on OpenGL methods.

@Nuno Estanqueiro (nuno.estanqueiro): the fact that the capture test application works on your card indicates that the BGE is not doing the right thing to open your card. I'll review the capture source code to see where this might be.

@Nuno Estanqueiro (nuno.estanqueiro): I reviewed the Capture sample app source code and found almost no differences with VideoDeckLink in the BGE. The only noticeable difference is that Capture opens the audio input while the BGE does not. So, just in case, I modified the BGE to match exactly what Capture is doing. The branch is updated, and I used the buildbot once again to make binaries; you can download them here:
Win64
Linux64

Suppose you run Capture with these options (first card, mode 1=pal):
Capture -d 0 -m 1
You can do exactly the same operation in the BGE with:

tex.source=vt.VideoDeckLink("1/2vuy", 0)
tex.source.play()

Using a mode index instead of a mode name is a new feature but it is strictly equivalent. I just added it so that you can configure the BGE on par with Capture. If that still doesn't work for you, then I'm out of ideas.

@Benoit Bolsee (ben2610): I have now tested the new Windows version, and it gets me the same results... So far I have tried 4 different cards: Decklink Quad, Decklink Extreme, Decklink Intensity Shuttle, and even a Decklink SDI 4K, on various machines. The Intensity Shuttle was tested on a Windows 7 machine with an Nvidia Quadro (copied dvp.dll to the Blender folder)... the Decklink Quad was tested on Windows 7 and on a Linux 64 machine with an Nvidia GTX 970 (plain OGL), the Decklink Extreme was only tested on Linux, and the 4K on Windows 7, but I had to copy dvp.dll to the system folder instead of the Blender folder... (I noticed the fallback to plain OpenGL warning.) All with the same result. From my point of view, the problem is not in the card initialization, because when I start the game engine I see video output on my video monitor (because of the duplex feature on the cards); that seems to be an indication that the card is correctly initialized and is streaming the correct data... I will try to debug further, focusing on why the callback is not called, or on the transfer between the cached frame and the card's framebuffer... Is there anyone else testing this? Could this be operating system related? All Windows tests were on Windows 7... I understand you're using Windows 8, correct?

@Nuno Estanqueiro (nuno.estanqueiro): I'm testing on Windows 8 and Linux Mint 17 (Ubuntu 14.04 LTS). So it boils down to the callback not being called while the initialization seems to work correctly. What makes no sense is that the Capture application in the SDK works for you but not the BGE: both use the same API.
Let us be sure that we use the same test application:


I wish I could test on other BMD devices. Unfortunately they are expensive. Perhaps we can call for testers in forums where there are BMD users?

@Benoit Bolsee (ben2610): I've been trying to debug this a bit further, but still no luck. I have at least made a successful build on Windows (which was failing for me), but I had to remove the DVP part from the code... it actually works just the same (meaning it does not work correctly) but allows me to compile it at work and have more time to dedicate to this... Can you please test my Windows build to see if it works on your machine?
It is supposed to fall back to plain OpenGL even on Windows.

the link is

http://we.tl/5D5GBPeiTX

I also tried a live Ubuntu 14.04 distro, on which I installed the DeckLink drivers, and used your Linux build, but it yields the same result... green image on my texture, live video on the output of the card...

Thank you!

@Nuno Estanqueiro (nuno.estanqueiro) Thank you very much for pursuing the tests. I tried your build; it works fine with my 4K Extreme card on a PAL input. The only thing that remains different between our setups is the model of the card. Would it be possible to lend me one of your cards to see if I can reproduce this and hopefully fix it?

@Benoit Bolsee (ben2610): Most of the cards I tried were not actually mine. I work at a Portuguese TV broadcaster, therefore it is easy for me to have access to different setups with DeckLink cards, and because Blender runs off a pen drive, it is easy to try it without messing with their configurations... I have a card of my own that I can lend you, the Decklink Extreme II (a bit older, but it works great nonetheless). It has all sorts of analog inputs and outputs as well as HDMI I/O. Would you need all the analog cables as well? They are quite heavy to send by mail; it would be great if you could work it out just with HDMI, because then I could send you just the card. How would you like to arrange this?

@Nuno Estanqueiro (nuno.estanqueiro)

Sounds great. I have a big cable (25 pin connector) with my Extreme 4K, full of analog inputs, that might be compatible with the Extreme II, but even if not, I can use the HDMI or SDI input (for HDMI I have the converter to remove HDCP). So you can post the card alone.
Let's discuss the details offline. My email is benoit dot bolsee at online dot be

Hi Benoit,

First, thanks for your work.

I tested your branch in our robotics simulator (the offscreen part and the "no render" mode), and so far everything is working fine (well, I got some crashes in depth buffer mode, but for the moment I suspect my own code). I got something working on the basis of offscreen2.blend (though it would be nice to have an example in the documentation about how to use the new offScreenCreate method). While here, I want to know if it is possible to attach a shader to process data directly on a bge.render.RAS_OFS_RENDER_BUFFER, or is the only way to attach the shader to the texture (and so to use bge.render.RAS_OFS_RENDER_TEXTURE)?

Concerning the "no render" mode, it works fine too, and it is definitely useful for our need. Would it be difficult to have a true "background mode" for the BGE now we (almost) have "no render" mode.

I hope your changes will go rapidly into master, so all of our users can benefit from them. Thanks.

  1. /intern/decklink - shouldn't those files be in /extern/decklink? intern is for blender internal libraries. Or did you create those libs yourself?
  2. Atomic git changes, really.
  3. Sure, those are all related changes. But alpha, atomic changes and decklink should be committed (and, in an ideal world, reviewed) separately.

... stopping it now, I will resume this later. Not to sound repetitive, but the patch is huge; it really hurts the review to have to look at chunks that could so easily be reviewed separately.

build_files/cmake/macros.cmake
296

please keep the patch as lean as possible, this looks like a leftover from early development

doc/python_api/examples/bge.texture.2.py
108

Are those hardcoded values specific to the decklink format, specific to your decklink card, or just eyeballed for an example? Hardcoded values should be followed by a small comment on where they come from.

140

I would include at the beginning of this file an explanation that you need a material with a video.png image on it, as well as the decklink setup you are configuring this file for (HD1080...)

155

What kind of exception is one expecting here? Is the decklink module yelling Python exceptions? or just crashing it all?

doc/python_api/rst/bge.texture.rst
674

shouldn't that line be removed?

doc/python_api/rst/bge_types/bge.types.BL_Shader.rst
217

I find this convention (0.0, 0.5, 1.0) too arbitrary.

In my experience it's better if LEFT and MONO get the same value. This way it's simpler to test without 3D without any changes, and switch to 3D only when needed.

Why not use 0 for left and 1 for right (glUniform1i) or something similar? (0 and 1 is what we use in bpy: stereo_3d_eye)

intern/atomic/atomic_ops.h
164 ↗(On Diff #5774)

Sure thing. But in this case this should be committed separately. Cherry-pick it in master, really. Git history is supposed to be as atomic as possible (no pun intended).

intern/decklink/DeckLinkAPI.h
45

How about OSX? The else statement above includes both Linux and Mac.

intern/ghost/GHOST_ISystem.h
2

For the final patch, remember to remove those "cleanup" changes

intern/ghost/intern/GHOST_ContextGLX.cpp
292–298

-True
+true

412

fix indentation

intern/ghost/intern/GHOST_ContextWGL.cpp
736

remove extra line

source/blender/python/generic/bgl.c
1560

aren't those left over from the initial offscreen view3d render patch? definitely safe to revert 'em

Hi Arnaud,

Thanks for testing the branch. Here is a blend that demonstrates an offscreen render to buffer (as opposed to offscreen render to texture):


Beware that this file will not work out of the box, as it depends on the 'Pillow' library to convert the buffer to an image.
I installed Pillow in the Python 3.5 directory created by 'install_deps.sh' and added that directory to sys.path in the blend... awkward, but it works.
This blend shows that you can extract images and process them externally.
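
The core of that blend looks roughly like the following sketch (reconstructed, not the actual file; resolution and path are hypothetical):

	import bge
	from bge import texture as vt
	from PIL import Image

	scene = bge.logic.getCurrentScene()

	if not hasattr(bge.logic, "fbo"):
	    # Offscreen target backed by a render buffer instead of a texture.
	    bge.logic.fbo = bge.render.offScreenCreate(
	        640, 480, 0, bge.render.RAS_OFS_RENDER_BUFFER)
	    bge.logic.ir = vt.ImageRender(scene, scene.active_camera, bge.logic.fbo)

	# Extract the frame into a user buffer and hand it to Pillow.
	buf = bytearray(640 * 480 * 4)
	if bge.logic.ir.refresh(buf, "RGBA"):
	    # GL rows are bottom-up; flip to get a normally oriented image.
	    img = Image.frombuffer("RGBA", (640, 480), bytes(buf), "raw", "RGBA", 0, 1)
	    img.transpose(Image.FLIP_TOP_BOTTOM).save(bge.logic.expandPath("//frame.png"))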

Re the shader question: I believe render buffers are simple objects that can't be used with shaders. You need a texture to attach a shader, and therefore use bge.render.RAS_OFS_RENDER_TEXTURE.

By true background, do you mean not even starting a window at all? This is not easy: Ghost (the OS-independent window library in Blender) needs an X server to create a window and creates the GL context on it. Changing this means changing Ghost in non-trivial ways.
However, I managed to do something close: it is possible to run an X server on a headless Linux (i.e. without a display) and the player will happily use this X server, provided you set the 'DISPLAY' environment variable beforehand.

This is a trace that shows how to run the player from an ssh session. You can easily extrapolate by putting the player in a startup script:

ben@Benoit-DEV-Linux ~ $ ps -ef|grep X
root      1332  1312  0 22:41 tty8     00:00:00 /usr/bin/X :0 -audit 0 -auth /var/lib/mdm/:0.Xauth -nolisten tcp vt8

==> this shows that the X server is running even if no display is connected.

ben@Benoit-DEV-Linux ~ $ export DISPLAY=:0

==> This is the trick: it lets blender know where the X display is

ben@Benoit-DEV-Linux ~ $ Documents/devel/build/bin/blenderplayer Documents/blends/fbo2.blend
Color management: using fallback mode for management argv[0] = 'Documents/devel/build/bin/blenderplayer'
Parsing command line arguments...
Num of arguments is: 0
Game data loaded from /home/ben/Documents/blends/fbo2.blend
Detected GL_ARB_texture_env_combine
Detected GL_ARB_texture_cube_map
Detected GL_ARB_multitexture
Detected GL_ARB_shader_objects
Detected GL_ARB_vertex_shader
Detected GL_ARB_fragment_shader
Detected GL_ARB_vertex_program
Detected GL_ARB_depth_texture
Detected GL_EXT_separate_specular_color
found bundled python: Documents/devel/build/bin/2.76/python
Start
fps =  12
fps =  14
Error Totblock: 1

==> This is the output of a blend that uses offscreen render and stops automatically after 2s.
Benoit Bolsee (ben2610) marked an inline comment as done. Feb 16 2016, 12:24 AM

@Dalai Felinto (dfelinto) Thanks for reviewing. Here is a first set of replies

doc/python_api/examples/bge.texture.2.py
108

They are specific to the pixel format (in this case 'v210'), which is standard across the video industry. These formats can be really complicated, with color components scattered across multi-word values.

But I realize that I no longer use these shaders because they are not compatible with Intel GMA (because of the usampler2D). I'll update this file with the new shaders and add comments.

155

This try/except was done to cover the case where play is called before init.

Otherwise the decklink module throws exceptions just like the VideoTexture module does in general.
Most of the exceptions are thrown when creating the objects. Examples:
"This video source only allows live capture"
"Invalid or unsupported capture format, should be <mode>/<pixel>[/3D]"
"Cannot open capture card, check if driver installed"

In the refresh function above, only one Decklink exception can be thrown:
"DVP API internal error, please report"

The Decklink module is not supposed to crash, it will throw instead of crashing.

doc/python_api/rst/bge.texture.rst
674

No, refresh can be called without arguments or with one of 2 arguments, as explained in the doc text below.

doc/python_api/rst/bge_types/bge.types.BL_Shader.rst
217

The doc text is incorrect: left eye or mono = 0.0, right eye = 0.5 (will be fixed in the next push).
There is a good reason to choose 0.0 and 0.5. See the shaders above:

ty = eye+gl_TexCoord[0].y*stereo;

(stereo is a fixed uniform that must be set to 0.5 for stereo or 1.0 for mono for the shader to work)
eye selects the top or bottom part of the texture (in stereo, the left and right eyes are above/below)
Using eye = 0 or 1 would require an unnecessary int->float conversion.
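
For context, binding those uniforms from Python could look like this sketch (obj, vertex_src and fragment_src are assumed; setUniformEyef is the new API added in this branch):

	shader = obj.meshes[0].materials[0].getShader()
	if shader is not None and not shader.isValid():
	    shader.setSource(vertex_src, fragment_src, True)
	    shader.setUniformEyef("eye")        # 0.0 = left eye or mono, 0.5 = right eye
	    shader.setUniform1f("stereo", 0.5)  # 0.5 for above/below stereo, 1.0 for mono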

Benoit Bolsee (ben2610) marked 8 inline comments as done. Feb 16 2016, 9:53 PM

@Dalai Felinto (dfelinto) All comments addressed. Next round please :)

intern/atomic/atomic_ops.h
164 ↗(On Diff #5774)

Ok clear. I saw the change in master already. Thanks.

intern/decklink/DeckLinkAPI.h
45

The reason this works is that OSX will not even reach this file; it's disabled in intern/decklink/CMakeLists.txt.
But fair enough, I'm adding an #error conditional compile for the APPLE case, just in case.

Dalai Felinto (dfelinto) requested changes to this revision.

A few more comments here and there. Overall the branch is fine by me (and a welcome functionality), as long as the suggested changes are addressed.

doc/python_api/examples/bge.texture.2.py
155

In this case just do what you did in init():
if not "video" in obj:

Or even better, replace them both by:
if not hasattr(obj, "video"):

intern/ghost/intern/GHOST_ContextGLX.h
156

wrong indentation here (either that, or phabricator is handling them poorly, which it does sometimes).

source/gameengine/Rasterizer/RAS_OpenGLRasterizer/RAS_OpenGLRasterizer.cpp
1000

I still don't think it is worth implementing "scaling" based on the camera size. The scene gravity will still be 10 m/s², the collision bounds ... And we get no feedback in the viewport of the final size. (Negative scale is fine though.)

m_camnegscale = ((scale[0] < 0.0f) ^ (scale[1] < 0.0f) ^ (scale[2] < 0.0f)) ? true : false;
if (m_camnegscale) {
    m_viewmatrix.tscale(-1.0f, -1.0f, -1.0f, 1.0f);
}

(also notice that we explicitly use f to prevent double promotion when not required).

And that said, this should be a separate patch/feature. To be merged separately.

source/gameengine/VideoTexture/ImageRender.h
108

remember to clean that up before final patch

This revision now requires changes to proceed. Feb 19 2016, 3:11 PM
Benoit Bolsee (ben2610) marked 3 inline comments as done. Feb 19 2016, 7:28 PM

Thanks Dalai. All comments addressed. I'll make a new version with the latest fix.

intern/ghost/intern/GHOST_ContextGLX.h
156

Indentation is a mess in Blender: parameter declarations use spaces (sometimes) while function bodies use tabs (sometimes). The Qt editor that I'm using is not smart enough to tell the difference: it puts tabs everywhere. I normally fix the differences by hand before committing, but that one slipped through.

source/gameengine/Rasterizer/RAS_OpenGLRasterizer/RAS_OpenGLRasterizer.cpp
1000

I agree that the straight transfer of the camera scale to the view matrix creates the wrong impression that it is a valid way to zoom. Only negation should be kept. However, it is incorrect to negate each axis; the logic should be this:

bool negX, negY, negZ;
negX = (scale[0] < 0.0f);
negY = (scale[1] < 0.0f);
negZ = (scale[2] < 0.0f);
if (negX || negY || negZ) {
    m_viewmatrix.tscale(((negX)?-1.0f:1.0f), ((negY)?-1.0f:1.0f), ((negZ)?-1.0f:1.0f), 1.0f);
}
m_camnegscale = negX ^ negY ^ negZ;
Benoit Bolsee (ben2610) marked an inline comment as done.
Benoit Bolsee (ben2610) updated this object.
Benoit Bolsee (ben2610) updated this revision to Diff 6151.

Patch cleanup after comments. Updated to current master.

  • VideoDecklink: accept mode index, test dvp.dll presence, enable audio just in case.
  • VideoDecklink: fix compilation warning in Windows.
  • Decklink: fix documentation to reflect the new texture transfer method in the Decklink module.
  • Decklink: revert unnecessary change to cmake macros.
  • Decklink: OSX is not supported, add an error message just in case.
  • Decklink: patch cleanup before merge.
  • BGE: undo view matrix scaling but keep axis inverting.
  • Merge remote-tracking branch 'origin/master' into decklink
Benoit Bolsee (ben2610) updated this revision to Diff 6874.
  • Merge remote-tracking branch 'origin/master' into decklink
  • VideoDecklink: change default cache size to 4 frames.
  • Merge remote-tracking branch 'origin/master' into decklink
  • Fix again assembler version of atomic_add_uint32 and atomic_sub_uint32
  • Try to fix atomic_ops on buildbot by moving include to top
  • Merge remote-tracking branch 'origin/master' into decklink
  • Fix MSVC compilation error after merge
  • Merge remote-tracking branch 'origin/master' into decklink
  • Optimize ImageRender->Decklink output
  • Merge remote-tracking branch 'origin/master' into decklink
  • Decklink: support display mode/pixel format combination that use padding.
  • Decklink: Fix bug: Decklink.right attribute was not initialized.
  • Decklink: fix output on some DeckLink cards.
  • Merge remote-tracking branch 'origin/master' into decklink
  • Turn on Decklink by default

Updating D1618: This branch contains a set of features aiming at mixing video stream with BGE scene as efficiently as possible.

  1. Video capture with DeckLink cards
  2. Video keying with DeckLink cards
  3. Offscreen render...

Was talking on IRC, but better to paste here to ensure topics aren't lost:

Some review from going over the temp-decklink branch.

  • Are headers for all versions of decklink needed? (would remove if not)
  • A while back I replied on the tracker that it would be nice to have X11 alpha as a command line arg for regular Blender, instead of a build option. Was there some reason not to do this? ... not a blocker, just a preference for these kinds of options.
  • Building with C++11 is giving me warnings (that I didn't see before); not sure if they should be addressed: http://hastebin.com/cuhacezipa.rb
  • Regarding alpha again: this branch enables WITH_X11_ALPHA by default; some time back I tested this and it gave a small performance hit ... this is part of the reason I would prefer a command line flag

@ideasman42

Are headers for all versions of decklink needed? (would remove if not)

No, all the headers aren't required. They just happened to be in the DeckLink SDK, but they can be removed. Will do that.

A while back I replied on the tracker that it would be nice to have X11 alpha as a command line arg for regular Blender, instead of a build option. Was there some reason not to do this? ... not a blocker, just a preference for these kinds of options.

The reason I added the -a option only to the player was to avoid touching Blender in any way, leaving it to someone else to port the option to Blender.

Building with C++11 is giving me warnings (that I didn't see before); not sure if they should be addressed: http://hastebin.com/cuhacezipa.rb

It looks like throwing in a destructor with C++11 will cause an immediate crash. I removed those 2 throws and replaced them with a printout on the console (in the temp-decklink branch). They're just in case something abnormal happens in the DeckLink SDK.

Regarding alpha again: this branch enables WITH_X11_ALPHA by default; some time back I tested this and it gave a small performance hit ... this is part of the reason I would prefer a command line flag

There are 2 features in the alpha patch:

  1. The -a option requests a framebuffer with an alpha channel in the player. It shouldn't change anything if you don't use the option. And in Blender, the option is not active and the alpha framebuffer is never requested (at least that's how it's meant to be). Do you mean that Blender gets a framebuffer with an alpha channel by default??
  2. Give preference to MSAA over swap copy when MSAA is requested. That could possibly lead to a different framebuffer setup, explaining the performance hit. But IIRC, it would only be different on an Intel GPU.

So I don't see where this performance hit would come from.

The branch is finally merged in master!

Instead of merging the decklink branch, I took the big patch out of this branch, split it in 5, and applied the parts to a new branch, temp-decklink.
Then, after final review and minor code fixups from ideasman42, I applied them to master (after squashing the fixup commits).
This branch will no longer be updated and should probably be deleted.
The temp-decklink branch can be deleted for sure.

Thanks to everyone reviewing this branch, especially Nuno, whose help was decisive.

Porteries Tristan (panzergame) added inline comments.
source/gameengine/Ketsji/BL_Shader.cpp
71

Sorry to annoy, but couldn't you pass the rasterizer to the function instead of including Python stuff to access the Ketsji engine and then the rasterizer?

289

Why undo the cleanup done by HG1?

source/gameengine/Ketsji/KX_PythonInit.cpp
2502

@Campbell Barton (campbellbarton), @dfelinto, @Mitchell Stokes (moguri): As you reviewed the patch, why didn't you notice that we can use a PyObjectPlus or CValue initialized in KX_PythonInitTypes? Wasn't this system created to avoid all the lines above?

source/gameengine/Rasterizer/RAS_OpenGLRasterizer/RAS_OpenGLRasterizer.cpp
608

why split the definition?

612

​ if (!ofs->Create(width, height, samples, (RAS_IOffScreen::RAS_OFS_RENDER_TARGET)target)) {

Hi Benoit,

I am trying to output the BGE over the decklink card. Unfortunately I am not getting it to work.
I have a Ultrastudio SDI USB3.0 on my laptop and a Decklink card on a desktop machine, and with both I get the error:

"DeckLink: exception when opening card 0: ÑËæB☺Cannot open card for output"

The input of the SDI signal works in Blender, and in BMD Media Express everything seems to be fine.
I am using Windows 7, Blender 2.78 and BMD Desktop Video 10.5.2.

I am not very experienced with Python scripting in the BGE, but here is my script:

import bge
from bge import texture
from bge import logic

cont = logic.getCurrentController()
obj = cont.owner

if not hasattr(logic, 'video'):

    logic.video = texture.Decklink(0, 'HD1080i50') 
    logic.video.source = texture.ImageViewport()
    
logic.video.refresh(True)

I dont know what I am doing wrong...