BGE: Various render improvements.

bge.logic.setRender(flag) to enable/disable render.
    The render pass is enabled by default, but it can be disabled with
    bge.logic.setRender(False).
    Once disabled, the render pass is skipped and a new logic frame starts
    immediately. Note that VSync no longer limits the fps when render is off,
    but the 'Use Frame Rate' option in the Render Properties still does.
    To run as many frames as possible, untick that option.
    This function is useful when you don't need the default render, e.g.
    when doing offscreen render to a device other than the monitor.
    Note that without VSync, you must limit the frame rate by other means.
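    A minimal sketch of the intended usage (the 60 fps cap is an arbitrary example
    value; the manual limiter is only needed when both VSync and 'Use Frame Rate'
    are off):

        import time
        import bge

        bge.logic.setRender(False)   # skip the on-screen render, logic keeps running

        def main(cont):
            start = time.perf_counter()
            # ... offscreen render / custom work goes here ...
            spare = 1.0 / 60.0 - (time.perf_counter() - start)
            if spare > 0.0:
                time.sleep(spare)    # crude manual frame limiter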

fbo = bge.render.offScreenCreate(width, height[, samples=0][, target=bge.render.RAS_OFS_RENDER_BUFFER])
    Use this method to create an offscreen buffer of the given size, with the given
    number of MSAA samples, targeting either a render buffer (bge.render.RAS_OFS_RENDER_BUFFER)
    or a texture (bge.render.RAS_OFS_RENDER_TEXTURE). Use the former if you want to
    retrieve the frame buffer on the host and the latter if you want to pass the render
    to another context (textures are proper OpenGL objects, render buffers aren't).
    The object created by this function can only be used as a parameter of the
    bge.texture.ImageRender() constructor to send the render to the FBO rather
    than to the frame buffer. This is best suited when you want to create a render
    of a specific size, or if you need an image with an alpha channel.
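    A minimal sketch, assuming the current scene contains a camera object named
    "Camera" (hypothetical name):

        import bge

        scene = bge.logic.getCurrentScene()
        cam = scene.objects["Camera"]

        # 512x512 FBO, no MSAA, texture target so another GL context can reuse it
        fbo = bge.render.offScreenCreate(512, 512, 0,
                                         bge.render.RAS_OFS_RENDER_TEXTURE)
        ir = bge.texture.ImageRender(scene, cam, fbo)
        print(fbo.width, fbo.height, fbo.color)   # color is the GL texture name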

bge.texture.<imagetype>.refresh(buffer=None, format="RGBA", ts=-1.0)
    Without arguments, the refresh method of the image objects is essentially a
    no-op: it simply invalidates the image so that it will be recalculated on the
    next texture refresh.
    It is now possible to pass an optional buffer object to transfer the image
    (and recalculate it if it was invalid) to an external object. The object must
    implement the 'buffer protocol'. The image is transferred as "RGBA" or "BGRA"
    pixels depending on the format argument (only those 2 formats are supported),
    and ts is an optional timestamp if the image depends on it (e.g. VideoFFmpeg
    playing a video file).
    With this function you no longer need to link the image object to a Texture
    object to use it: the image object is self-sufficient.
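    A minimal sketch using ImageViewport as the image source (any bge.texture image
    type works the same way):

        import bge

        img = bge.texture.ImageViewport()
        buf = bytearray(img.size[0] * img.size[1] * 4)   # 4 bytes per RGBA pixel
        img.refresh(buf, "RGBA")   # recalculate the image and copy it into buf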

bge.texture.ImageRender(scene, camera, fbo=None)
    Render to an FBO is possible by passing an FBO object (see offScreenCreate).

bge.texture.ImageRender.render()
    Allows asynchronous render: call this method to render the scene without
    extracting the pixels yet. The function returns as soon as the render commands
    have been sent to the GPU. The render proceeds asynchronously on the GPU
    while the host can perform other tasks.
    To complete the render, either call refresh() directly or refresh the texture
    to which this object is the source. Asynchronous render is useful to achieve
    optimal performance: call render() on frame N and refresh() on frame N+1 to
    give the GPU as much time as possible to render the frame while the game
    engine performs other tasks.
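    A minimal sketch of this two-frame pattern, assuming a setup script has stored
    an ImageRender object and a matching bytearray in the global dictionary
    (hypothetical keys):

        import bge

        def every_frame(cont):
            ir = bge.logic.globalDict["ir"]
            buf = bge.logic.globalDict["buf"]
            ir.refresh(buf, "RGBA")   # completes the render queued last frame
            ir.render()               # queue the render for the next frame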

Support negative scale on camera.
    Camera scale was previously ignored in the BGE.
    It is now injected into the modelview matrix as a vertical or horizontal flip
    of the scene (if scaleY < 0 or scaleX < 0 respectively).
    Note that the actual value of the scale is not used, only the sign.
    This allows flipping the image produced by ImageRender() without any performance
    penalty: the flip is integrated in the render itself.
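    For example, to get a vertically flipped ImageRender output (the camera object
    name is hypothetical):

        import bge

        cam = bge.logic.getCurrentScene().objects["RenderCam"]
        cam.localScale = [1.0, -1.0, 1.0]   # only the sign matters: vertical flip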

Optimized image transfer from ImageRender to buffer.
    Previously, images transferred to the host always went through intermediate
    buffers in VideoTexture. It is now possible to transfer ImageRender images
    to an external buffer without an intermediate copy (i.e. directly from OpenGL
    to the buffer) if the attributes of the ImageRender object are set as follows:
       flip=False, alpha=True, scale=False, depth=False, zbuff=False.
       (if you need to flip the image, use camera negative scale)
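    A minimal sketch combining both points (object name and the 256x256 size are
    arbitrary):

        import bge

        scene = bge.logic.getCurrentScene()
        cam = scene.objects["RenderCam"]
        cam.localScale = [1.0, -1.0, 1.0]       # flip in the render, not on the CPU

        fbo = bge.render.offScreenCreate(256, 256)
        ir = bge.texture.ImageRender(scene, cam, fbo)
        ir.flip = False
        ir.alpha = True
        ir.scale = False
        ir.depth = False
        ir.zbuff = False                        # settings for the direct GL-to-buffer path

        buf = bytearray(256 * 256 * 4)
        ir.refresh(buf, "RGBA")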
Benoit Bolsee 2016-06-09 23:56:45 +02:00
parent 5b061ddf1e
commit 40f1c4f343
39 changed files with 1661 additions and 112 deletions

View File

@ -378,6 +378,28 @@ General functions
Render next frame (if Python has control)
.. function:: setRender(render)
Sets the global flag that controls the render of the scene.
If True, the render is done after the logic frame.
If False, the render is skipped and another logic frame starts immediately.
.. note::
GPU VSync no longer limits the number of frames per second when render is off,
but the *Use Frame Rate* option still regulates the fps. To run as many frames
as possible, untick this option (Render Properties, System panel).
:arg render: the render flag
:type render: bool
.. function:: getRender()
Get the current value of the global render flag
:return: The flag value
:rtype: bool
**********************
Time related functions
**********************

View File

@ -90,6 +90,48 @@ Constants
Right eye being used during stereoscopic rendering.
.. data:: RAS_OFS_RENDER_BUFFER
The pixel buffer for offscreen render is a RenderBuffer. Argument to :func:`offScreenCreate`
.. data:: RAS_OFS_RENDER_TEXTURE
The pixel buffer for offscreen render is a Texture. Argument to :func:`offScreenCreate`
*****
Types
*****
.. class:: RASOffScreen
An off-screen render buffer object.
Use :func:`offScreenCreate` to create it.
Currently it can only be used in the :class:`bge.texture.ImageRender`
constructor to render on a FBO rather than the default viewport.
.. attribute:: width
The width in pixels of the FBO
:type: integer
.. attribute:: height
The height in pixels of the FBO
:type: integer
.. attribute:: color
The underlying OpenGL bind code of the texture object that holds
the rendered image, 0 if the FBO is using RenderBuffer.
The choice between RenderBuffer and Texture is determined
by the target argument of :func:`offScreenCreate`.
:type: integer
*********
Functions
@ -362,3 +404,22 @@ Functions
Get the current vsync value
:rtype: One of VSYNC_OFF, VSYNC_ON, VSYNC_ADAPTIVE
.. function:: offScreenCreate(width,height[,samples=0][,target=bge.render.RAS_OFS_RENDER_BUFFER])
Create an off-screen render buffer object.
:arg width: the width of the buffer in pixels
:type width: integer
:arg height: the height of the buffer in pixels
:type height: integer
:arg samples: the number of samples for multisample anti-aliasing (MSAA), 0 to disable MSAA
:type samples: integer
:arg target: the pixel storage: :data:`RAS_OFS_RENDER_BUFFER` to render on RenderBuffers (the default),
:data:`RAS_OFS_RENDER_TEXTURE` to render on texture.
The latter is useful if you want to access the texture directly (see :attr:`RASOffScreen.color`).
Otherwise the default is preferable as it's more widely supported by GPUs and more efficient.
If the GPU does not support MSAA+Texture (e.g. Intel HD GPU), MSAA will be disabled.
:type target: integer
:rtype: :class:`RASOffScreen`

View File

@ -173,14 +173,23 @@ Video classes
:return: Whether the video was playing.
:rtype: bool
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA", timestamp=-1.0)
Refresh video - get its status.
:value: see `FFmpeg Video and Image Status`_.
Refresh video - get its status and optionally copy the frame to an external buffer.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is copied to the buffer, which must be big enough or an exception is thrown.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
:arg timestamp: An optional timestamp (in seconds from the start of the movie)
of the frame to be copied to the buffer.
:type timestamp: float
:return: see `FFmpeg Video and Image Status`_.
:rtype: int
*************
Image classes
*************
@ -244,12 +253,17 @@ Image classes
* :class:`FilterRGB24`
* :class:`FilterRGBA32`
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA")
Refresh image, i.e. load it.
Refresh image, get its status and optionally copy the frame to an external buffer.
:value: see `FFmpeg Video and Image Status`_.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is copied to the buffer, which must be big enough or an exception is thrown.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
:return: see `FFmpeg Video and Image Status`_.
:rtype: int
.. method:: reload(newname=None)
@ -411,9 +425,18 @@ Image classes
:type: :class:`~bgl.Buffer` or None
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA")
Refresh image - invalidate its current content.
Refresh image - render and copy the image to an external buffer (optional)
then invalidate its current content.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is rendered and copied to the buffer,
which must be big enough or an exception is thrown.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
.. attribute:: scale
@ -498,9 +521,17 @@ Image classes
:type: :class:`~bgl.Buffer` or None
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA")
Refresh image - invalidate its current content.
Refresh image - calculate and copy the image to an external buffer (optional) then invalidate its current content.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is calculated and copied to the buffer,
which must be big enough or an exception is thrown.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
.. attribute:: scale
@ -545,14 +576,18 @@ Image classes
:type: bool
.. class:: ImageRender(scene, camera)
.. class:: ImageRender(scene, camera, fbo=None)
Image source from render.
The render is done on a custom framebuffer object if fbo is specified,
otherwise on the default framebuffer.
:arg scene: Scene in which the image has to be taken.
:type scene: :class:`~bge.types.KX_Scene`
:arg camera: Camera from which the image has to be taken.
:type camera: :class:`~bge.types.KX_Camera`
:arg fbo: Off-screen render buffer object (optional)
:type fbo: :class:`~bge.render.RASOffScreen`
.. attribute:: alpha
@ -599,10 +634,6 @@ Image classes
:type: :class:`~bgl.Buffer` or None
.. method:: refresh()
Refresh image - invalidate its current content.
.. attribute:: scale
Fast scale of image (near neighbour).
@ -640,6 +671,42 @@ Image classes
:type: bool
.. method:: render()
Render the scene but do not extract the pixels yet.
The function returns as soon as the render commands have been sent to the GPU.
The render will proceed asynchronously in the GPU while the host can perform other tasks.
To complete the render, you can either call :func:`refresh`
directly or refresh the texture of which this object is the source.
This method is useful to implement asynchronous render for optimal performance: call render()
on frame n and refresh() on frame n+1 to give as much time as possible to the GPU
to render the frame while the game engine can perform other tasks.
:return: True if the render was initiated, False if the render cannot be performed (e.g. the camera is active)
:rtype: bool
.. method:: refresh()
.. method:: refresh(buffer, format="RGBA")
Refresh image - render and optionally copy the image to an external buffer, then invalidate its current content.
The render may have been started earlier with the :func:`render` method,
in which case this function simply waits for the render operations to complete.
When called without argument, the pixels are not extracted but the render is guaranteed
to be completed when the function returns.
This only makes sense with offscreen render on texture target (see :func:`~bge.render.offScreenCreate`).
:arg buffer: An object that implements the buffer protocol.
If specified, the image is copied to the buffer, which must be big enough or an exception is thrown.
The transfer to the buffer is optimal if no processing of the image is needed.
This is the case if ``flip=False, alpha=True, scale=False, whole=True, depth=False, zbuff=False``
and no filter is set.
:type buffer: any buffer type of sufficient size
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
:return: True if the render is complete, False if the render cannot be performed (e.g. the camera is active)
:rtype: bool
.. class:: ImageViewport
Image source from viewport.
@ -689,9 +756,19 @@ Image classes
:type: sequence of two ints
.. method:: refresh()
.. method:: refresh(buffer=None, format="RGBA")
Refresh image - invalidate its current content.
Refresh image - copy the viewport to an external buffer (optional), then invalidate its current content.
:arg buffer: An optional object that implements the buffer protocol.
If specified, the image is copied to the buffer, which must be big enough or an exception is thrown.
The transfer to the buffer is optimal if no processing of the image is needed.
This is the case if ``flip=False, alpha=True, scale=False, whole=True, depth=False, zbuff=False``
and no filter is set.
:type buffer: any buffer type
:arg format: An optional image format specifier for the image that will be copied to the buffer.
Only valid values are "RGBA" or "BGRA"
:type format: str
.. attribute:: scale

View File

@ -143,6 +143,16 @@ public:
m_el[3][0] *= x; m_el[3][1] *= y; m_el[3][2] *= z; m_el[3][3] *= w;
}
/**
* Scale the rows of this matrix with x, y, z, w respectively.
*/
void tscale(MT_Scalar x, MT_Scalar y, MT_Scalar z, MT_Scalar w) {
m_el[0][0] *= x; m_el[1][0] *= y; m_el[2][0] *= z; m_el[3][0] *= w;
m_el[0][1] *= x; m_el[1][1] *= y; m_el[2][1] *= z; m_el[3][1] *= w;
m_el[0][2] *= x; m_el[1][2] *= y; m_el[2][2] *= z; m_el[3][2] *= w;
m_el[0][3] *= x; m_el[1][3] *= y; m_el[2][3] *= z; m_el[3][3] *= w;
}
/**
* Return a column-scaled version of this matrix.
*/

View File

@ -341,6 +341,7 @@ extern "C" void StartKetsjiShell(struct bContext *C, struct ARegion *ar, rcti *c
ketsjiengine->SetUseFixedTime(usefixed);
ketsjiengine->SetTimingDisplay(frameRate, profile, properties);
ketsjiengine->SetRestrictAnimationFPS(restrictAnimFPS);
ketsjiengine->SetRender(true);
KX_KetsjiEngine::SetExitKey(ConvertKeyCode(startscene->gm.exitkey));
//set the global settings (carried over if restart/load new files)

View File

@ -678,6 +678,7 @@ bool GPG_Application::initEngine(GHOST_IWindow* window, const int stereoMode)
//set the global settings (carried over if restart/load new files)
m_ketsjiengine->SetGlobalSettings(m_globalSettings);
m_ketsjiengine->SetRender(true);
m_engineInitialized = true;
}

View File

@ -1600,7 +1600,7 @@ void KX_Dome::RotateCamera(KX_Camera* cam, int i)
MT_Transform camtrans(cam->GetWorldToCamera());
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->GetCameraData()->m_perspective);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->NodeGetLocalScaling(), cam->GetCameraData()->m_perspective);
cam->SetModelviewMatrix(viewmat);
// restore the original orientation
@ -2035,7 +2035,7 @@ void KX_Dome::RenderDomeFrame(KX_Scene* scene, KX_Camera* cam, int i)
MT_Transform camtrans(cam->GetWorldToCamera());
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), 1.0f);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->NodeGetLocalScaling(), 1.0f);
cam->SetModelviewMatrix(viewmat);
// restore the original orientation

View File

@ -108,7 +108,7 @@ double KX_KetsjiEngine::m_suspendeddelta = 0.0;
double KX_KetsjiEngine::m_average_framerate = 0.0;
bool KX_KetsjiEngine::m_restrict_anim_fps = false;
short KX_KetsjiEngine::m_exitkey = 130; // ESC Key
bool KX_KetsjiEngine::m_doRender = true;
/**
* Constructor of the Ketsji Engine
@ -173,6 +173,7 @@ KX_KetsjiEngine::KX_KetsjiEngine(KX_ISystem* system)
m_overrideFrameColorR(0.0f),
m_overrideFrameColorG(0.0f),
m_overrideFrameColorB(0.0f),
m_overrideFrameColorA(0.0f),
m_usedome(false)
{
@ -381,7 +382,7 @@ void KX_KetsjiEngine::RenderDome()
m_overrideFrameColorR,
m_overrideFrameColorG,
m_overrideFrameColorB,
1.0
m_overrideFrameColorA
);
}
else
@ -749,6 +750,9 @@ bool KX_KetsjiEngine::NextFrame()
scene->setSuspendedTime(m_clockTime);
m_logger->StartLog(tc_services, m_kxsystem->GetTimeInSeconds(), true);
// invalidates the shadow buffer from previous render/ImageRender because the scene has changed
scene->SetShadowDone(false);
}
// update system devices
@ -771,7 +775,7 @@ bool KX_KetsjiEngine::NextFrame()
// Start logging time spent outside main loop
m_logger->StartLog(tc_outside, m_kxsystem->GetTimeInSeconds(), true);
return doRender;
return doRender && m_doRender;
}
@ -805,7 +809,7 @@ void KX_KetsjiEngine::Render()
m_overrideFrameColorR,
m_overrideFrameColorG,
m_overrideFrameColorB,
1.0
m_overrideFrameColorA
);
}
else
@ -1133,6 +1137,8 @@ void KX_KetsjiEngine::RenderShadowBuffers(KX_Scene *scene)
cam->Release();
}
}
/* remember that we have a valid shadow buffer for that scene */
scene->SetShadowDone(true);
}
// update graphics
@ -1252,7 +1258,7 @@ void KX_KetsjiEngine::RenderFrame(KX_Scene* scene, KX_Camera* cam)
MT_Transform camtrans(cam->GetWorldToCamera());
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->GetCameraData()->m_perspective);
m_rasterizer->SetViewMatrix(viewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->NodeGetLocalScaling(), cam->GetCameraData()->m_perspective);
cam->SetModelviewMatrix(viewmat);
// The following actually reschedules all vertices to be
@ -1925,6 +1931,16 @@ short KX_KetsjiEngine::GetExitKey()
return m_exitkey;
}
void KX_KetsjiEngine::SetRender(bool render)
{
m_doRender = render;
}
bool KX_KetsjiEngine::GetRender()
{
return m_doRender;
}
void KX_KetsjiEngine::SetShowFramerate(bool frameRate)
{
m_show_framerate = frameRate;
@ -2023,19 +2039,21 @@ bool KX_KetsjiEngine::GetUseOverrideFrameColor(void) const
}
void KX_KetsjiEngine::SetOverrideFrameColor(float r, float g, float b)
void KX_KetsjiEngine::SetOverrideFrameColor(float r, float g, float b, float a)
{
m_overrideFrameColorR = r;
m_overrideFrameColorG = g;
m_overrideFrameColorB = b;
m_overrideFrameColorA = a;
}
void KX_KetsjiEngine::GetOverrideFrameColor(float& r, float& g, float& b) const
void KX_KetsjiEngine::GetOverrideFrameColor(float& r, float& g, float& b, float& a) const
{
r = m_overrideFrameColorR;
g = m_overrideFrameColorG;
b = m_overrideFrameColorB;
a = m_overrideFrameColorA;
}

View File

@ -129,6 +129,8 @@ private:
static short m_exitkey; /* Key used to exit the BGE */
static bool m_doRender; /* whether or not the scene should be rendered after the logic frame */
int m_exitcode;
STR_String m_exitstring;
@ -199,6 +201,8 @@ private:
float m_overrideFrameColorG;
/** Blue component of framing bar color. */
float m_overrideFrameColorB;
/** Alpha component of framing bar color. */
float m_overrideFrameColorA;
/** Settings that doesn't go away with Game Actuator */
GlobalSettings m_globalsettings;
@ -209,7 +213,6 @@ private:
void RenderFrame(KX_Scene* scene, KX_Camera* cam);
void PostRenderScene(KX_Scene* scene);
void RenderDebugProperties();
void RenderShadowBuffers(KX_Scene *scene);
public:
KX_KetsjiEngine(class KX_ISystem* system);
@ -249,6 +252,7 @@ public:
///returns true if an update happened to indicate -> Render
bool NextFrame();
void Render();
void RenderShadowBuffers(KX_Scene *scene);
void StartEngine(bool clearIpo);
void StopEngine();
@ -400,6 +404,16 @@ public:
static short GetExitKey();
/**
* Activates or deactivates the render of the scene after the logic frame
* \param render true (render) or false (do not render)
*/
static void SetRender(bool render);
/**
* Get the current render flag value
*/
static bool GetRender();
/**
* \Sets the display for frame rate on or off.
*/
@ -485,7 +499,7 @@ public:
* \param g Green component of the override color.
* \param b Blue component of the override color.
*/
void SetOverrideFrameColor(float r, float g, float b);
void SetOverrideFrameColor(float r, float g, float b, float a);
/**
* Returns the color used for framing bar color instead of the one in the Blender file's scenes.
@ -493,7 +507,7 @@ public:
* \param g Green component of the override color.
* \param b Blue component of the override color.
*/
void GetOverrideFrameColor(float& r, float& g, float& b) const;
void GetOverrideFrameColor(float& r, float& g, float& b, float& a) const;
KX_Scene* CreateScene(const STR_String& scenename);
KX_Scene* CreateScene(Scene *scene, bool libloading=false);

View File

@ -104,6 +104,7 @@ extern "C" {
#include "BL_ArmatureObject.h"
#include "RAS_IRasterizer.h"
#include "RAS_ICanvas.h"
#include "RAS_IOffScreen.h"
#include "RAS_BucketManager.h"
#include "RAS_2DFilterManager.h"
#include "MT_Vector3.h"
@ -469,6 +470,21 @@ static PyObject *gPyGetExitKey(PyObject *)
return PyLong_FromLong(KX_KetsjiEngine::GetExitKey());
}
static PyObject *gPySetRender(PyObject *, PyObject *args)
{
int render;
if (!PyArg_ParseTuple(args, "i:setRender", &render))
return NULL;
KX_KetsjiEngine::SetRender(render);
Py_RETURN_NONE;
}
static PyObject *gPyGetRender(PyObject *)
{
return PyBool_FromLong(KX_KetsjiEngine::GetRender());
}
static PyObject *gPySetMaxLogicFrame(PyObject *, PyObject *args)
{
int frame;
@ -909,6 +925,8 @@ static struct PyMethodDef game_methods[] = {
{"setAnimRecordFrame", (PyCFunction) gPySetAnimRecordFrame, METH_VARARGS, (const char *)"Sets the current frame number used for animation recording"},
{"getExitKey", (PyCFunction) gPyGetExitKey, METH_NOARGS, (const char *)"Gets the key used to exit the game engine"},
{"setExitKey", (PyCFunction) gPySetExitKey, METH_VARARGS, (const char *)"Sets the key used to exit the game engine"},
{"setRender", (PyCFunction) gPySetRender, METH_VARARGS, (const char *)"Set the global render flag"},
{"getRender", (PyCFunction) gPyGetRender, METH_NOARGS, (const char *)"get the global render flag value"},
{"getUseExternalClock", (PyCFunction) gPyGetUseExternalClock, METH_NOARGS, (const char *)"Get if we use the time provided by an external clock"},
{"setUseExternalClock", (PyCFunction) gPySetUseExternalClock, METH_VARARGS, (const char *)"Set if we use the time provided by an external clock"},
{"getClockTime", (PyCFunction) gPyGetClockTime, METH_NOARGS, (const char *)"Get the last BGE render time. "
@ -1457,6 +1475,158 @@ static PyObject *gPyGetDisplayDimensions(PyObject *)
return result;
}
/* python wrapper around RAS_IOffScreen
* Should eventually get its own file
*/
static void PyRASOffScreen__tp_dealloc(PyRASOffScreen *self)
{
if (self->ofs)
delete self->ofs;
Py_TYPE(self)->tp_free((PyObject *)self);
}
PyDoc_STRVAR(py_RASOffScreen_doc,
"RASOffscreen(width, height) -> new GPU Offscreen object"
"initialized to hold a framebuffer object of ``width`` x ``height``.\n"
""
);
PyDoc_STRVAR(RASOffScreen_width_doc, "Offscreen buffer width.\n\n:type: integer");
static PyObject *RASOffScreen_width_get(PyRASOffScreen *self, void *UNUSED(type))
{
return PyLong_FromLong(self->ofs->GetWidth());
}
PyDoc_STRVAR(RASOffScreen_height_doc, "Offscreen buffer height.\n\n:type: GLsizei");
static PyObject *RASOffScreen_height_get(PyRASOffScreen *self, void *UNUSED(type))
{
return PyLong_FromLong(self->ofs->GetHeight());
}
PyDoc_STRVAR(RASOffScreen_color_doc, "Offscreen buffer texture object (if target is RAS_OFS_RENDER_TEXTURE).\n\n:type: GLuint");
static PyObject *RASOffScreen_color_get(PyRASOffScreen *self, void *UNUSED(type))
{
return PyLong_FromLong(self->ofs->GetColor());
}
static PyGetSetDef RASOffScreen_getseters[] = {
{(char *)"width", (getter)RASOffScreen_width_get, (setter)NULL, RASOffScreen_width_doc, NULL},
{(char *)"height", (getter)RASOffScreen_height_get, (setter)NULL, RASOffScreen_height_doc, NULL},
{(char *)"color", (getter)RASOffScreen_color_get, (setter)NULL, RASOffScreen_color_doc, NULL},
{NULL, NULL, NULL, NULL, NULL} /* Sentinel */
};
static int PyRASOffScreen__tp_init(PyRASOffScreen *self, PyObject *args, PyObject *kwargs)
{
int width, height, samples, target;
const char *keywords[] = {"width", "height", "samples", "target", NULL};
samples = 0;
target = RAS_IOffScreen::RAS_OFS_RENDER_BUFFER;
if (!PyArg_ParseTupleAndKeywords(args, kwargs, "ii|ii:RASOffscreen", (char **)keywords, &width, &height, &samples, &target)) {
return -1;
}
if (width <= 0) {
PyErr_SetString(PyExc_ValueError, "negative 'width' given");
return -1;
}
if (height <= 0) {
PyErr_SetString(PyExc_ValueError, "negative 'height' given");
return -1;
}
if (samples < 0) {
PyErr_SetString(PyExc_ValueError, "negative 'samples' given");
return -1;
}
if (target != RAS_IOffScreen::RAS_OFS_RENDER_BUFFER && target != RAS_IOffScreen::RAS_OFS_RENDER_TEXTURE)
{
PyErr_SetString(PyExc_ValueError, "invalid 'target' given, can only be RAS_OFS_RENDER_BUFFER or RAS_OFS_RENDER_TEXTURE");
return -1;
}
if (!gp_Rasterizer)
{
PyErr_SetString(PyExc_SystemError, "no rasterizer");
return -1;
}
self->ofs = gp_Rasterizer->CreateOffScreen(width, height, samples, target);
if (!self->ofs) {
PyErr_SetString(PyExc_SystemError, "creation failed");
return -1;
}
return 0;
}
PyTypeObject PyRASOffScreen_Type = {
PyVarObject_HEAD_INIT(NULL, 0)
"RASOffScreen", /* tp_name */
sizeof(PyRASOffScreen), /* tp_basicsize */
0, /* tp_itemsize */
/* methods */
(destructor)PyRASOffScreen__tp_dealloc, /* tp_dealloc */
NULL, /* tp_print */
NULL, /* tp_getattr */
NULL, /* tp_setattr */
NULL, /* tp_compare */
NULL, /* tp_repr */
NULL, /* tp_as_number */
NULL, /* tp_as_sequence */
NULL, /* tp_as_mapping */
NULL, /* tp_hash */
NULL, /* tp_call */
NULL, /* tp_str */
NULL, /* tp_getattro */
NULL, /* tp_setattro */
NULL, /* tp_as_buffer */
Py_TPFLAGS_DEFAULT, /* tp_flags */
py_RASOffScreen_doc, /* Documentation string */
NULL, /* tp_traverse */
NULL, /* tp_clear */
NULL, /* tp_richcompare */
0, /* tp_weaklistoffset */
NULL, /* tp_iter */
NULL, /* tp_iternext */
NULL, /* tp_methods */
NULL, /* tp_members */
RASOffScreen_getseters, /* tp_getset */
NULL, /* tp_base */
NULL, /* tp_dict */
NULL, /* tp_descr_get */
NULL, /* tp_descr_set */
0, /* tp_dictoffset */
(initproc)PyRASOffScreen__tp_init, /* tp_init */
(allocfunc)PyType_GenericAlloc, /* tp_alloc */
(newfunc)PyType_GenericNew, /* tp_new */
(freefunc)0, /* tp_free */
NULL, /* tp_is_gc */
NULL, /* tp_bases */
NULL, /* tp_mro */
NULL, /* tp_cache */
NULL, /* tp_subclasses */
NULL, /* tp_weaklist */
(destructor) NULL /* tp_del */
};
static PyObject *gPyOffScreenCreate(PyObject *UNUSED(self), PyObject *args)
{
int width;
int height;
int samples;
int target;
samples = 0;
if (!PyArg_ParseTuple(args, "ii|ii:offScreenCreate", &width, &height, &samples, &target))
return NULL;
return PyObject_CallObject((PyObject *) &PyRASOffScreen_Type, args);
}
PyDoc_STRVAR(Rasterizer_module_documentation,
"This is the Python API for the game engine of Rasterizer"
);
@ -1511,6 +1681,7 @@ static struct PyMethodDef rasterizer_methods[] = {
{"showProperties",(PyCFunction) gPyShowProperties, METH_VARARGS, "show or hide the debug properties"},
{"autoDebugList",(PyCFunction) gPyAutoDebugList, METH_VARARGS, "enable or disable auto adding debug properties to the debug list"},
{"clearDebugList",(PyCFunction) gPyClearDebugList, METH_NOARGS, "clears the debug property list"},
{"offScreenCreate", (PyCFunction) gPyOffScreenCreate, METH_VARARGS, "create an offscreen buffer object, arguments are width and height in pixels"},
{ NULL, (PyCFunction) NULL, 0, NULL }
};
@ -2330,6 +2501,8 @@ PyMODINIT_FUNC initRasterizerPythonBinding()
PyObject *m;
PyObject *d;
PyType_Ready(&PyRASOffScreen_Type);
m = PyModule_Create(&Rasterizer_module_def);
PyDict_SetItemString(PySys_GetObject("modules"), Rasterizer_module_def.m_name, m);
@ -2357,6 +2530,11 @@ PyMODINIT_FUNC initRasterizerPythonBinding()
KX_MACRO_addTypesToDict(d, LEFT_EYE, RAS_IRasterizer::RAS_STEREO_LEFTEYE);
KX_MACRO_addTypesToDict(d, RIGHT_EYE, RAS_IRasterizer::RAS_STEREO_RIGHTEYE);
/* offscreen render */
KX_MACRO_addTypesToDict(d, RAS_OFS_RENDER_BUFFER, RAS_IOffScreen::RAS_OFS_RENDER_BUFFER);
KX_MACRO_addTypesToDict(d, RAS_OFS_RENDER_TEXTURE, RAS_IOffScreen::RAS_OFS_RENDER_TEXTURE);
// XXXX Add constants here
// Check for errors

View File

@ -172,6 +172,7 @@ KX_Scene::KX_Scene(class SCA_IInputDevice* keyboarddevice,
m_activity_culling = false;
m_suspend = false;
m_isclearingZbuffer = true;
m_isShadowDone = false;
m_tempObjectList = new CListValue();
m_objectlist = new CListValue();
m_parentlist = new CListValue();

View File

@ -171,6 +171,11 @@ protected:
*/
bool m_isclearingZbuffer;
/**
* Whether the shadow buffers have already been rendered for this frame
*/
bool m_isShadowDone;
/**
* The name of the scene
*/
@ -572,6 +577,8 @@ public:
bool IsSuspended();
bool IsClearingZBuffer();
void EnableZBufferClearing(bool isclearingZbuffer);
bool IsShadowDone() { return m_isShadowDone; }
void SetShadowDone(bool b) { m_isShadowDone = b; }
// use of DBVT tree for camera culling
void SetDbvtCulling(bool b) { m_dbvt_culling = b; }
bool GetDbvtCulling() { return m_dbvt_culling; }

View File

@ -65,6 +65,8 @@ set(SRC
RAS_IPolygonMaterial.h
RAS_IRasterizer.h
RAS_ILightObject.h
RAS_IOffScreen.h
RAS_ISync.h
RAS_MaterialBucket.h
RAS_MeshObject.h
RAS_ObjectColor.h

View File

@ -0,0 +1,84 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file RAS_IOffScreen.h
* \ingroup bgerast
*/
#ifndef __RAS_OFFSCREEN_H__
#define __RAS_OFFSCREEN_H__
#include "EXP_Python.h"
class RAS_ICanvas;
class MT_Transform;
struct Image;
class RAS_IOffScreen
{
public:
enum RAS_OFS_BIND_MODE {
RAS_OFS_BIND_RENDER = 0,
RAS_OFS_BIND_READ,
};
enum RAS_OFS_RENDER_TARGET {
RAS_OFS_RENDER_BUFFER = 0, // use render buffer as render target
RAS_OFS_RENDER_TEXTURE, // use texture as render target
};
int m_width;
int m_height;
int m_samples;
int m_color; // if used, holds the texture object, 0 if not used
virtual ~RAS_IOffScreen() {}
virtual bool Create(int width, int height, int samples, RAS_OFS_RENDER_TARGET target) = 0;
virtual void Destroy() = 0;
virtual void Bind(RAS_OFS_BIND_MODE mode) = 0;
virtual void Blit() = 0;
virtual void Unbind() = 0;
virtual void MipMap() = 0;
virtual int GetWidth() { return m_width; }
virtual int GetHeight() { return m_height; }
virtual int GetSamples() { return m_samples; }
virtual int GetColor() { return m_color; }
};
#ifdef WITH_PYTHON
typedef struct {
PyObject_HEAD
RAS_IOffScreen *ofs;
} PyRASOffScreen;
extern PyTypeObject PyRASOffScreen_Type;
#endif
#endif /* __RAS_OFFSCREEN_H__ */

View File

@ -55,6 +55,8 @@ class RAS_IPolyMaterial;
class RAS_MeshSlot;
class RAS_ILightObject;
class SCA_IScene;
class RAS_IOffScreen;
class RAS_ISync;
typedef vector<unsigned short> KX_IndexArray;
typedef vector<RAS_TexVert> KX_VertexArray;
@ -257,6 +259,18 @@ public:
virtual void SetFocalLength(const float focallength) = 0;
virtual float GetFocalLength() = 0;
/**
* Create an offscreen render buffer that can be used as target for render.
* For the time being, it is only used in VideoTexture for custom render.
*/
virtual RAS_IOffScreen *CreateOffScreen(int width, int height, int samples, int target) = 0;
/**
* Create a sync object
* For use with offscreen render
*/
virtual RAS_ISync *CreateSync(int type) = 0;
/**
* SwapBuffers swaps the back buffer with the front buffer.
*/
@ -287,7 +301,7 @@ public:
* Sets the modelview matrix.
*/
virtual void SetViewMatrix(const MT_Matrix4x4 &mat, const MT_Matrix3x3 &ori,
const MT_Point3 &pos, bool perspective) = 0;
const MT_Point3 &pos, const MT_Vector3 &scale, bool perspective) = 0;
/**
*/

View File

@ -0,0 +1,48 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
/** \file RAS_ISync.h
* \ingroup bgerast
*/
#ifndef __RAS_ISYNC_H__
#define __RAS_ISYNC_H__
class RAS_ISync
{
public:
enum RAS_SYNC_TYPE {
RAS_SYNC_TYPE_FENCE = 0,
};
virtual ~RAS_ISync() {}
virtual bool Create(RAS_SYNC_TYPE type) = 0;
virtual void Destroy() = 0;
virtual void Wait() = 0;
};
#endif /* __RAS_ISYNC_H__ */

View File

@ -51,6 +51,8 @@ set(INC_SYS
set(SRC
RAS_ListRasterizer.cpp
RAS_OpenGLLight.cpp
RAS_OpenGLOffScreen.cpp
RAS_OpenGLSync.cpp
RAS_OpenGLRasterizer.cpp
RAS_StorageVA.cpp
RAS_StorageVBO.cpp
@ -58,6 +60,8 @@ set(SRC
RAS_IStorage.h
RAS_ListRasterizer.h
RAS_OpenGLLight.h
RAS_OpenGLOffScreen.h
RAS_OpenGLSync.h
RAS_OpenGLRasterizer.h
RAS_StorageVA.h
RAS_StorageVBO.h

View File

@ -242,7 +242,7 @@ void RAS_OpenGLLight::BindShadowBuffer(RAS_ICanvas *canvas, KX_Camera *cam, MT_T
RAS_IRasterizer::StereoMode stereomode = m_rasterizer->GetStereoMode();
m_rasterizer->SetStereoMode(RAS_IRasterizer::RAS_STEREO_NOSTEREO);
m_rasterizer->SetProjectionMatrix(projectionmat);
m_rasterizer->SetViewMatrix(modelviewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->GetCameraData()->m_perspective);
m_rasterizer->SetViewMatrix(modelviewmat, cam->NodeGetWorldOrientation(), cam->NodeGetWorldPosition(), cam->NodeGetLocalScaling(), cam->GetCameraData()->m_perspective);
m_rasterizer->SetStereoMode(stereomode);
}

View File

@ -0,0 +1,347 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
#include "glew-mx.h"
#include <stdio.h>
#include "RAS_OpenGLOffScreen.h"
#include "RAS_ICanvas.h"
RAS_OpenGLOffScreen::RAS_OpenGLOffScreen(RAS_ICanvas *canvas)
:m_canvas(canvas), m_depthrb(0), m_colorrb(0), m_depthtx(0), m_colortx(0),
m_fbo(0), m_blitfbo(0), m_blitrbo(0), m_blittex(0), m_target(RAS_OFS_RENDER_BUFFER), m_bound(false)
{
m_width = 0;
m_height = 0;
m_samples = 0;
m_color = 0;
}
RAS_OpenGLOffScreen::~RAS_OpenGLOffScreen()
{
Destroy();
}
bool RAS_OpenGLOffScreen::Create(int width, int height, int samples, RAS_OFS_RENDER_TARGET target)
{
GLenum status;
GLuint glo[2], fbo;
GLint max_samples;
GLenum textarget;
if (m_fbo) {
printf("RAS_OpenGLOffScreen::Create(): buffer exists already, destroy first\n");
return false;
}
if (target != RAS_IOffScreen::RAS_OFS_RENDER_BUFFER &&
target != RAS_IOffScreen::RAS_OFS_RENDER_TEXTURE)
{
printf("RAS_OpenGLOffScreen::Create(): invalid offscren target\n");
return false;
}
if (!GLEW_EXT_framebuffer_object) {
printf("RAS_OpenGLOffScreen::Create(): frame buffer not supported\n");
return false;
}
if (samples) {
if (!GLEW_EXT_framebuffer_multisample ||
!GLEW_EXT_framebuffer_blit)
{
samples = 0;
}
}
if (samples && target == RAS_OFS_RENDER_TEXTURE) {
// we need this in addition if we use multisample textures
if (!GLEW_ARB_texture_multisample ||
!GLEW_EXT_framebuffer_multisample_blit_scaled)
{
samples = 0;
}
}
if (samples) {
max_samples = 0;
glGetIntegerv(GL_MAX_SAMPLES_EXT , &max_samples);
if (samples > max_samples)
samples = max_samples;
}
m_target = target;
fbo = 0;
glGenFramebuffersEXT(1, &fbo);
if (fbo == 0) {
printf("RAS_OpenGLOffScreen::Create(): frame buffer creation failed: %d\n", (int)glGetError());
return false;
}
m_fbo = fbo;
glo[0] = glo[1] = 0;
if (target == RAS_OFS_RENDER_TEXTURE) {
glGenTextures(2, glo);
if (glo[0] == 0 || glo[1] == 0) {
printf("RAS_OpenGLOffScreen::Create(): texture creation failed: %d\n", (int)glGetError());
goto L_ERROR;
}
m_depthtx = glo[0];
m_color = m_colortx = glo[1];
if (samples) {
textarget = GL_TEXTURE_2D_MULTISAMPLE;
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, m_depthtx);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_DEPTH_COMPONENT, width, height, true);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, m_colortx);
glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, samples, GL_RGBA8, width, height, true);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, 0);
}
else {
textarget = GL_TEXTURE_2D;
glBindTexture(GL_TEXTURE_2D, m_depthtx);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, m_colortx);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, textarget, m_depthtx, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textarget, m_colortx, 0);
}
else {
glGenRenderbuffersEXT(2, glo);
if (glo[0] == 0 || glo[1] == 0) {
printf("RAS_OpenGLOffScreen::Create(): render buffer creation failed: %d\n", (int)glGetError());
goto L_ERROR;
}
m_depthrb = glo[0];
m_colorrb = glo[1];
glBindRenderbufferEXT(GL_RENDERBUFFER, m_depthrb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER, m_colorrb);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, samples, GL_RGBA8, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER, 0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER, m_depthrb);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER, m_colorrb);
}
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
printf("RAS_OpenGLOffScreen::Create(): frame buffer incomplete: %d\n", (int)status);
goto L_ERROR;
}
m_width = width;
m_height = height;
if (samples > 0) {
GLuint blit_tex;
GLuint blit_fbo;
// create a secondary FBO to blit to before the pixel can be read
/* write into new single-sample buffer */
glGenFramebuffersEXT(1, &blit_fbo);
if (!blit_fbo) {
printf("RAS_OpenGLOffScreen::Create(): failed creating a FBO for multi-sample offscreen buffer\n");
goto L_ERROR;
}
m_blitfbo = blit_fbo;
blit_tex = 0;
if (target == RAS_OFS_RENDER_TEXTURE) {
glGenTextures(1, &blit_tex);
if (!blit_tex) {
printf("RAS_OpenGLOffScreen::Create(): failed creating a texture for multi-sample offscreen buffer\n");
goto L_ERROR;
}
// m_color is the texture where the final render goes, the blit texture in this case
m_color = m_blittex = blit_tex;
glBindTexture(GL_TEXTURE_2D, m_blittex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, m_blitfbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_blittex, 0);
}
else {
/* create render buffer for new 'fbo_blit' */
glGenRenderbuffersEXT(1, &blit_tex);
if (!blit_tex) {
printf("RAS_OpenGLOffScreen::Create(): failed creating a render buffer for multi-sample offscreen buffer\n");
goto L_ERROR;
}
m_blitrbo = blit_tex;
glBindRenderbufferEXT(GL_RENDERBUFFER, m_blitrbo);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER, 0, GL_RGBA8, width, height);
glBindRenderbufferEXT(GL_RENDERBUFFER, 0);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, m_blitfbo);
glFramebufferRenderbufferEXT(GL_DRAW_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER, m_blitrbo);
}
status = glCheckFramebufferStatus(GL_DRAW_FRAMEBUFFER_EXT);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_EXT, 0);
if (status != GL_FRAMEBUFFER_COMPLETE) {
printf("RAS_OpenGLOffScreen::Create(): frame buffer for multi-sample offscreen buffer incomplete: %d\n", (int)status);
goto L_ERROR;
}
// remember that multisample is enabled
m_samples = 1;
}
return true;
L_ERROR:
Destroy();
return false;
}
void RAS_OpenGLOffScreen::Destroy()
{
GLuint globj;
Unbind();
if (m_fbo) {
globj = m_fbo;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
if (m_target == RAS_OFS_RENDER_TEXTURE) {
GLenum textarget = (m_samples) ? GL_TEXTURE_2D_MULTISAMPLE : GL_TEXTURE_2D;
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, textarget, 0, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, textarget, 0, 0);
}
else {
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, 0);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &globj);
m_fbo = 0;
}
if (m_depthrb) {
globj = m_depthrb;
glDeleteRenderbuffers(1, &globj);
m_depthrb = 0;
}
if (m_colorrb) {
globj = m_colorrb;
glDeleteRenderbuffers(1, &globj);
m_colorrb = 0;
}
if (m_depthtx) {
globj = m_depthtx;
glDeleteTextures(1, &globj);
m_depthtx = 0;
}
if (m_colortx) {
globj = m_colortx;
glDeleteTextures(1, &globj);
m_colortx = 0;
}
if (m_blitfbo) {
globj = m_blitfbo;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_blitfbo);
if (m_target == RAS_OFS_RENDER_TEXTURE) {
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, 0, 0);
}
else {
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_RENDERBUFFER_EXT, 0);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &globj);
m_blitfbo = 0;
}
if (m_blitrbo) {
globj = m_blitrbo;
glDeleteRenderbuffers(1, &globj);
m_blitrbo = 0;
}
if (m_blittex) {
globj = m_blittex;
glDeleteTextures(1, &globj);
m_blittex = 0;
}
m_width = 0;
m_height = 0;
m_samples = 0;
m_color = 0;
m_target = RAS_OFS_RENDER_BUFFER;
}
void RAS_OpenGLOffScreen::Bind(RAS_OFS_BIND_MODE mode)
{
if (m_fbo) {
if (mode == RAS_OFS_BIND_RENDER) {
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glViewport(0, 0, m_width, m_height);
glDisable(GL_SCISSOR_TEST);
}
else if (!m_blitfbo) {
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
}
else {
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, m_blitfbo);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
}
m_bound = true;
}
}
void RAS_OpenGLOffScreen::Unbind()
{
if (!m_bound)
return;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glEnable(GL_SCISSOR_TEST);
glReadBuffer(GL_BACK);
glDrawBuffer(GL_BACK);
m_bound = false;
}
void RAS_OpenGLOffScreen::MipMap()
{
if (m_color) {
glBindTexture(GL_TEXTURE_2D, m_color);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
}
void RAS_OpenGLOffScreen::Blit()
{
if (m_bound && m_blitfbo) {
// set the draw target to the secondary FBO, the read target is still the multisample FBO
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER, m_blitfbo);
// sample the primary
glBlitFramebufferEXT(0, 0, m_width, m_height, 0, 0, m_width, m_height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
// make sure the next glReadPixels will read from the secondary buffer
glBindFramebufferEXT(GL_READ_FRAMEBUFFER, m_blitfbo);
}
}

View File

@ -0,0 +1,65 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __RAS_OPENGLOFFSCREEN__
#define __RAS_OPENGLOFFSCREEN__
#include "RAS_IOffScreen.h"
#include "GPU_extensions.h"
class RAS_ICanvas;
class RAS_OpenGLOffScreen : public RAS_IOffScreen
{
RAS_ICanvas *m_canvas;
// these are GL objects
unsigned int m_depthrb;
unsigned int m_colorrb;
unsigned int m_depthtx;
unsigned int m_colortx;
unsigned int m_fbo;
unsigned int m_blitfbo;
unsigned int m_blitrbo;
unsigned int m_blittex;
RAS_OFS_RENDER_TARGET m_target;
bool m_bound;
public:
RAS_OpenGLOffScreen(RAS_ICanvas *canvas);
~RAS_OpenGLOffScreen();
bool Create(int width, int height, int samples, RAS_OFS_RENDER_TARGET target);
void Destroy();
void Bind(RAS_OFS_BIND_MODE mode);
void Blit();
void Unbind();
void MipMap();
};
#endif /* __RAS_OPENGLOFFSCREEN__ */

View File

@ -46,6 +46,8 @@
#include "MT_CmMatrix4x4.h"
#include "RAS_OpenGLLight.h"
#include "RAS_OpenGLOffScreen.h"
#include "RAS_OpenGLSync.h"
#include "RAS_StorageVA.h"
#include "RAS_StorageVBO.h"
@ -92,6 +94,7 @@ RAS_OpenGLRasterizer::RAS_OpenGLRasterizer(RAS_ICanvas* canvas, RAS_STORAGE_TYPE
m_time(0.0f),
m_campos(0.0f, 0.0f, 0.0f),
m_camortho(false),
m_camnegscale(false),
m_stereomode(RAS_STEREO_NOSTEREO),
m_curreye(RAS_STEREO_LEFTEYE),
m_eyeseparation(0.0f),
@ -207,7 +210,7 @@ void RAS_OpenGLRasterizer::SetBackColor(float color[3])
m_redback = color[0];
m_greenback = color[1];
m_blueback = color[2];
m_alphaback = 1.0f;
m_alphaback = 0.0f;
}
void RAS_OpenGLRasterizer::SetFog(short type, float start, float dist, float intensity, float color[3])
@ -600,6 +603,31 @@ float RAS_OpenGLRasterizer::GetFocalLength()
return m_focallength;
}
RAS_IOffScreen *RAS_OpenGLRasterizer::CreateOffScreen(int width, int height, int samples, int target)
{
RAS_IOffScreen *ofs;
ofs = new RAS_OpenGLOffScreen(m_2DCanvas);
if (!ofs->Create(width, height, samples, (RAS_IOffScreen::RAS_OFS_RENDER_TARGET)target)) {
delete ofs;
return NULL;
}
return ofs;
}
RAS_ISync *RAS_OpenGLRasterizer::CreateSync(int type)
{
RAS_ISync *sync;
sync = new RAS_OpenGLSync();
if (!sync->Create((RAS_ISync::RAS_SYNC_TYPE)type)) {
delete sync;
return NULL;
}
return sync;
}
void RAS_OpenGLRasterizer::SwapBuffers()
{
@ -924,6 +952,7 @@ MT_Matrix4x4 RAS_OpenGLRasterizer::GetOrthoMatrix(
void RAS_OpenGLRasterizer::SetViewMatrix(const MT_Matrix4x4 &mat,
const MT_Matrix3x3 & camOrientMat3x3,
const MT_Point3 & pos,
const MT_Vector3 &scale,
bool perspective)
{
m_viewmatrix = mat;
@ -966,6 +995,12 @@ void RAS_OpenGLRasterizer::SetViewMatrix(const MT_Matrix4x4 &mat,
}
}
bool negX = (scale[0] < 0.0f);
bool negY = (scale[1] < 0.0f);
bool negZ = (scale[2] < 0.0f);
if (negX || negY || negZ) {
m_viewmatrix.tscale((negX)?-1.0f:1.0f, (negY)?-1.0f:1.0f, (negZ)?-1.0f:1.0f, 1.0);
}
m_viewinvmatrix = m_viewmatrix;
m_viewinvmatrix.invert();
@ -976,6 +1011,7 @@ void RAS_OpenGLRasterizer::SetViewMatrix(const MT_Matrix4x4 &mat,
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(glviewmat);
m_campos = pos;
m_camnegscale = negX ^ negY ^ negZ;
}
@ -1108,6 +1144,9 @@ void RAS_OpenGLRasterizer::SetAlphaBlend(int alphablend)
void RAS_OpenGLRasterizer::SetFrontFace(bool ccw)
{
if (m_camnegscale)
ccw = !ccw;
if (m_last_frontface == ccw)
return;

View File

@ -96,6 +96,7 @@ class RAS_OpenGLRasterizer : public RAS_IRasterizer
MT_Matrix4x4 m_viewinvmatrix;
MT_Point3 m_campos;
bool m_camortho;
bool m_camnegscale;
StereoMode m_stereomode;
StereoEye m_curreye;
@ -180,7 +181,8 @@ public:
virtual float GetEyeSeparation();
virtual void SetFocalLength(const float focallength);
virtual float GetFocalLength();
virtual RAS_IOffScreen *CreateOffScreen(int width, int height, int samples, int target);
virtual RAS_ISync *CreateSync(int type);
virtual void SwapBuffers();
virtual void IndexPrimitives(class RAS_MeshSlot &ms);
@ -189,7 +191,12 @@ public:
virtual void SetProjectionMatrix(MT_CmMatrix4x4 &mat);
virtual void SetProjectionMatrix(const MT_Matrix4x4 &mat);
virtual void SetViewMatrix(const MT_Matrix4x4 &mat, const MT_Matrix3x3 &ori, const MT_Point3 &pos, bool perspective);
virtual void SetViewMatrix(
const MT_Matrix4x4 &mat,
const MT_Matrix3x3 &ori,
const MT_Point3 &pos,
const MT_Vector3 &scale,
bool perspective);
virtual const MT_Point3& GetCameraPosition();
virtual bool GetCameraOrtho();

View File

@ -0,0 +1,82 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
#include "glew-mx.h"
#include <stdio.h>
#include "RAS_OpenGLSync.h"
RAS_OpenGLSync::RAS_OpenGLSync()
:m_sync(NULL)
{
}
RAS_OpenGLSync::~RAS_OpenGLSync()
{
Destroy();
}
bool RAS_OpenGLSync::Create(RAS_SYNC_TYPE type)
{
if (m_sync) {
printf("RAS_OpenGLSync::Create(): sync already exists, destroy first\n");
return false;
}
if (type != RAS_SYNC_TYPE_FENCE) {
printf("RAS_OpenGLSync::Create(): only RAS_SYNC_TYPE_FENCE are currently supported\n");
return false;
}
if (!GLEW_ARB_sync) {
printf("RAS_OpenGLSync::Create(): ARB_sync extension is needed to create sync object\n");
return false;
}
m_sync = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
if (!m_sync) {
printf("RAS_OpenGLSync::Create(): glFenceSync() failed");
return false;
}
return true;
}
void RAS_OpenGLSync::Destroy()
{
if (m_sync) {
glDeleteSync(m_sync);
m_sync = NULL;
}
}
void RAS_OpenGLSync::Wait()
{
if (m_sync) {
// this is needed to ensure that the sync is in the GPU
glFlush();
// block until the operation have completed
glWaitSync(m_sync, 0, GL_TIMEOUT_IGNORED);
}
}

View File

@ -0,0 +1,50 @@
/*
* ***** BEGIN GPL LICENSE BLOCK *****
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software Foundation,
* Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*
* The Original Code is Copyright (C) 2015, Blender Foundation
* All rights reserved.
*
* The Original Code is: all of this file.
*
* Contributor(s): Blender Foundation.
*
* ***** END GPL LICENSE BLOCK *****
*/
#ifndef __RAS_OPENGLSYNC__
#define __RAS_OPENGLSYNC__
#include "RAS_ISync.h"
struct __GLsync;
class RAS_OpenGLSync : public RAS_ISync
{
private:
struct __GLsync *m_sync;
public:
RAS_OpenGLSync();
~RAS_OpenGLSync();
virtual bool Create(RAS_SYNC_TYPE type);
virtual void Destroy();
virtual void Wait();
};
#endif /* __RAS_OPENGLSYNC__ */

View File

@ -213,6 +213,7 @@ void registerAllExceptions(void)
ImageSizesNotMatchDesc.registerDesc();
ImageHasExportsDesc.registerDesc();
InvalidColorChannelDesc.registerDesc();
InvalidImageModeDesc.registerDesc();
SceneInvalidDesc.registerDesc();
CameraInvalidDesc.registerDesc();
ObserverInvalidDesc.registerDesc();
@ -223,4 +224,18 @@ void registerAllExceptions(void)
MirrorTooSmallDesc.registerDesc();
SourceVideoEmptyDesc.registerDesc();
SourceVideoCreationDesc.registerDesc();
OffScreenInvalidDesc.registerDesc();
#ifdef WITH_DECKLINK
AutoDetectionNotAvailDesc.registerDesc();
DeckLinkBadDisplayModeDesc.registerDesc();
DeckLinkBadPixelFormatDesc.registerDesc();
DeckLinkOpenCardDesc.registerDesc();
DeckLinkBadFormatDesc.registerDesc();
DeckLinkInternalErrorDesc.registerDesc();
SourceVideoOnlyCaptureDesc.registerDesc();
VideoDeckLinkBadFormatDesc.registerDesc();
VideoDeckLinkOpenCardDesc.registerDesc();
VideoDeckLinkDvpInternalErrorDesc.registerDesc();
VideoDeckLinkPinMemoryErrorDesc.registerDesc();
#endif
}

View File

@ -46,7 +46,7 @@
throw Exception (err, macroHRslt, __FILE__, __LINE__); \
}
#define THRWEXCP(err,hRslt) throw Exception (err, hRslt, __FILE__, __LINE__);
#define THRWEXCP(err,hRslt) throw Exception (err, hRslt, __FILE__, __LINE__)
#if defined WIN32
@ -209,9 +209,11 @@ extern ExpDesc MaterialNotAvailDesc;
extern ExpDesc ImageSizesNotMatchDesc;
extern ExpDesc ImageHasExportsDesc;
extern ExpDesc InvalidColorChannelDesc;
extern ExpDesc InvalidImageModeDesc;
extern ExpDesc SceneInvalidDesc;
extern ExpDesc CameraInvalidDesc;
extern ExpDesc ObserverInvalidDesc;
extern ExpDesc OffScreenInvalidDesc;
extern ExpDesc MirrorInvalidDesc;
extern ExpDesc MirrorSizeInvalidDesc;
extern ExpDesc MirrorNormalInvalidDesc;
@ -219,7 +221,19 @@ extern ExpDesc MirrorHorizontalDesc;
extern ExpDesc MirrorTooSmallDesc;
extern ExpDesc SourceVideoEmptyDesc;
extern ExpDesc SourceVideoCreationDesc;
extern ExpDesc DeckLinkBadDisplayModeDesc;
extern ExpDesc DeckLinkBadPixelFormatDesc;
extern ExpDesc AutoDetectionNotAvailDesc;
extern ExpDesc DeckLinkOpenCardDesc;
extern ExpDesc DeckLinkBadFormatDesc;
extern ExpDesc DeckLinkInternalErrorDesc;
extern ExpDesc SourceVideoOnlyCaptureDesc;
extern ExpDesc VideoDeckLinkBadFormatDesc;
extern ExpDesc VideoDeckLinkOpenCardDesc;
extern ExpDesc VideoDeckLinkDvpInternalErrorDesc;
extern ExpDesc VideoDeckLinkPinMemoryErrorDesc;
extern ExceptionID InvalidImageMode;
void registerAllExceptions(void);
#endif

View File

@ -44,6 +44,13 @@
#define VT_A(v) ((unsigned char*)&v)[3]
#define VT_RGBA(v,r,g,b,a) VT_R(v)=(unsigned char)r, VT_G(v)=(unsigned char)g, VT_B(v)=(unsigned char)b, VT_A(v)=(unsigned char)a
#ifdef __BIG_ENDIAN__
# define VT_SWAPBR(i) ((((i) >> 16) & 0xFF00) + (((i) & 0xFF00) << 16) + ((i) & 0xFF00FF))
#else
# define VT_SWAPBR(i) ((((i) & 0xFF) << 16) + (((i) >> 16) & 0xFF) + ((i) & 0xFF00FF00))
#endif
// forward declaration
class FilterBase;

View File

@ -81,6 +81,30 @@ protected:
}
};
/// class for BGRA32 conversion
class FilterBGRA32 : public FilterBase
{
public:
/// constructor
FilterBGRA32 (void) {}
/// destructor
virtual ~FilterBGRA32 (void) {}
/// get source pixel size
virtual unsigned int getPixelSize (void) { return 4; }
protected:
/// filter pixel, source byte buffer
virtual unsigned int filter(
unsigned char *src, short x, short y,
short * size, unsigned int pixSize, unsigned int val)
{
VT_RGBA(val,src[2],src[1],src[0],src[3]);
return val;
}
};
/// class for BGR24 conversion
class FilterBGR24 : public FilterBase
{

View File

@ -32,7 +32,6 @@
extern "C" {
#include "bgl.h"
}
#include "glew-mx.h"
#include <vector>
#include <string.h>
@ -50,6 +49,14 @@ extern "C" {
// ImageBase class implementation
ExceptionID ImageHasExports;
ExceptionID InvalidColorChannel;
ExceptionID InvalidImageMode;
ExpDesc ImageHasExportsDesc(ImageHasExports, "Image has exported buffers, cannot resize");
ExpDesc InvalidColorChannelDesc(InvalidColorChannel, "Invalid or too many color channels specified. At most 4 values within R, G, B, A, 0, 1");
ExpDesc InvalidImageModeDesc(InvalidImageMode, "Invalid image mode, only RGBA and BGRA are supported");
// constructor
ImageBase::ImageBase (bool staticSrc) : m_image(NULL), m_imgSize(0),
m_avail(false), m_scale(false), m_scaleChange(false), m_flip(false),
@ -111,6 +118,28 @@ unsigned int * ImageBase::getImage (unsigned int texId, double ts)
return m_avail ? m_image : NULL;
}
bool ImageBase::loadImage(unsigned int *buffer, unsigned int size, unsigned int format, double ts)
{
unsigned int *d, *s, v, len;
if (getImage(0, ts) != NULL && size >= getBuffSize()) {
switch (format) {
case GL_RGBA:
memcpy(buffer, m_image, getBuffSize());
break;
case GL_BGRA:
len = (unsigned int)m_size[0] * m_size[1];
for (s=m_image, d=buffer; len; len--) {
v = *s++;
*d++ = VT_SWAPBR(v);
}
break;
default:
THRWEXCP(InvalidImageMode,S_OK);
}
return true;
}
return false;
}
// refresh image source
void ImageBase::refresh (void)
@ -179,11 +208,18 @@ void ImageBase::setFilter (PyFilter * filt)
m_pyfilter = filt;
}
ExceptionID ImageHasExports;
ExceptionID InvalidColorChannel;
void ImageBase::swapImageBR()
{
unsigned int size, v, *s;
ExpDesc ImageHasExportsDesc(ImageHasExports, "Image has exported buffers, cannot resize");
ExpDesc InvalidColorChannelDesc(InvalidColorChannel, "Invalid or too many color channels specified. At most 4 values within R, G, B, A, 0, 1");
if (m_avail) {
size = 1 * m_size[0] * m_size[1];
for (s=m_image; size; size--) {
v = *s;
*s++ = VT_SWAPBR(v);
}
}
}
// initialize image data
void ImageBase::init (short width, short height)
@ -500,10 +536,57 @@ PyObject *Image_getSize (PyImage *self, void *closure)
}
// refresh image
PyObject *Image_refresh (PyImage *self)
PyObject *Image_refresh (PyImage *self, PyObject *args)
{
Py_buffer buffer;
bool done = true;
char *mode = NULL;
double ts = -1.0;
unsigned int format;
memset(&buffer, 0, sizeof(buffer));
if (PyArg_ParseTuple(args, "|s*sd:refresh", &buffer, &mode, &ts)) {
if (buffer.buf) {
// a target buffer is provided, verify its format
if (buffer.readonly) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be writable");
}
else if (!PyBuffer_IsContiguous(&buffer, 'C')) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be contiguous in memory");
}
else if (((intptr_t)buffer.buf & 3) != 0) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be aligned to 4 bytes boundary");
}
else {
// ready to get the image into our buffer
try {
if (mode == NULL || !strcmp(mode, "RGBA"))
format = GL_RGBA;
else if (!strcmp(mode, "BGRA"))
format = GL_BGRA;
else
THRWEXCP(InvalidImageMode,S_OK);
done = self->m_image->loadImage((unsigned int *)buffer.buf, buffer.len, format, ts);
}
catch (Exception & exp) {
exp.report();
}
}
PyBuffer_Release(&buffer);
if (PyErr_Occurred()) {
return NULL;
}
}
}
else {
return NULL;
}
self->m_image->refresh();
Py_RETURN_NONE;
if (done)
Py_RETURN_TRUE;
Py_RETURN_FALSE;
}
// get scale

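A minimal usage sketch of the buffer form of refresh() implemented above; the ImageViewport source and the BGRA mode are illustrative assumptions, not part of this patch, and the script is meant to run inside a BGE Python controller:

    import bge

    # any image source works; ImageViewport is used here for illustration
    img = bge.texture.ImageViewport()

    # the target must be writable, C-contiguous and aligned to 4 bytes;
    # a bytearray of at least width*height*4 bytes satisfies the checks above
    buf = bytearray(img.size[0] * img.size[1] * 4)

    # copy the pixels as BGRA and invalidate the image;
    # returns True on success, False if the transfer was not possible
    ok = img.refresh(buf, "BGRA")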
View File

@ -40,6 +40,7 @@
#include "FilterBase.h"
#include "glew-mx.h"
// forward declarations
struct PyImage;
@ -104,6 +105,13 @@ public:
/// calculate size(nearest power of 2)
static short calcSize(short size);
/// calculate image from sources and send it to a target buffer instead of a texture
/// format is GL_RGBA or GL_BGRA
virtual bool loadImage(unsigned int *buffer, unsigned int size, unsigned int format, double ts);
/// swap the B and R channel in-place in the image buffer
void swapImageBR();
/// number of buffers pointing to m_image, public because not handled by this class
int m_exports;
@ -348,7 +356,7 @@ PyObject *Image_getImage(PyImage *self, char *mode);
// get image size
PyObject *Image_getSize(PyImage *self, void *closure);
// refresh image - invalidate current content
PyObject *Image_refresh(PyImage *self);
PyObject *Image_refresh(PyImage *self, PyObject *args);
// get scale
PyObject *Image_getScale(PyImage *self, void *closure);

View File

@ -156,7 +156,7 @@ static PyMethodDef imageMixMethods[] = {
{"getWeight", (PyCFunction)getWeight, METH_VARARGS, "get image source weight"},
{"setWeight", (PyCFunction)setWeight, METH_VARARGS, "set image source weight"},
// methods from ImageBase class
{"refresh", (PyCFunction)Image_refresh, METH_NOARGS, "Refresh image - invalidate its current content"},
{"refresh", (PyCFunction)Image_refresh, METH_VARARGS, "Refresh image - invalidate its current content"},
{NULL}
};
// attributes structure

View File

@ -43,6 +43,8 @@
#include "RAS_CameraData.h"
#include "RAS_MeshObject.h"
#include "RAS_Polygon.h"
#include "RAS_IOffScreen.h"
#include "RAS_ISync.h"
#include "BLI_math.h"
#include "ImageRender.h"
@ -51,11 +53,12 @@
#include "Exception.h"
#include "Texture.h"
ExceptionID SceneInvalid, CameraInvalid, ObserverInvalid;
ExceptionID SceneInvalid, CameraInvalid, ObserverInvalid, OffScreenInvalid;
ExceptionID MirrorInvalid, MirrorSizeInvalid, MirrorNormalInvalid, MirrorHorizontal, MirrorTooSmall;
ExpDesc SceneInvalidDesc(SceneInvalid, "Scene object is invalid");
ExpDesc CameraInvalidDesc(CameraInvalid, "Camera object is invalid");
ExpDesc ObserverInvalidDesc(ObserverInvalid, "Observer object is invalid");
ExpDesc OffScreenInvalidDesc(OffScreenInvalid, "Offscreen object is invalid");
ExpDesc MirrorInvalidDesc(MirrorInvalid, "Mirror object is invalid");
ExpDesc MirrorSizeInvalidDesc(MirrorSizeInvalid, "Mirror has no vertex or no size");
ExpDesc MirrorNormalInvalidDesc(MirrorNormalInvalid, "Cannot determine mirror plane");
@ -63,12 +66,15 @@ ExpDesc MirrorHorizontalDesc(MirrorHorizontal, "Mirror is horizontal in local sp
ExpDesc MirrorTooSmallDesc(MirrorTooSmall, "Mirror is too small");
// constructor
ImageRender::ImageRender (KX_Scene *scene, KX_Camera * camera) :
ImageViewport(),
ImageRender::ImageRender (KX_Scene *scene, KX_Camera * camera, PyRASOffScreen * offscreen) :
ImageViewport(offscreen),
m_render(true),
m_done(false),
m_scene(scene),
m_camera(camera),
m_owncamera(false),
m_offscreen(offscreen),
m_sync(NULL),
m_observer(NULL),
m_mirror(NULL),
m_clip(100.f),
@ -81,6 +87,10 @@ ImageRender::ImageRender (KX_Scene *scene, KX_Camera * camera) :
m_engine = KX_GetActiveEngine();
m_rasterizer = m_engine->GetRasterizer();
m_canvas = m_engine->GetCanvas();
// keep a reference to the offscreen buffer
if (m_offscreen) {
Py_INCREF(m_offscreen);
}
}
// destructor
@ -88,6 +98,9 @@ ImageRender::~ImageRender (void)
{
if (m_owncamera)
m_camera->Release();
if (m_sync)
delete m_sync;
Py_XDECREF(m_offscreen);
}
// get background color
@ -121,30 +134,41 @@ void ImageRender::setBackgroundFromScene (KX_Scene *scene)
// capture image from viewport
void ImageRender::calcImage (unsigned int texId, double ts)
void ImageRender::calcViewport (unsigned int texId, double ts, unsigned int format)
{
if (m_rasterizer->GetDrawingMode() != RAS_IRasterizer::KX_TEXTURED || // no need for texture
m_camera->GetViewport() || // camera must be inactive
m_camera == m_scene->GetActiveCamera())
{
// no need to compute texture in non texture rendering
m_avail = false;
return;
}
// render the scene from the camera
Render();
// get image from viewport
ImageViewport::calcImage(texId, ts);
// restore OpenGL state
m_canvas->EndFrame();
if (!m_done) {
if (!Render()) {
return;
}
}
else if (m_offscreen) {
m_offscreen->ofs->Bind(RAS_IOffScreen::RAS_OFS_BIND_READ);
}
// wait until all render operations are completed
WaitSync();
// get image from viewport (or FBO)
ImageViewport::calcViewport(texId, ts, format);
if (m_offscreen) {
m_offscreen->ofs->Unbind();
}
}
void ImageRender::Render()
bool ImageRender::Render()
{
RAS_FrameFrustum frustum;
if (!m_render)
return;
if (!m_render ||
m_rasterizer->GetDrawingMode() != RAS_IRasterizer::KX_TEXTURED || // no need for texture
m_camera->GetViewport() || // camera must be inactive
m_camera == m_scene->GetActiveCamera())
{
// no need to compute texture in non texture rendering
return false;
}
if (!m_scene->IsShadowDone())
m_engine->RenderShadowBuffers(m_scene);
if (m_mirror)
{
@ -164,7 +188,7 @@ void ImageRender::Render()
MT_Scalar observerDistance = mirrorPlaneDTerm - observerWorldPos.dot(mirrorWorldZ);
// if distance < 0.01 => observer is on wrong side of mirror, don't render
if (observerDistance < 0.01)
return;
return false;
// set camera world position = observerPos + normal * 2 * distance
MT_Point3 cameraWorldPos = observerWorldPos + (MT_Scalar(2.0)*observerDistance)*mirrorWorldZ;
m_camera->GetSGNode()->SetLocalPosition(cameraWorldPos);
@ -215,7 +239,15 @@ void ImageRender::Render()
RAS_Rect area = m_canvas->GetWindowArea();
// The screen area that ImageViewport will copy is also the rendering zone
m_canvas->SetViewPort(m_position[0], m_position[1], m_position[0]+m_capSize[0]-1, m_position[1]+m_capSize[1]-1);
if (m_offscreen) {
// bind the fbo and set the viewport to full size
m_offscreen->ofs->Bind(RAS_IOffScreen::RAS_OFS_BIND_RENDER);
// this is needed to avoid crashing in the canvas check
m_canvas->UpdateViewPort(0, 0, m_offscreen->ofs->GetWidth(), m_offscreen->ofs->GetHeight());
}
else {
m_canvas->SetViewPort(m_position[0], m_position[1], m_position[0]+m_capSize[0]-1, m_position[1]+m_capSize[1]-1);
}
m_canvas->ClearColor(m_background[0], m_background[1], m_background[2], m_background[3]);
m_canvas->ClearBuffer(RAS_ICanvas::COLOR_BUFFER|RAS_ICanvas::DEPTH_BUFFER);
m_rasterizer->BeginFrame(m_engine->GetClockTime());
@ -292,17 +324,18 @@ void ImageRender::Render()
MT_Transform camtrans(m_camera->GetWorldToCamera());
MT_Matrix4x4 viewmat(camtrans);
m_rasterizer->SetViewMatrix(viewmat, m_camera->NodeGetWorldOrientation(), m_camera->NodeGetWorldPosition(), m_camera->GetCameraData()->m_perspective);
m_rasterizer->SetViewMatrix(viewmat, m_camera->NodeGetWorldOrientation(), m_camera->NodeGetWorldPosition(), m_camera->NodeGetLocalScaling(), m_camera->GetCameraData()->m_perspective);
m_camera->SetModelviewMatrix(viewmat);
// restore the stereo mode now that the matrix is computed
m_rasterizer->SetStereoMode(stereomode);
if (stereomode == RAS_IRasterizer::RAS_STEREO_QUADBUFFERED) {
// In QUAD buffer stereo mode, the GE render pass ends with the right eye on the right buffer
// but we need to draw on the left buffer to capture the render
// TODO: implement an explicit function in rasterizer to restore the left buffer.
m_rasterizer->SetEye(RAS_IRasterizer::RAS_STEREO_LEFTEYE);
}
if (m_rasterizer->Stereo()) {
// stereo mode changes render settings that disturb this render, cancel them all
// we don't need to restore them as they are set before each frame render.
glDrawBuffer(GL_BACK_LEFT);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDisable(GL_POLYGON_STIPPLE);
}
m_scene->CalculateVisibleMeshes(m_rasterizer,m_camera);
@ -314,8 +347,48 @@ void ImageRender::Render()
// restore the canvas area now that the render is completed
m_canvas->GetWindowArea() = area;
m_canvas->EndFrame();
// In case multisample is active, blit the FBO
if (m_offscreen)
m_offscreen->ofs->Blit();
// end of all render operations, let's create a sync object just in case
if (m_sync) {
// a sync from a previous render, should not happen
delete m_sync;
m_sync = NULL;
}
m_sync = m_rasterizer->CreateSync(RAS_ISync::RAS_SYNC_TYPE_FENCE);
// remember that the render is done
m_done = true;
// the image is not available at this stage
m_avail = false;
return true;
}
void ImageRender::Unbind()
{
if (m_offscreen)
{
m_offscreen->ofs->Unbind();
}
}
void ImageRender::WaitSync()
{
if (m_sync) {
m_sync->Wait();
// done with it, delete it
delete m_sync;
m_sync = NULL;
}
if (m_offscreen) {
// this is needed to finalize the image if the target is a texture
m_offscreen->ofs->MipMap();
}
// all render operations are done and complete, invalidate the render for next time
m_done = false;
}
// cast Image pointer to ImageRender
inline ImageRender * getImageRender (PyImage *self)
@ -337,11 +410,13 @@ static int ImageRender_init(PyObject *pySelf, PyObject *args, PyObject *kwds)
PyObject *scene;
// camera object
PyObject *camera;
// offscreen buffer object
PyRASOffScreen *offscreen = NULL;
// parameter keywords
static const char *kwlist[] = {"sceneObj", "cameraObj", NULL};
static const char *kwlist[] = {"sceneObj", "cameraObj", "ofsObj", NULL};
// get parameters
if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO",
const_cast<char**>(kwlist), &scene, &camera))
if (!PyArg_ParseTupleAndKeywords(args, kwds, "OO|O",
const_cast<char**>(kwlist), &scene, &camera, &offscreen))
return -1;
try
{
@ -357,11 +432,16 @@ static int ImageRender_init(PyObject *pySelf, PyObject *args, PyObject *kwds)
// throw exception if camera is not available
if (cameraPtr == NULL) THRWEXCP(CameraInvalid, S_OK);
if (offscreen) {
if (Py_TYPE(offscreen) != &PyRASOffScreen_Type) {
THRWEXCP(OffScreenInvalid, S_OK);
}
}
// get pointer to image structure
PyImage *self = reinterpret_cast<PyImage*>(pySelf);
// create source object
if (self->m_image != NULL) delete self->m_image;
self->m_image = new ImageRender(scenePtr, cameraPtr);
self->m_image = new ImageRender(scenePtr, cameraPtr, offscreen);
}
catch (Exception & exp)
{
@ -372,6 +452,55 @@ static int ImageRender_init(PyObject *pySelf, PyObject *args, PyObject *kwds)
return 0;
}
static PyObject *ImageRender_refresh(PyImage *self, PyObject *args)
{
ImageRender *imageRender = getImageRender(self);
if (!imageRender) {
PyErr_SetString(PyExc_TypeError, "Incomplete ImageRender() object");
return NULL;
}
if (PyArg_ParseTuple(args, "")) {
// refresh called with no argument.
// For other image objects it simply invalidates the image buffer
// For ImageRender it triggers a render+sync
// Note that this only makes sense when doing offscreen render to a texture
if (!imageRender->isDone()) {
if (!imageRender->Render()) {
Py_RETURN_FALSE;
}
// as we are not trying to read the pixels, just unbind
imageRender->Unbind();
}
// wait until all render operations are completed
// this will also finalize the texture
imageRender->WaitSync();
Py_RETURN_TRUE;
}
else {
// fallback on standard processing
PyErr_Clear();
return Image_refresh(self, args);
}
}
// refresh image
static PyObject *ImageRender_render(PyImage *self)
{
ImageRender *imageRender = getImageRender(self);
if (!imageRender) {
PyErr_SetString(PyExc_TypeError, "Incomplete ImageRender() object");
return NULL;
}
if (!imageRender->Render()) {
Py_RETURN_FALSE;
}
// we are not reading the pixels now, unbind
imageRender->Unbind();
Py_RETURN_TRUE;
}
// get background color
static PyObject *getBackground (PyImage *self, void *closure)
@ -410,7 +539,8 @@ static int setBackground(PyImage *self, PyObject *value, void *closure)
// methods structure
static PyMethodDef imageRenderMethods[] =
{ // methods from ImageBase class
{"refresh", (PyCFunction)Image_refresh, METH_NOARGS, "Refresh image - invalidate its current content"},
{"refresh", (PyCFunction)ImageRender_refresh, METH_VARARGS, "Refresh image - invalidate its current content after optionally transferring its content to a target buffer"},
{"render", (PyCFunction)ImageRender_render, METH_NOARGS, "Render scene - run before refresh() to perform an asynchronous render"},
{NULL}
};
// attributes structure
@ -601,7 +731,9 @@ static PyGetSetDef imageMirrorGetSets[] =
ImageRender::ImageRender (KX_Scene *scene, KX_GameObject *observer, KX_GameObject *mirror, RAS_IPolyMaterial *mat) :
ImageViewport(),
m_render(false),
m_done(false),
m_scene(scene),
m_offscreen(NULL),
m_observer(observer),
m_mirror(mirror),
m_clip(100.f)

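A sketch of the asynchronous path added above, splitting render() and refresh() across two logic frames; the offscreen buffer factory bge.render.offScreenCreate() and the camera name are assumptions taken from the rest of this patch, not definitive API:

    import bge

    scene = bge.logic.getCurrentScene()
    # render-buffer target, suitable for reading the pixels back on the host
    fbo = bge.render.offScreenCreate(512, 512)
    img = bge.texture.ImageRender(scene, scene.objects["Camera"], fbo)

    buf = bytearray(512 * 512 * 4)

    def frame_n(cont):
        # queue the render commands and return immediately;
        # a fence sync object is created at the end of the render
        img.render()

    def frame_n_plus_1(cont):
        # wait on the fence, bind the FBO for reading and copy the pixels
        if not img.refresh(buf, "BGRA"):
            print("render not available")

Note that refresh() waits on the fence created at the end of Render() (WaitSync above) before reading the pixels, so the two calls can safely be issued on different frames.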
View File

@ -39,6 +39,8 @@
#include "DNA_screen_types.h"
#include "RAS_ICanvas.h"
#include "RAS_IRasterizer.h"
#include "RAS_IOffScreen.h"
#include "RAS_ISync.h"
#include "ImageViewport.h"
@ -48,7 +50,7 @@ class ImageRender : public ImageViewport
{
public:
/// constructor
ImageRender(KX_Scene *scene, KX_Camera *camera);
ImageRender(KX_Scene *scene, KX_Camera *camera, PyRASOffScreen *offscreen);
ImageRender(KX_Scene *scene, KX_GameObject *observer, KX_GameObject *mirror, RAS_IPolyMaterial * mat);
/// destructor
@ -63,16 +65,30 @@ public:
float getClip (void) { return m_clip; }
/// set whole buffer use
void setClip (float clip) { m_clip = clip; }
/// render status
bool isDone() { return m_done; }
/// render frame (public so that it is accessible from python)
bool Render();
/// in case fbo is used, method to unbind
void Unbind();
/// wait for render to complete
void WaitSync();
protected:
/// true if ready to render
bool m_render;
/// is render done already?
bool m_done;
/// rendered scene
KX_Scene * m_scene;
/// camera for render
KX_Camera * m_camera;
/// do we own the camera?
bool m_owncamera;
/// if offscreen render
PyRASOffScreen *m_offscreen;
/// object to synchronize render even if no buffer transfer
RAS_ISync *m_sync;
/// for mirror operation
KX_GameObject * m_observer;
KX_GameObject * m_mirror;
@ -91,15 +107,15 @@ protected:
KX_KetsjiEngine* m_engine;
/// background color
float m_background[4];
float m_background[4];
/// render 3d scene to image
virtual void calcImage (unsigned int texId, double ts);
virtual void calcImage (unsigned int texId, double ts) { calcViewport(texId, ts, GL_RGBA); }
/// render 3d scene to image
virtual void calcViewport (unsigned int texId, double ts, unsigned int format);
void Render();
void SetupRenderFrame(KX_Scene *scene, KX_Camera* cam);
void RenderFrame(KX_Scene* scene, KX_Camera* cam);
void setBackgroundFromScene(KX_Scene *scene);
void SetWorldSettings(KX_WorldInfo* wi);
};

View File

@ -45,14 +45,22 @@
// constructor
ImageViewport::ImageViewport (void) : m_alpha(false), m_texInit(false)
ImageViewport::ImageViewport (PyRASOffScreen *offscreen) : m_alpha(false), m_texInit(false)
{
// get viewport rectangle
RAS_Rect rect = KX_GetActiveEngine()->GetCanvas()->GetWindowArea();
m_viewport[0] = rect.GetLeft();
m_viewport[1] = rect.GetBottom();
m_viewport[2] = rect.GetWidth();
m_viewport[3] = rect.GetHeight();
if (offscreen) {
m_viewport[0] = 0;
m_viewport[1] = 0;
m_viewport[2] = offscreen->ofs->GetWidth();
m_viewport[3] = offscreen->ofs->GetHeight();
}
else {
RAS_Rect rect = KX_GetActiveEngine()->GetCanvas()->GetWindowArea();
m_viewport[0] = rect.GetLeft();
m_viewport[1] = rect.GetBottom();
m_viewport[2] = rect.GetWidth();
m_viewport[3] = rect.GetHeight();
}
//glGetIntegerv(GL_VIEWPORT, m_viewport);
// create buffer for viewport image
@ -60,7 +68,7 @@ ImageViewport::ImageViewport (void) : m_alpha(false), m_texInit(false)
// float (1 float = 4 bytes per pixel)
m_viewportImage = new BYTE [4 * getViewportSize()[0] * getViewportSize()[1]];
// set attributes
setWhole(false);
setWhole((offscreen) ? true : false);
}
// destructor
@ -126,25 +134,26 @@ void ImageViewport::setPosition (GLint pos[2])
// capture image from viewport
void ImageViewport::calcImage (unsigned int texId, double ts)
void ImageViewport::calcViewport (unsigned int texId, double ts, unsigned int format)
{
// if scale was changed
if (m_scaleChange)
// reset image
init(m_capSize[0], m_capSize[1]);
// if texture wasn't initialized
if (!m_texInit) {
if (!m_texInit && texId != 0) {
// initialize it
loadTexture(texId, m_image, m_size);
m_texInit = true;
}
// if texture can be directly created
if (texId != 0 && m_pyfilter == NULL && m_capSize[0] == calcSize(m_capSize[0])
&& m_capSize[1] == calcSize(m_capSize[1]) && !m_flip && !m_zbuff && !m_depth)
if (texId != 0 && m_pyfilter == NULL && m_size[0] == m_capSize[0] &&
m_size[1] == m_capSize[1] && !m_flip && !m_zbuff && !m_depth)
{
// just copy current viewport to texture
glBindTexture(GL_TEXTURE_2D, texId);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1]);
glBindTexture(GL_TEXTURE_2D, 0);
// image is not available
m_avail = false;
}
@ -176,11 +185,33 @@ void ImageViewport::calcImage (unsigned int texId, double ts)
// get frame buffer data
if (m_alpha) {
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], GL_RGBA,
GL_UNSIGNED_BYTE, m_viewportImage);
// filter loaded data
FilterRGBA32 filt;
filterImage(filt, m_viewportImage, m_capSize);
// as we are reading the pixels in the native format, we can read directly into the image buffer
// provided no processing is needed on the image
if (m_size[0] == m_capSize[0] &&
m_size[1] == m_capSize[1] &&
!m_flip &&
!m_pyfilter)
{
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], format,
GL_UNSIGNED_BYTE, m_image);
m_avail = true;
}
else if (!m_pyfilter) {
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], format,
GL_UNSIGNED_BYTE, m_viewportImage);
FilterRGBA32 filt;
filterImage(filt, m_viewportImage, m_capSize);
}
else {
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], GL_RGBA,
GL_UNSIGNED_BYTE, m_viewportImage);
FilterRGBA32 filt;
filterImage(filt, m_viewportImage, m_capSize);
if (format == GL_BGRA) {
// in place byte swapping
swapImageBR();
}
}
}
else {
glReadPixels(m_upLeft[0], m_upLeft[1], (GLsizei)m_capSize[0], (GLsizei)m_capSize[1], GL_RGB,
@ -188,12 +219,46 @@ void ImageViewport::calcImage (unsigned int texId, double ts)
// filter loaded data
FilterRGB24 filt;
filterImage(filt, m_viewportImage, m_capSize);
if (format == GL_BGRA) {
// in place byte swapping
swapImageBR();
}
}
}
}
}
}
bool ImageViewport::loadImage(unsigned int *buffer, unsigned int size, unsigned int format, double ts)
{
unsigned int *tmp_image;
bool ret;
// if scale was changed
if (m_scaleChange) {
// reset image
init(m_capSize[0], m_capSize[1]);
}
// the target buffer must be large enough for the image
if (size < getBuffSize())
return false;
if (m_avail) {
// just copy
return ImageBase::loadImage(buffer, size, format, ts);
}
else {
tmp_image = m_image;
m_image = buffer;
calcViewport(0, ts, format);
ret = m_avail;
m_image = tmp_image;
// since the image was not loaded to our buffer, it's not valid
m_avail = false;
}
return ret;
}
// cast Image pointer to ImageViewport
@ -336,7 +401,7 @@ int ImageViewport_setCaptureSize(PyImage *self, PyObject *value, void *closure)
// methods structure
static PyMethodDef imageViewportMethods[] =
{ // methods from ImageBase class
{"refresh", (PyCFunction)Image_refresh, METH_NOARGS, "Refresh image - invalidate its current content"},
{"refresh", (PyCFunction)Image_refresh, METH_VARARGS, "Refresh image - invalidate its current content"},
{NULL}
};
// attributes structure

View File

@ -35,6 +35,7 @@
#include "Common.h"
#include "ImageBase.h"
#include "RAS_IOffScreen.h"
/// class for viewport access
@ -42,7 +43,7 @@ class ImageViewport : public ImageBase
{
public:
/// constructor
ImageViewport (void);
ImageViewport (PyRASOffScreen *offscreen=NULL);
/// destructor
virtual ~ImageViewport (void);
@ -67,6 +68,9 @@ public:
/// set position in viewport
void setPosition (GLint pos[2] = NULL);
/// capture image from viewport to user buffer
virtual bool loadImage(unsigned int *buffer, unsigned int size, unsigned int format, double ts);
protected:
/// frame buffer rectangle
GLint m_viewport[4];
@ -89,7 +93,10 @@ protected:
bool m_texInit;
/// capture image from viewport
virtual void calcImage (unsigned int texId, double ts);
virtual void calcImage (unsigned int texId, double ts) { calcViewport(texId, ts, GL_RGBA); }
/// capture image from viewport
virtual void calcViewport (unsigned int texId, double ts, unsigned int format);
/// get viewport size
GLint * getViewportSize (void) { return m_viewport + 2; }

View File

@ -393,9 +393,10 @@ static PyObject *Texture_refresh(Texture *self, PyObject *args)
}
// load texture for rendering
loadTexture(self->m_actTex, texture, size, self->m_mipmap);
// refresh texture source, if required
if (refreshSource) self->m_source->m_image->refresh();
}
// refresh texture source, if required
if (refreshSource) {
self->m_source->m_image->refresh();
}
}
}

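The hunk above moves the source refresh out of the texture-upload branch, so the source is refreshed even when no new image was uploaded this frame. A per-frame sketch of the standard Texture workflow this affects; the object and camera names are illustrative assumptions:

    import bge

    cont = bge.logic.getCurrentController()
    obj = cont.owner
    scene = bge.logic.getCurrentScene()

    if "tex" not in obj:
        # replace the first texture of the object and keep a reference to it
        tex = bge.texture.Texture(obj, 0)
        tex.source = bge.texture.ImageRender(scene, scene.objects["Camera"])
        obj["tex"] = tex

    # upload a new image when one is available and then refresh the source
    obj["tex"].refresh(True)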
View File

@ -137,8 +137,53 @@ PyObject *Video_getStatus(PyImage *self, void *closure)
}
// refresh video
PyObject *Video_refresh(PyImage *self)
PyObject *Video_refresh(PyImage *self, PyObject *args)
{
Py_buffer buffer;
char *mode = NULL;
unsigned int format;
double ts = -1.0;
memset(&buffer, 0, sizeof(buffer));
if (PyArg_ParseTuple(args, "|s*sd:refresh", &buffer, &mode, &ts)) {
if (buffer.buf) {
// a target buffer is provided, verify its format
if (buffer.readonly) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be writable");
}
else if (!PyBuffer_IsContiguous(&buffer, 'C')) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be contiguous in memory");
}
else if (((intptr_t)buffer.buf & 3) != 0) {
PyErr_SetString(PyExc_TypeError, "Buffers passed in argument must be aligned to 4 bytes boundary");
}
else {
// ready to get the image into our buffer
try {
if (mode == NULL || !strcmp(mode, "RGBA"))
format = GL_RGBA;
else if (!strcmp(mode, "BGRA"))
format = GL_BGRA;
else
THRWEXCP(InvalidImageMode,S_OK);
if (!self->m_image->loadImage((unsigned int *)buffer.buf, buffer.len, format, ts)) {
PyErr_SetString(PyExc_TypeError, "Could not load the buffer, perhaps size is not compatible");
}
}
catch (Exception & exp) {
exp.report();
}
}
PyBuffer_Release(&buffer);
if (PyErr_Occurred())
return NULL;
}
}
else
{
return NULL;
}
getVideo(self)->refresh();
return Video_getStatus(self, NULL);
}

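A sketch of the extended Video_refresh above, copying the decoded frame to an external buffer without binding the video to a Texture object; the file path and frame size are illustrative assumptions:

    import bge

    video = bge.texture.VideoFFmpeg(bge.logic.expandPath("//video.avi"))
    video.play()

    # frame size of the video, assumed known in advance for this sketch;
    # one 32-bit pixel per texel of the decoded frame
    width, height = 640, 480
    buf = bytearray(width * height * 4)

    def every_frame(cont):
        # copy the current frame as BGRA; a TypeError is raised if the buffer
        # is too small or no frame is available, otherwise the usual video
        # status is returned, as before
        status = video.refresh(buf, "BGRA")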
View File

@ -190,7 +190,7 @@ void Video_open(VideoBase *self, char *file, short captureID);
PyObject *Video_play(PyImage *self);
PyObject *Video_pause(PyImage *self);
PyObject *Video_stop(PyImage *self);
PyObject *Video_refresh(PyImage *self);
PyObject *Video_refresh(PyImage *self, PyObject *args);
PyObject *Video_getStatus(PyImage *self, void *closure);
PyObject *Video_getRange(PyImage *self, void *closure);
int Video_setRange(PyImage *self, PyObject *value, void *closure);

View File

@ -1203,7 +1203,7 @@ static PyMethodDef videoMethods[] =
{"play", (PyCFunction)Video_play, METH_NOARGS, "Play (restart) video"},
{"pause", (PyCFunction)Video_pause, METH_NOARGS, "pause video"},
{"stop", (PyCFunction)Video_stop, METH_NOARGS, "stop video (play will replay it from start)"},
{"refresh", (PyCFunction)Video_refresh, METH_NOARGS, "Refresh video - get its status"},
{"refresh", (PyCFunction)Video_refresh, METH_VARARGS, "Refresh video - get its status"},
{NULL}
};
// attributes structure
@ -1326,7 +1326,7 @@ static PyObject *Image_reload(PyImage *self, PyObject *args)
// methods structure
static PyMethodDef imageMethods[] =
{ // methods from VideoBase class
{"refresh", (PyCFunction)Video_refresh, METH_NOARGS, "Refresh image, i.e. load it"},
{"refresh", (PyCFunction)Video_refresh, METH_VARARGS, "Refresh image, i.e. load it"},
{"reload", (PyCFunction)Image_reload, METH_VARARGS, "Reload image, i.e. reopen it"},
{NULL}
};