available. But this just checks whether the FBO functions can be called;
it doesn't check whether the OpenGL renderer actually supports FBOs. For indirect
rendering on Linux the client-side capability may differ from that of the
display server, which can lead to mipmapped textures failing to
render. I've added an FBO extension check.
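As a rough sketch of the kind of check described above (not the exact patch; the function name rendererSupportsFBO is illustrative), the renderer-reported extension string can be queried instead of relying only on the FBO entry points being resolvable:
<code>
#include <osg/GLExtensions>

// Sketch: ask the renderer whether it advertises an FBO extension for this
// context, rather than only testing that the function pointers resolved.
bool rendererSupportsFBO(unsigned int contextID)
{
    return osg::isGLExtensionSupported(contextID, "GL_EXT_framebuffer_object") ||
           osg::isGLExtensionOrVersionSupported(contextID, "GL_ARB_framebuffer_object", 3.0f);
}
</code>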
"The attached file contains:
- a per-context read counter in GLBufferObject::BufferEntry
- a global client counter in BufferData
- the glue between Texture* and Image client counter
"
Added support for a RowLength parameter in osg::Image. To support this, Image::setData(..) now has a new optional rowLength parameter which
defaults to 0, preserving the original behaviour; Image::setRowLength(int) and int Image::getRowLength() are also provided.
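For example, a usage sketch (the names bigData, width, column, row, subWidth and subHeight are placeholders, and RGBA8 data is assumed) of wrapping a window of a larger client-side image without copying it:
<code>
#include <osg/Image>

// Sketch: expose a subWidth x subHeight window of a width x height RGBA image
// held in client memory; rowLength tells OSG how long the underlying rows are.
osg::ref_ptr<osg::Image> makeSubImage(unsigned char* bigData, int width,
                                      int column, int row,
                                      int subWidth, int subHeight)
{
    osg::ref_ptr<osg::Image> sub = new osg::Image;
    sub->setImage(subWidth, subHeight, 1,
                  GL_RGBA, GL_RGBA, GL_UNSIGNED_BYTE,
                  bigData + (row*width + column)*4,  // first pixel of the window
                  osg::Image::NO_DELETE);
    sub->setRowLength(width);                        // full row length of bigData
    return sub;
}
</code>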
With the introduction of RowLength support in osg::Image it is now possible to create a sub-image where
the s size of the image is smaller than the row length, which is useful when you have a large image on the CPU
and wish to use a small portion of it on the GPU. However, when these sub-images are created the data
within the image is no longer contiguous, so data access can no longer assume that all the data is in
one block. The new method Image::isDataContiguous() enables the user to check whether the data is contiguous,
and if not one can either access the data row by row using the Image::data(column,row,image) accessor, or use the
new Image::DataIterator for stepping through each block of memory associated with the image.
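A consumer-side sketch of the access pattern just described, assuming only the methods named above:
<code>
#include <osg/Image>

// Sketch: visit an image's pixel data whether or not it is contiguous.
void forEachBlock(const osg::Image& image)
{
    if (image.isDataContiguous())
    {
        const unsigned char* block = image.data();        // one contiguous block
        unsigned int size = image.getTotalSizeInBytes();
        // ... use block[0..size) ...
    }
    else
    {
        // step through each block of memory associated with the image
        for (osg::Image::DataIterator itr(&image); itr.valid(); ++itr)
        {
            const unsigned char* block = itr.data();
            unsigned int size = itr.size();
            // ... use block[0..size) ...
        }
    }
}
</code>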
To support the possibility of non-contiguous data, usage of osg::Image objects has had to be updated to
check isDataContiguous() and handle the non-contiguous case either via the DataIterator or by accessing the data row by row. To achieve
this a relatively large number of files had to be modified, in particular the texture classes and the
image plugins that do writing.
same behaviour across platforms, something that can now be achieved thanks to the integrated GLU library.
Corrected the default of the ResizeNonPowerOfTwoHint to true to reflect the actual default setting set by the
Texture default constructor.
_isTextureBorderClampSupported is set to "true" in Texture.cpp because of the version number check (GL version >= 1.3).
This leads to an invalid enum error when an attempt is made to set GL_TEXTURE_BORDER_COLOR.
"
CID 12263: Missing break in switch (MISSING_BREAK)
This case (value 8) is not terminated by a 'break' statement.
CID 12262: Missing break in switch (MISSING_BREAK)
This case (value 7) is not terminated by a 'break' statement.
CID 12261: Missing break in switch (MISSING_BREAK)
This case (value 6) is not terminated by a 'break' statement.
CID 11588: Resource leak (RESOURCE_LEAK)
Calling allocation function "operator new[](unsigned long long)".
Assigning: "dataPtr" = storage returned from "new unsigned char[newTotalSize]".
The pvr format, which can be used as a wrapper for different compressed and uncompressed formats, supports this compression algorithm. The original pvr compression uses the pvrtc format, and the handling of pvrtc is already implemented in the pvr plugin. PVR provides wrapper functionality for other formats too, e.g. etc or even dxt/dds.
Our target system (gles2) is able to use the etc compression format. With the minor changes in the submitted files there is no need to write a separate plugin. However, the original pvr texture compression formats are not supported on our target, which is the reason for this extension.
The changes mainly consist of the definition of new enum values in the classes and headers of ReaderWriterPVR, Image and Texture. I also found some locations where the handling of the original pvr textures was not implemented; these are also part of this submission."
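As a usage sketch ("texture.pvr" is a placeholder path), an ETC image wrapped in a pvr container is loaded through the existing plugin like any other image:
<code>
#include <osg/Texture2D>
#include <osgDB/ReadFile>

// Sketch: read a pvr container (which on a gles2 target may wrap ETC data)
// and bind the resulting image to a 2D texture.
osg::ref_ptr<osg::Texture2D> loadPvrTexture()
{
    osg::ref_ptr<osg::Image> image = osgDB::readImageFile("texture.pvr");
    if (!image) return 0;
    return new osg::Texture2D(image.get());
}
</code>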
* support for NPOT textures on iOS
* support for FBOs (only renderToTexture for now) on iOS (should work
for other OpenGL ES 1/2 targets, too)
* FileUtils support for iOS"
In osg::isGLExtensionOrVersionSupported in src/osg/GLExtensions.cpp, when
using indirect X11 rendering,
glGetIntegerv( GL_NUM_EXTENSIONS, &numExt );
is leaving numExt uninitialized, causing the following glGetStringi to
return NULL when the extension number isn't present. Passing NULL to
std::string() then crashes. This is with the following nVidia driver:
OpenGL version string: 3.3.0 NVIDIA 256.35
I went ahead and initialized some of the other variables before
glGetIntegerv in other files as well. I don't know for sure
which ones can fail, so I don't know which are strictly required.
"
changed extensions from .c to .cpp and got them compiling as C++ files as part of the osg core library.
Updated and cleaned up the rest of the OSG to use the new internal GLU.
Texture.cpp:applyTexImage2D_subload:
<code>
unsigned char* data = (unsigned char*)image->data();
if (needImageRescale) {
    // allocates rescale buffer
    data = new unsigned char[newTotalSize];
    // calls gluScaleImage into the data buffer
}
const unsigned char* dataPtr = image->data();
// subloads 'dataPtr'
// deletes 'data'
</code>
In effect, the scaled data would never be used.
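A sketch of the fix this implies (variables as in the snippet above; not necessarily the exact committed change): subload from the possibly rescaled buffer and free it afterwards.
<code>
unsigned char* data = (unsigned char*)image->data();
if (needImageRescale)
{
    data = new unsigned char[newTotalSize];
    // gluScaleImage(...) writes the rescaled pixels into 'data'
}
const unsigned char* dataPtr = data;       // use the (possibly rescaled) buffer
// subloads 'dataPtr'
if (needImageRescale) delete [] data;      // free only the rescale buffer
</code>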
I've also replaced bits of duplicate code in Texture1D/2D/2DArray/3D/Cubemap/Rectangle
that checks if the texture image can/should be unref'd with common functionality in
Texture.cpp.
"
It seems that the number of currently active texture objects is wrong. It's because
of this line in Texture::TextureObjectSet::flushDeletedTextureObjects:
_parent->getNumberActiveTextureObjects() += numDeleted;"
Texture2DMultisample, as the name suggests, provides a means to directly access sub-samples of a rendered FBO target (GLSL 1.50 texelFetch call).
Recently I was working on a deferred renderer with OSG, and during that work I noticed there is no support for multisampled textures (the GL_ARB_texture_multisample extension). After consultations with Paul Martz and Wojtek Lewandowski I added a Texture2DMultisample class and made a few necessary changes around the osg::FrameBufferObject, osg::Texture and osgUtil::RenderStage classes."
and from a follow-up email:
"Fixed. According to the ARB_texture_multisample extension specification, multisample textures don't need TexParameters since they can only be fetched with texelFetch."
I create a CompositeViewer with two views that share a context. One view is deleted. Texture::TextureObjectSet::discardAllTextureObjects is called, but this does not reset _tail. Now the texture object is created again and addToBack is called from Texture::TextureObjectSet::takeOrGenerate. In addToBack (line 612) _tail is now not 0 (although the list should be empty) and a loop is created from the texture object to itself. Then, when the second view is deleted, Texture::TextureObjectSet::deleteAllTextureObjects loops forever. Setting _tail to 0 fixes it for me (see attached)."
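The reported one-line fix, sketched in place (inside Texture::TextureObjectSet::discardAllTextureObjects):
<code>
// Reset the tail of the intrusive texture-object list so a later addToBack()
// starts from a genuinely empty list instead of a stale last element.
_tail = 0;
</code>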
I was able to follow the problem to the following addition to Texture.cpp:
// GLES doesn't cope with internal formats of 1, 2, 3 and 4 so map them to
// the appropriate equivalents.
if (_internalFormat==1) _internalFormat = GL_ALPHA;
if (_internalFormat==2) _internalFormat = GL_LUMINANCE_ALPHA;
if (_internalFormat==3) _internalFormat = GL_RGB;
if (_internalFormat==4) _internalFormat = GL_RGBA;
The problem is that internal format "1" corresponds to GL_LUMINANCE, not
GL_ALPHA. I double-checked this in the Red Book. The fixed version is
attached to the email."
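A sketch of the corrected mapping described in this report (internal format 1 maps to luminance):
<code>
// GLES doesn't cope with internal formats of 1, 2, 3 and 4 so map them to
// the appropriate equivalents (corrected: 1 is GL_LUMINANCE, not GL_ALPHA).
if (_internalFormat==1) _internalFormat = GL_LUMINANCE;
if (_internalFormat==2) _internalFormat = GL_LUMINANCE_ALPHA;
if (_internalFormat==3) _internalFormat = GL_RGB;
if (_internalFormat==4) _internalFormat = GL_RGBA;
</code>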