* support for NPOT textures on iOS
* support for FBOs (only renderToTexture for now) on iOS (should work for other OpenGL ES 1/2 targets, too)
* FileUtils support for iOS"
In osg::isGLExtensionOrVersionSupported in src/osg/GLExtensions.cpp, when using indirect X11 rendering,
glGetIntegerv( GL_NUM_EXTENSIONS, &numExt );
leaves numExt uninitialized, causing the following glGetStringi to return NULL when the extension number isn't present. Passing NULL to std::string() then crashes. This is with the following nVidia driver:
OpenGL version string: 3.3.0 NVIDIA 256.35
I went ahead and initialized some of the other variables before glGetIntegerv in other files as well. I don't know for sure which ones can fail, so I don't know which are strictly required.
"
Changed extensions from .c to .cpp and got them compiling as C++ files as part of the osg core library.
Updated and cleaned up the rest of the OSG to use the new internal GLU.
Texture.cpp:applyTexImage2D_subload:
<code>
unsigned char* data = (unsigned char*)image->data();
if (needImageRescale) {
    // allocates rescale buffer
    data = new unsigned char[newTotalSize];
    // calls gluScaleImage into the data buffer
}
const unsigned char* dataPtr = image->data();
// subloads 'dataPtr'
// deletes 'data'
</code>
In effect, the scaled data would never be used.
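A minimal sketch of the fix implied above, reusing the variable names from the snippet:
<code>
// 'data' already points either at the original image data or at the freshly
// rescaled buffer, so subload from it directly instead of from image->data()
const unsigned char* dataPtr = data;
// subloads 'dataPtr'
// deletes the rescale buffer only if one was allocated
</code>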
I've also replaced the bits of duplicate code in Texture1D/2D/2DArray/3D/Cubemap/Rectangle
that check whether the texture image can/should be unref'd with common functionality in
Texture.cpp.
"
It seems that the number of currently active textures is wrong. It's because of this line in Texture::TextureObjectSet::flushDeletedTextureObjects:
_parent->getNumberActiveTextureObjects() += numDeleted;"
Texture2DMultisample, as the name suggests, provides a means to directly access sub-samples of a rendered FBO target (GLSL 1.50 texelFetch call).
Recently I was working on a deferred renderer with OSG, and while doing so I noticed there is no support for multisampled textures (the GL_ARB_texture_multisample extension). After consultations with Paul Martz and Wojtek Lewandowski I added a Texture2DMultisample class and made a few necessary changes around the osg::FrameBufferObject, osg::Texture and osgUtil::RenderStage classes."
and from a follow-up email:
"Fixed. According to ARB_texture_multisample extension specification multisample textures don't need TexParameters since they can only be fetched with texelFetch."
I create a CompositeViewer with two views that share a context. One view is deleted; Texture::TextureObjectSet::discardAllTextureObjects is called, but this does not reset _tail. A texture object is then created again and addToBack is called from Texture::TextureObjectSet::takeOrGenerate. In addToBack (line 612) _tail is now not 0 (although the list should be empty) and a loop is created from the texture object to itself. Then, when the second view is deleted, Texture::TextureObjectSet::deleteAllTextureObjects loops forever. Setting _tail to 0 fixes it for me (see attached)."
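A minimal sketch of the attached fix; only the tail reset is shown, the rest of the method is elided:
<code>
void Texture::TextureObjectSet::discardAllTextureObjects()
{
    // ... discard the TextureObjects held by this set ...

    // reset the list tail so that a later addToBack() starts from an empty
    // list instead of linking the new object against a stale _tail
    _tail = 0;
}
</code>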
I was able to follow the problem to the following addition to Texture.cpp:
// GLES doesn't cope with internal formats of 1, 2, 3 and 4 so map them to
// the appropriate equivalents.
if (_internalFormat==1) _internalFormat = GL_ALPHA;
if (_internalFormat==2) _internalFormat = GL_LUMINANCE_ALPHA;
if (_internalFormat==3) _internalFormat = GL_RGB;
if (_internalFormat==4) _internalFormat = GL_RGBA;
The problem is that internal format "1" corresponds to GL_LUMINANCE, not GL_ALPHA. I double-checked this against the Red Book. The fixed version is attached to the email."
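For reference, the corrected mapping described above:
<code>
// GLES doesn't cope with internal formats of 1, 2, 3 and 4 so map them to
// the appropriate equivalents.
if (_internalFormat==1) _internalFormat = GL_LUMINANCE;
if (_internalFormat==2) _internalFormat = GL_LUMINANCE_ALPHA;
if (_internalFormat==3) _internalFormat = GL_RGB;
if (_internalFormat==4) _internalFormat = GL_RGBA;
</code>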
Fixes to osg::Texture for GLES support.
Added automatic GLenum mode mappings in osg::PrimitiveSet to provide a fallback for unsupported glDrawArrays/glDrawElements modes.
Added finer-grained error checking during StateSet::compile().
Removed the EXT postfix from FrameBufferObject functions, and added support for checking the non-EXT versions of the frame buffer object GL functions.
Introduced usage of OSG_GL*_FEATURES to avoid some #if/#else/#endif code blocks.
Using submissions from Paul Martz as a guide, added preliminary GL3 support to a range of OSG classes.
The Texture Pool can be enabled by setting the env var OSG_TEXTURE_POOL_SIZE=size_in_bytes.
Note, setting a size of 1 will result in the TexturePool allocating the minimum number of
textures it can without having to reuse TextureObjects from within the same frame.
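A minimal usage sketch; the variable name comes from the entry above, while the 256 MB value and the POSIX setenv call are illustrative (exporting the variable from the shell before launching the application works just as well):
<code>
#include <cstdlib>

int main(int argc, char** argv)
{
    // request a 256 MB texture pool; set it early, before the viewer is
    // realized, so it is seen when the texture object pool is initialised
    setenv("OSG_TEXTURE_POOL_SIZE", "268435456", /*overwrite=*/1);

    // ... construct and run the osgViewer::Viewer as usual ...
    return 0;
}
</code>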
- osg::Texture sets GL_TEXTURE_MAX_LEVEL if the image uses fewer mipmaps than the number from computeNumberOfMipmaps (and it works!); see the sketch after this list
- DDS fix to read only the available mipmaps
- DDS fixes to read / save 3D textures with mipmaps (packing == 1 is required)
- A few cosmetic DDS modifications and comments to make the code cleaner (I hope)
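A sketch of the clamping described in the first bullet, in raw GL for clarity; numMipmapLevels stands for the number of levels actually present in the image, and the parameter itself requires OpenGL 1.2 (hence the extension check mentioned below):
<code>
// the image provides levels 0 .. numMipmapLevels-1, so tell GL not to
// sample beyond the last level that actually exists
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, numMipmapLevels - 1);
</code>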
Added an _isTextureMaxLevelSupported variable to the texture extensions. It could be removed if OSG required OpenGL version 1.2 by default.
Added a simple ComputeImageSizeInBytes function in the DDS ReaderWriter. In my opinion it would be better if a similar static method was defined on Image; then it could be used not only in DDS but in other modules as well (I noticed that Texture/Texture2D do similar computations).
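A sketch of the kind of shared helper being suggested, built on the existing osg::Image::computeRowWidthInBytes static method; it ignores block-compressed (DXT) formats, which the DDS plugin has to handle separately, and is not the attached implementation:
<code>
#include <osg/Image>

// hypothetical helper: size in bytes of one uncompressed image level
unsigned int computeImageSizeInBytes(int width, int height, int depth,
                                     GLenum pixelFormat, GLenum type,
                                     int packing)
{
    if (width  < 1) width  = 1;
    if (height < 1) height = 1;
    if (depth  < 1) depth  = 1;
    return osg::Image::computeRowWidthInBytes(width, pixelFormat, type, packing)
           * height * depth;
}
</code>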
Also attached is an example test.osg model with a DDS texture missing its last mipmaps, to demonstrate the problem. When loaded into the viewer with the current code and moved far away, so that the cube occupies 4 pixels, the cube becomes red due to the issue I described in an earlier post. When you patch the DDS ReaderWriter with the attached code but not osg::Texture yet, the cube becomes blank (at least on my Windows/NVidia). When you also merge the osg::Texture patch, the cube will look right and the mipmaps will be correct."