The TexturePool can be enabled by setting the environment variable OSG_TEXTURE_POOL_SIZE=size_in_bytes.
Note that setting a size of 1 will result in the TexturePool allocating the minimum number of
textures it can without having to reuse TextureObjects from within the same frame.
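As an aside, the variable can also be set from application code before the first graphics context is realized; the snippet below is only a minimal sketch using the POSIX setenv() (Windows would need _putenv_s instead).
    #include <cstdlib>

    int main(int argc, char** argv)
    {
        // Must be set before the first graphics context / texture pool is created.
        // ~100 MB pool; a value of 1 requests the minimum-allocation behaviour described above.
        setenv("OSG_TEXTURE_POOL_SIZE", "100000000", /*overwrite=*/1);

        // ... construct the viewer and run as usual ...
        return 0;
    }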
- osg::Texture sets GL_TEXTURE_MAX_LEVEL if the image uses fewer mipmaps than
the number from computeNumberOfMipmaps (and it works!)
- DDS fix to read only the available mipmaps
- DDS fixes to read/save 3D textures with mipmaps (packing == 1 is
required)
- A few cosmetic DDS modifications and comments to make the code cleaner (I hope)
Added _isTextureMaxLevelSupported variable to texture extensions. It
could be removed if OSG requires OpenGL version 1.2 by default.
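For reference, the GL call this boils down to is roughly the following sketch (illustrative only, not the actual osg::Texture code); GL_TEXTURE_MAX_LEVEL is core since OpenGL 1.2, hence the supported-flag guard.
    #include <osg/GL>

    #ifndef GL_TEXTURE_MAX_LEVEL
    #define GL_TEXTURE_MAX_LEVEL 0x813D
    #endif

    // Clamp sampling to the mip levels the image actually provides,
    // e.g. numMipmapLevels == 1 means only the base level is valid.
    void applyMaxLevel(GLenum target, unsigned int numMipmapLevels, bool maxLevelSupported)
    {
        if (maxLevelSupported && numMipmapLevels > 0)
        {
            glTexParameteri(target, GL_TEXTURE_MAX_LEVEL, numMipmapLevels - 1);
        }
    }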
Added a simple ComputeImageSizeInBytes function in the DDS ReaderWriter. In
my opinion it would be better if a similar static method were defined on
Image; then it could be used not only in DDS but in other modules as well (I
noticed that Texture/Texture2D do similar computations).
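A rough sketch of the kind of helper being described (not the submitted code; compressed formats would need block-based sizing rather than a per-pixel bit count):
    #include <algorithm>
    #include <cstddef>

    // Total byte size of an uncompressed image plus its mipmap chain,
    // honouring the row packing/alignment (typically 1 or 4).
    size_t computeImageSizeInBytes(int s, int t, int r,
                                   unsigned int pixelSizeInBits,
                                   int packing,
                                   unsigned int numMipmapLevels)
    {
        size_t total = 0;
        for (unsigned int level = 0; level < numMipmapLevels; ++level)
        {
            size_t rowBytes = (size_t(s) * pixelSizeInBits + 7u) / 8u;
            rowBytes = ((rowBytes + packing - 1) / packing) * packing;  // align each row
            total += rowBytes * size_t(t) * size_t(r);
            s = std::max(s / 2, 1);
            t = std::max(t / 2, 1);
            r = std::max(r / 2, 1);
        }
        return total;
    }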
Also attached is an example test.osg model with a DDS texture missing its last mipmaps to
demonstrate the problem. When it is loaded into the viewer with the current code and moved
far away, so that the cube occupies 4 pixels, the cube becomes red due to the issue
I described in an earlier post. When you patch the DDS ReaderWriter with the attached
code but not osg::Texture yet, the cube becomes blank (at least on my
Windows/NVidia setup). When you also merge the osg::Texture patch, the cube looks right
and the mipmaps are correct."
returns true if (the extension string is supported, or the GL version is greater than or equal to a specified version) and
no extension disable is in effect. This makes it possible to disable extensions that are now
available as part of the core OpenGL spec.
Updated Texture.cpp to use this method.
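The combined test being described boils down to something like the following sketch (the parameters stand in for OSG's actual extension/version queries; the names here are illustrative only):
    // True when the feature is usable: either core in this GL version or advertised
    // as an extension, and not explicitly disabled by the user/application.
    bool extensionOrVersionSupported(bool extensionStringPresent,
                                     float glVersion,
                                     float requiredGLVersion,
                                     bool extensionDisabled)
    {
        return !extensionDisabled &&
               (glVersion >= requiredGLVersion || extensionStringPresent);
    }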
nvidia 6 months ago and the issue is still "in progress". I've given up
waiting for them!
Platform - various Intel Windows XP SP2 PCs with various nvidia cards
including GeForce 8800 GTS and Quadro FX 4500, and various driver
versions including the latest WHQL 175.16.
I investigated your concerns about glGenerateMipmapEXT being slower than
GL_GENERATE_MIPMAP_SGIS, and for power-of-two textures, to my surprise
it is. For a 512*512 texture, glGenerateMipmapEXT takes on average 10ms,
while GL_GENERATE_MIPMAP_SGIS takes on average 6ms. Therefore I have
modified the code to only use glGenerateMipmapEXT if the texture has a
non-power-of-two width or height. I am resubmitting all the files
previously submitted (only "Texture.cpp" has significant changes since
my previous submission; I've also replaced tabs with spaces in
"Texture").
"
GL_GENERATE_MIPMAP_SGIS is very slow (over half a second for a 720*576
texture). However, glGenerateMipmapEXT() performs well (16ms for the
same texture), so I have modified the attached files to use
Texture::generateMipmap() if glGenerateMipmapEXT is supported, instead
of enabling & disabling GL_GENERATE_MIPMAP_SGIS."
Notes from Robert Osfield: I've tested the previous path using
GL_GENERATE_MIPMAP_SGIS and non-power-of-two textures on an NVidia 7800GT and
NVidia Linux drivers with an image size of 720x576 and only get compile times
of 56ms, so the above half-second speed looks to be a driver bug. With
Michael's changes the cost goes down to less than 5ms, so it's certainly
an effective change, even given that Michael's poor experiences with
GL_GENERATE_MIPMAP_SGIS do look to be a driver bug.
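The resulting decision amounts to something like the following sketch (illustrative only, not the submitted Texture.cpp change):
    // True for dimensions that are not a power of two (or are non-positive).
    inline bool isNonPowerOfTwo(int v) { return v <= 0 || (v & (v - 1)) != 0; }

    // Prefer glGenerateMipmapEXT only for NPOT textures; for power-of-two textures
    // the GL_GENERATE_MIPMAP_SGIS path measured faster in the timings quoted above.
    bool useGlGenerateMipmapEXT(int width, int height, bool glGenerateMipmapEXTSupported)
    {
        return glGenerateMipmapEXTSupported &&
               (isNonPowerOfTwo(width) || isNonPowerOfTwo(height));
    }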
- Implementation of integer textures as in EXT_texture_integer
- setBorderColor(Vec4) changed to setBorderColor(Vec4d) to pass double values
as the border color. (Probably we have to provide an overloaded function to
still support Vec4f?)
- New method Texture::getInternalFormatType() added. It reports whether the
internal format is normalized, float, signed integer or unsigned integer. Can
help people to write better code ;-)
"
Further changes to this submission by Robert Osfield: changed the dirty mipmap
flag into a buffer_value<> vector to ensure safe handling of multiple contexts.
Used a local function pointer to avoid compiler warnings related to casts to void*.
Moved various OSG classes across to using setGLExtensions instead of getGLExtensions,
and changed them to use typedef declarations in the headers rather than casts in
the .cpp files.
Updated wrappers
"I was experiencing hard crashes of my application when using PBO's on
machines that don't support PBO's. I think osg incorrectly checks if
PBO's are supported.
I added a new method to the BufferObject::Extensions class which
returns if the "GL_ARB_pixel_buffer_object" string is supported. This
fixes the problem on my end. Machines without PBO support will
continue to work and machines with PBO support will still be able to
use it."
in incorrect texture assignment. The solution was to add a compareTextureObjects() test to the Texture*::compare(..) method that
the osgUtil::Optimizer::StateSetVisitor uses to determine uniqueness.
"If a texture is used that is not a multiple of four, and compression
was requested through the texture's internal format, the texture's
internal format reverts to a non-compressed type and a NOTICE is given.
At present, compressed textures must have a multiple of four in each
dimension."
textures. Near the top of the function that implements texture
subloading, osg::Texture::applyTexImage2D_subload(), the local variable
compressed_image examines the _image's_ pixel format to see if it is
compressed. However, further on, in calls to getCompressedSize() the
_texture's_ pixel format is used. In my application's Texture2D class,
I pass osg::Texture::USE_ARB_COMPRESSION to
osg::Texture2D::setInternalFormatMode(), which causes the internal format
to become one of the generic ARB_COMPRESSED types, which do not have a
specific size. Thus the recent warning message added to
osg::Texture::getCompressedSize() is triggered. The correct behavior is
to use the format mode from the Image class instead of the Texture class
within the subload implementation, and then the size is calculated
correctly."
based on the internal format. There is similar code in the Texture class
but it does not account for the ARB types. I modified the Texture class
implementation to show a warning when an incomplete internal format is
used to calculate the image size."
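For the specific S3TC/DXT formats the size is block-based and well defined, which is essentially what getCompressedSize() has to compute; the generic ARB GL_COMPRESSED_* formats carry no defined size, hence the warning. A sketch of the block-based computation:
    #include <cstddef>

    // Byte size of one mip level in an S3TC/DXT format: 4x4 texel blocks,
    // 8 bytes per block for DXT1, 16 bytes per block for DXT3/DXT5.
    size_t computeS3TCSizeInBytes(int width, int height, size_t blockSizeInBytes)
    {
        size_t blocksWide = (width  + 3) / 4;
        size_t blocksHigh = (height + 3) / 4;
        return blocksWide * blocksHigh * blockSizeInBytes;
    }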
improve the backwards compatibility of performance on systems that have OpenGL 2.0
drivers but whose hardware can't handle non-power-of-two textures.
along with support for this in Texture1D, 2D, 3D, TextureCubeMap and TextureRectangle. The
new SourceFormat and SourceType parameters are only used when no osg::Image is assigned to
an osg::Texture, and their main use is for render-to-texture effects.
Added support for a --hdr option in osgprerender, which utilises the new Texture::setSourceFormat/Type() methods.
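A minimal sketch of the usage pattern (the internal format token is left as a parameter here; SourceFormat/SourceType only take effect when no osg::Image is attached, i.e. the render-to-texture case):
    #include <osg/Texture2D>

    // Create a floating-point colour target for render-to-texture;
    // SourceFormat/SourceType drive the texture allocation when no Image is set.
    osg::ref_ptr<osg::Texture2D> createFloatColourTarget(int width, int height, GLint internalFormat)
    {
        osg::ref_ptr<osg::Texture2D> texture = new osg::Texture2D;
        texture->setTextureSize(width, height);
        texture->setInternalFormat(internalFormat);  // e.g. an RGBA 16F/32F format
        texture->setSourceFormat(GL_RGBA);
        texture->setSourceType(GL_FLOAT);
        return texture;
    }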