Friday, September 26, 2008

glTexSubImage2D is too slow on iPod Touch

I tested the performance of updating texture data every frame using glTexSubImage2D.
The source image is 320x240. Since OpenGL ES does not support non-power-of-two texture sizes, I created a 512x512 texture and updated only part of it.

The texture is created as: 

glGenTextures(1, &bg_texture);
glBindTexture(GL_TEXTURE_2D, bg_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             tex_size, tex_size,
             0, GL_RGBA, GL_UNSIGNED_BYTE,
             nil);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

and updated in the renderScene() function:

glBindTexture(GL_TEXTURE_2D, 0);
glBindTexture(GL_TEXTURE_2D, bg_texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                320, 240,
                GL_BGRA, GL_UNSIGNED_BYTE,
                pixelData);


However, the performance is very poor: calling glTexSubImage2D makes it nearly impossible for the program to run in real time. Some mobile phone applications achieve more than 20 fps while rendering video, so I suspect the problem is the texture format. There are several texture formats, such as:

  • GL_UNSIGNED_SHORT_4_4_4_4
  • GL_UNSIGNED_SHORT_5_5_5_1
  • GL_UNSIGNED_SHORT_5_6_5


Perhaps one of these is a format that the iPhone's hardware prefers.


8 comments:

  1. Yes, I've seen this too... just using glTexImage2D with the full 512 from your own buffer is faster.

  2. Hello,
    I just discovered your blog. It is very interesting and helpful. I was wondering if you could assist me with a problem. In some of your older posts you mention displaying background video using OpenGL textures. I know this is outdated for the new iPhones because you now have access to the camera. However, I want to display video from a remote camera.

    I am using RTSP to receive the video and ffmpeg to decode the video, giving me decoded frames. I have not been able to figure out how to use opengl textures.

    I was wondering if you might have an old code sample that could show me how this works. I have tried many opengl texture tutorials, but I just can't seem to figure out how to create the textures and have it display as video. It sounds like you did this a long time ago. If you could help, I would greatly appreciate it.

    Thank you for your time and consideration.
    -Estelle
    ewpaus@pobox.com

  3. Hello, Estelle.

    You can find video texturing examples in the WWDC 2010 sample code. The GLVideoFrame example does exactly what you want. You can also see the tutorials on NeHe's OpenGL web site.

    Hope this helps.

  4. Hi Again.
    I've been studying GLVideoFrame as you suggested. Would you say that the code I am looking for is in MyVideoBuffer.mm? In particular, I am looking at captureOutput. It looks like it gets an uncompressed frame from AVCaptureDevice and uses a CMSampleBufferRef. Can I just look at this function as getting an image buffer and creating a texture? In which case, can I just use the code involving creating/binding the texture? I can't tell what is important for what I need to do, and what is more relevant to the camera. Am I going in the right direction?
    I'd appreciate any input.
    Thanks,
    Estelle

  5. Hey
    Do you know how to convert an NSData to a CMSampleBufferRef? My decoded frame is an NSData.
    If you have any insight, I'd appreciate it. I feel like I am so near, yet so far.
    -Estelle

  6. I hate to bother you again. But I've been studying this sample you suggested, GLVideoFrame. I need to animate images that I have created. These images are not captured from the camera using AVCaptureOutput. The sample code, GLVideoFrame, uses the function captureOutput, which takes advantage of AVCaptureOutput to get frames directly from the camera.

    Since I am not using AVCaptureOutput, at what point do I call my own "captureOutput" which draws my image into an opengl texture?

    I can find no place in the code that "calls" captureOutput.
    If you have any insight, I'd appreciate it.
    Thank you.
    -Estelle

  7. Hello, Estelle.

    If your problem is just to draw the image data on the OpenGL view, you can create a thread or timer that calls the drawing functions and updates the screen periodically.

    The captureOutput method is called internally by the class, and usually you don't need to call it yourself. What the function does is update the OpenGL texture with a new video frame.

    As I understand it, what you have to do is put the code that updates the OpenGL textures with your image data into your OpenGL drawing functions.

    You can refer to one of NeHe's OpenGL tutorials (Google them), which shows how to render a video file (like *.avi) in an OpenGL window.
