The steps required for video texturing are as follows:
1. Create a texture whose width and height are each a power of two (2^n).
2. Copy the pixel data from the current video frame.
3. Partially update the texture data (depending on the video resolution).
4. Render the texture on the screen.
In step 1, the texture must have power-of-two dimensions because OpenGL ES does not support other sizes. The texture can be rectangular, but each side must be a power of two.
For example, with 320x240 video the texture becomes 512x256, each dimension rounded up to the next power of two.
The function glTexSubImage2D is used for step 3, since it can update just the sub-region of the texture that the video frame occupies.
The problem is that updating the texture is expensive and can make the application too slow. This is critical in AR applications, where real-time video rendering is required, so the update performance should be measured on the target device.
Rendering the texture can be done in two ways. One is drawing a full-screen quad with the texture applied; the other is using the glDrawTexiOES function from the OES_draw_texture extension. In my tests on a few mobile phones, both methods performed almost identically.