1.
The most effective technique to change the pixels in a texture is called render-to-texture, and it can be done in OpenGL/OpenGL ES via FBOs. On desktop OpenGL you can use pixel buffer objects (PBOs) to manipulate pixel data directly on the GPU (but OpenGL ES 2.0 does not support them).
On unextended OpenGL you can change the pixels in system memory and then update the texture with glTexImage2D/glTexSubImage2D - but this is an inefficient last-resort solution and should be avoided if possible. glTexSubImage2D is usually much faster, since it only overwrites pixels inside the existing texture storage, while glTexImage2D allocates an entirely new texture (the benefit being that you can change the size and pixel format of the texture). glTexSubImage2D also lets you update just a part of the texture.
You say that you want it to work with OpenGL ES, so I would propose the following steps:
- replace glTexImage2D() with glTexSubImage2D() (see the upload sketch below) - if you gain enough performance, that's it, just let it be;
- implement render-to-texture with FBOs and shaders - it will require far more work to rewrite your code, but will give even better performance.
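For the first option, the per-frame upload could look roughly like this (a minimal sketch: it assumes the texture storage was allocated once with glTexImage2D, and names like FrameWidth/FramePixels are placeholders):

// one-time setup: allocate texture storage (the pixel pointer may be NULL)
glBindTexture( GL_TEXTURE_2D, YourTextureID );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, FrameWidth, FrameHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );

// per frame: only overwrite the pixels inside the existing storage
glBindTexture( GL_TEXTURE_2D, YourTextureID );
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, FrameWidth, FrameHeight, GL_RGBA, GL_UNSIGNED_BYTE, FramePixels );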
For FBOs the code can look like this:
// setup FBO
glGenFramebuffers( 1, &FFrameBuffer );
glBindFramebuffer( GL_FRAMEBUFFER, FFrameBuffer );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, YourTextureID, 0 );
// optionally verify the attachment: glCheckFramebufferStatus( GL_FRAMEBUFFER ) should return GL_FRAMEBUFFER_COMPLETE
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
// render to FBO
glBindFramebuffer( GL_FRAMEBUFFER, FFrameBuffer );
glViewport( 0, 0, YourTextureWidth, YourTextureHeight );
// your rendering code goes here - it will draw directly into the texture
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
// cleanup
glBindFramebuffer( GL_FRAMEBUFFER, 0 );
glDeleteFramebuffers( 1, &FFrameBuffer );
2.
RockPlayer is an open-source video player for Android (see its official site). You can get the source from its source code download page. It uses ffmpeg, which is an LGPL library. The RockPlayer developers also put in extra effort, writing some assembly to make decoding faster.
3.
I'm going to attempt to elaborate on and consolidate the answers here based on my own experiences.
Why openGL
When people think of rendering video with openGL, most are attempting to exploit the GPU to do color space conversion and alpha blending.
For instance converting YV12 video frames to RGB. Color space conversions like YV12 -> RGB require that you calculate the value of each pixel individually. Imagine for a frame of 1280 x 720 pixels how many operations this ends up being.
What I've just described is really what SIMD was made for - performing the same operation on multiple pieces of data in parallel. The GPU is a natural fit for color space conversion.
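To make the per-pixel cost concrete, here is a rough scalar C sketch of a YV12 -> RGB conversion (the coefficients are approximate full-range BT.601 values, and the function and plane names are only illustrative). A 1280 x 720 frame means running this inner loop body about 921,600 times per frame, which is exactly the kind of uniform arithmetic a GPU shader or a SIMD unit parallelizes well:

#include <stdint.h>

static uint8_t clamp8( int v ) { return v < 0 ? 0 : ( v > 255 ? 255 : (uint8_t)v ); }

// YV12 layout: full-resolution Y plane followed by quarter-resolution V and U planes
void YV12ToRGB( const uint8_t* Y, const uint8_t* V, const uint8_t* U,
                uint8_t* RGB, int Width, int Height )
{
   for ( int y = 0; y < Height; y++ )
      for ( int x = 0; x < Width; x++ )
      {
         int luma = Y[ y * Width + x ];
         // chroma planes are subsampled 2x2 in YV12
         int u = U[ ( y / 2 ) * ( Width / 2 ) + ( x / 2 ) ] - 128;
         int v = V[ ( y / 2 ) * ( Width / 2 ) + ( x / 2 ) ] - 128;

         uint8_t* p = RGB + ( y * Width + x ) * 3;
         p[0] = clamp8( luma + ( ( 359 * v ) >> 8 ) );           // R ~ Y + 1.402 * V
         p[1] = clamp8( luma - ( ( 88 * u + 183 * v ) >> 8 ) );  // G ~ Y - 0.344 * U - 0.714 * V
         p[2] = clamp8( luma + ( ( 454 * u ) >> 8 ) );           // B ~ Y + 1.772 * U
      }
}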
Why !openGL
The downside is the process by which you get texture data into the GPU. Consider that for each frame you have to Load the texture data into memory (CPU operation) and then you have to Copy this texture data into the GPU (CPU operation). It is this Load/Copy that can make using openGL slower than alternatives.
If you are playing low resolution videos then I suppose it's possible you won't see the speed difference because your CPU won't bottleneck. However, if you try with HD you will more than likely hit this bottleneck and notice a significant performance hit.
The way this bottleneck has been traditionally worked around is by using Pixel Buffer Objects (allocating GPU memory to store texture Loads). Unfortunately GLES2 does not have Pixel Buffer Objects.
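For reference, on desktop OpenGL the PBO workaround looks roughly like this (a sketch only, since core GLES2 has no GL_PIXEL_UNPACK_BUFFER; identifiers such as FrameSizeBytes and DecodedFramePixels are placeholders):

// one-time setup: create a pixel unpack buffer large enough for one frame
GLuint PBO;
glGenBuffers( 1, &PBO );
glBindBuffer( GL_PIXEL_UNPACK_BUFFER, PBO );
glBufferData( GL_PIXEL_UNPACK_BUFFER, FrameSizeBytes, NULL, GL_STREAM_DRAW );

// per frame: copy decoded pixels into the PBO, then let the driver pull from it
glBindBuffer( GL_PIXEL_UNPACK_BUFFER, PBO );
glBufferData( GL_PIXEL_UNPACK_BUFFER, FrameSizeBytes, NULL, GL_STREAM_DRAW ); // orphan the old storage
void* Dst = glMapBuffer( GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY );
memcpy( Dst, DecodedFramePixels, FrameSizeBytes );
glUnmapBuffer( GL_PIXEL_UNPACK_BUFFER );

glBindTexture( GL_TEXTURE_2D, YourTextureID );
// the last argument is now an offset into the bound PBO, not a client pointer
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, FrameWidth, FrameHeight, GL_RGBA, GL_UNSIGNED_BYTE, (const void*)0 );
glBindBuffer( GL_PIXEL_UNPACK_BUFFER, 0 );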
Other Options
For the above reasons, many have chosen to use software decoding combined with available CPU extensions like NEON for color space conversion. An implementation of YUV 2 RGB for NEON exists here. The means by which you draw the frames, SDL vs. openGL, should not matter for RGB, since you are copying the same number of pixels in both cases.
You can determine if your target device supports NEON enhancements by running
cat /proc/cpuinfo
from adb shell and looking for NEON in the features output.
4.
http://tianxiaolin.blog.51cto.com/1810342/415803
http://blog.youkuaiyun.com/jwy1224/archive/2009/10/29/4744965.aspx
http://www.songho.ca/opengl/gl_fbo.html
5.
The Frame Buffer object is not actually a buffer, but an aggregator object that contains one or more attachments, which in turn are the actual buffers. You can think of the Frame Buffer as a C structure where every member is a pointer to a buffer. Without any attachments, a Frame Buffer object has a very low memory footprint.
Now each buffer attached to a Frame Buffer can be a Render Buffer or a texture.
The Render Buffer is an actual buffer (an array of bytes, or integers, or pixels). TheRender Buffer stores pixel values in native format, so it's optimized for offscreen rendering. In other words, drawing to aRender Buffer can be much faster than drawing to a texture. The drawback is that pixels uses a native, implementation-dependent format, so that reading from aRender Buffer is much harder than reading from a texture. Nevertheless, once aRender Buffer has been painted, one can copy its content directly to screen (or to otherRender Buffer, I guess), very quickly using pixel transfer operations. This means that aRender Buffer can be used to efficiently implement the double buffer pattern that you mentioned.
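As a rough sketch, creating a Render Buffer and attaching it to a Frame Buffer looks like this (GL_RGBA8 storage assumes desktop OpenGL or the OES_rgb8_rgba8 extension on ES; the names are placeholders):

GLuint FBO, ColorRB;
glGenFramebuffers( 1, &FBO );
glGenRenderbuffers( 1, &ColorRB );

// allocate native-format storage for the render buffer
glBindRenderbuffer( GL_RENDERBUFFER, ColorRB );
glRenderbufferStorage( GL_RENDERBUFFER, GL_RGBA8, Width, Height );

// attach it as the color buffer of the frame buffer
glBindFramebuffer( GL_FRAMEBUFFER, FBO );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, ColorRB );

if ( glCheckFramebufferStatus( GL_FRAMEBUFFER ) != GL_FRAMEBUFFER_COMPLETE )
{
   // handle an incomplete framebuffer here
}
glBindFramebuffer( GL_FRAMEBUFFER, 0 );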
Render Buffers are a relatively new concept. Before them, a Frame Buffer was used to render to a texture, which can be slower because a texture uses a standard format. It is still possible to render to a texture, and that's quite useful when one needs to perform multiple passes over each pixel to build a scene, or to draw a scene on a surface of another scene!
The OpenGL wiki has this page that shows more details and links.