The vertex processor is responsible for running the vertex shaders. The input for a vertex shader is the per-vertex data, namely its position, color, normal, texture coordinates, etc., depending on what the OpenGL application sends.
The following OpenGL code would send to the vertex processor a color and a vertex position for each vertex.
glBegin(...);
    glColor3f(0.2,0.4,0.6);
    glVertex3f(-1.0,1.0,2.0);

    glColor3f(0.2,0.4,0.8);
    glVertex3f(1.0,-1.0,2.0);
glEnd();
In a vertex shader you can write code for tasks such as:
- Vertex position transformation using the modelview and projection matrices
- Normal transformation and, if required, its normalization
- Texture coordinate generation and transformation
- Lighting per vertex or computing values for lighting per pixel
- Color computation
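As an illustration, here is a minimal sketch of a vertex shader that covers several of the tasks listed above, written against the legacy GLSL built-ins (gl_Vertex, gl_Normal, gl_MultiTexCoord0, and the built-in matrices) used with the fixed OpenGL pipeline; the varying name eyeNormal is just an example:

// value computed here for possible per-pixel lighting in a fragment shader
varying vec3 eyeNormal;

void main()
{
    // normal transformation (and normalization) into eye space
    eyeNormal = normalize(gl_NormalMatrix * gl_Normal);

    // texture coordinate transformation for texture unit 0
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;

    // color computation: simply pass the incoming vertex color through
    gl_FrontColor = gl_Color;

    // vertex position transformation with the modelview and projection matrices
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}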
There is no requirement to perform all the operations above; your application may not use lighting, for instance. However, once you write a vertex shader you are replacing the full functionality of the vertex processor, so you can’t, for example, perform only the normal transformation and expect the fixed functionality to still generate texture coordinates. When a vertex shader is used, it becomes responsible for replacing all the needed functionality of this stage of the pipeline.
As can be seen in the previous subsection, the vertex processor has no information regarding connectivity, hence operations that require topological knowledge can’t be performed here. For instance, it is not possible for a vertex shader to perform back-face culling, since it operates on vertices and not on faces. The vertex processor processes each vertex individually and knows nothing about the remaining vertices.
The vertex shader is responsible for writing at least one variable: gl_Position, usually computed by transforming the vertex with the modelview and projection matrices.
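A sketch of the smallest valid vertex shader, which only writes gl_Position, could therefore look like this (ftransform() produces the same result the fixed functionality would):

void main()
{
    // transform the vertex exactly as the fixed pipeline would;
    // equivalent to gl_ModelViewProjectionMatrix * gl_Vertex
    gl_Position = ftransform();
}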
The vertex processor has access to OpenGL state, so it can perform operations that involve lighting, for instance, and use materials. It can also access textures (only available on the newest hardware). There is no access to the frame buffer.
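As a sketch of using OpenGL state inside a vertex shader, the following computes a simple per-vertex diffuse term from the built-in light and material state; it assumes a directional light has been set up in GL_LIGHT0:

void main()
{
    // normal transformed into eye space
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);

    // light direction taken from OpenGL state (directional light 0 assumed)
    vec3 lightDir = normalize(vec3(gl_LightSource[0].position));

    // per-vertex diffuse lighting using the current material and light state
    float NdotL = max(dot(normal, lightDir), 0.0);
    gl_FrontColor = NdotL * gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;

    gl_Position = ftransform();
}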