The OpenGL ES Programming Model: An Analogy
OpenGL ES is, in general, a 3D graphics programming API. As such, it has a pretty nice and easy-to-understand programming model that we can illustrate with a simple analogy.
Think of OpenGL ES as working like a camera. To take a picture, you first have to go to the scene you want to photograph. Your scene is composed of objects, say, a table with more objects on it. They all have a position and orientation relative to your camera, as well as different materials and textures: glass is translucent and a little reflective, the table is probably made out of wood, a magazine has the latest photo of some politician on it, and so on. Some of the objects might even move around (e.g., a fruit fly you can't get rid of). Your camera also has properties of its own, such as focal length, field of view, the resolution and size of the final photo, and its own position and orientation within the scene (relative to some origin). Even if both the objects and the camera are moving, the moment you press the shutter you capture a still image of the scene (we'll neglect shutter speed here, which might cause a blurry image). For that instant, everything stands still and is well defined, and the picture reflects exactly that configuration of positions, orientations, textures, materials, and lighting. Figure 7–1 shows an abstract scene with a camera, a light, and three objects with different materials.
Each object has a position and orientation relative to the scene's origin. The camera, indicated by the eye in the figure, also has a position relative to the scene's origin. The pyramid in Figure 7–1 is the so-called view volume or view frustum, which shows how much of the scene the camera captures and how the camera is oriented. The little white ball with the rays is the light source in the scene, which also has a position relative to the origin.
We can map this scene directly to OpenGL ES, but before doing so we need to define a few things:
1. Objects (aka models): These are generally composed of four things: their geometry, as well as their color, texture, and material. The geometry is specified as a set of triangles; OpenGL ES builds its shapes almost entirely out of triangles. Each triangle is made up of three points in 3D space, so each vertex has x-, y-, and z-coordinates defined relative to the origin of the coordinate system, as in Figure 7–1. Note that the positive z-axis points toward us. Color is usually specified as an RGB triple. Textures and materials are a little more involved; we'll get to those later on.
2. Lights: OpenGL ES offers a couple of different light types with various attributes. They are just mathematical objects with a position and/or a direction in 3D space, plus attributes such as color.
3. Camera: This is also a mathematical object with a position and orientation in 3D space. Additionally, it has parameters that govern how much of the scene we see, just like a real camera. All of this together defines a view volume, or view frustum (the pyramid with its top cut off in Figure 7–1). Anything inside this frustum can be seen by the camera; anything outside of it will not make it into the final picture.
4. Viewport: This defines the size and resolution of the final image. Think of it as the film you put into your analog camera, or the image resolution you get for pictures taken with a digital camera.
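To make item 1 concrete, here is a minimal sketch of how a single triangle's geometry might be specified in code. The vertex values and class name are our own for illustration; what is accurate to OpenGL ES on Android is the shape of the data: three vertices of three floats each, handed to GL through a direct buffer in native byte order.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class TriangleGeometry {
    // One triangle: three vertices, each with x, y, and z relative to the
    // origin. The positive z-axis points toward the viewer, so z = 0 puts
    // the triangle in the plane of the screen. (Values are illustrative.)
    static final float[] VERTICES = {
        -0.5f, -0.5f, 0.0f,   // bottom-left vertex
         0.5f, -0.5f, 0.0f,   // bottom-right vertex
         0.0f,  0.5f, 0.0f,   // top vertex
    };

    // OpenGL ES expects vertex data in a direct buffer with native byte
    // order (4 bytes per float).
    static FloatBuffer asBuffer(float[] vertices) {
        ByteBuffer bb = ByteBuffer.allocateDirect(vertices.length * 4);
        bb.order(ByteOrder.nativeOrder());
        FloatBuffer fb = bb.asFloatBuffer();
        fb.put(vertices);
        fb.position(0);
        return fb;
    }
}
```

On Android, such a buffer would ultimately be fed to the GL vertex pipeline; the buffer plumbing itself is plain java.nio and runs anywhere.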
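Item 2's "position and/or direction" has a neat encoding in fixed-function OpenGL ES: a light's position is a four-component vector, and the fourth (w) component is what distinguishes a directional light from a positional one. A small sketch of that convention (the field and method names here are our own, not GL's):

```java
public class LightDefinition {
    // In fixed-function OpenGL ES, a light position is {x, y, z, w}:
    // w == 0 means a directional light (xyz is a direction),
    // w == 1 means a positional light (xyz is a point in space).
    static final float[] DIRECTIONAL_DOWN = { 0f, -1f, 0f, 0f };
    static final float[] POSITIONAL      = { 2f,  3f, 1f, 1f };

    // Light color is specified as an RGBA quadruple, e.g. a warm white.
    static final float[] DIFFUSE_COLOR = { 1f, 0.9f, 0.8f, 1f };

    static boolean isDirectional(float[] position) {
        return position[3] == 0f;
    }
}
```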
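The camera parameters from item 3 pin down the view frustum with a little trigonometry: given a vertical field of view, an aspect ratio, and a near-plane distance, the half-extents of the near clipping plane follow directly. This is a sketch of that math (helper names are ours); these are the kinds of values a symmetric frustum call would receive.

```java
public class ViewFrustum {
    // Half-height of the near clipping plane for a symmetric frustum,
    // from a vertical field of view (in degrees) and the near distance:
    // top = near * tan(fovY / 2).
    static float nearPlaneTop(float fovYDegrees, float near) {
        return (float) (near * Math.tan(Math.toRadians(fovYDegrees / 2.0)));
    }

    // Half-width follows from the aspect ratio (width / height).
    static float nearPlaneRight(float fovYDegrees, float aspect, float near) {
        return nearPlaneTop(fovYDegrees, near) * aspect;
    }
}
```

A 90-degree field of view with the near plane at distance 1 gives a near plane extending 1 unit up and down, which matches the intuition that a wider field of view captures more of the scene.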
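The viewport of item 4 is, in the end, just a per-axis mapping from the camera's normalized image plane (coordinates running from -1 to 1) to pixel coordinates. A one-line sketch of that mapping, with our own method name:

```java
public class Viewport {
    // Maps a normalized coordinate in [-1, 1] to a pixel coordinate in
    // [0, size], which is what the viewport transform does on each axis.
    static float ndcToPixels(float ndc, int sizeInPixels) {
        return (ndc + 1f) * 0.5f * sizeInPixels;
    }
}
```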
Given all this, OpenGL ES can construct a 2D bitmap of our scene from the point of view of the camera. Notice that we define everything in 3D space, so how can OpenGL ES map that to two dimensions?
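To give the question some shape before answering it: the heart of the 3D-to-2D mapping is the perspective divide, in which a point's x and y are scaled by its distance from the camera, so that farther objects shrink on the image plane. A minimal sketch under the usual convention that the camera looks down the negative z-axis (names are our own):

```java
public class PerspectiveDivide {
    // Projects a 3D point onto a virtual image plane at the given
    // distance in front of the camera. The farther the point (larger -z),
    // the closer its projection lands to the center of the image.
    static float[] project(float x, float y, float z, float planeDistance) {
        float scale = planeDistance / -z;   // camera looks down -z
        return new float[] { x * scale, y * scale };
    }
}
```

A point at (2, 2, -4) projects to (0.5, 0.5) on a plane at distance 1; move it twice as far away and its projection halves, which is exactly the foreshortening a real camera produces.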
The answer lies in projections, which we'll look at next.