OpenGL SuperBible 7th Edition Excerpts

These notes are excerpted from the OpenGL SuperBible, mainly covering Chapter 5 on working with data: creating, binding, allocating, loading, and updating buffer objects. Key functions include glBufferStorage and glNamedBufferStorage, along with glMapBuffer and glMapBufferRange for direct memory mapping; the notes also cover how data moves from buffer objects into vertex attributes, and the concept of uniform blocks. Chapter 4 covers the math behind 3D graphics: vectors, normalization, dot and cross products, matrix transformations, and projection. Chapter 3 explains how the OpenGL pipeline works, from the vertex, tessellation, and geometry shaders through rasterization and the fragment shader, along with viewport setup and clipping.




OpenGL SuperBible Study Notes
Chapter 5
Data

1. void glCreateBuffers(GLsizei n, GLuint* buffers);
buffers is the address of the variable or variables that will be used to store the names of the buffer objects.

2. void glBindBuffer(GLenum target, GLuint buffer);

3. The functions that are used to allocate memory using a buffer object are glBufferStorage() and glNamedBufferStorage(). Their prototypes are
void glBufferStorage(GLenum target, GLsizeiptr size, const void* data, GLbitfield flags);
void glNamedBufferStorage(GLuint buffer, GLsizeiptr size, const void* data, GLbitfield flags);

4. To be clear, the contents of the buffer object's data store can be changed, but its size or usage flags may not.

5. There are a handful of ways to get data into the buffer object.

6. Had we instead supplied a pointer to some data, that data would have been used to initialize the buffer object. Using this pointer, however, allows us to set only the initial data to be stored in the buffer.

7. void glBufferSubData(GLenum target, GLintptr offset, GLsizeiptr size, const GLvoid* data);
void glNamedBufferSubData(GLuint buffer, GLintptr offset, GLsizeiptr size, const void* data);
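Putting points 1-7 together, the create/allocate/update sequence can be sketched as below. This is a minimal sketch only: it assumes an OpenGL 4.5+ context is current and function pointers have been loaded (e.g. through a loader such as GLAD); the helper name `create_data_buffer` is mine, not from the book.

```c
GLuint create_data_buffer(void)
{
    static const float data[] = { 0.0f, 1.0f, 2.0f, 3.0f };

    GLuint buffer;
    glCreateBuffers(1, &buffer);            /* reserve a buffer name */

    /* Allocate immutable storage; GL_DYNAMIC_STORAGE_BIT keeps the
     * contents updatable with glNamedBufferSubData() later. */
    glNamedBufferStorage(buffer, sizeof(data), NULL,
                         GL_DYNAMIC_STORAGE_BIT);

    /* Fill (part of) the data store after allocation. */
    glNamedBufferSubData(buffer, 0, sizeof(data), data);
    return buffer;
}
```

Passing NULL as the data pointer leaves the store uninitialized, illustrating point 6: the initial-data pointer is only one of several ways to fill the buffer.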

8. void* glMapBuffer(GLenum target, GLenum access);
void* glMapNamedBuffer(GLuint buffer, GLenum access);
one that affects the buffer bound to one of the targets of the current context, and one that operates directly on a buffer whose name you specify.

9. If you map a buffer, you can simply read the contents of the file directly into the mapped buffer.

10. void* glMapBufferRange(GLenum target, GLintptr offset, GLsizeiptr length, GLbitfield access);
void* glMapNamedBufferRange(GLuint buffer, GLintptr offset, GLsizeiptr length, GLbitfield access);
These functions, rather than mapping the entire buffer object, map only a specific range of the buffer object.

11. However, because of the additional control and stronger contract provided by glMapBufferRange() and glMapNamedBufferRange(), it is generally preferred to call these functions rather than glMapBuffer() or glMapNamedBuffer().
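Point 9's "read a file directly into the mapped buffer" idea can be sketched with glMapNamedBufferRange() like this. A sketch only: it assumes a current OpenGL 4.5+ context and that `buffer` was allocated with GL_MAP_WRITE_BIT in its storage flags; the function name is mine and error checking is omitted.

```c
#include <stdio.h>

void load_file_into_buffer(GLuint buffer, const char* filename,
                           GLsizeiptr size)
{
    /* Map the first `size` bytes for writing; the invalidate bit tells
     * OpenGL we do not need the previous contents of this range. */
    void* ptr = glMapNamedBufferRange(buffer, 0, size,
                                      GL_MAP_WRITE_BIT |
                                      GL_MAP_INVALIDATE_RANGE_BIT);
    FILE* f = fopen(filename, "rb");
    if (f) {
        fread(ptr, 1, (size_t)size, f);   /* file bytes go straight in */
        fclose(f);
    }
    glUnmapNamedBuffer(buffer);           /* release the mapping */
}
```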


12. glClearBufferSubData -- glClearNamedBufferSubData
glCopyBufferSubData -- glCopyNamedBufferSubData

13. To tell OpenGL which buffer object our data is in and where in that buffer object the data resides, we use the glVertexArrayVertexBuffer() function to bind a buffer to one of the vertex buffer bindings. We use the glVertexArrayAttribFormat() function to describe the layout and format of the data, and finally we enable automatic filling of the attribute by calling glEnableVertexAttribArray().
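The three steps in point 13 can be sketched as follows for a buffer of vec3 positions feeding attribute 0. A sketch under assumptions: it uses the direct-state-access variants throughout (including glVertexArrayAttribBinding and glEnableVertexArrayAttrib, which the note above does not list), and assumes `vao` and `buffer` already exist and the buffer holds the data.

```c
void setup_position_attribute(GLuint vao, GLuint buffer)
{
    /* Bind the buffer to vertex buffer binding point 0:
     * offset 0, stride of one vec3 per vertex. */
    glVertexArrayVertexBuffer(vao, 0, buffer, 0, 3 * sizeof(float));

    /* Describe attribute 0: three floats, not normalized,
     * relative offset 0 within each vertex. */
    glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, 0);

    /* Route binding point 0 to attribute 0, then enable
     * automatic filling of the attribute. */
    glVertexArrayAttribBinding(vao, 0, 0);
    glEnableVertexArrayAttrib(vao, 0);
}
```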

14. OpenGL allows you to combine a group of uniforms into a uniform block and store the whole block in a buffer object.

15. To tell OpenGL that you want to use the standard layout, you need to declare the uniform block with a layout qualifier.
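A minimal GLSL fragment showing the layout qualifier from point 15; the block name and members are illustrative, not from the book.

```glsl
// Uniform block declared with the std140 standard layout, so member
// offsets follow fixed rules and the application can compute them
// without querying OpenGL.
layout (std140) uniform TransformBlock
{
    mat4 projection;
    mat4 view;
    mat4 model;
};
```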




Chapter 4
Math for 3D Graphics

1. A vector is first, and most simply, a direction from the origin toward a point in space.

2. Normalizing a vector scales it such that its length becomes 1 and the vector is then said to be normalized.

3. The w coordinate is added to make the vector homogeneous but is typically set to 1.0.

4. The dot product between two (three-component) vectors returns a scalar (just one value) that is the cosine of the angle between the two vectors scaled by the product of their lengths.

5. The cross product between two vectors is a third vector that is perpendicular to the plane in which the first two vectors lie.

6. A scalar is just an ordinary single number used to represent a magnitude or a specific quantity.

7. Multiplying a point (represented by a vector) by a matrix (representing a transformation) yields a new transformed point (another vector).

8. We refer to the projection whenever we want to describe the type of transformation (orthographic or perspective) that occurs during vertex processing, but projection is only one of the types of transformations that occur in OpenGL.

9. Model space -- World space -- View space -- Clip space -- Normalized device coordinate (NDC) space -- window space

10. In object space, positions of vertices are interpreted relative to a local origin.

11. Once in world space, all objects exist in a common frame. Often, this is the space in which lighting and physics calculations are performed.

12. Clearly, if the resulting w component of a clip space coordinate is 1.0, then clip space and NDC space become identical.

13. Gimbal lock occurs when a rotation by one angle reorients one of the axes to be aligned with another of the axes.

14. A sequence of rotations can be represented by a series of quaternions multiplied together, producing a single resulting quaternion that encodes the whole lot in one go.

15. Once your vertices are in view space, we need to get them into clip space, which we do by applying our projection matrix, which may represent a perspective or orthographic projection.

16. Thus, the integer part of t determines the curve segment along which we are interpolating and the fractional part of t is used to interpolate along that segment.

Chapter 3
Following the Pipeline

1. In GLSL, the mechanism for getting data in and out of shaders is to declare global variables with the in and out storage qualifiers.

2. Vertex attributes are how vertex data is introduced into the OpenGL pipeline.

3. void glVertexAttrib4fv(GLuint index, const GLfloat* v);
the parameter index is used to reference the attribute and v is a pointer to the new data to put into the attribute.

4. Anything you write to an output variable in one shader is sent to a similarly named variable declared with the in keyword in the subsequent stage.

5. To achieve this, we can group together a number of variables into an interface block.

6. Tessellation is the process of breaking a high-order primitive (which is known as a patch in OpenGL) into many smaller, simpler primitives such as triangles for rendering.

7. Logically, the tessellation phase sits directly after the vertex shading stage in the OpenGL pipeline and is made up of three parts: the tessellation control shader, the fixed-function tessellation engine, and the tessellation evaluation shader.

8. The tessellation control shader takes its input from the vertex shader and is primarily responsible for two things: the determination of the level of tessellation that will be sent to the tessellation engine, and the generation of data that will be sent to the tessellation evaluation shader that is run after tessellation has occurred.

9. void glPatchParameteri(GLenum pname, GLint value);
pname set to GL_PATCH_VERTICES and value set to the number of control points that will be used to construct each patch.

10. That is, vertices are used as control points and the result of the vertex shader is passed in batches to the tessellation control shader as its input.

11. The output tessellation factors are written to the gl_TessLevelInner and gl_TessLevelOuter built-in output variables, whereas any other data that is passed down the pipeline is written to user-defined output variables (those declared using the out keyword, or the special built-in gl_out array) as normal.

12. The built-in variable gl_InvocationID is used as an index into the gl_in and gl_out arrays.

13. Before the tessellation engine receives a patch, the tessellation control shader processes the incoming control points and sets tessellation factors that are used to break down the patch.

14. At the beginning of the shader is a layout qualifier that sets the tessellation mode.

15. The first is gl_TessCoord, which is the barycentric coordinate of the vertex generated by the tessellator.

16. void glPolygonMode(GLenum face, GLenum mode);
The face parameter specifies which type of polygons we want to affect.

17. The geometry shader runs once per primitive and has access to all of the input vertex data for all of the vertices that make up the primitive being processed.

18. Geometry shaders, in contrast, include two functions -- EmitVertex() and EndPrimitive() -- that explicitly produce vertices that are sent to primitive assembly and rasterization.
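A minimal pass-through geometry shader illustrating EmitVertex() and EndPrimitive(); a sketch of my own, not the book's example:

```glsl
#version 450 core

// One triangle in, one triangle out.
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

void main(void)
{
    for (int i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();       // explicitly produce one output vertex
    }
    EndPrimitive();         // finish the output primitive
}
```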

19. The homogeneous coordinate system is used in projective geometry because much of the math ends up being simpler in homogeneous coordinate space than it does in regular Cartesian space.

20. After the projective division, the resulting position is in normalized device space.
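The projective division of point 20 is just a per-component divide by w; a self-contained sketch (types and function name mine):

```c
typedef struct { float x, y, z, w; } vec4;
typedef struct { float x, y, z; } vec3;

/* Dividing a clip-space position by its own w component yields
 * normalized device coordinates. */
vec3 clip_to_ndc(vec4 clip)
{
    vec3 ndc = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
    return ndc;
}
```

This also shows point 12 from Chapter 4: when w is 1.0, the division changes nothing and clip space equals NDC space.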

21. void glViewport(GLint x, GLint y, GLsizei width, GLsizei height);
void glDepthRange(GLdouble nearVal, GLdouble farVal);

22. The sense of this computation can be reversed by calling glFrontFace() with dir set to either GL_CW or GL_CCW.

23. To turn on culling, call glEnable() with cap set to GL_CULL_FACE.

24. To change which types of triangles are culled, call glCullFace() with face set to GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK.
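Points 22-24 together amount to three state calls; a sketch assuming a current GL context (these values are also the OpenGL defaults, made explicit here):

```c
glFrontFace(GL_CCW);       /* counterclockwise winding is front-facing */
glEnable(GL_CULL_FACE);    /* turn face culling on */
glCullFace(GL_BACK);       /* discard back-facing triangles */
```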

25. Rasterization is the process of determining which fragments might be covered by a primitive such as a line or a triangle.

26. This stage is responsible for determining the color of each fragment before it is sent to the framebuffer for possible composition into the window.

27. In a real-world application, the fragment shader would normally be substantially more complex and be responsible for performing calculations related to lighting, applying materials, and even determining the depth of the fragment.

28. In short, OpenGL is capable of using a wide range of functions that take components of the output of your fragment shader and of the current content of the framebuffer and calculate new values that are written back to the framebuffer.

29. Each compute shader operates on a single unit of work known as a work item; these items are, in turn, collected together into small groups called local workgroups.

30. ARB extensions are an official part of OpenGL because they are approved by the OpenGL governing body, the Architecture Review Board(ARB).

31. const GLubyte* glGetStringi(GLenum name, GLuint index);
you should pass GL_EXTENSIONS as the name parameter, and a value between 0 and one less than the number of supported extensions in index.
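Enumerating extensions with glGetStringi() can be sketched as below; it assumes a current OpenGL 3.0+ context with loaded function pointers, and the function name is mine:

```c
#include <stdio.h>

void print_extensions(void)
{
    GLint num_extensions;
    glGetIntegerv(GL_NUM_EXTENSIONS, &num_extensions);

    /* Valid indices run from 0 to num_extensions - 1. */
    for (GLint i = 0; i < num_extensions; i++)
        printf("%s\n", glGetStringi(GL_EXTENSIONS, i));
}
```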

Chapter 2
Our First OpenGL Program

1. void glClearBufferfv(GLenum buffer, GLint drawBuffer, const GLfloat* value);
tells OpenGL to clear the buffer specified by its first parameter to the value specified in its third parameter.
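A short sketch of point 1, clearing the color buffer to red (assumes a current GL context):

```c
static const GLfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, red);   /* clear draw buffer 0 to red */
```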

2. The source code for your shader is placed into a shader object and compiled, and then multiple shader objects can be linked together to form a program object.

3. All variables that start with gl_ are part of OpenGL and connect shaders to each other or to the various parts of fixed functionality in OpenGL.

4. glCreateShader -- glShaderSource -- glCompileShader -- glCreateProgram -- glAttachShader -- glLinkProgram -- glDeleteShader.
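The call sequence in point 4 can be sketched as a single helper; a sketch only, assuming a current OpenGL context, with error checking (glGetShaderiv/glGetProgramiv) omitted and the function name mine:

```c
GLuint compile_program(const char* vs_src, const char* fs_src)
{
    /* Compile each shader stage from source. */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    /* Link the compiled shader objects into a program object. */
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);

    /* The shader objects are no longer needed once linked. */
    glDeleteShader(vs);
    glDeleteShader(fs);
    return program;
}
```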

5. One final thing that we need to do before we can draw anything is to create a vertex array object (VAO), which is an object that represents the vertex fetch stage of the OpenGL pipeline and is used to supply input to the vertex shader.

6. void glCreateVertexArrays(GLsizei n, GLuint* array);
void glBindVertexArray(GLuint array);

7. void glPointSize(GLfloat size);
sets the diameter of the point in pixels to the value you specify in size.

8. The gl_VertexID input starts counting from the value given by the first parameter of glDrawArrays() and counts upward one vertex at a time for count vertices.

Chapter 1
Introduction

1. OpenGL is an interface that your application can use to access and control the graphics subsystem of the device on which it runs.

2. Through a combination of pipelining and parallelism, incredible performance of modern graphics processors is realized.

3. The goal of OpenGL is to provide an abstraction layer between your application and the underlying graphics subsystem, which is often a hardware accelerator made up of one or more custom, high-performance processors with dedicated memory, display outputs, and so on.

4. Current GPUs consist of a large number of small programmable processors called shader cores that run mini-programs called shaders.

5. Vertex Fetch -> Vertex Shader -> Tessellation Control Shader -> Tessellation -> Tessellation Evaluation Shader -> Geometry Shader -> Rasterization -> Fragment Shader -> Framebuffer Operations.

6. The first is the modern, core profile, which removes a number of legacy features, leaving only those that are truly accelerated by current graphics hardware.

7. The fundamental unit of rendering in OpenGL is known as the primitive. OpenGL supports many types of primitives, but the three basic renderable primitive types are points, lines, and triangles.

8. The rasterizer is dedicated hardware that converts the three-dimensional representation of a triangle into a series of pixels that need to be drawn onto the screen.

9. The graphics pipeline is broken down into two major parts. The first part, often known as the front end, processes vertices and primitives, eventually forming them into the points, lines, and triangles that will be handed off to the rasterizer. This is known as primitive assembly. After going through the rasterizer, the geometry has been converted from what is essentially a vector representation into a large number of independent pixels. These are handed off to the back end, which includes depth and stencil testing, fragment shading, blending, and updating of the output image.