opengl_tips

Notes on the many faces of OpenGL and some basic graphics knowledge.

GLFW, GLEW, GLAD, and OpenGL

GLFW is used to create everything related to the window, such as the window properties and the mouse and keyboard event handling. It provides the window into which results generated by OpenGL are rendered.

GLAD is an important step to make sure the GL API can be used normally. It helps us load the necessary OpenGL functions and extensions; if we do not use GLAD, some OpenGL APIs cannot be found. The commonly used glad.h and glad.c are generated by a specific web service. Without setting up GLAD, we might get error messages such as undefined reference to <specific opengl library>. The sketch below shows how GLAD is used.

The OpenGL functions themselves are then used to render graphics within the main loop.
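
A minimal sketch of how the pieces fit together, assuming GLFW 3 and a GLAD loader generated for core-profile OpenGL 3.3 (the window size and clear color are arbitrary placeholders):

#include <glad/glad.h>   // include before GLFW so GLAD supplies the GL headers
#include <GLFW/glfw3.h>
#include <cstdio>

int main() {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    // GLFW creates the window and the OpenGL context
    GLFWwindow* window = glfwCreateWindow(800, 600, "demo", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    // GLAD loads the OpenGL function pointers; without this step,
    // calling GL functions fails with undefined/invalid symbols
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
        printf("failed to initialize GLAD\n");
        return 1;
    }

    // OpenGL calls live inside the main loop
    while (!glfwWindowShouldClose(window)) {
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw calls go here ...
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}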

Differences between EGL and OpenGL

Using VTK as an example, there are three types of rendering patterns: OSMesa, EGL, and the web version.

Building dependencies

We may refer to the ParaView build docs for dependencies related to OpenGL, such as the Mesa-related packages. That doc provides detailed instructions for different operating systems.

Running in offscreen mode

There are two important things for offscreen mode. The first is to simulate the screen; we can use this tool:

apt-get install xvfb
xvfb-run -a "your command"

The next step is saving the results from OpenGL into an image. There are multiple libraries that can help save the framebuffer into an image; below is one example that saves the buffer as a TGA image.
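
One way to do this by hand, as a minimal sketch: read the framebuffer with glReadPixels and write an uncompressed 24-bit TGA header followed by the raw pixels (the function name is ours; error handling is mostly omitted):

#include <glad/glad.h>
#include <cstdio>
#include <vector>

// Read the current framebuffer and write it as an uncompressed 24-bit TGA.
// TGA stores pixels as BGR with the bottom row first, which conveniently
// matches what glReadPixels returns.
void saveFramebufferToTGA(const char* path, int width, int height) {
    std::vector<unsigned char> pixels(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows are tightly packed
    glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, pixels.data());

    unsigned char header[18] = {0};
    header[2]  = 2;                                       // uncompressed true-color image
    header[12] = width & 0xFF;  header[13] = (width >> 8) & 0xFF;
    header[14] = height & 0xFF; header[15] = (height >> 8) & 0xFF;
    header[16] = 24;                                      // bits per pixel

    FILE* f = fopen(path, "wb");
    if (!f) return;
    fwrite(header, 1, sizeof(header), f);
    fwrite(pixels.data(), 1, pixels.size(), f);
    fclose(f);
}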

Is the program run by the CPU or the GPU?

We might think of OpenGL as a kind of standard interface (similar to MPI), so there are different vendors implementing these standard APIs. Below is an example showing the vendor information of the OpenGL implementation.
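
A small sketch of such a query; it must run after a context has been made current:

// These strings reveal which implementation drives the context, e.g. a
// hardware vendor (NVIDIA, Intel, AMD) or a software rasterizer (Mesa llvmpipe).
printf("vendor:   %s\n", (const char*)glGetString(GL_VENDOR));
printf("renderer: %s\n", (const char*)glGetString(GL_RENDERER));
printf("version:  %s\n", (const char*)glGetString(GL_VERSION));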

Rendering pipeline

Introduction of rendering pipeline

https://www.khronos.org/opengl/wiki/Rendering_Pipeline_Overview

https://www.youtube.com/watch?v=kpA5X6eI6fM&list=PLvv0ScY6vfd9zlZkIIqGDeG5TUWswkMox&index=4

graphics primitives:

how to map points, lines, and triangles from 3D data to the 2D screen

shader: the programmable part of the pipeline (we can write programs that run on the GPU to control the graphics pipeline)

vertex shader: executed once for each vertex, positioning that vertex.

VBO and VAO

When we get a series of points, the first step is to send them to OpenGL through a VBO (vertex buffer object). Its meaning is simple: it is just a buffer on the dedicated device. But OpenGL does not know how to interpret these points; that part is the responsibility of the VAO (vertex array object), which records the attribute layout that the shader consumes.

Simply speaking, the VAO is meant to explain how to parse the content in the VBO. The contents of a VBO can be quite flexible: each element can contain the coordinates (x, y, z) and other information such as color and texture coordinates, as in the sketch below.
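
A minimal sketch, assuming each vertex carries a position followed by a color:

// One triangle; each vertex holds position (x,y,z) and color (r,g,b)
float vertices[] = {
    // x      y     z     r     g     b
    -0.5f, -0.5f, 0.0f, 1.0f, 0.0f, 0.0f,
     0.5f, -0.5f, 0.0f, 0.0f, 1.0f, 0.0f,
     0.0f,  0.5f, 0.0f, 0.0f, 0.0f, 1.0f,
};

GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);

glBindVertexArray(vao);               // the VAO records how the VBO is parsed
glBindBuffer(GL_ARRAY_BUFFER, vbo);   // the VBO is just raw bytes on the device
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

// attribute 0: 3 floats of position, stride 6 floats, offset 0
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// attribute 1: 3 floats of color, stride 6 floats, offset 3 floats
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);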

Shader

This is a good tutorial for shader
https://www.youtube.com/watch?v=j_h3GdMtO0M

Simply speaking, the flexibility of the GPU allows the user to program it. It is similar to the concept of a worklet, or a kernel executed on an accelerator for each data partition.

Two important components in the graphics pipeline are the vertex shader and the fragment shader. The complexity of that part lies in compiling and linking the shader program through a series of OpenGL APIs; the shader code itself is written in the GLSL language.
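
A sketch of the compile-and-link sequence (the helper name compileProgram is ours, and real code should also check the fragment shader and the link status):

GLuint compileProgram(const char* vsSrc, const char* fsSrc) {
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSrc, nullptr);
    glCompileShader(vs);
    GLint ok = 0;
    glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);   // GLSL compile errors only show up at runtime
    // if (!ok) read the log with glGetShaderInfoLog(...); same for fs and the link below

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSrc, nullptr);
    glCompileShader(fs);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);

    glDeleteShader(vs);   // safe after linking; the program keeps the compiled code
    glDeleteShader(fs);
    return program;
}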

The vertex shader is used to determine the position of a vertex, i.e., where its coordinates land on the screen.

The fragment shader is used to determine the color of a pixel, so in principle there are many ways to compute the color of each pixel. It is easier to understand with this definition: a fragment in OpenGL is all the data required for OpenGL to render a single pixel.

Essentially, OpenGL is used to figure out the colors of pixels. This process can be viewed as a kind of computation.

The vertex shader is called on each vertex.
The fragment shader is called once per covered pixel; the reason we call it a fragment is that it represents the piece of a triangle covered by a pixel.

Sometimes it is easy to get confused about where the parameters used in the fragment shader come from.
The output of the vertex shader can be the input variable used by the fragment shader!
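
A sketch of a matching pair of shaders, written as C++ raw-string literals that could be fed to the compileProgram helper sketched above. The out variable of the vertex shader and the in variable of the fragment shader must agree in name and type:

const char* vsSrc = R"(
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
out vec3 vColor;                  // output of the vertex shader...
void main() {
    vColor = aColor;
    gl_Position = vec4(aPos, 1.0);
}
)";

const char* fsSrc = R"(
#version 330 core
in vec3 vColor;                   // ...becomes the (interpolated) input of the fragment shader
out vec4 FragColor;
void main() {
    FragColor = vec4(vColor, 1.0);
}
)";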

Understand the idea of “Bind”

It is easy to get confused about the glBind* APIs. We may think of OpenGL as one large pipeline with several fixed slots (the OpenGL context). We can change the objects associated with these slots: for example, we can have multiple VBOs, but we can only bind one VBO onto this large pipeline (context) at a time. When everything is bound properly, we execute one draw operation, and the pipeline runs automatically to output one rendered result. Binding can be thought of as a kind of "attach" operation: we attach a memory buffer to a slot on the OpenGL pipeline (context).

One thing is a little counterintuitive: we need to bind a buffer to OpenGL first and then execute the data-allocation operation such as glBufferData. Why don't we allocate the data first and then bind that address to OpenGL?

Here is a potential explanation:

The bind-and-then-allocate pattern lets OpenGL manage the memory allocation on the device. After binding (associating) the buffer id (although I'm not sure, this id is probably not the actual memory address on the device; it may be just a unique id for managing the object within the scope of OpenGL), it is easier to think that the glGenBuffers API does not allocate memory at all: it only creates some metadata for the buffer, which OpenGL uses to manage the device memory later, when we actually execute the glBufferData operation, as sketched below.
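
The resulting order, as a short sketch (size and data are placeholders for your own buffer):

GLuint vbo;
glGenBuffers(1, &vbo);               // only creates a name (an id); no device memory yet
glBindBuffer(GL_ARRAY_BUFFER, vbo);  // attaches that name to a slot (target) of the context
glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);  // actual allocation + upload happen here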

Camera projection

Forward projection (world coords => pixel coords)

Backward projection (pixel coords => world coords)

https://www.cse.psu.edu/~rtc12/CSE486/lecture12.pdf

The metaphor of a window is a good representation:

https://www.youtube.com/watch?v=U0_ONQQ5ZNM

Homogeneous coordinates essentially let a single matrix implement the various translation and scaling transforms of an object in a coordinate system.

MVP matrix

Essentially, we just need the MVP matrix to map an object from local space into projection space.

There are dedicated APIs to help create the specific model, view, and projection matrices; we do not list the details here. This is a good tutorial:

https://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/

Just be careful about the definitions of the different matrices and spaces.

In local space, we use local (object) coordinates; the origin is the center of the object.

We use the model matrix to convert vertices from local space to world space. In world space, we use world coordinates; the origin is the absolute coordinate (0,0,0).

We then use the view matrix to convert world space to view space (camera space). For view coordinates (also called eye, film, or camera coordinates), the origin is the position of the camera; we need to provide the camera position and camera target to create the associated view matrix.

We then use the projection matrix to convert view space into projection coordinates (some documents use clip space and screen space here); the origin is the center of the screen.
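
A sketch of building the three matrices with the GLM library (the one used in the tutorial above); all concrete numbers are placeholders:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Model: local space -> world space (here: just translate the object)
glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -2.0f));

// View: world space -> camera space (camera position, target, view-up vector)
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),   // camera position
                             glm::vec3(0.0f, 0.0f, 0.0f),   // focal point / target
                             glm::vec3(0.0f, 1.0f, 0.0f));  // view-up vector

// Projection: camera space -> clip space (fov, aspect ratio, near/far planes)
glm::mat4 projection = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f, 0.1f, 100.0f);

glm::mat4 mvp = projection * view * model;   // note the right-to-left order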

Uniform variable

A uniform is a global Shader variable declared with the “uniform” storage qualifier. These act as parameters that the user of a shader program can pass to that program. Their values are stored in a program object.

The value stays the same for every vertex and fragment within a draw call, and it persists in the program object across rendering calls until it is changed.
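
A short sketch of setting one, assuming the program and mvp names from the earlier sketches and a uniform mat4 mvp declared in the shader:

glUseProgram(program);                             // uniforms are set on the active program
GLint loc = glGetUniformLocation(program, "mvp");  // note: the *linked program* id, not a shader id
if (loc != -1)                                     // -1 means not found or optimized away
    glUniformMatrix4fv(loc, 1, GL_FALSE, &mvp[0][0]);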

GLSL language

Shader

https://www.youtube.com/watch?v=kCDBzuMWTYA

This is a good tutorial:

https://www.youtube.com/watch?v=uwzEqeMd7uQ&list=PLFky-gauhF452rW98W4cyZ8_2fXBjfGOT

How do we know whether OpenGL code is executed on the CPU or the GPU?

https://stackoverflow.com/questions/9149628/how-do-i-know-if-a-code-is-executed-using-gpu-or-cpu

Debug and common errors

This question provides a good sample for debugging the code.

We might add a debug function after each OpenGL code segment:

#include <stdio.h>   // printf
#include <stdlib.h>  // exit

inline void die_on_gl_error(const char* location) {
    GLenum error = glGetError();   // fetch the oldest error recorded by OpenGL
    if (GL_NO_ERROR != error) {
        printf("GL Error %x encountered in %s.\n", error, location);
        exit(1);
    }
}

Since there are multiple low-level APIs in OpenGL and they are error prone, adding this error check helps to detect errors quickly.

Here are some typical errors I ran into:

When finding the location of a variable in the program, I should have passed the id of the linked shader program, but I passed the id of the compiled vertex shader, which caused an error.

When drawing a specific shape, I should have used GL_POINTS, but I used GL_POINT, so nothing was drawn on the screen; it took me a long time to find the root cause.

Can I use OpenGL without GLFW?

https://discourse.glfw.org/t/off-screen-rendering-and-x-windows/784

It seems that GLFW and OpenGL are bound together: we need to use the context generated by the GLFW window. GLFW is one way to do that; we may have other options (such as EGL or OSMesa). We can use glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE) to hide the window if we do not want to show it.

How to use the

Scan-line algorithm

important algorithm in rasterization

Be careful about the node sequence
https://blog.csdn.net/ChenglinBen/article/details/90372510
The whole edge table is ordered by the lowest coordinate of each edge; in each node, xmin is the x coordinate of the edge's lowest vertex, and ymax is the y coordinate of the edge's highest vertex.

good explanation:
https://zhuanlan.zhihu.com/p/405105092
Essentially the task is to fill a polygon, and the most natural way to express a polygon is with points and lines. The essence of the scan-line algorithm is how to fill this polygon properly; a minimal sketch follows the links below.

https://www.techfak.uni-bielefeld.de/ags/wbski/lehre/digiSA/WS0607/3DVRCG/Vorlesung/13.RT3DCGVR-vertex-2-fragment.pdf
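
A minimal sketch of the core idea: even-odd filling of one polygon into a coverage mask. A real implementation would keep the sorted edge table / active edge list described above instead of re-testing every edge per scan line:

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

struct Pt { float x, y; };

// For each scan line: collect the x intersections with every polygon edge,
// sort them, and fill pixels between consecutive pairs.
void scanLineFill(const std::vector<Pt>& poly, int width, int height,
                  std::vector<uint8_t>& mask) {
    for (int y = 0; y < height; ++y) {
        float fy = y + 0.5f;                       // sample at the pixel center
        std::vector<float> xs;
        for (size_t i = 0; i < poly.size(); ++i) {
            Pt a = poly[i], b = poly[(i + 1) % poly.size()];
            // half-open test so a vertex shared by two edges is counted
            // exactly once -- the "node sequence" pitfall mentioned above
            if ((a.y <= fy) != (b.y <= fy))
                xs.push_back(a.x + (fy - a.y) / (b.y - a.y) * (b.x - a.x));
        }
        std::sort(xs.begin(), xs.end());
        for (size_t k = 0; k + 1 < xs.size(); k += 2)
            for (int x = std::max(0, (int)std::ceil(xs[k] - 0.5f));
                 x < width && x + 0.5f < xs[k + 1]; ++x)
                mask[y * width + x] = 255;
    }
}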

Format of image

Even for 3D visualization, the output is still an image, so understanding the key properties of images is important. This is CV-related information. The first is the storage format of the image data: typical formats are RGB and RGBA, where RGB is a three-channel format and RGBA has four channels. Each channel value ranges from 0 to 255, which takes 8 bits, i.e., one byte per channel; one RGBA pixel therefore usually takes 4 bytes.

It is interesting that some libraries may store the image in a different way, such as RRRR…GGGG…BBBB… (planar instead of interleaved); be careful about these differences. The simplest file format for an image is PPM, which contains only RGB without the alpha value, as the sketch below shows.
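
Writing a PPM is short enough to sketch inline. Note that PPM stores rows top-down, while glReadPixels returns them bottom-up, so flip the rows when combining the two:

#include <cstdio>

// A PPM (P6) file is just a small text header followed by raw RGB bytes.
void savePPM(const char* path, const unsigned char* rgb, int width, int height) {
    FILE* f = fopen(path, "wb");
    if (!f) return;
    fprintf(f, "P6\n%d %d\n255\n", width, height);     // magic, size, max channel value
    fwrite(rgb, 1, (size_t)width * height * 3, f);     // interleaved R,G,B per pixel
    fclose(f);
}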

Camera settings

There are several parameters needed to fix the camera pose. They include the camera position and the focal point; after fixing these two parameters, the camera can still roll around the view axis, so we need another vector, the view-up vector, to fully determine the camera orientation. In one project, when we configured a second display at the same position as the first one, we just needed these three parameters (namely position, focal point, and a view-up vector). After fixing the view-up vector, we can also compute the three axis directions, as sketched below. Check the VTK documents and the OpenGL camera documents for more ideas.
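
A sketch of that computation with GLM-style vectors (the concrete values are placeholders):

#include <glm/glm.hpp>

glm::vec3 position(0.0f, 0.0f, 3.0f);
glm::vec3 focalPoint(0.0f, 0.0f, 0.0f);
glm::vec3 viewUp(0.0f, 1.0f, 0.0f);

// The three camera axes follow from position, focal point, and view-up:
glm::vec3 forward = glm::normalize(focalPoint - position);        // viewing direction
glm::vec3 right   = glm::normalize(glm::cross(forward, viewUp));  // fixes the roll around 'forward'
glm::vec3 up      = glm::cross(right, forward);                   // recomputed, exactly orthogonal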

Using VTK as an example, the vtkMapper is in charge of mapping the data set into OpenGL primitives such as points and faces, and the vtkActor is in charge of transforming graphics elements from local space to world space.

The next step is setting the camera. After transforming the data into world space, we need to configure the camera properly (position in world space, focal point, and view-up vector) to fix the camera's pose; then the look-at transform provided by the camera maps the data into view space. When rendering, we then map the view volume (-1 to 1) into display space, such as a 512 by 512 pixel display.

Z buffer and depth buffer

References

OpenGL matrices

https://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/

Classical tutorial about the pipeline

https://learnopengl.com/Getting-started/Hello-Triangle

Good online tool to show the transformation matrix

https://www.mathsisfun.com/algebra/matrix-transform.html

Recommended articles