Learning OpenGL from a C programmer's perspective.

November 16, 2017

In recent years there have been great developments with cross-platform libraries, many of which are a great fit for the C language.  Let's start off by running through a few of them.  First off, the obvious one: OpenGL.  There have been some upheavals with OpenGL over the years, but all in all, once you really get to grips with it, its lower-level nature gives you a wide variety of options.  Because modern OpenGL leaves a number of things to the programmer, it's helpful to have some other libraries to support it.  For the maths I favour kazmath, which is simple and allows you to take just the parts you need (for convenience I like to compile it into a static library).  Another very useful library for use with OpenGL is GLFW, which helps with window creation and input; it works well across platforms and is reasonably straightforward to use.  Along the way I've picked up LodePNG, which is just enough code to load an image; added to some code I wrote a good five years ago when I put together gles2framework, it leads to an easy way to get a texture from a PNG image file.  Finally there is glLoadGen, which generates code to load the GL function pointers, giving you access to all the functionality you need; once the code and header have been generated, it's as simple as calling a function after a GL context has been created.

I’ve put together a project to show the basics of using OpenGL (3.3 core profile), and it’s worth just looking at the structure and briefly explaining how the build system works.  As I touched on earlier, I build kazmath into a static library; this is handled by the dependencies in the Makefile.  Generally you will only build the kazmath library once; subsequently it’s simply linked into the rest of the code.  The Makefile allows you to just throw new C source files into the src directory and they will be automagically compiled and linked into the project next time you build it.  It’s worth noting that changing header files will not trigger a rebuild, as generally you don’t often modify them, and when you do, odds on you’ve modified the accompanying C source too.  I’ve targeted C99 and enabled extra warnings, on the basis that even if you think it’s pedantic, listening to your compiler (properly) can often save you a bunch of heartache later…

I’ve kept the majority of the code in main.c; there are a few functions in utils.c, but they aren’t pertinent to learning OpenGL and are fairly self-evident.  The OpenGL-related utility functions are deliberately kept out of utils.c, so everything GL-specific lives in the main.c source file.

Anyway shockingly we’re finally getting to look at some code 😮 !

if (!glfwInit()) return -1;

glfwWindowHint( GLFW_CLIENT_API, GLFW_OPENGL_API );
glfwWindowHint( GLFW_CONTEXT_VERSION_MAJOR, 3 );
glfwWindowHint( GLFW_CONTEXT_VERSION_MINOR, 3 );
glfwWindowHint( GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

window = glfwCreateWindow(width, height, "Tut1", NULL, NULL);
if (!window) {
    glfwTerminate();
    printf("Window creation failed ?? !\n");
    return -1;
}

glfwMakeContextCurrent(window);
glfwSetWindowSizeCallback(window, window_size_callback);
glfwSetInputMode(window, GLFW_STICKY_KEYS, 1); // polls show us key states instead of events
glfwSwapInterval(1);

if (!ogl_LoadFunctions()) {
    printf("extension loader fubar\n");
    exit(-1);
}

This is all really boilerplate code; just a few things to note.  I’ve selected core profile GL 3.3 and set the keyboard into “sticky keys” mode, which makes it easier to poll the state of the keyboard.  The window size callback is also worth noting, as it lets us keep a 16:9 aspect ratio with either top/bottom or side black borders as needed.  I’ll let you figure out how it works; it’s not complicated, but well worth implementing in your own projects.

 glDisable(GL_CULL_FACE);
 glEnable(GL_DEPTH_TEST);
 glEnable(GL_SCISSOR_TEST);

OpenGL has a number of “states” you can set.  Because I want to show both sides of the “mesh”, I don’t cull polygons that are facing away from the viewpoint.  Enabling the depth test makes sure that even if it’s drawn last, something nearer to the viewpoint will always obscure more distant details.  Finally, the scissor test is used for the aspect-ratio-respecting black borders (go on, try resizing the window – I bet you’ve downloaded the code already 😉 )

 GLuint vert = createShader("data/shaders/simple.vert",GL_VERTEX_SHADER);
 GLuint frag = createShader("data/shaders/simple.frag",GL_FRAGMENT_SHADER);
 program = linkShader(vert, frag);
 u_mvp = glGetUniformLocation(program, "u_mvp");
 GLuint u_tex = glGetUniformLocation(program, "u_tex");
 glUseProgram(program); // glUniform* acts on the program currently in use
 glUniform1i(u_tex, 0);
 
 glDeleteShader(frag);
 glDeleteShader(vert);

The createShader function is mostly about error reporting.  The shader – or more accurately the shader program – is made out of a vertex shader and a fragment shader; variables in the shaders can be located and accessed, but I will talk about this later.  There is only one texture unit in use, so that uniform is set once and just left as is, just as we only set the active texture unit once ( glActiveTexture(GL_TEXTURE0) ).  Note that glUniform* calls affect the program currently in use, so the program must be bound with glUseProgram before the uniform is set.  Once we have created the shader program we can delete the constituent shaders.

// vertex shader
#version 330 // required for the layout(location = N) qualifiers below
uniform mat4 u_mvp;
layout(location = 0) in vec3 i_vert;
layout(location = 1) in vec2 i_uvCoord;
out vec2 v_uvCoord;

void main() {
 v_uvCoord = i_uvCoord;
 gl_Position = u_mvp * vec4(i_vert, 1) ;
}

The vertex shader is executed for every vertex of the mesh; i_vert and i_uvCoord are filled from the appropriate buffers, as we’ll see later, and v_uvCoord is used to communicate the UV data to the fragment shader.  The u_mvp uniform is a four-by-four matrix used to manipulate the position of each vertex; it’s calculated to give the effect of orientation and perspective on the mesh.

// fragment shader
#version 330
uniform sampler2D u_tex;
in vec2 v_uvCoord;
out vec4 o_colour;

void main() {
 o_colour = texture(u_tex, v_uvCoord);
}

u_tex tells the shader which texture unit to sample from (here we’re only ever using unit 0).  The interpolated UV coordinate (v_uvCoord) is used to look up the colour (from the texture) that the fragment needs to be.

At first look the VAO (vertex array object) seems a little complicated, but bear with it – it’s basically just a way to group buffers together.

glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glGenBuffers(1, &vboVerts);
glBindBuffer(GL_ARRAY_BUFFER, vboVerts);
glEnableVertexAttribArray(SQR_INDEX_V);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glVertexAttribPointer(SQR_INDEX_V, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);

glGenBuffers(1, &vboUvs);
glBindBuffer(GL_ARRAY_BUFFER, vboUvs);
glEnableVertexAttribArray(SQR_INDEX_U);
glBufferData(GL_ARRAY_BUFFER, sizeof(uvs), uvs, GL_STATIC_DRAW);
glVertexAttribPointer(SQR_INDEX_U, 2, GL_FLOAT, GL_FALSE, 0, 0);

glBindVertexArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);

Basically, while the VAO is bound, any vertex buffer that is bound – along with its current properties – is attached to the VAO.  For this mesh there is a vertex buffer and a separate UV buffer (texture coordinates).  I would strongly recommend you look up each of the GL functions and make sense of how each of the parameters is being used here, in particular glBufferData and glVertexAttribPointer; I have found docs.gl to be a very useful reference.  Don’t forget to look at the source code itself, not just this fragment, so you get plenty of context… (the odd comment in there might just give you the clue you need.)  From the vertex shader, can you see why SQR_INDEX_V and SQR_INDEX_U have the values that they do?

Just before entering the main loop, the view and projection matrices are set up; once created they are multiplied together, which combines them.  This combined matrix can later be used with the orientation of the shape you want to render to place it correctly in your scene.  Going into too much detail about matrices is well beyond the scope of this tutorial – and indeed it’s a long-un already!  Suffice to say there are a few things to bear in mind…

Matrix multiplication is “non-commutative”.  All this means is A*B != B*A, so you can’t, for example, mix up the order when you combine a rotation and a translation.

You’ll notice from the code that I’m using a “LookAt” function.  It’s worth noting that this is for convenience only; a more usual way, especially in a game, might be to just keep the 3D coordinates and a quaternion for orientation.  It’s fairly easy to create the matrices you need from those, while making it easier to manipulate and keep track of the view’s orientation and position.

You don’t have to understand in detail exactly how matrices work mathematically.  Once you have something rendering, it’s worth experimenting with the chain of matrix multiplications – for example, what happens if you put a scaling or rotation matrix into the chain at various different places?  It’s definitely worth experimenting with matrices as they are really powerful.  Also, don’t forget to read as many tutorials as you can; just avoid the ones that dwell on the mechanics of matrix manipulation – all said and done, you have a library for that! (kazmath)

kmMat4 m,t,r,mvp;
kmMat4Translation(&t, 0,1.5,0);
kmMat4RotationYawPitchRoll(&r, ta/3, -ta/4, ta/5);
kmMat4Multiply(&m,&t,&r);
kmMat4Multiply(&mvp,&vp,&m);

glUseProgram(program);
glBindTexture(GL_TEXTURE_2D, Tex);
glUniformMatrix4fv(u_mvp, 1, GL_FALSE, (const GLfloat *)&mvp);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

To render a mesh, we start off with a translation matrix (position) and a rotation matrix; multiplying these together gives you what is commonly referred to as the model matrix.  We can then combine this with our previously calculated view/projection matrix, and the resulting product can be passed directly to the shader.  We have only to bind the VAO, and all the VBOs and their states are made available for use with the glDrawArrays call.  At my own expense, I’ll tell you why, on and off for the last few days, I just couldn’t get anything working – nothing was drawn on the screen.  I pared everything down – no texture, an orthographic matrix – and stared at the code for ages, and could I see the obvious?  Just remember the glDrawArrays function needs to know the number of vertices to operate on, not the number of primitives (ahem), so do look carefully at docs.gl if you get stuck! (I really should know better!)

You can download the code here; the keys W,A,S,D slide the camera along the world axes while R,F change the height, all the while keeping the view pointed at 0,0,0.

Just as a point of interest, how good are QR codes – even my elderly phone can scan the rotating QR!

Oh, by the way, the Windows version of the makefile is ages old (it did work with a more complex physics example); it was intended to work with the Windows Subsystem for Linux.  I run Windows very infrequently, so your mileage may well vary…
