Often a “get you started” example for LWJGL will simply show a single triangle, and if you’re lucky you might get a very simple shader. This is a good start, but an even better foundation is a broader set of utilities and math routines. It’s unlikely that any one tutorial will teach you how to use GL, but a collection of usable routines that you can see in a working example, and experiment with, should definitely be a useful aid. Let’s start with a run-down of the files in the project:
Vec3.java, Mat4.java, Quat.java, Print.java, Util.java, Shapes.java, Test.java
As you’d no doubt expect, Vec3, Mat4 and Quat are our math routines. They are fairly straightforward, with a slight interdependence: for example, you can set a Mat4 from a Quat (quaternion) and it will turn the matrix into a rotation matrix based on the quaternion’s rotation. The math routines are far from extensive, but they are clear enough that, by looking at a C library for example, you should easily be able to port whatever new math function you might need. There is, however, more than enough functionality to implement even quite a complex application.
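To give a flavour of what setting a Mat4 from a Quat involves, here is a minimal sketch of the standard quaternion-to-matrix conversion. The class and method names are illustrative only, not the actual API of the Mat4/Quat classes in the project:

```java
// Sketch of a quaternion-to-rotation-matrix conversion, similar in spirit
// to what a "set Mat4 from Quat" method does. Illustrative names only.
public class QuatDemo {
    // Returns a 4x4 column-major rotation matrix for a unit quaternion (x, y, z, w).
    static float[] quatToMat4(float x, float y, float z, float w) {
        float[] m = new float[16];
        float xx = x * x, yy = y * y, zz = z * z;
        float xy = x * y, xz = x * z, yz = y * z;
        float wx = w * x, wy = w * y, wz = w * z;
        m[0] = 1 - 2 * (yy + zz); m[4] = 2 * (xy - wz);     m[8]  = 2 * (xz + wy);
        m[1] = 2 * (xy + wz);     m[5] = 1 - 2 * (xx + zz); m[9]  = 2 * (yz - wx);
        m[2] = 2 * (xz - wy);     m[6] = 2 * (yz + wx);     m[10] = 1 - 2 * (xx + yy);
        m[15] = 1; // remaining cells stay 0: no translation component
        return m;
    }

    public static void main(String[] args) {
        // The identity quaternion (0,0,0,1) should give the identity matrix.
        float[] m = quatToMat4(0, 0, 0, 1);
        System.out.println(m[0] + " " + m[5] + " " + m[10] + " " + m[15]); // 1.0 1.0 1.0 1.0
    }
}
```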
Print.java is our first interesting bit. While you can output values to the console to help debug an issue, it is often far more useful to see a value rapidly changing while you watch the (mis)behaviour of some on-screen effect.
The print routine is monospace for ease of layout, but it is also resolution independent: if you set it up with, for example, a 16×16 pixel character size, then characters will always be displayed on a 40×30 character grid regardless of resolution, even as the window is being resized. This can be very handy for controlling how your screen layout looks. You can start printing from any x/y offset on a scaled 640×480 “screen”.
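The scaling idea can be sketched in a few lines: character-grid coordinates on a virtual 640×480 screen are mapped to whatever the real window size happens to be. The names here are illustrative, not the actual Print.java API:

```java
// Sketch of resolution-independent text placement on a virtual 640x480 screen.
// Illustrative names, not the project's actual Print.java API.
public class GridDemo {
    static final float VIRTUAL_W = 640f, VIRTUAL_H = 480f;

    // Pixel position of character cell (col, row) for a 16x16 character size,
    // scaled to a real window of (winW, winH) pixels.
    static float[] charToPixels(int col, int row, float winW, float winH) {
        float x = col * 16f * (winW / VIRTUAL_W);
        float y = row * 16f * (winH / VIRTUAL_H);
        return new float[] { x, y };
    }

    public static void main(String[] args) {
        // At 1280x960 each 16px cell becomes 32px, so the grid is still 40x30.
        float[] p = charToPixels(10, 5, 1280, 960);
        System.out.println(p[0] + ", " + p[1]); // 320.0, 160.0
    }
}
```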
Util.java is simply a collection of static functions which a number of other classes make use of; I’ll touch on the functionality as needed while explaining the main demonstration class.
Shapes.java is just a place to dump a bunch of float arrays rather than have them in the body of the code, usually you’d have some kind of object loader to retrieve shapes from files or even some kind of archive.
Looking at the mainLoop of the Test class, you immediately hit some trickery that needs explanation! You can read more about this here; the short version is much wider compatibility and robustness. Not to mention much wider reuse of code, including from WebGL, and as a slight bonus it’s just a little bit easier to learn!
Let’s look first at the updateProjection method, which is called whenever the window’s size changes (and once at setup). This calculates our view and perspective transforms; we also need to do this if the camera changes position.
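For reference, here is a sketch of the kind of perspective matrix such a method builds (column-major, standard OpenGL clip-space conventions). This is the textbook formula, not necessarily the exact code in updateProjection:

```java
// Sketch of a standard perspective projection matrix (column-major),
// of the kind updateProjection would rebuild when the window resizes.
public class ProjDemo {
    static float[] perspective(float fovYDeg, float aspect, float near, float far) {
        float f = (float) (1.0 / Math.tan(Math.toRadians(fovYDeg) / 2.0));
        float[] m = new float[16];
        m[0]  = f / aspect;                      // x scale, corrected for aspect
        m[5]  = f;                               // y scale
        m[10] = (far + near) / (near - far);     // depth remapping
        m[11] = -1;                              // perspective divide by -z
        m[14] = (2 * far * near) / (near - far);
        return m;
    }

    public static void main(String[] args) {
        // At a 90 degree field of view and square aspect, f is exactly 1.
        float[] p = perspective(90, 1, 0.1f, 100);
        System.out.println(p[0] + " " + p[5]);
    }
}
```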
Textures are obviously important, and loading them is a common need, so there is a simple routine in the Util class to return a texture handle from an image file name. Its use doesn’t need explaining, but at a later date you might want to read the code, look up the GLES calls, and see if you can work out how it works.
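The CPU side of such a loader boils down to decoding an image into a tightly packed RGBA byte buffer ready to hand to glTexImage2D. The sketch below shows that step only (the GL upload needs a live context, so it appears as a comment); it is illustrative, not the project’s actual Util code:

```java
import java.awt.image.BufferedImage;
import java.nio.ByteBuffer;

// Sketch of the CPU side of a texture loader: decode pixels into a packed
// RGBA buffer. Illustrative only, not the project's actual Util routine.
public class TextureDemo {
    static ByteBuffer toRGBA(BufferedImage img) {
        int w = img.getWidth(), h = img.getHeight();
        ByteBuffer buf = ByteBuffer.allocate(w * h * 4);
        for (int y = 0; y < h; y++) {           // top-to-bottom row order
            for (int x = 0; x < w; x++) {
                int argb = img.getRGB(x, y);
                buf.put((byte) (argb >> 16));   // R
                buf.put((byte) (argb >> 8));    // G
                buf.put((byte) argb);           // B
                buf.put((byte) (argb >> 24));   // A
            }
        }
        buf.flip();
        // With a GL context current you would then do roughly:
        // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, buf);
        return buf;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(2, 2, BufferedImage.TYPE_INT_ARGB);
        img.setRGB(0, 0, 0xFFFF0000); // opaque red in the top-left pixel
        ByteBuffer buf = toRGBA(img);
        System.out.println(buf.remaining()); // 16
    }
}
```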
The texture loader will only issue a console error message if there is a problem, and while things won’t look very nice, you’re as well to let things progress, especially while developing, as even a broken render can sometimes be useful. Bug hunting is often detective work, so the more clues you can get, the better!
In a similar vein, I allow broken shaders through. Although you usually won’t see anything that attempts to render with an uncompiled shader (hardly surprising), if you are using a number of different shaders with different objects, seeing which are working and which are not might, in some circumstances, give you just the clue you need…
Each shader program is made from a vertex shader, which places each vertex depending on (at least) our view and the relative positions, and a fragment shader, which is more to do with how the surface (triangle) is actually drawn: its texture, colour and other effects.
Once we have a shader, we need to pick up its attributes. In this example each vertex has a position (XYZ), a colour (RGB) and texture coordinates (UV); the shader picks these out of either a single packed array or separate arrays. Either way, attributes give us a way of supplying this data to the shader.
Uniforms are similar to attributes but are for information that doesn’t change with each vertex, for instance a matrix describing the orientation of the object you are drawing. This read-only data is available to both fragment and vertex shaders.
Related to uniforms and attributes are varying variables, but we don’t need to do anything special about these: they are used solely for communicating values between the vertex and fragment shaders. For example, you might pick up a bit of colour information for each vertex in the vertex shader and then pass that (with a varying) to the fragment shader to affect the colour.
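A minimal GLSL ES shader pair shows all three kinds of variable together: attributes read per vertex, a uniform shared by every vertex, and a varying carrying colour across to the fragment shader. The variable names are illustrative, not necessarily those used in the example project:

```java
// A minimal GLSL ES 1.00 shader pair, embedded as Java strings, showing
// attributes, a uniform, and a varying. Illustrative names only.
public class ShaderSource {
    static final String VERTEX =
        "attribute vec3 a_pos;\n" +
        "attribute vec3 a_colour;\n" +
        "uniform mat4 u_matrix;\n" +      // same value for every vertex
        "varying vec3 v_colour;\n" +      // handed on to the fragment shader
        "void main() {\n" +
        "    v_colour = a_colour;\n" +
        "    gl_Position = u_matrix * vec4(a_pos, 1.0);\n" +
        "}\n";

    static final String FRAGMENT =
        "precision mediump float;\n" +
        "varying vec3 v_colour;\n" +      // interpolated across the triangle
        "void main() {\n" +
        "    gl_FragColor = vec4(v_colour, 1.0);\n" +
        "}\n";

    public static void main(String[] args) {
        System.out.println(VERTEX.contains("varying") && FRAGMENT.contains("varying"));
    }
}
```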
Shaders are fantastically powerful, however unsurprisingly they are also complicated and usually deeply mathematical, if you see a coder with their head tilted at an odd angle, their fingers pointing in different directions and a pained look of concentration on their face – leave them well alone, they are probably trying to work out what’s going on with a shader!
Fortunately there are bucketloads of examples, and frequently you can just “borrow” one that produces the effect you’re looking for, drop it in, and hopefully get it working fairly easily… most shaders have very similar requirements, like various view matrices and light vectors.
I’m not going to go into too much detail on the text printing routines, as once you’ve figured out what’s going on with the 3D stuff – working out what’s happening in 2D should be well within your grasp.
We have to do a whole lot of mucking around with matrices before we can do any rendering, and it wouldn’t be practical to go in depth here (1000+ words already!), but there are lots of matrix tutorials such as this one.
Suffice it to say, a matrix can have a multitude of uses, but at the very least one can describe a combined rotation, translation and scale. Often just part of a calculation, like the perspective transformation for example, is kept in a matrix and used as part of a chain of matrix calculations when drawing different objects.
Once we have done our calculations we can set up our shader’s uniforms. The shader in this example requires a combination of the model matrix (the object’s position and rotation), the view matrix (the effect of the “camera”) and finally the perspective transformation; I’ve called this combined matrix the mvp matrix.
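The combination itself is just a chain of matrix multiplies, with the model transform applied first and the projection last. A sketch of the idea, using a plain column-major 4×4 multiply (illustrative, not the project’s Mat4 code):

```java
// Sketch of combining model, view and projection matrices into one mvp
// matrix. Column-major 4x4 multiply; illustrative, not the project's Mat4.
public class MvpDemo {
    static float[] mul(float[] a, float[] b) {  // returns a * b
        float[] r = new float[16];
        for (int c = 0; c < 4; c++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
        return r;
    }

    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1;
        return m;
    }

    public static void main(String[] args) {
        float[] projection = identity(), view = identity(), model = identity();
        // Note the order: mvp = P * V * M, so the model transform acts first.
        float[] mvp = mul(mul(projection, view), model);
        System.out.println(mvp[0] + " " + mvp[15]); // 1.0 1.0
    }
}
```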
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glUseProgram(program);
glBindTexture(GL_TEXTURE_2D, texture);
Once you have selected the vertex buffer (VBO), the shader and the texture you’re using, you can then set up the rest of the information the shader needs.
glUniformMatrix4fv(u_matrix, false, mvp.mat);
glEnableVertexAttribArray(attr_pos);
glEnableVertexAttribArray(attr_colour);
glEnableVertexAttribArray(attr_uv);
// attrib, number of items, type, normalised, byte stride, byte offset
glVertexAttribPointer(attr_pos,    3, GL_FLOAT, false, 32, 0);
glVertexAttribPointer(attr_colour, 3, GL_FLOAT, false, 32, 12);
glVertexAttribPointer(attr_uv,     2, GL_FLOAT, false, 32, 24);
glDrawArrays(GL_TRIANGLES, 0, 36);
We set the matrix uniform with the mvp matrix and then enable the vertex attributes. When setting the attribute pointers we specify how many items are used per vertex (3 for the position attribute) and what type they are; as we’re using floats, they don’t need to be normalised. The value of 32 is the “distance” in bytes between vertices: here we have 3 position floats, 3 colour floats and 2 UV floats (a total of 8 floats), and each float is 4 bytes, so this gives a distance, or stride, of 32.
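The stride and offset arithmetic is worth spelling out once, since getting it wrong silently scrambles your vertices. A small sketch of where those 32 / 0 / 12 / 24 values come from:

```java
// Where the stride and byte offsets in the glVertexAttribPointer calls
// come from: 3 position + 3 colour + 2 UV floats per vertex, 4 bytes each.
public class StrideDemo {
    static final int FLOAT_BYTES = 4;
    static final int POS = 3, COLOUR = 3, UV = 2;

    static int stride()       { return (POS + COLOUR + UV) * FLOAT_BYTES; }
    static int posOffset()    { return 0; }
    static int colourOffset() { return POS * FLOAT_BYTES; }
    static int uvOffset()     { return (POS + COLOUR) * FLOAT_BYTES; }

    public static void main(String[] args) {
        System.out.println(stride() + " " + posOffset() + " "
                + colourOffset() + " " + uvOffset());
        // 32 0 12 24 -- the values passed to glVertexAttribPointer above
    }
}
```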
It’s the combination of glBindBuffer and glVertexAttribPointer that lets the shader know where to get the vertex data from.
Once you have set all this up, you can draw the same object multiple times – usually all you need to change are things like (for example) the MVP matrix uniform.
If you’re switching to a different shader, it’s important to remember to disable the vertex attributes that were in use.
This has only been an overview of the code in the example, but what is more important is that you have a framework of code with which to experiment. You can leave the example code alone and simply add your own experiments; this way you should still see the example running even if there is a problem with your own code. Add to this the fact that you can show the values of various variables with the print routines, and you should find experimenting very much easier.
In order to compile and run the example you will need to install both a reasonably up-to-date JDK and ant (if you’re using Windows, make sure that ant’s binary folder is in your path).
In addition to the JDK and ant you’ll need LWJGL, which you should unzip (I used an LWJGL 3.0.0b SNAPSHOT, build 64) into the same directory you extract the example to.
Once you descend into GLES-test, simply run ant and it will compile and run the example for you. If you’re on Linux you can use ant edit to load the source into geany.
Here’s the source: GLES-test.tar.gz