In addition to providing almost transparent cross-platform support, libGDX has a range of useful support routines which ease some of the complexity of GLES 2.0. I’m assuming that you have had a decent amount of exposure to libGDX and that you are comfortable creating a simple 2D game, for example.

Once our resources (shaders, textures and models) are loaded, we need to look at our render loop. The first task is to calculate the matrices the shader will be using; I prefer to do as much matrix calculation as possible up front and pass the finished matrices to the shader. A vertex shader could be called several hundred times for even just a simple model, and the fragment shader even more often.

projection.setToProjection(0.1f, 1000.0f, 67.0f, winAspect);
view.idt().rotate(1, 0, 0, camAng.x)
    .rotate(0, 1, 0, camAng.y)
    .translate(camPos.x, camPos.y, camPos.z);
pv.set(projection).mul(view);

Setting up the projection matrix is a snap. We keep it separate because we don’t want the projection matrix included in the lighting calculations. The setup for the view matrix is fairly trivial, and multiplying the two together once here can save a little time when rendering multiple objects.
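To make the projection step less of a black box, here is a plain-Java sketch of a standard OpenGL-style perspective matrix built from the same near, far, fov and aspect parameters. This is illustrative maths only, not libGDX’s actual setToProjection code:

```java
// Sketch of a standard OpenGL-style perspective matrix, similar in spirit to
// Matrix4.setToProjection(near, far, fov, aspect) -- not libGDX's exact code.
public class PerspectiveSketch {
    // Returns a 4x4 row-major projection matrix.
    static double[][] perspective(double near, double far, double fovDeg, double aspect) {
        double f = 1.0 / Math.tan(Math.toRadians(fovDeg) / 2.0);
        double[][] m = new double[4][4];
        m[0][0] = f / aspect;
        m[1][1] = f;
        m[2][2] = (far + near) / (near - far);
        m[2][3] = (2.0 * far * near) / (near - far);
        m[3][2] = -1.0; // copies -z into w for the perspective divide
        return m;
    }

    // Transform (x, y, z, 1) by m and apply the perspective divide.
    static double[] project(double[][] m, double x, double y, double z) {
        double[] v = {x, y, z, 1.0};
        double[] out = new double[4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r] += m[r][c] * v[c];
        return new double[]{out[0] / out[3], out[1] / out[3], out[2] / out[3]};
    }

    public static void main(String[] args) {
        double[][] p = perspective(0.1, 1000.0, 67.0, 16.0 / 9.0);
        // A point on the near plane maps to NDC z = -1, one on the far plane to z = +1.
        System.out.println(project(p, 0, 0, -0.1)[2]);    // ~ -1.0
        System.out.println(project(p, 0, 0, -1000.0)[2]); // ~ +1.0
    }
}
```

The near/far mapping is worth checking by hand once: it shows why depth precision is spent near the camera.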

shader.begin();
shader.setUniformf("u_lightPos", lightPos);
shader.setUniformi("s_texture1", 0); // first texture unit (0)

Initially we pass the light position, and which texture unit to use, to the shader.

q.w = (float)rot.get0();
q.x = (float)rot.get1();
q.y = (float)rot.get2();
q.z = (float)rot.get3();
model.idt().set(q).trn((float)pos.get0(), (float)pos.get1(), (float)pos.get2());

Setting up the model matrix is similar to the view matrix. Here I’m creating it from a quaternion for the rotation, followed by a translation for the position (in this case the data is coming from a physics engine).
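If you’re curious what Matrix4.set(quaternion) is doing under the hood, it’s the textbook quaternion-to-rotation-matrix conversion. Here’s a small hand-rolled sketch of that maths (not libGDX’s source, just the standard formula):

```java
// Hand-rolled quaternion -> 3x3 rotation matrix: the textbook formula that
// Matrix4.set(q) applies internally (a sketch, not libGDX source).
public class QuatToMatrix {
    static double[][] toMatrix(double w, double x, double y, double z) {
        return new double[][]{
            {1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)},
            {2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)},
            {2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)}
        };
    }

    static double[] apply(double[][] m, double[] v) {
        double[] r = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                r[i] += m[i][j] * v[j];
        return r;
    }

    public static void main(String[] args) {
        double s = Math.sqrt(0.5); // quaternion for a 90-degree turn about the Z axis
        double[] v = apply(toMatrix(s, 0, 0, s), new double[]{1, 0, 0});
        System.out.printf("%.3f %.3f %.3f%n", v[0], v[1], v[2]); // x axis -> y axis
    }
}
```

The quaternion must be unit length for this to be a pure rotation, which is why physics engines normalise them regularly.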

After the rotation and translation I’m applying a non-uniform scale to the model matrix, as I’m using a 1x1x1 dimensioned model to render many differently sized shapes.

combined.set(pv).mul(model);
shader.setUniformMatrix("u_mvp_mat", combined);

Multiplying the pv matrix (projection and view) with our model matrix gives us a combined model-view-projection matrix which we can then pass to the shader. This will be used for the vertex position, but we’ll have to do a bit more work for the lighting matrices.
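The reason caching pv works at all is that matrix multiplication is associative: (P·V)·M equals P·(V·M), so the projection-view product can be computed once per frame and costs only one extra multiply per model. A quick plain-Java check (illustrative arrays, not libGDX types):

```java
// Why precomputing pv pays off: (P * V) * M == P * (V * M), so the
// projection-view product can be computed once per frame and reused per model.
public class PvCache {
    static double[][] mul(double[][] a, double[][] b) {
        double[][] r = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i][j] += a[i][k] * b[k][j];
        return r;
    }

    public static void main(String[] args) {
        // Arbitrary small-integer stand-ins for projection, view and model.
        double[][] p = {{1, 0, 0, 0}, {0, 2, 0, 0}, {0, 0, 3, 1}, {0, 0, -1, 0}};
        double[][] v = {{0, -1, 0, 2}, {1, 0, 0, 3}, {0, 0, 1, -5}, {0, 0, 0, 1}};
        double[][] m = {{2, 0, 0, 1}, {0, 2, 0, 1}, {0, 0, 2, 1}, {0, 0, 0, 1}};
        double[][] a = mul(mul(p, v), m); // cached pv, one multiply per model
        double[][] b = mul(p, mul(v, m)); // recomputing per model gives the same answer
        double diff = 0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                diff += Math.abs(a[i][j] - b[i][j]);
        System.out.println(diff); // 0.0 with these integer inputs
    }
}
```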

tempM.set(view).mul(model).tra().inv();
shader.setUniformMatrix("u_norm_mat", tempM);

Because we are doing non-uniform scaling we need to provide a different matrix for the normals: the inverse-transpose of the model-view matrix.
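To see why the inverse-transpose is needed, consider a surface tangent and its normal under a non-uniform scale. Transforming the normal with the model-view matrix itself breaks the perpendicularity; the inverse-transpose restores it. A minimal numeric check in plain Java:

```java
// Why non-uniform scaling needs the inverse-transpose: transforming a normal
// with the model matrix itself no longer keeps it perpendicular to the surface.
public class NormalMatrixDemo {
    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    public static void main(String[] args) {
        double[] normal  = {1,  1, 0};   // perpendicular to the tangent below
        double[] tangent = {1, -1, 0};   // lies in the surface
        // Non-uniform scale M = diag(2, 1, 1); its inverse-transpose is diag(0.5, 1, 1).
        double[] scaledTangent = {2 * tangent[0], tangent[1], tangent[2]};
        double[] naive     = {2 * normal[0], normal[1], normal[2]};   // M * n
        double[] corrected = {0.5 * normal[0], normal[1], normal[2]}; // (M^-1)^T * n
        System.out.println(dot(naive, scaledTangent));     // 3.0 -- no longer perpendicular
        System.out.println(dot(corrected, scaledTangent)); // 0.0 -- still perpendicular
    }
}
```

For pure rotations (or uniform scale) the inverse-transpose equals the matrix itself (up to scale), which is why this step only matters once non-uniform scaling appears.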

shader.setUniformMatrix("u_v_mat", view);
tempM.set(view).mul(model);
shader.setUniformMatrix("u_mv_mat", tempM);

Finally we pass the view matrix and the model-view matrix, and render the actual mesh.

texture1.bind();
spheremesh.render(shader, GL20.GL_TRIANGLES);

We’ve basically spoon-fed the shader with just about every stage of the matrix calculations, but at least we are only doing this once per model and not hundreds of times per model in the shader.

The vertex shader doesn’t really “shade” anything; it’s responsible for finally positioning the vertex (taking it from its model position through to its final clip-space position). The vertex shader also passes on the vertex information to the fragment shader, including user-defined values. Remember that a vertex includes not just position information but frequently texture coordinates and usually vertex normals (in effect, the direction the surface is facing). As we have a normal per corner of each triangle, this helps create a smooth lighting effect.

vec4 p = u_mv_mat * a_position;
vec4 ld = (u_v_mat * vec4(u_lightPos, 1.0)) - p; // w = 1.0: the light is a position, not a direction
v_lightDir = ld.xyz;
v_eyeVec = -p.xyz; // the camera sits at the origin in eye space

We use the model-view matrix to get the position of the vertex without involving the projection matrix (I use the a_ prefix to remind me the position is a vertex attribute), and we use the view matrix on the light’s position. Subtracting the two gives the light direction, which we’ll normalise later. Notice that “v_lightDir” and “v_eyeVec” are prefixed with v_; this isn’t necessary, but I use it to remind me that these are varying values that can be read by the fragment shader.

v_texCoord = a_texCoord;
v_eyeSpaceNormal = vec3(u_norm_mat * vec4(a_normal, 0.0)); // w = 0.0: normals ignore translation
gl_Position = u_mvp_mat * a_position;

The texture coordinate is passed straight on to the fragment shader. The normal is multiplied by the normal matrix: as I’m using non-uniform scaling, the normal has to be transformed in the special manner described previously. Finally the model-view-projection matrix is used to transform the vertex’s position attribute; gl_Position is a special variable which sets the final position of the vertex.

At last we get to the fragment shader; this is where we calculate the final colour of the fragment.

vec3 N = normalize(v_eyeSpaceNormal);
vec3 E = normalize(v_eyeVec);
vec3 L = normalize(v_lightDir);
vec3 reflectV = reflect(-L, N);

We normalise (scale to unit length) the values interpolated from the vertex shader: the normal, the view (eye) direction and the light direction. Using the reflect function we can then calculate the light reflection vector.
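GLSL’s built-ins do all of this for us, but the underlying maths is short. Here are plain-Java equivalents of normalize() and reflect() — reflect(I, N) is defined as I - 2·dot(N, I)·N for a unit-length N:

```java
// CPU versions of the GLSL normalize() and reflect() built-ins.
// reflect(I, N) = I - 2 * dot(N, I) * N, with N unit length.
public class ReflectDemo {
    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[]{v[0] / len, v[1] / len, v[2] / len};
    }

    static double[] reflect(double[] i, double[] n) {
        double d = i[0] * n[0] + i[1] * n[1] + i[2] * n[2];
        return new double[]{i[0] - 2 * d * n[0], i[1] - 2 * d * n[1], i[2] - 2 * d * n[2]};
    }

    public static void main(String[] args) {
        // Light travelling down-and-right bounces off a floor whose normal points up.
        double[] incident = normalize(new double[]{1, -1, 0});
        double[] r = reflect(incident, new double[]{0, 1, 0});
        System.out.printf("%.3f %.3f %.3f%n", r[0], r[1], r[2]); // ~0.707 0.707 0.000
    }
}
```

Note the incident vector points towards the surface, which is why the shader passes -L (light-to-surface) rather than L into reflect().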

vec4 c = texture2D(s_texture1, v_texCoord);
vec4 ambientTerm = c * 0.4;
vec4 diffuseTerm = c * 0.6 * max(dot(N, L), 0.0);

Using the texture coordinate we grab the colour from the texture unit specified. The ambient term is the part of the surface colour not affected by lighting; the diffuse term gets gradually brighter as the surface normal lines up with the direction to the light.
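The diffuse term is just Lambert’s cosine law: max(dot(N, L), 0). A tiny Java sketch of that one line makes the behaviour at different angles concrete:

```java
// The Lambert term max(dot(N, L), 0): full brightness when the normal points
// at the light, fading to zero at grazing angles, and clamped for back faces.
public class LambertDemo {
    static double lambert(double[] n, double[] l) {
        double d = n[0] * l[0] + n[1] * l[1] + n[2] * l[2];
        return Math.max(d, 0.0);
    }

    public static void main(String[] args) {
        double[] up = {0, 1, 0};
        System.out.println(lambert(up, new double[]{0, 1, 0}));  // 1.0, facing the light
        double s = Math.sqrt(0.5);
        System.out.println(lambert(up, new double[]{s, s, 0}));  // ~0.707, light at 45 degrees
        System.out.println(lambert(up, new double[]{0, -1, 0})); // 0.0, facing away
    }
}
```

The clamp to zero is what stops surfaces facing away from the light being lit with “negative” light.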

float matShine = 100.0;
vec4 specularTerm = vec4(0.4, 0.4, 0.4, 1.0) * pow(max(dot(reflectV, E), 0.0), matShine);

Assuming we want a nice shiny spot of light, we calculate the specular value. We can affect this with the variable matShine, which is a likely candidate for becoming a uniform so different objects could have different amounts of shininess. For objects that don’t need a specular highlight, terrains for example, you can skip this step altogether.
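What matShine actually controls is how quickly the specular factor falls off as you move away from the perfect reflection angle. A short Java sketch of the pow(max(dot(R, E), 0), shine) term shows the effect:

```java
// How matShine shapes the highlight: a larger exponent makes the specular
// factor fall off faster away from the perfect reflection direction.
public class ShininessDemo {
    static double specular(double cosAngle, double shine) {
        return Math.pow(Math.max(cosAngle, 0.0), shine);
    }

    public static void main(String[] args) {
        double cos = Math.cos(Math.toRadians(10)); // 10 degrees off the reflection vector
        System.out.println(specular(cos, 10));  // broad, dull highlight
        System.out.println(specular(cos, 100)); // tight, shiny spot
        // Off-axis, the higher exponent always gives the smaller factor:
        System.out.println(specular(cos, 100) < specular(cos, 10)); // true
    }
}
```

So a low matShine reads as a matte sheen spread over the surface, and a high one as a small polished glint, which is exactly why it’s worth promoting to a per-material uniform.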

gl_FragColor = ambientTerm + diffuseTerm + specularTerm;

At last! The special variable gl_FragColor is used to set the final colour of the fragment.

Well, that’s “all” there is to it! As you can see there’s a fair bit of maths, but by breaking it down into separate steps it need not be too complex. The big advantage of working at such a low level is that, although more involved, it’s very flexible: you basically have complete control over the end effect you want to achieve.

I’m far from an expert at this; there may well be “better” methods, probably more efficient ones too. However, it works, gives a nice effect, and it’s easy to follow what it’s doing.

You can see the end effect here… http://www.youtube.com/watch?v=W9lfjaaEO-Q

I’ve made a simplified example here; just type ant to compile and run!

Enjoy!
