Implementing a 3d GUI with raylib

While experimenting with the rather excellent Nuklear GUI, I remembered my experiments with “painting” on a 3d model using a 3d camera, so I decided to take things further….

First of all, the model must be multi-textured; the best path for this is raylib’s GLTF loader. As the “screen” texture is replaced at runtime with a raylib render texture, the model ships with a stand-in 2×2 texture which, for convenience, is upside down. The screen is also the first material, which ensures that the screen part of the mesh is the first sub-mesh in the model; this matters because we don’t want to attempt to position the mouse pointer when we’re actually pointing at the frame of the “screen” model.

Nuklear is able to convert its own “command” list into GL render buffers, which I found significantly less tiresome than drawing raylib primitives. This does mean you need to link against GL, as part of the application accesses GL directly; fortunately raylib is well behaved with GL’s various render states, and the two seem to work very happily together.

As there are two compilation units that include Nuklear, I decided to put the Nuklear settings in a separate header, so the implementation unit and the main code are guaranteed to use exactly the same settings.
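For reference, such a shared settings header might look something like the following. The particular NK_INCLUDE_* defines are a guess based on a typical Nuklear setup, not the post’s actual configuration; vertex buffer output is the one the GL render path relies on:

```c
/* nuklear_settings.h
   Shared Nuklear configuration: include this before nuklear.h in every
   compilation unit, so the implementation unit (the one that also
   defines NK_IMPLEMENTATION) and the main code agree exactly. */
#ifndef NUKLEAR_SETTINGS_H
#define NUKLEAR_SETTINGS_H

#define NK_INCLUDE_FIXED_TYPES
#define NK_INCLUDE_STANDARD_IO
#define NK_INCLUDE_DEFAULT_ALLOCATOR
#define NK_INCLUDE_VERTEX_BUFFER_OUTPUT  /* needed to build GL buffers */
#define NK_INCLUDE_FONT_BAKING
#define NK_INCLUDE_DEFAULT_FONT

#endif /* NUKLEAR_SETTINGS_H */
```

Mismatched settings between the two units would change struct layouts and cause very hard-to-diagnose breakage, which is exactly why a single shared header is worth it.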

Needing both the index of the sub-mesh hit by the mouse ray and the barycentric coordinates of the hit (to work out its UV coordinates), I copied and pasted raylib’s GetCollisionRayModel code and added the extra information I needed.

Because there is a text-editing widget in the GUI, we need to check when it’s active, as the camera relies on the keyboard too; otherwise strange things happen if your name is Wasd! While the editor is active we simply disable camera movement by setting the movement keys to a nonexistent key value. Once the editor loses focus we reinstate the movement keys, and you’re free to move the camera position again.

Actually rendering the GUI itself is simple, mainly because of the straightforward way render-to-texture works in raylib: having built the Nuklear command list with the various GUI functions, some code adapted from a Nuklear example converts the commands into render buffers and renders them.

Please do bear in mind that you are not restricted to using this in 3d; there is no reason you can’t render directly to the screen, or even to a render texture that’s drawn in 2d (naturally you’ll have to be sure to pass the correct mouse coordinates to Nuklear).

Although it sounds like all this should be complicated, it’s actually not as bad as it seems, as you’ll see if you have a look at the code, which has plenty of comments…

Enjoy!
