Getting started with bgfx (the bring your own framework renderer)

Added 2 Apr 2021, 6:21 p.m. edited 18 Jun 2023, 1:12 a.m.

Well, over the years I've evaluated loads of different engines, and often you find things that don't quite make the mark on your first look. My first look at bgfx was on a six-year-old laptop using integrated graphics. Now, integrated and graphics in the same sentence is no longer the horror it used to be, but even so I was rather impressed with both the performance and the feature set the examples showed on this hardware. This caught my attention: surely it's worth spending some time with, as it isolates me from the differences between render back ends on different platforms, while at the same time providing an impressive set of features. For example, I'm just not interested in the ultra low level minutiae of Vulkan, but it's nice to know I can flip a switch and run the same code with OpenGL, or with DirectX on Windows...

Getting some code working that was independent of the example "glue" code, I'll be honest, really did test my perseverance. Worse still, after asking on GitHub discussions (don't report an issue if you're holding it wrong!) I could have kicked myself, because the answer was actually right there in the example glue code. Doh!

The long and short of it is: if you want to create your own window (in this case one provided by GLFW) you need to provide bgfx with the native display as well as the native window handle. The code I found only used glfwGetWin32Window, and on Windows that's all you need.
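
On X11 that boils down to filling in both fields of the platform data - this is lifted straight out of the full listing further down:

    bgfx::PlatformData pd;
    pd.ndt = glfwGetX11Display();             // native display type
    pd.nwh = (void*)glfwGetX11Window(window); // native window handle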

I'm generally not a fan of #ifdefs, but when attempting to hide the vagaries of cross-platform development you often don't really have much of a choice. There are just two places we need conditional compilation: one just before the native header for GLFW is included, and the other where we collect the native information to pass to bgfx.

#if BX_PLATFORM_LINUX || BX_PLATFORM_BSD
    #define GLFW_EXPOSE_NATIVE_X11
    // the Wayland path further down would also need GLFW_EXPOSE_NATIVE_WAYLAND
#else
    #define GLFW_EXPOSE_NATIVE_WIN32
#endif

#include "GLFW/glfw3native.h"

That isn't so bad, and at least it's reasonably readable! Looking at everything needed to initialise bgfx, there is some boilerplate, and in future I'll probably just hide it all away in a utilities unit and never look at it again...

    glfwInit();
    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
    GLFWwindow* window = glfwCreateWindow(WNDW_WIDTH, WNDW_HEIGHT, "Hello, bgfx!", NULL, NULL);

    bgfx::PlatformData pd;
    
    #if BX_PLATFORM_LINUX || BX_PLATFORM_BSD 
    
        #if ENTRY_CONFIG_USE_WAYLAND // examples entry options define
            pd.ndt      = glfwGetWaylandDisplay(); 
        #else 
            pd.ndt      = glfwGetX11Display(); 
            pd.nwh      = (void*)glfwGetX11Window(window);
        #endif 
        
    #elif BX_PLATFORM_OSX

            pd.ndt      = NULL;
            // on macOS you'd also need pd.nwh = glfwGetCocoaWindow(window);
            // (which needs GLFW_EXPOSE_NATIVE_COCOA) - untested here
    
    #elif BX_PLATFORM_WINDOWS 
    
            pd.ndt      = NULL; 
            pd.nwh      = glfwGetWin32Window(window);
    
    #endif // BX_PLATFORM_*

    bgfx::Init bgfxInit;
    bgfxInit.type = bgfx::RendererType::Count; // pick one!
    
    // seems to default to vulkan which is fine by me!
    //bgfxInit.type = bgfx::RendererType::Vulkan;
    //bgfxInit.type = bgfx::RendererType::OpenGL;
    
    bgfxInit.resolution.width = WNDW_WIDTH;
    bgfxInit.resolution.height = WNDW_HEIGHT;
    bgfxInit.resolution.reset = BGFX_RESET_VSYNC;
    
    // seems bgfx is bright enough to not use the active but unused integrated device!
    //bgfxInit.vendorId = BGFX_PCI_ID_NVIDIA; // just in case its selecting unused integrated device?
    
    bgfxInit.platformData = pd;
    bgfx::init(bgfxInit);

Normally when creating a window with GLFW you'd want to give it all sorts of hints about the type of context you wanted, but in this case all we really want from GLFW is a window; the context will be created by bgfx. I did do some experimentation with how bgfx behaved when changing settings: it seems to default to Vulkan unless specifically asked for an OpenGL renderer (which is fine by me!). Probably more importantly, on my system the first GPU is the integrated Intel, which is usually not connected to anything but is actually active. It seems that bgfx is bright enough not to default to this card! I'm guessing it checks which native display handle it's been given for this - I don't know if it behaves the same on Windows, but I'll get to that eventually - lots to learn about bgfx!
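
If you want to confirm what bgfx actually picked, the caps structure will tell you after init. This isn't in the listing below, it's just a couple of diagnostic printfs using bgfx::getCaps():

    const bgfx::Caps* caps = bgfx::getCaps();
    printf("renderer: %s\n", bgfx::getRendererName(caps->rendererType));
    printf("vendor 0x%04x device 0x%04x\n", caps->vendorId, caps->deviceId);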

Once I had bgfx::init working, I could get on and fix the shader loader. At some point I'll add some code to my makefile to compile shaders from source, but for now it's sufficient just to rely on the shader binaries that have already been produced for the examples:

ln -s bgfx/examples/runtime/shaders .

That will do for now! I did add some simple error checking, as an incorrect file name, for example, will cause a seg fault. When bailing out at this point I'm content to let the index and vertex buffers leak, but it's nice that you actually get a warning about this when shutting down bgfx.

Just comment out the

bgfx::destroy(program);

on the normal exit path to see how comprehensive the feedback is. In future I'll probably wrap the program creation up in a function taking the paths of both shaders (I typically only use vertex and fragment shaders), something along the lines of the sketch below.
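
A minimal sketch of what that wrapper might look like, built on the loadShader() function from the full listing below (loadProgram is just my placeholder name, it isn't part of bgfx):

    bgfx::ProgramHandle loadProgram(const char* vsName, const char* fsName)
    {
        bgfx::ShaderHandle vsh = loadShader(vsName);
        bgfx::ShaderHandle fsh = loadShader(fsName);
        if (!bgfx::isValid(vsh) || !bgfx::isValid(fsh))
        {
            // tidy up whichever shader did load before bailing
            if (bgfx::isValid(vsh)) bgfx::destroy(vsh);
            if (bgfx::isValid(fsh)) bgfx::destroy(fsh);
            return BGFX_INVALID_HANDLE; // caller checks with bgfx::isValid()
        }
        // true = destroy the shader handles when the program is destroyed
        return bgfx::createProgram(vsh, fsh, true);
    }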

Just for reference, here's the complete code. I guess it should work on Windows as is, but I've yet to check...

#include <stdio.h>
#include <stdlib.h>     // for malloc/free in loadShader
#include <string.h>
#include <limits.h>

#include "bgfx/bgfx.h"
#include "bgfx/platform.h"
#include "bx/math.h"

#include "GLFW/glfw3.h"

#if BX_PLATFORM_LINUX || BX_PLATFORM_BSD
    #define GLFW_EXPOSE_NATIVE_X11
    // the Wayland path below would also need GLFW_EXPOSE_NATIVE_WAYLAND
#else
    #define GLFW_EXPOSE_NATIVE_WIN32
#endif

#include "GLFW/glfw3native.h"


#define WNDW_WIDTH 960
#define WNDW_HEIGHT 540

struct PosColorVertex
{
    float x;
    float y;
    float z;
    uint32_t abgr;
};

static PosColorVertex cubeVertices[] =
{
    {-1.0f,  1.0f,  1.0f, 0xff888888 },
    { 1.0f,  1.0f,  1.0f, 0xff8888ff },
    {-1.0f, -1.0f,  1.0f, 0xff88ff88 },
    { 1.0f, -1.0f,  1.0f, 0xff88ffff },
    {-1.0f,  1.0f, -1.0f, 0xffff8888 },
    { 1.0f,  1.0f, -1.0f, 0xffff88ff },
    {-1.0f, -1.0f, -1.0f, 0xffffff88 },
    { 1.0f, -1.0f, -1.0f, 0xffffffff },
};

static const uint16_t cubeTriList[] =
{
    0, 1, 2,
    1, 3, 2,
    4, 6, 5,
    5, 6, 7,
    0, 2, 4,
    4, 2, 6,
    1, 5, 3,
    5, 7, 3,
    0, 4, 1,
    4, 5, 1,
    2, 3, 6,
    6, 3, 7,
};

bgfx::ShaderHandle loadShader(const char *FILENAME)
{
    const char* shaderPath = "???";

    //dx11/  dx9/   essl/  glsl/  metal/ pssl/  spirv/
    bgfx::ShaderHandle invalid = BGFX_INVALID_HANDLE;

    switch(bgfx::getRendererType()) {
        case bgfx::RendererType::Noop:
        case bgfx::RendererType::Direct3D9:     shaderPath = "shaders/dx9/";   break;
        case bgfx::RendererType::Direct3D11:
        case bgfx::RendererType::Direct3D12:    shaderPath = "shaders/dx11/";  break;
        case bgfx::RendererType::Gnm:           shaderPath = "shaders/pssl/";  break;
        case bgfx::RendererType::Metal:         shaderPath = "shaders/metal/"; break;
        case bgfx::RendererType::OpenGL:        shaderPath = "shaders/glsl/";  break;
        case bgfx::RendererType::OpenGLES:      shaderPath = "shaders/essl/";  break;
        case bgfx::RendererType::Vulkan:        shaderPath = "shaders/spirv/"; break;
        case bgfx::RendererType::Nvn:
        case bgfx::RendererType::WebGPU:
        case bgfx::RendererType::Count:         return invalid; // count included to keep compiler warnings happy
    }

    size_t shaderLen = strlen(shaderPath);
    size_t fileLen = strlen(FILENAME);
    char *filePath = (char *)malloc(shaderLen + fileLen + 1);
    memcpy(filePath, shaderPath, shaderLen);
    memcpy(&filePath[shaderLen], FILENAME, fileLen);
    filePath[shaderLen + fileLen] = 0;  // properly null terminate

    FILE *file = fopen(filePath, "rb");
    free(filePath);                     // done with the path whether the open worked or not
    
    if (!file) {
        return invalid;
    }
    
    fseek(file, 0, SEEK_END);
    long fileSize = ftell(file);
    fseek(file, 0, SEEK_SET);

    const bgfx::Memory *mem = bgfx::alloc(fileSize + 1);
    fread(mem->data, 1, fileSize, file);
    mem->data[mem->size - 1] = '\0';
    fclose(file);

    return bgfx::createShader(mem);
}

int main(void)
{
    glfwInit();
    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
    GLFWwindow* window = glfwCreateWindow(WNDW_WIDTH, WNDW_HEIGHT, "Hello, bgfx!", NULL, NULL);

    bgfx::PlatformData pd;
    
    #if BX_PLATFORM_LINUX || BX_PLATFORM_BSD 
    
        #if ENTRY_CONFIG_USE_WAYLAND // examples entry options define
            pd.ndt      = glfwGetWaylandDisplay(); 
        #else 
            pd.ndt      = glfwGetX11Display(); 
            pd.nwh = (void*)glfwGetX11Window(window);
        #endif 
        
    #elif BX_PLATFORM_OSX

            pd.ndt      = NULL;
            // on macOS you'd also need pd.nwh = glfwGetCocoaWindow(window);
            // (which needs GLFW_EXPOSE_NATIVE_COCOA) - untested here
    
    #elif BX_PLATFORM_WINDOWS 
    
            pd.ndt      = NULL; 
            pd.nwh = glfwGetWin32Window(window);
    
    #endif // BX_PLATFORM_*

    bgfx::Init bgfxInit;
    bgfxInit.type = bgfx::RendererType::Count; // pick one!
    
    // seems to default to vulkan which is fine by me!
    //bgfxInit.type = bgfx::RendererType::Vulkan;
    //bgfxInit.type = bgfx::RendererType::OpenGL;
    
    bgfxInit.resolution.width = WNDW_WIDTH;
    bgfxInit.resolution.height = WNDW_HEIGHT;
    bgfxInit.resolution.reset = BGFX_RESET_VSYNC;
    
    // seems bgfx is bright enough to not use the active but unused integrated device!
    //bgfxInit.vendorId = BGFX_PCI_ID_NVIDIA; // just in case its selecting unused integrated device?
    
    bgfxInit.platformData = pd;
    bgfx::init(bgfxInit);

    bgfx::setViewClear(0, BGFX_CLEAR_COLOR | BGFX_CLEAR_DEPTH, 0x443355FF, 1.0f, 0);
    bgfx::setViewRect(0, 0, 0, WNDW_WIDTH, WNDW_HEIGHT);

    bgfx::VertexLayout pcvDecl;
    
    pcvDecl.begin()
        .add(bgfx::Attrib::Position, 3, bgfx::AttribType::Float)
        .add(bgfx::Attrib::Color0, 4, bgfx::AttribType::Uint8, true)
    .end();
    
    bgfx::VertexBufferHandle vbh = bgfx::createVertexBuffer(bgfx::makeRef(cubeVertices, sizeof(cubeVertices)), pcvDecl);
    bgfx::IndexBufferHandle ibh = bgfx::createIndexBuffer(bgfx::makeRef(cubeTriList, sizeof(cubeTriList)));

    bgfx::ShaderHandle vsh = loadShader("vs_cubes.bin");
    printf("shader handle %i created for vs_cubes.bin\n", vsh.idx);
    if (vsh.idx == USHRT_MAX)
    {
        printf("*** shader model not supported or file not found ****\n");
        bgfx::shutdown();
        return -1;
    }
    
    bgfx::ShaderHandle fsh = loadShader("fs_cubes.bin");
    printf("shader handle %i created for fs_cubes.bin \n", fsh.idx);
    if (fsh.idx == USHRT_MAX)
    {
        printf("*** shader model not supported or file not found ****\n");
        bgfx::shutdown();
        return -1;
    }
    
    bgfx::ProgramHandle program = bgfx::createProgram(vsh, fsh, true);
    printf("program handle %i created\n", program.idx);

    unsigned int counter = 0;
    while(!glfwWindowShouldClose(window)) 
    {
        const bx::Vec3 at = {0.0f, 0.0f,  0.0f};
        const bx::Vec3 eye = {0.0f, 0.0f, -5.0f};
        float view[16];
        bx::mtxLookAt(view, eye, at);
        float proj[16];
        bx::mtxProj(proj, 60.0f, float(WNDW_WIDTH) / float(WNDW_HEIGHT), 0.1f, 100.0f, bgfx::getCaps()->homogeneousDepth);
        bgfx::setViewTransform(0, view, proj);

        float mtx[16];
        bx::mtxRotateXY(mtx, counter * 0.01f, counter * 0.0081f);
        bgfx::setTransform(mtx);
        
        bgfx::setVertexBuffer(0, vbh);
        bgfx::setIndexBuffer(ibh);
    
        bgfx::submit(0, program);
        bgfx::frame();
        counter++;
        
        glfwPollEvents();
        if ( glfwGetKey(window, GLFW_KEY_ESCAPE) == GLFW_PRESS)
        {
            glfwSetWindowShouldClose(window, true);
        }
    }

    bgfx::destroy(program);
    
    bgfx::destroy(ibh);
    bgfx::destroy(vbh);
    bgfx::shutdown();       
    
    glfwDestroyWindow(window);
    glfwTerminate();

    return 0;

}

Enjoy!