


Renderer Mk. II Retrospective, Chapter 0: Prelude

I’ve been working on a Vulkan renderer for a while, on and off. It’s now at the point where I’ve explored enough of the API to be comfortable with Vulkan, and I think the next step would be to start a new renderer: I’ve come to understand why you’d want some of the abstractions I’ve seen elsewhere, which I purposefully left out of this one. Given that the renderer is done, in some sense of the word, I thought I’d write a retrospective on it, kinda like a post-mortem code review for the whole thing. Ideally it’ll help me organize my thoughts on what I learned, what I should do differently in the future, and, generally, what it is that I have made.

I’m inspired to write the series in a literate programming kind of way, like the PBR book. Of course, this one isn’t intended as an example to follow along with; rather, I want to paste all the code into the blog posts to make sure I cover all of it and explain myself. If I don’t have to comment on all of the code, I won’t end up doing a proper code review. We’ll see how much I stick to that.

Renderer Mk. II history

This is my second proper rendering project, by which I mean something I’ve come back to work on after the initial phase of excitement, and actually used in a game I’ve made. My first renderer was fae, a 2D sprite renderer targeting OpenGL 2.1 / 3.3. There’s actually just one backend, but systems with 3.3+ get instanced rendering (one instance per sprite) instead of two triangles per sprite with every vertex written out into the vertex buffer. The DPI scaling factors post on this blog was about this one! I used fae in one roguelike, and later decided that I’d rather just use SDL’s built-in renderer for 2D sprite games.

Now, Mark II is a bigger project with loftier goals. I started playing with OpenXR in 2021, but that got put on hold (to this day) while I wrote a 3D renderer using Vulkan. I’ll get back to the OpenXR stuff one day. The core idea when starting out was to write a very minimal renderer, focusing on making the Vulkan calls required to render the scene, not on designing abstractions to make the renderer cool to use, or anything else like that. That’s not to say there’s no abstraction: there’s a relatively self-contained memory allocation module, and an uploader that manages command buffers and fences, allowing for straightforward uploads to VRAM and detecting when they’re done. In its current state, I’d say there’s not much unnecessary abstraction. The renderer is split into modules, and there’s some data wrangling that takes CPU time not strictly required by Vulkan, but the main design goal is still minimalism.

Due to said minimalism, there’s no render graph, no support for materials with their own shaders, no interesting content-specific optimization, and so on. That said, I did use this renderer in a 3D archery boomer shooter I made for a uni course, though I had to fork it and quickly add lights, which are still not implemented in the main branch. On one hand, the renderer is minimal :), but on the other hand, it’s minimal :(. All that said, when it comes to learning Vulkan, the project has been a great success. If nothing else, I’ve gained confidence using the API, and I feel like I’d be relatively comfortable working on an existing Vulkan renderer, or writing a new one.

Renderer Mk. II overview

What is there, then? It’s a forward renderer with support for alpha-to-coverage and skinning. It technically has a pass for rendering transparent materials, but I never got around to sorting, so avoiding rendering issues takes some very careful scene composition. Everything is rendered using indirect instanced draw commands into an HDR (R11G11B10) framebuffer and then tonemapped, and MSAA is supported (remember the VR origins?).
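For reference, the HDR target boils down to something like this. This is just a sketch assuming the ash crate, with the 4x sample count and dimensions as illustrative choices; Vulkan spells the R11G11B10 float format B10G11R11_UFLOAT_PACK32.

```rust
use ash::vk;

fn main() {
    let (width, height) = (1920, 1080);
    // The multisampled HDR color target, roughly: an optimally-tiled 2D image
    // in the packed 11-11-10 float format, rendered to as a color attachment.
    let hdr_target = vk::ImageCreateInfo {
        image_type: vk::ImageType::TYPE_2D,
        format: vk::Format::B10G11R11_UFLOAT_PACK32,
        extent: vk::Extent3D { width, height, depth: 1 },
        mip_levels: 1,
        array_layers: 1,
        samples: vk::SampleCountFlags::TYPE_4,
        tiling: vk::ImageTiling::OPTIMAL,
        usage: vk::ImageUsageFlags::COLOR_ATTACHMENT,
        ..Default::default()
    };
    // A real renderer would pass this to vkCreateImage, and resolve the MSAA
    // image before the tonemapping pass reads it.
    let _ = hdr_target;
}
```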

Memory is managed using simple arenas (creation-time-sized bump allocators), with separate manually created arenas based on content (i.e. textures or buffers) and lifetime (e.g. cleared never, once per level, once per frame, etc.). Mesh data, mostly floats on disk, is packed on load: vertex positions become 16-bit floats, tangent and normal attributes are packed into 10 bits per channel, and joint weights are one byte each (4 bytes per vertex). The glTF mesh loading is quite quick too, reading the values from a memory-mapped file, packing them, and writing them into a memory-mapped Vulkan buffer. If the user has ReBAR, that same buffer is used for rendering; otherwise it’s a staging buffer.
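The packing itself is the usual quantization dance. Here’s a sketch of the 10-bits-per-channel case, matching Vulkan’s signed-normalized 10-10-10-2 layout (A2B10G10R10_SNORM_PACK32); this is a hypothetical helper, not the renderer’s actual code.

```rust
// Quantize a unit vector (plus a 2-bit w, e.g. a tangent's handedness sign)
// into a 10-10-10-2 signed-normalized u32.
fn pack_snorm_10_10_10_2(v: [f32; 3], w: f32) -> u32 {
    // Quantize x in [-1, 1] to a two's-complement field of `bits` bits.
    fn snorm(x: f32, bits: u32) -> u32 {
        let max = ((1i32 << (bits - 1)) - 1) as f32; // 511 for the 10-bit fields
        let q = (x.clamp(-1.0, 1.0) * max).round() as i32;
        (q as u32) & ((1u32 << bits) - 1)
    }
    snorm(v[0], 10) | (snorm(v[1], 10) << 10) | (snorm(v[2], 10) << 20) | (snorm(w, 2) << 30)
}

fn main() {
    // A +Z normal packs to 511 in the z field, i.e. 511 << 20.
    assert_eq!(pack_snorm_10_10_10_2([0.0, 0.0, 1.0], 0.0), 511 << 20);
}
```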

The descriptors are a bit more interesting/convoluted. Each draw indexes into an array of material indices based on BaseInstance, and each material index points into a SoA-layout struct with the material parameters. Those parameters consist of constants and indices into a global texture array, where the various PBR textures are sampled from. Notably, I haven’t gotten around to implementing any real shading in the main branch, so most of the PBR textures are currently unused. The SoA material arrays and the texture array (and other miscellaneous descriptors) are managed by a centralized descriptors module, and they’re all just simple uniform buffers, not storage buffers. I’ve tried to keep the GPU requirements modest, though I do use dynamic rendering, so reasonably new drivers are needed to run the renderer.
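To make the SoA idea concrete, here’s a sketch with made-up field names and counts (and with std140 padding rules elided, which in a real uniform buffer pad scalar array elements out to 16 bytes): each parameter lives in its own array, and a draw’s material index picks the same slot in every one of them.

```rust
// Illustrative only: the names, the material count, and the plain u32
// texture indices are assumptions, not the renderer's actual layout.
const MAX_MATERIALS: usize = 256;

#[repr(C)]
struct MaterialArrays {
    // Constant factors, one per material.
    base_color_factor: [[f32; 4]; MAX_MATERIALS],
    // Indices into the global texture array the shaders sample from.
    base_color_texture: [u32; MAX_MATERIALS],
    normal_texture: [u32; MAX_MATERIALS],
    metallic_roughness_texture: [u32; MAX_MATERIALS],
}

fn main() {
    // A draw with material index `i` reads base_color_factor[i],
    // base_color_texture[i], and so on, each from its own array.
    println!("{} bytes per material table", std::mem::size_of::<MaterialArrays>());
}
```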

Pipelines don’t have as much management, which I’m quite happy about, as there are only eight pipelines, and they’re all specified in constant structs! They consist of the Dear ImGui pass, the tonemapping pass, and the non-skinned and skinned variants of the opaque, alpha-to-coverage, and blended passes. Rust really shines here, with inferred types and const evaluation carrying the whole thing. The pipelines are created at startup based on these parameters.
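Something in the spirit of those constant structs, with invented fields standing in for the real pipeline parameters:

```rust
// Hypothetical pipeline description; the actual structs hold whatever
// vkCreateGraphicsPipelines needs, these fields are stand-ins.
#[derive(Clone, Copy, Debug)]
struct PipelineParams {
    skinned: bool,
    alpha_to_coverage: bool,
    blended: bool,
}

const OPAQUE: PipelineParams =
    PipelineParams { skinned: false, alpha_to_coverage: false, blended: false };
const OPAQUE_SKINNED: PipelineParams = PipelineParams { skinned: true, ..OPAQUE };
const ALPHA_TO_COVERAGE: PipelineParams =
    PipelineParams { alpha_to_coverage: true, ..OPAQUE };
const ALPHA_TO_COVERAGE_SKINNED: PipelineParams =
    PipelineParams { skinned: true, ..ALPHA_TO_COVERAGE };
const BLENDED: PipelineParams = PipelineParams { blended: true, ..OPAQUE };
const BLENDED_SKINNED: PipelineParams = PipelineParams { skinned: true, ..BLENDED };

// At startup, each entry becomes one graphics pipeline creation call.
const SCENE_PIPELINES: [PipelineParams; 6] = [
    OPAQUE, OPAQUE_SKINNED,
    ALPHA_TO_COVERAGE, ALPHA_TO_COVERAGE_SKINNED,
    BLENDED, BLENDED_SKINNED,
];

fn main() {
    println!("{SCENE_PIPELINES:#?}");
}
```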

The scene is constructed by queueing up “draws” (material + mesh + transform), which get sorted by mesh and material so that repeat renders can be combined into instanced draws. Those are then written out as indirect draw structs and rendered using the six scene rendering pipelines, after which there’s another render pass for tonemapping “and other post-processing” (which I never added). There’s no culling and no LOD system, unfortunately.
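The batching step looks roughly like this, sketched with hypothetical types (the real draw struct carries more): sort so that equal (mesh, material) pairs are adjacent, then collapse each run into one instanced draw whose instance range indexes the sorted transform list.

```rust
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct DrawKey {
    mesh: u32,
    material: u32,
}

struct QueuedDraw {
    key: DrawKey,
    transform: [[f32; 4]; 4], // uploaded per instance, in sorted order
}

struct InstancedDraw {
    key: DrawKey,
    first_instance: u32,
    instance_count: u32,
}

fn batch_draws(draws: &mut [QueuedDraw]) -> Vec<InstancedDraw> {
    // Sorting makes repeat (mesh, material) draws adjacent...
    draws.sort_by_key(|d| d.key);
    let mut batches: Vec<InstancedDraw> = Vec::new();
    for (i, draw) in draws.iter().enumerate() {
        // ...so a repeat just grows the current batch's instance count.
        if let Some(batch) = batches.last_mut() {
            if batch.key == draw.key {
                batch.instance_count += 1;
                continue;
            }
        }
        batches.push(InstancedDraw {
            key: draw.key,
            first_instance: i as u32,
            instance_count: 1,
        });
    }
    // Each batch then maps 1:1 to an indirect draw struct
    // (vk::DrawIndexedIndirectCommand), written into the indirect buffer.
    batches
}

fn main() {
    let key = DrawKey { mesh: 0, material: 0 };
    let mut draws = vec![
        QueuedDraw { key, transform: [[0.0; 4]; 4] },
        QueuedDraw { key, transform: [[0.0; 4]; 4] },
    ];
    let batches = batch_draws(&mut draws);
    assert_eq!(batches.len(), 1);
    assert_eq!(batches[0].instance_count, 2);
}
```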

I think that covers about all of the renderer, as a little summary, but I’ll delve deeper into all of that in the following chapters.

What’s next?

More posts on this blog, with lots of rendering code! And hopefully, explanations of what said code does, what problems it has, how I may want to do it better in the future, and so on. I have no clue how much work it’ll be to review all of that code, but I do like writing about things I’ve made and what I think of them, so it’ll probably get done. If not, I hope this has been a useful little article about the renderer, which should serve as a reference for the future, when the renderer is long buried, and I want to know what it was like.