Generating multiple sample positions per pixel

Anti-aliasing in ray tracing requires casting multiple rays per pixel, in order to sample the whole solid angle subtended by the imaginary surface of each pixel, if we consider it to be a small rectangular part of the view plane (see diagram).

framebuffer pixel area

It’s obvious that in order to generate multiple primary rays for each pixel, we need an algorithm that, given the sample number, calculates a sample position within the area of a pixel. Since it’s trivial to map points in the unit square onto the actual area of an arbitrary pixel, it makes sense to write this sample position generation function so that it calculates points in the unit square.
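For instance, if the generated sample positions are offsets in [-0.5, 0.5] on each axis (a unit square centered at the origin), mapping a sample onto a specific pixel just means adding the offset to the pixel center. A minimal sketch of such a mapping follows; the function name and the image-space convention are my own, not part of the original code:

/* map a unit-square sample offset onto image-space coordinates for pixel (px, py) */
void map_sample_to_pixel(int px, int py, const float *spos, float *img_pos)
{
    img_pos[0] = (float)px + 0.5f + spos[0];
    img_pos[1] = (float)py + 0.5f + spos[1];
}

From there, the ray tracer's usual pixel-to-view-plane transformation produces the primary ray for that sample.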

The easiest way to write such a function would be to generate random points in the unit square, like this:

/* return a random point in the unit square centered at the origin: [-0.5, 0.5] on each axis */
void get_sample_pos(float *pos)
{
    pos[0] = (float)rand() / (float)RAND_MAX - 0.5;
    pos[1] = (float)rand() / (float)RAND_MAX - 0.5;
}

The problem with this approach is that even if our random number generator really has a perfectly uniform distribution, any finite number of generated sample positions, especially if that number is in the low tens, will probably not cover the area of the pixel in anything resembling a uniform sampling. Clusters are very likely to occur, leaving large areas of the pixel unexplored by our rays.
As the number of samples gets larger this problem is somewhat mitigated, but unless we’re writing a path tracer, we’re usually dealing with anything between 4 and 20 rays per pixel, no more.

The following animation shows random sample positions generated by the code above. Even at about 40 samples, the left part of the pixel is inadequately sampled.
random sample generation

Another approach is to avoid randomness. The following function gets the sample number as input, and calculates its position by recursively subdividing the pixel area, taking care to spread the samples of each recursion level around as much as possible instead of focusing on one quadrant at a time.

static void get_sample_pos_rec(int sidx, float xsz, float ysz, float *pos);

void get_sample_pos(int sidx, float *pos)
{
    pos[0] = pos[1] = 0.0f;
    get_sample_pos_rec(sidx, 1.0f, 1.0f, pos);
}

static void get_sample_pos_rec(int sidx, float xsz, float ysz, float *pos)
{
    int quadrant;
    static const float subpt[4][2] = {
        {-0.25, -0.25}, {0.25, -0.25}, {-0.25, 0.25}, {0.25, 0.25}
    };

    /* base case: sample 0 is always in the middle, do nothing */
    if(!sidx) return;

    /* determine which quadrant to recurse into */
    quadrant = ((sidx - 1) % 4);
    pos[0] += subpt[quadrant][0] * xsz;
    pos[1] += subpt[quadrant][1] * ysz;

    get_sample_pos_rec((sidx - 1) / 4, xsz / 2, ysz / 2, pos);
}

And here’s the animation showing that code in action (colors denote the recursion depth):
regular subdivision sampling

This sampling is perfectly uniform, but it’s still not ideal. The problem is that whenever we’re sampling in a regular grid, no matter how fine that grid is, we will introduce aliasing. By breaking up each pixel into multiple subpixels like this we effectively increase the cutoff frequency after which aliasing occurs, but we do not eliminate it.

The best solution is to combine both techniques. We need randomness to convert aliasing into noise, which is much less perceptible by human brains trained by evolution to recognize patterns. But we also need uniform sampling to properly explore the whole area of each pixel.

So, we’ll employ a technique known as jittering: first we uniformly subdivide the pixel into subpixels, and then we randomly perturb the sample position of each subpixel inside the area of that subpixel. The following code implements this algorithm:

static void get_sample_pos_rec(int sidx, float xsz, float ysz, float *pos);

void get_sample_pos(int sidx, float *pos)
{
    pos[0] = pos[1] = 0.0f;
    get_sample_pos_rec(sidx, 1.0f, 1.0f, pos);
}

static void get_sample_pos_rec(int sidx, float xsz, float ysz, float *pos)
{
    int quadrant;
    static const float subpt[4][2] = {
        {-0.25, -0.25}, {0.25, -0.25}, {-0.25, 0.25}, {0.25, 0.25}
    };

    if(!sidx) {
        /* we're done, just add appropriate jitter */
        pos[0] += (float)rand() / (float)RAND_MAX * xsz - xsz / 2.0;
        pos[1] += (float)rand() / (float)RAND_MAX * ysz - ysz / 2.0;
        return;
    }

    /* determine which quadrant to recurse into */
    quadrant = ((sidx - 1) % 4);
    pos[0] += subpt[quadrant][0] * xsz;
    pos[1] += subpt[quadrant][1] * ysz;

    get_sample_pos_rec((sidx - 1) / 4, xsz / 2, ysz / 2, pos);
}

And here’s the animation showing the jittered sample position generator in action:
Jittered sample generator
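
Putting the pieces together, a render loop could use this sample generator roughly as follows. This is only a sketch, not code from the article: NUM_SAMPLES is an arbitrary choice, and trace_ray is a placeholder assumed to build and trace the primary ray for a given image-space position.

#define NUM_SAMPLES 16

/* accumulate NUM_SAMPLES jittered samples for pixel (x, y) and average them */
void render_pixel(int x, int y, float *color)
{
    int i;
    float spos[2], col[3];

    color[0] = color[1] = color[2] = 0.0f;

    for(i=0; i<NUM_SAMPLES; i++) {
        get_sample_pos(i, spos);
        /* trace a primary ray through the jittered sample position of this pixel
         * (trace_ray is a hypothetical stand-in for the actual ray tracing code) */
        trace_ray((float)x + 0.5f + spos[0], (float)y + 0.5f + spos[1], col);
        color[0] += col[0];
        color[1] += col[1];
        color[2] += col[2];
    }

    color[0] /= NUM_SAMPLES;
    color[1] /= NUM_SAMPLES;
    color[2] /= NUM_SAMPLES;
}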

Color space linearity and gamma correction

OK, so it’s a well-known fact among graphics practitioners that pretty much every game does rendering incorrectly. Since performance, not correctness, is always the prime consideration in game graphics, we usually tend to turn a blind eye to such issues. However, with today’s ultra-high-performance programmable shading processors and hardware LUT support for gamma correction, the excuses for why we keep doing it the wrong way are getting progressively lamer. :)

The gist of the problem with traditional real-time rendering is that we’re trying to do linear operations in non-linear color spaces.

Let’s take lighting calculations for example: when light hits a plane at a 60-degree angle of incidence from the plane’s normal vector, Lambert’s cosine law states that the intensity of the diffusely reflected light off the plane (radiant exitance) is exactly half the intensity of the incident light (irradiance) from that light source. However, the monitor, responsible for taking all those pixel values and sending them rushing into our retinas, does not play along with our assumptions. The half-intensity grey light we expect from that surface becomes much darker, due to the non-linear response curve of the electron gun.

Simply put, when half the voltage of the full input range is applied to the electron gun, far fewer than half of the possible electrons hit the phosphor on the glass, making it emit less than half-intensity light towards the user. That’s not a defect of CRT monitors; all kinds of monitors, TV screens, projectors, and other display devices work the same way.
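
To put a number on it: with a typical display gamma of about 2.2 (more on that below), the half-intensity value we computed ends up displayed at roughly 0.5^2.2 ≈ 0.22 of the maximum intensity, far less than the half we intended.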

So how do we correct that? We need to use the inverse of the monitor response curve to correct our output colors before they are fed to the monitor, so that the linear color space in which we do our calculations doesn’t get bent out of shape before it reaches our eyes. Since the monitor response curve is approximately a power function of the form x^γ, with γ usually around 2.2, it mostly suffices to raise each color value to the power 1/γ before writing it to the framebuffer. Or in a pixel shader:

gl_FragColor.rgb = pow(color.rgb, vec3(1.0 / 2.2));

That’s not entirely correct though, because any blending happens after the pixel shader writes the color value, which means it would operate after this gamma correction, in a non-linear color space. It would be fine if this were a final post-processing shader that writes the whole framebuffer without any blending, but there is a better and more efficient way. If we just tell OpenGL that we want to output a gamma-corrected framebuffer, or more precisely a framebuffer in the sRGB color space, it can do this conversion using hardware lookup tables, after any blending takes place, which is both efficient and correct. This functionality is exposed by the ARB_framebuffer_sRGB extension and should be available on all modern graphics cards. To use it we need to request an sRGB-capable framebuffer during context creation (GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB / WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB), and enable it with glEnable(GL_FRAMEBUFFER_SRGB).
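
Under GLX, the relevant bits might look something like this. This is just a sketch, assuming the usual context creation code around it and omitting error handling and the extension check:

#include <GL/glx.h>

/* choose an sRGB-capable fbconfig (GLX_ARB_framebuffer_sRGB) */
GLXFBConfig choose_srgb_fbconfig(Display *dpy)
{
    int count;
    GLXFBConfig *cfglist, cfg = 0;
    static const int attr[] = {
        GLX_RENDER_TYPE, GLX_RGBA_BIT,
        GLX_DOUBLEBUFFER, True,
        GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
        GLX_FRAMEBUFFER_SRGB_CAPABLE_ARB, True,
        None
    };

    if((cfglist = glXChooseFBConfig(dpy, DefaultScreen(dpy), attr, &count)) && count) {
        cfg = cfglist[0];
        XFree(cfglist);
    }
    return cfg;
}

With a context created from such a config and made current, the glEnable call mentioned above turns on the linear-to-sRGB conversion for all subsequent framebuffer writes.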

Now if we do just that, we’re probably going to see the following ghastly result:
bleach

The problem is that our textures are already gamma-corrected by a similar process, which makes them completely washed out when we apply gamma correction a second time at the end. The solution is to make the color values looked up from textures linear before using them, by raising them to the power of 2.2. This can either be done in the shader, simply with pow(texture2D(tex, tcoord).rgb, vec3(2.2)), or by using the GL_SRGB_EXT internal texture format instead of GL_RGB (EXT_texture_sRGB extension), to let OpenGL know that our textures aren’t linear and need conversion on lookup.
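
For the second option, the texture upload might look something like this (a sketch; the function and variable names are mine, and GL_SRGB_EXT is the token from EXT_texture_sRGB, promoted to GL_SRGB in core OpenGL 2.1):

/* upload a texture with an sRGB internal format, so OpenGL linearizes it on lookup */
unsigned int create_srgb_texture(int width, int height, const unsigned char *pixels)
{
    unsigned int tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_EXT, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    return tex;
}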

The result is correct rendering output, with all operations in a linear color space:
gamma-corrected rendering output

A final pitfall: if we use intermediate render targets with 8 bits per color channel, we will observe noticeable banding in the darker areas. That’s because our 8-bit-per-channel textures are now raised to a power, and the result is placed back into an 8-bit-per-channel render target, which wastes color resolution and loses detail that cannot be recovered later when we gamma-correct the values again. The bottom line is that we need higher-precision intermediate render targets if we are going to work in a truly linear color space. The following screenshots show a dark area of the game when using a regular GL_RGBA intermediate render target (top), and when using a half-float GL_RGBA16F render target (bottom):

linear textures when using an 8bit/channel intermediate render target

linear textures when using a half-float intermediate render target

Color artifacts are clearly visible in the first image, around the dark unlit area.
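
For reference, allocating such a half-float color buffer might look like this (a sketch with placeholder names; attaching it to an FBO with glFramebufferTexture2D is assumed to happen elsewhere):

/* create a GL_RGBA16F texture suitable as a high-precision color attachment */
unsigned int create_halffloat_rendertarget(int width, int height)
{
    unsigned int tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* half-float internal format, no initial pixel data */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_HALF_FLOAT, 0);
    return tex;
}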

Simple color-grading for games

Color grading is an easily overlooked but extremely powerful way to add character to a game. Subtle color changes make day-night cycling much more atmospheric. Different areas can have their own signature “feel” based on how saturated the colors are. Dark games can shift the unlit areas of an environment towards a cool bluish tint that remains visible but still feels like darkness. The possibilities are endless.

I hadn’t given much thought to color grading, until a friend (Samurai) told me about an extremely simple and powerful way to add it to a game. So simple, in fact, that I had to try it as soon as possible!

The idea has two parts. First the obvious bit: use a 3D texture as a look-up table to map the RGB colors produced by the renderer to a different set of RGB colors, which is the color-graded output. That translates to pretty much the following GLSL post-processing fragment shader:

uniform sampler2D framebuf;
uniform sampler3D gradelut;

void main()
{
    vec3 col = texture2D(framebuf, gl_TexCoord[0].st).xyz;
    gl_FragColor = vec4(texture3D(gradelut, col).xyz, 1.0);
}

And now the brilliant bit: write a bit of code to save a screenshot of the game with the “identity” 3D lookup table serialized into its last few scanlines. Give that screenshot to an artist, and let him work his magic, color-grading it in photoshop or whatever… Did you get that? In the process of color-grading that screenshot, the artist automatically produces, in those last few scanlines, the look-up table that reproduces the same color grading in-game! Feed that palette back into the game and it’s automatically color-graded!
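
The “identity” look-up table itself is trivial to generate. A minimal sketch follows; the 16x16x16 size and the function are my own choices for illustration, not necessarily what the game uses:

#define LUT_SIZE 16

/* fill rgb (LUT_SIZE^3 * 3 bytes) with an identity color LUT, ready to be
 * written into the last scanlines of the screenshot and later uploaded
 * back as a 3D texture */
void gen_identity_lut(unsigned char *rgb)
{
    int r, g, b;
    for(b=0; b<LUT_SIZE; b++) {
        for(g=0; g<LUT_SIZE; g++) {
            for(r=0; r<LUT_SIZE; r++) {
                *rgb++ = (unsigned char)(255 * r / (LUT_SIZE - 1));
                *rgb++ = (unsigned char)(255 * g / (LUT_SIZE - 1));
                *rgb++ = (unsigned char)(255 * b / (LUT_SIZE - 1));
            }
        }
    }
}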

So I used the dungeon crawler I’ve been writing recently to try out this algorithm. The output of my dungeon crawler as it stands is not the best material to try color grading on, as it’s already very dark and highly tinted, leaving too little room for tweaking without banding everything into oblivion. Nevertheless, I wrote the code, gave the screenshot to my friend Rawnoise to play with in photoshop for a couple of minutes, fed it back into the game, and the result can be seen below.

Lower-left part of the screenshot produced by the game, with the identity palette attached:
grade-palette

Screenshot of the game before and after color grading:

Original dungeon crawler output
Dungeon crawler output after color grading test

Of course you can always opt for a completely bizarre effect just as easily. This is the result of me moving color curves in gimp randomly, and then feeding the resulting palette into the game:
bizarr-o-dungeon

Obviously this algorithm opens up all sorts of interesting possibilities, such as having two palettes and interpolating between them during sunset, or when a player crosses the boundary between two areas, etc. Simple, yet effective.
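
For instance, interpolating between two grading palettes could be a small variation of the shader above (a sketch; the second LUT and the blend uniform are hypothetical additions):

uniform sampler2D framebuf;
uniform sampler3D gradelut_a, gradelut_b;
uniform float blend;    /* 0.0 = palette A, 1.0 = palette B */

void main()
{
    vec3 col = texture2D(framebuf, gl_TexCoord[0].st).xyz;
    vec3 a = texture3D(gradelut_a, col).xyz;
    vec3 b = texture3D(gradelut_b, col).xyz;
    gl_FragColor = vec4(mix(a, b, blend), 1.0);
}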

Dungeon crawler game prototype

dungeon crawler prototype video
I started writing a first-person dungeon crawler game recently. Nothing ground-breaking, but I intend to fill the void of the simplistic gameplay with over-the-top eye candy. My main inspiration comes from Eye of the Beholder-esque dungeon crawlers with awesome graphics (for their respective times), such as Stonekeep and Legend of Grimrock, without necessarily intending to stay true to the retro 90-degree grid-based movement.

Before starting, I wanted to try out a couple of things to see how they feel in practice, so I decided to make a prototype. The main thing I wanted to try was a friend’s suggestion for keeping level creation as simple as possible: use a regular grid, tile-based system with a couple of enhancements. Namely:

  • Allowing multiple detail tiles on a single grid cell, which makes it easy to lay down the whole level’s corridors and then add details such as torches on the walls, furniture, or whatever, here and there.
  • Allowing arbitrary geometry for each tile, not necessarily contained in the volume of the grid cell it occupies. This would allow, for instance, elaborate prefab rooms to be attached at various places of the dungeon (see the sketch right after this list).
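
Purely for illustration, the kind of data layout implied by these two points might look roughly like this; none of these names come from the actual game code:

struct tile {
    struct mesh *mesh;      /* arbitrary geometry, not necessarily confined to the cell volume */
    float xform[16];        /* placement of this tile within (or around) the cell */
};

struct grid_cell {
    struct tile *tiles;     /* base tile plus any number of detail tiles */
    int num_tiles;
};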

It turns out I don’t like the extended grid-based idea that much, and for the actual game I will revert to a more powerful level organization I came up with some time ago. More on that when I actually implement it.

The rendering is done with “deferred shading”, a neat technique I implemented once before in the Theseis engine, which makes it possible to have hundreds of actual dynamic light sources active at the same time. This is the cornerstone of my “lots of eye-candy” idea, because it enables each and every torch, spell effect, flame, or magical glow to illuminate the dungeon and its denizens dynamically.

Finally, I implemented a nice positional audio and music playback system on top of OpenAL. It keeps static audio sources in a kd-tree for efficient selection of the nearest ones within a certain radius around the player, and enables/disables the appropriate ones automatically.
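
The enable pass might look roughly like the following. This is only a sketch: the kd-tree query interface and the audio source functions are hypothetical stand-ins, not the actual code:

/* enable every static audio source within 'radius' of the player */
void update_audio_sources(struct kdtree *srctree, const float *player_pos, float radius)
{
    int i, count;
    void **results;

    /* hypothetical range query: returns an array of user pointers for all
     * sources within radius of player_pos, and their count */
    results = kdtree_range_query(srctree, player_pos, radius, &count);

    for(i=0; i<count; i++) {
        struct audio_source *src = results[i];
        audio_source_start(src);    /* hypothetical: start playback if not already playing */
    }
    free(results);
}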

In case you are curious to see it, I just uploaded a video on youtube. The actual tileset is obviously placeholder, since I made it myself in blender; it will be replaced by proper artwork later on. The music and sound effects are made by George Savinidis, who will be in charge of all the audio production for this game.

PCB Etching

I’m going through an electronics phase at the moment, and I did a few circuits on stripboards (a kind of perfboard), which are OK, but it’s always a pain in the ass to wire them up correctly. By the way, here’s a relatively complex one I did a few months ago. So I thought it would be awesome to create my own PCBs instead of using messy, error-prone perfboards all the time; plus, I always wanted to try the laser-printer method for homemade PCB creation, from back when I didn’t actually own a laser printer.

I didn’t want to start with a huge complex circuit, so I decided to make a PCB version of my vsync shutter glasses driver.

The first step was to draw a schematic.

VSync-driven shutter glasses

All my previous stereoscopic attempts are fun and cool, but what I really wanted was to connect my cheap-o shutter glasses to my computer and use them for stereoscopic rendering. The main barrier is that consumer NVIDIA cards do not include a stereo port (unlike the expensive Quadros), and their drivers don’t support stereoscopic OpenGL visuals.

I had already side-stepped the second problem by writing stereowrap, an LD_PRELOAD-based tool that fakes OpenGL stereo contexts for GLX apps and presents the stereo pair in a number of ways, such as various anaglyphs, side-by-side, etc.

So at some point I decided to attack the first problem. It turns out there’s a simple way to drive shutter glasses. It’s a brilliant idea, and I didn’t come up with it, but it boils down to a simple circuit that toggles the shutter glasses whenever it detects a pulse on the monitor vsync wire!

Original vsync shutter glasses driver schematic.
vsync stereo driver perfboard prototype
I immediately designed a circuit based on this idea, modified to work with the signals expected by my ASUS VR-100 shutter glasses. Then I wired it up on a perfboard, and it worked like a charm! Finally, I added a sequential stereo presentation method to stereowrap, synchronized with vsync, and suddenly I can view all my stereoscopic programs in awesome full-color stereo glory.

The downside of this simple contraption is that it doesn’t really know whether the left or the right image is being presented at any given time; it only knows when to switch between them. That’s why the switch is included in the circuit: if the image appears wrong, and you can really tell by your brain attempting to blow up while looking at it, the switch can be used to flip the glasses around instantly. If, however, the application can’t keep up with the refresh rate of the monitor and misses a vsync interval, the images will flip again.

I plan to build a more intelligent, microcontroller-based driver circuit at some point. But for now, the simple vsync driver works well enough.

OpenGL video editing hack

video post result shot
Hello, just a quick hack report, because I really liked how useful it turned out to be.

It all started when I located a small program I wrote for an interesting coursework assignment, back when I did my graphics MSc at the University of Hull. I wanted to upload a video capture to youtube to show 4rknova, who’s going through the same MSc course right now.

So I did capture the video, and saved it as an image sequence for further editing, because I wanted to add titles at the bottom describing what was demonstrated at each part of the video (the program was basically a sequence of arbitrary shader effects).

But how was I supposed to add the captions? The thought of wrestling with one of those fucking GUI video editing programs made me cringe. They are all sluggish, heavy and unwieldy, and I always have to fight for a few hours to do even simple things. I was more inclined to use ffmpeg from the command line, but then adding transitions to the captions, like having them fade and slide in from below, would be a complete pain in the ass. By the way, take a look at the final video to understand what I was trying to achieve with the caption transitions. Should be really simple, right?

But then it dawned on me … hey, I could easily write that transition in a couple of lines of C/OpenGL code instead of fighting with all those video editing programs! I started by writing a simple program that iterates over an input image sequence, opening each image in turn and feeding it to a dlopened plugin, which can do whatever it likes with the frame. When the plugin’s processing function returns, I just dump the image back to disk. It’s that simple!
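
A rough sketch of that idea follows; all the names are made up for illustration (the image I/O helpers and the plugin entry point name are hypothetical, not the actual tool):

#include <stdio.h>
#include <dlfcn.h>

typedef void (*process_func)(unsigned char *pixels, int width, int height, int frame);

/* hypothetical image I/O helpers, assumed to be implemented elsewhere */
unsigned char *load_frame(int idx, int *width, int *height);
void save_frame(int idx, const unsigned char *pixels, int width, int height);

int process_sequence(const char *plugin_path, int num_frames)
{
    int i;
    void *so;
    process_func process;

    if(!(so = dlopen(plugin_path, RTLD_LAZY))) {
        fprintf(stderr, "failed to load plugin: %s\n", dlerror());
        return -1;
    }
    if(!(process = (process_func)dlsym(so, "process_frame"))) {
        fprintf(stderr, "plugin has no process_frame function\n");
        dlclose(so);
        return -1;
    }

    for(i=0; i<num_frames; i++) {
        int width, height;
        unsigned char *pixels = load_frame(i, &width, &height);
        process(pixels, width, height, i);      /* let the plugin do whatever it likes */
        save_frame(i, pixels, width, height);   /* dump the processed frame back to disk */
    }

    dlclose(so);
    return 0;
}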

Then the plugin was almost trivial as well. I just drop the frame into the OpenGL framebuffer and use my new text rendering library to draw the captions at the appropriate times, with the appropriate alpha and position. The timing was derived from a simple event script file containing the frame numbers where each part starts and ends. I fed the script into my new event sequencing (demosystem) library, which gives me back a nice, linearly increasing [0, 1] value for each event (part) while it is active according to the script. That made it a piece of cake to fiddle with trig and a few factors here and there, to transition my captions just the way I wanted.

Here’s the code in case you want to play around with it:

I’m impressed; it’s fucking awesome and powerful to edit videos like that, and I’m definitely going to use it again.
