Floating point framebuffer

I’ll warn you now. If you don’t know the difference between an integer and a floating point number then this article is going to be pretty tough to follow.

In a nutshell, integers are whole numbers (1, 2, 3, …) and are easily represented on computers. Floating point numbers approximate real numbers (e.g. 1.234, 2.345, 3.456). They are less easily represented on computers, but are absolutely required if you want to make a realistic raytracer.

So raytracers use floating point numbers (floats or doubles) to represent all sorts of things, including colours. Monitors, on the other hand, have been around for a long time, and they represent colours using integers. In fact, standard PC monitors represent colours using an extremely simple format: different colours are produced by mixing red, green and blue light, so every colour on a monitor is represented as an RGB value.

Each of these colour components is represented using an 8-bit integer value, which means it is in the range 0 to 255. With 256 levels of red, 256 of green and 256 of blue, you can represent 256 × 256 × 256 ≈ 16.7 million different colours, and that’s the way monitors have worked since the late 80s, and that’s how they still work today.
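As a rough sketch, assuming the common 0xRRGGBB packing (the exact layout varies between framebuffers — RGBA, BGRA, etc. — and the function name here is mine, purely for illustration), a pixel might be assembled like this:

    #include <cstdint>

    // Pack three 8-bit channel values into a single 32-bit pixel.
    // 256 levels per channel gives 256^3 = 16,777,216 colours.
    std::uint32_t pack_rgb(std::uint8_t r, std::uint8_t g, std::uint8_t b)
    {
        return (std::uint32_t{r} << 16) | (std::uint32_t{g} << 8) | std::uint32_t{b};
    }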

So the fundamental problem is that we have a screen that represents colours using 8-bit integers, and a raytracer that represents colours using real numbers.

Perhaps a quick word on how the real world works… Light in the real world can vary greatly, from a dark room at night to a bright sunny day. In fact, a dark room can be hundreds of thousands of times darker than outside on a sunny day. And yet our eyes are pretty happy dealing with this “high dynamic range”. So floating point is the right way to represent colour, and we shouldn’t be afraid of having very different lighting values across a scene.

However, we are left with the problem of how to convert these floating point numbers into integers so that we can actually view the image.

I’ll go through a few test scenes and illustrate these difficulties and a couple of solutions.

The first simple solution is to scale the floating point values straight into integers. Perhaps use a simple multiplier like x256, clamping anything that overflows to 255? Or make all your lights 256x brighter? That’ll get you a visible image that looks kinda like this…

Linear conversion float->int with no scaling
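As a minimal sketch, the linear conversion might look like the following; the function name and the exact choice of multiplier are mine, not anything the article prescribes:

    #include <algorithm>
    #include <cstdint>

    // Naive linear conversion: scale the floating point light value
    // straight into the 8-bit range and clamp anything that overflows.
    // 'scale' is the multiplier discussed above (~256 for unscaled lights).
    std::uint8_t linear_to_byte(double light, double scale = 256.0)
    {
        return static_cast<std::uint8_t>(std::clamp(light * scale, 0.0, 255.0));
    }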

Now, by adjusting the multiplier, we can get various results, as shown below…

Linear conversion float->int scaled by 2

But these aren’t really cutting it for me in terms of realism, so instead I use an algorithm that I first saw described by Hugo Elias, the guy that developed an early terrain renderer called Terragen (which, incidentally, is still going strong and looking extremely smart these days). So, the exposure function then… the basic idea is to model the way film responds as it’s exposed to light.

Imagine that a piece of film has 100 light-sensitive grains of chemical on it. Light falls on it at a constant rate, and over the first second it triggers 50% of those grains. Over the next second it triggers 50% of the grains that remain, leaving 25% untriggered. After 3 seconds another 50% of what’s left has been triggered, leaving 12.5%, and so on. This type of behaviour is known as exponential decay — halving every second is the same as multiplying by exp(-t * ln 2) — and you can easily model it to work out how bright a pixel should be.

The basic formula is:

exposed_light = 1 - exp(-lightvalue * exposure)

The exposure value lets you adjust how sensitive the camera is.
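Here’s a minimal sketch of that conversion in C++ (the formula is the one above; the function name is mine):

    #include <cmath>
    #include <cstdint>

    // Exposure-based conversion, modelling film as exponential decay.
    // 'exposure' is the sensitivity knob (the scale factors 1, 2 and 4
    // in the images below). The result is always in [0, 1), so no
    // clamping is needed: very bright values approach 255 but never clip.
    std::uint8_t expose_to_byte(double light, double exposure)
    {
        double exposed = 1.0 - std::exp(-light * exposure);
        return static_cast<std::uint8_t>(exposed * 255.0);
    }

For example, with an exposure of 1, a light value of 0.5 maps to 1 - exp(-0.5) ≈ 0.39 (byte value 100), while a very bright value of 10 maps to ≈ 0.99995 (byte value 254) — bright areas saturate gracefully instead of clipping.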

So now take the previous example that we used the clamped linear conversion on. Using the exposure function instead, we get the following results.

Exposure function with scale factor of 1: val = 1 - exp(-x * 1)

Exposure function with scale factor of 2: val = 1 - exp(-x * 2)

Exposure function with scale factor of 4: val = 1 - exp(-x * 4)

Hopefully you agree that this is a preferable result. The main thing to notice is that with a scale factor of 4 and the exposure function we see detail both in the near area of the image and around the bright light in the distance. With linear conversion you’re either overbright in the distance or too dark in the foreground.

Next up I’ll go through a little trick called depth of field and try to explain why it makes images look so cool… like this…

Red, white and blue marbles with depth of field
