Depth of Field

So far I’ve discussed some very simple approaches to raytracing which will give you some nice hobby images. This article is where we’re going to start getting into photorealism, and by photorealism I mean looking at some real photographs and trying to emulate the effects we see. First off is depth of field.

This is a really tough concept to explain mathematically, but let’s start by looking at the effect it has on photographs taken in the real world. First of all I’ll show you a photograph taken with a large depth of field. This corresponds to a high f-number on a DSLR camera…

Large depth of field

Light fitting with large depth of field (f/14, 50mm)

I’ll admit that I’m a bit obsessive about spheres, to the point that our living room light fittings are metallic spheres with glass spheres coming out of them… anyway… onto what this same photograph looks like with a shallow depth of field (corresponding to a low f-number on a DSLR).

Shallow depth of field

Light fitting with shallow depth of field (f/1.8, 50mm)

Hopefully you can see quite a difference between these images. A shallow depth of field means that only a narrow range of distances from the camera is in focus. Everything beyond this focal plane (and everything in front of it, closer to the camera) is thrown out of focus.

This is something you’ll commonly notice in professional portrait photography, and it’s a great way to guide the eye onto the subject of your photograph.

What works in photography has a habit of working in raytracing, but to have a hope of simulating this effect in a raytracer we have to try and understand what is actually going on in terms of the light reaching the camera sensor.

The reason everything is in focus in a simple raytracer is that we cast a single ray from the virtual camera into the scene for each pixel. The result can’t be blurry because we’re sampling the scene exactly once per pixel. Similarly, in the top photo with its large depth of field, the light reaching each point on the camera sensor is effectively coming from a single direction.
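For comparison, a minimal sketch of that single-sample setup might look something like this. None of it is from the article’s own code: the camera is assumed to sit at the origin looking down the z axis, and the Vector and Colour types, the normalise() helper and a trace() function returning the colour seen along a ray are all hypothetical.

// One primary ray per pixel: screen_x and screen_y scan across the image plane.
Vector origin(0, 0, 0);
Vector direction = normalise(Vector(screen_x, screen_y, 1));
Colour pixel_colour = trace(origin, direction);   // one sample per pixel, so nothing can blur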

The reason the light only enters the camera from a narrow range of directions is all down to the aperture of the camera lens. The f-stop refers to the size of the aperture that light travels through within the lens, although strangely larger f-numbers mean smaller apertures. As you set higher f-numbers, little “blades” inside the lens slowly close the aperture so that it lets in less and less light through a smaller and smaller hole. As the hole gets smaller, the light that reaches the camera sensor comes from a smaller and smaller cone of directions, i.e. it becomes more in focus.

Going the other way, at a very low f-number the aperture blades become fully open. At this point the light reaching the camera sensor will come from a wider cone of different directions, which leads to blurrier images and a shallow depth of field.
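To put some rough numbers on that (this is the standard photographic rule of thumb rather than anything specific to this article): the aperture diameter is approximately the focal length divided by the f-number, so for the 50mm lens above, f/14 gives a hole roughly 50 / 14 ≈ 3.6mm across, while f/1.8 opens it up to roughly 50 / 1.8 ≈ 28mm.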

To simulate this in a raytracer we need to imagine that light within the scene is being “focused” onto the screen from a cone of different directions. Consider the following diagram…

DOF Diagram

Diagram illustrating light entering a lens from differing angles

Note that light reaches the camera along multiple rays from the scene, which converge at a certain distance from the camera. The plane at this distance is called the focal plane. Simulating this mathematically is relatively simple once you’ve got your head round it.

Let’s start with a standard ray coming from the camera and pointing into the scene. This is described by an origin and a direction. In a simple raytracer all rays have the same origin and the direction scans across the screen.

The focal point of the ray can be calculated using the focal_length setting that we’re using for the camera (this assumes the camera sits at the origin and the direction is normalised):

focal_point = direction * focal_length

We know that our current ray passes through this focal point, so let’s simulate a fake aperture by shifting the origin with a random x and y offset…

x_aperture = rand() // assuming a 0..1 rand generator

y_aperture = rand()

We then add this aperture randomness to the ray origin.

new_origin = origin + Vector(x_aperture, y_aperture, 0)

Now we need to work out what the new direction of our ray is going to be. We know the focal_point as calculated above, and we know the new origin of our ray, so the direction will go from this new origin to the focal point…

new_direction = normalise(focal_point - new_origin)

Note that rotation matrices make this a little more complicated: if your camera isn’t facing directly along the z axis, do this calculation in camera space first and then apply the camera’s rotation to the resulting ray.
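Putting those steps together, here’s a rough sketch of a little helper that builds one depth-of-field ray. None of this is from the article’s own code: the Vector type (with the usual +, - and * operators), the normalise() helper and a rand01() function returning a random number in 0..1 are all assumed, and the Ray struct is just a convenient way to hand back the new origin and direction.

struct Ray { Vector origin; Vector direction; };

// Build one depth-of-field sample ray from the original camera ray.
// 'direction' is assumed to be normalised and in camera space (see the note
// above about doing this before applying the camera rotation).
Ray make_dof_ray(const Vector& origin, const Vector& direction, double focal_length)
{
    // The point on the focal plane that the original ray passes through
    // (the same as direction * focal_length when the camera sits at the origin).
    Vector focal_point = origin + direction * focal_length;

    // Jitter the origin across a fake square aperture using two random offsets,
    // exactly as above. Scaling or centring this jitter is a common way to
    // control how strong the blur is.
    Vector new_origin = origin + Vector(rand01(), rand01(), 0);

    // Re-aim the jittered ray so that it still passes through the focal point.
    Vector new_direction = normalise(focal_point - new_origin);

    return Ray{ new_origin, new_direction };
}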

Using this new ray we can now raytrace the scene as normal to get a single sample of the light entering the camera at a random position on a square aperture. To get depth of field to work nicely we need to repeat this calculation a number of times with different aperture locations and average the results.
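As a rough sketch, the per-pixel loop might look something like this, reusing the hypothetical make_dof_ray() helper above and again assuming a Colour type that can be accumulated and divided by a scalar, plus camera_origin and pixel_direction values from the usual camera setup.

// Average several depth-of-field samples for a single pixel.
const int num_samples = 4;   // try 4, 16, 32, 64... more samples = smoother blur
Colour pixel_colour(0, 0, 0);
for (int i = 0; i < num_samples; ++i)
{
    Ray ray = make_dof_ray(camera_origin, pixel_direction, focal_length);
    pixel_colour = pixel_colour + trace(ray.origin, ray.direction);
}
pixel_colour = pixel_colour / num_samples;

Using 4 samples per pixel I get the following image…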

DOF 4

Depth of field (4 samples per pixel)

16 gives this…

DOF 16

Depth of field (16 samples per pixel)

32 gives this…

DOF 32

Depth of field (32 samples per pixel)

64 gets us here…

DOF 64

Depth of field (64 samples per pixel)

As you can see this is quickly going to slow down your raytracer. In fact, it’s going to make it 64 times slower, but the results are going to be worth it! With an image like this rendered at 470×300 we’re calculating over 9 million ray intersections with the scene, and that’s taking about 30 seconds on my laptop.
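(That figure falls straight out of the arithmetic: 470 × 300 pixels × 64 samples per pixel = 9,024,000 primary rays.)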

I’ll have a think about what to cover next, most likely we’ll look at procedural texturing, but I’m also tempted to go through transformation matrices, or maybe patch sampling, acceleration structures? So much to talk about!

I’ll leave you with a final DOF render that I threw together this evening. This was a 12 minute render with 256 samples per pixel…

cool DOF example

Depth of field from below
