Path Tracing 1

Having built a Cornell Box I’ve finally got round to doing the first few diffuse inter-reflection tests on the raytracer. I know what you’re thinking: What the hell am I talking about?

Diffuse inter-reflection is the complicated way of talking about light which bounces from one diffuse surface to another. Up to now the raytraced images I’ve been building have only considered light which travels from a light source directly to the object being illuminated. In real life a huge amount of the light we see has been passed around between different surfaces before it reaches the eye.

Consider a simple scenario where a window is the only source of light in a room. If you look at the wall either side of the window you’ll notice that it’s not pitch black. There is indeed light falling upon this wall, but it’s not coming from the only light source in the room. Instead the light is coming through the window, reflecting off a wall (and possibly a few other things) and then reaching the wall either side of the window.

Another example in case you’re still struggling: look underneath your sofa. You can see things, yet there are unlikely to be any lights shining directly under there. The light must have been reflected off other surfaces.

So diffuse inter-reflection is the trendy maths name for light which reaches a surface from another diffuse surface. There are loads of ways of simulating this in computer graphics but I’ve started off with a relatively simple one: whenever a ray hits a surface I send off a secondary ray in a random direction to sample the rest of the scene.
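Here’s a minimal sketch of that idea in Python. The helper names (`direct_lighting`, `trace_ray`, `MAX_DEPTH` and the `hit` record) are placeholders for whatever the rest of the raytracer provides, not the actual code from my tracer.

```python
import math
import random

MAX_DEPTH = 2  # how many diffuse bounces to follow

def random_hemisphere_direction(normal):
    """Pick a uniformly random direction on the hemisphere above `normal`
    by rejection-sampling points inside the unit sphere."""
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length2 = sum(c * c for c in d)
        if 0.0 < length2 <= 1.0:
            inv_len = 1.0 / math.sqrt(length2)
            d = [c * inv_len for c in d]
            # Flip the direction if it points below the surface.
            if sum(a * b for a, b in zip(d, normal)) < 0.0:
                d = [-c for c in d]
            return d

def shade(hit, scene, depth=0):
    """Direct lighting plus one random secondary ray for the indirect part."""
    colour = direct_lighting(hit, scene)              # the existing diffuse/shadow code
    if depth < MAX_DEPTH:
        bounce = random_hemisphere_direction(hit.normal)
        indirect = trace_ray(hit.position, bounce, scene, depth + 1)
        # Note: no distance attenuation on the secondary sample (see below).
        colour = colour + hit.diffuse * indirect
    return colour
```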

The first issue I came upon was what to do with the colour sample. Should it have an attenuation component factored in? It seems like what we’re doing here is using the rest of the scene as a light source. Up to now whenever we use a light source we use an attenuation factor to make the light less bright with distance from the source.

It turns out though that this is a genuinely bad idea. Here’s an example of a render with attenuation added to the diffuse inter-reflection.

[Render: diffuse inter-reflection with attenuation]

So attenuation isn’t necessary, but why not? If there’s a bright white wall right next to an object, surely it should contribute more to the lighting of the surface than if the wall were miles away.

But this isn’t classic attenuation. Classically, light falls off according to an inverse-square law, which means the contribution becomes very large when an object is right next to the light source. In fact, at zero distance from a light source the attenuation multiplier becomes infinite. In real life you can’t have an object directly on top of a light source, and all light sources have some physical size, so the problem never arises.
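To put numbers on that, here’s the classic falloff in a couple of lines of Python, with purely illustrative distances:

```python
def point_light_attenuation(distance):
    """Classic inverse-square falloff: contribution ~ 1 / d^2.
    As the distance approaches zero this blows up towards infinity,
    which is exactly what goes wrong when it's applied to scene
    samples taken right next to the surface."""
    return 1.0 / (distance * distance)

for d in (4.0, 2.0, 1.0, 0.1, 0.01):
    print(d, point_light_attenuation(d))
# roughly 0.0625, 0.25, 1, 100, 10000 -- growing without bound as d shrinks
```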

Anyway, this attenuation doesn’t work for diffuse inter-reflections, as you can see above. And when you think about it, attenuation isn’t really necessary in this case. If you’re sampling the scene with 1000 rays from an intersection point there’s an equal(ish) probability that they’ll go in any direction. If there’s a bright white object next to this point then many of the rays will hit that object. If the bright white object is further away, fewer of the rays will hit it, so it contributes less light to the surface. In other words, attenuation is built into this simple algorithm.
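If you want to convince yourself of that, here’s a small standalone experiment, separate from the raytracer, which fires uniformly random rays from a point and counts how many hit a sphere at increasing distances:

```python
import math
import random

def fraction_hitting_sphere(distance, radius, n_rays=100_000):
    """Fire rays in uniformly random directions from the origin and count how
    many hit a sphere of the given radius centred `distance` away along the
    x axis.  The hit fraction is the solid angle the sphere subtends."""
    hits = 0
    for _ in range(n_rays):
        # Uniform random direction via rejection sampling in the unit sphere.
        while True:
            x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
            l2 = x * x + y * y + z * z
            if 0.0 < l2 <= 1.0:
                break
        inv_l = 1.0 / math.sqrt(l2)
        x, y, z = x * inv_l, y * inv_l, z * inv_l
        # Ray-sphere test: closest approach of the ray to the centre at (distance, 0, 0).
        t = x * distance
        if t > 0.0 and distance * distance - t * t <= radius * radius:
            hits += 1
    return hits / n_rays

for d in (2.0, 4.0, 8.0):
    print(d, fraction_hitting_sphere(d, 1.0))
# The hit fraction roughly quarters every time the distance doubles,
# i.e. the inverse-square falloff is already built into the sampling.
```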

So what does it look like when we use this component without attenuation? This test scene uses a large area light source with 16 samples per pixel. First off, direct diffuse lighting only…

[Render: sample scene with direct illumination only]

And now on to the indirect illumination provided by path tracing…

[Render: sample scene with indirect illumination added to the traditional lighting]

Two things are immediately noticeable. The scene is a lot brighter, as we’d expect. The image is also a LOT noisier: where we used to need 128 samples to get a smooth render we’re now going to need thousands.
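That scaling is brutal because Monte Carlo noise (the standard error of the averaged samples) only falls off as one over the square root of the sample count, so halving the noise costs four times the samples. A quick back-of-the-envelope helper, with the “four times noisier” figure purely as an assumed example:

```python
def samples_needed(base_samples, noise_reduction_factor):
    """Monte Carlo standard error falls as 1/sqrt(N), so cutting the noise
    by a factor of k needs k**2 times as many samples per pixel."""
    return base_samples * noise_reduction_factor ** 2

# If 128 samples per pixel were enough before, and the path-traced image is
# (say) four times noisier at the same sample count, we'd need roughly:
print(samples_needed(128, 4))   # 2048 samples per pixel
```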

So this is with a large area light source, what about with a point light source?

[Render: sample scene with a point light source]

The good news is that we now have considerably less noise. The bad news is that the shadows have gone all sharp.

The other thing we can change is the size of the area light source. So what happens when we make the spherical light much smaller?

[Render: sample scene with a small spherical light and indirect illumination]

I quite like this result: less noise than the large sphere light, but with noticeable soft shadows.
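For reference, here’s a minimal sketch of one way to sample a spherical area light for shadow rays (not necessarily exactly what my tracer does); the `occluded` call is a placeholder for whatever visibility test the raytracer already has. The light radius directly controls the width of the penumbra, and a radius of zero collapses back to a point light with hard shadows.

```python
import math
import random

def sample_sphere_light(light_centre, light_radius):
    """Pick a uniformly random point on the surface of a spherical light."""
    # Uniform direction on the unit sphere via rejection sampling.
    while True:
        x, y, z = (random.uniform(-1.0, 1.0) for _ in range(3))
        l2 = x * x + y * y + z * z
        if 0.0 < l2 <= 1.0:
            break
    inv_l = 1.0 / math.sqrt(l2)
    return (light_centre[0] + light_radius * x * inv_l,
            light_centre[1] + light_radius * y * inv_l,
            light_centre[2] + light_radius * z * inv_l)

def light_visibility(hit_point, light_centre, light_radius, n_shadow_rays, scene):
    """Average several shadow rays aimed at random points on the light.
    Bigger lights give wider, softer penumbras; radius 0 gives hard shadows."""
    visible = 0
    for _ in range(n_shadow_rays):
        target = sample_sphere_light(light_centre, light_radius)
        if not occluded(hit_point, target, scene):   # placeholder shadow test
            visible += 1
    return visible / n_shadow_rays
```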

So on to the final render. I’ve tried to reproduce the classic Cornell box scene with the white boxes in place. This render uses 500 samples per pixel and took nearly an hour. I am NOT impressed!!

[Render: Cornell box scene, 500 samples per pixel. Ouch.]

To be honest, I’m not entirely sure if I’m doing something wrong. Is this really the level of noise expected for 500 samples per pixel? I’ll gladly accept any hints!
