Global lighting

I thought I’d put a quick post together to talk about some work I’ve been doing with global lighting models.

Up to now most of the light sources I’ve used in the raytracer have been point or directional lights. I’m now looking at trying to simulate an all-around light source for the scene.

This involves changing the lighting calculations so that instead of working out the point lighting contribution, you calculate a global lighting contribution instead. To do this you cast a ray out from the surface in a random direction. If it hits an object then that direction is in shade; if not, we contribute the colour of the global light to the lighting calculation.
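As a rough sketch of that single sample (the names here — `scene.intersects`, `sky_colour` and friends — are made up for illustration, not the raytracer’s actual API):

```python
import math
import random

def sample_global_light(hit_point, normal, scene, sky_colour):
    """One global lighting sample: cast a shadow ray in a random
    direction in the hemisphere above the surface (illustrative
    names, not the real tracer's API)."""
    # Rejection-sample a uniform direction on the unit sphere:
    # pick points in the cube, keep the first inside the sphere.
    while True:
        d = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
        length = math.sqrt(sum(c * c for c in d))
        if 0 < length <= 1.0:
            d = tuple(c / length for c in d)
            break
    # Flip the direction into the hemisphere around the surface normal.
    if sum(a * b for a, b in zip(d, normal)) < 0:
        d = tuple(-c for c in d)
    # If any object blocks the ray, this direction is in shade and
    # contributes nothing; otherwise the sky colour contributes.
    if scene.intersects(hit_point, d):
        return (0.0, 0.0, 0.0)
    return sky_colour
```

Average a bunch of these per intersection and you have the basic global lighting contribution.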

These are the results I’ve got so far.

A number of different global lighting renders

There are a few different things I’m testing here, but let’s take them one at a time. The easy way to do this kind of sampling is to wait until we’ve got an intersection and then sample the global lighting sphere a large number of times. This is the type of result you get…

256 random lighting samples @ global or light level

You’ll notice that this is DAMN noisy. Even with 256 samples per intersection the noise level is pretty unacceptable.

This is largely because each sample goes in an entirely random direction from the point of intersection. The next thing to do is to sample the global lighting environment using a grid. The principle is pretty simple: stick a grid over the top of the thing you’re sampling (in this case the sky) and then sample the centre of each grid square. This should eliminate the noise and give a smoother result. But this is what you get…

256 lighting samples on a fixed grid
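For reference, a fixed n×n grid of sky directions can be sketched like this. I’m using a plain azimuth/elevation mapping of the hemisphere here purely for illustration — it’s one plausible parameterisation, not necessarily what the tracer does:

```python
import math

def grid_directions(n):
    """Directions at the centre of each cell of an n x n grid laid
    over the sky hemisphere (azimuth x elevation parameterisation)."""
    dirs = []
    for i in range(n):
        for j in range(n):
            u = (i + 0.5) / n             # azimuth fraction, cell centre
            v = (j + 0.5) / n             # elevation fraction, cell centre
            phi = u * 2.0 * math.pi       # 0..2*pi around the horizon
            theta = v * 0.5 * math.pi     # 0..pi/2 up from the horizon
            dirs.append((math.cos(phi) * math.cos(theta),
                         math.sin(phi) * math.cos(theta),
                         math.sin(theta)))
    return dirs
```

A 16×16 grid gives exactly the 256 fixed directions used above — and because they never move, every pixel sees the same 256 “lights”.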

The grid just isn’t big enough to give smooth edges. It looks like you’ve got 256 lights dotted around the scene instead of a smooth global lighting environment.

The next thing I tried is jittering the sample within each grid cell. Jittering just means picking a random point within the grid cell. This gets us a much smoother result with a lot less noise than with completely random sampling. But I’m still not happy with the results. Noisy noisy noisy.

256 samples on a jitter grid
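Jittering is a tiny change to grid sampling: the sample lands at a random point inside each cell instead of at the centre. A sketch, again using a simple azimuth/elevation mapping of the sky hemisphere for illustration:

```python
import math
import random

def jitter_grid_directions(n):
    """Like fixed-grid sampling, but the sample sits at a random
    point inside each cell. One sample per cell keeps the even
    coverage of the grid while breaking up its regularity."""
    dirs = []
    for i in range(n):
        for j in range(n):
            u = (i + random.random()) / n   # random offset within the cell
            v = (j + random.random()) / n
            phi = u * 2.0 * math.pi
            theta = v * 0.5 * math.pi
            dirs.append((math.cos(phi) * math.cos(theta),
                         math.sin(phi) * math.cos(theta),
                         math.sin(theta)))
    return dirs
```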

Next step is probably to try some kind of adaptive grid sampling, so the system uses more samples when it starts to encounter shaded directions. But this is fraught with difficulty: with a low-resolution grid it’ll be tough to make sure that you don’t sample around an object in one grid cell and then add detail in the wrong place. That would lead to even more obvious noise…

So going back a step. Let’s say we want to do depth-of-field AND a multi-sampled global lighting model. Using a jitter grid I’ll be doing 256 samples per intersection. For depth of field to look good I’ll need 64 samples per pixel at least. So now we’re sampling each pixel over 16,000 times!!! Totally unacceptable.
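The budget arithmetic spelled out, using the figures above:

```python
light_samples_per_hit = 256    # jitter-grid global lighting samples
camera_samples_per_pixel = 64  # rays per pixel for depth of field

total = light_samples_per_hit * camera_samples_per_pixel
print(total)  # 16384 lighting samples per pixel
```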

The alternative is to go back to only one global lighting sample at each intersection and throw away the grid principle. This increases the level of noise, but does allow you to mix in depth-of-field. With 256 samples it’s still fairly noisy…

256 global samples with depth-of-field

…but I think it’s the most promising. This is now getting pretty close to a Monte Carlo raytracer: let’s do lots of random things very often and hope that the eventual result looks good. Toy Story 1 used this approach incidentally. And that’s why you can see a bit of noise if you look carefully!

The other thing to note is that global supersampling is more than three times slower than supersampling only the lighting: global 256 samples per pixel = 86.5 seconds, lighting-level 256 samples = 26.5 seconds.

There are two further approaches that are worth trying. One is a blend of light supersampling (but maybe only 16 samples) with standard global supersampling. That gives us this…

16x grid light sampling + 32 global samples
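The blended loop looks roughly like this — `trace` and `sample_lighting` are stand-ins for the real tracer, not actual functions from my code:

```python
import random

def shade_pixel(trace, sample_lighting, camera_samples=32, light_samples=16):
    """Blended supersampling sketch: for each of `camera_samples`
    jittered camera rays, average `light_samples` grid lighting
    samples at the hit point, then average over all rays."""
    total = 0.0
    for _ in range(camera_samples):
        hit = trace(random.random(), random.random())  # jittered camera ray
        if hit is None:
            continue  # a miss contributes black to the pixel average
        # Small grid of lighting samples at this intersection.
        lighting = sum(sample_lighting(hit, k) for k in range(light_samples))
        total += lighting / light_samples
    return total / camera_samples
```

That’s 32 × 16 = 512 lighting samples per pixel — a fraction of the 16,000-plus that the full jitter grid plus depth-of-field combination would need.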

Not too bad. In a side-by-side comparison you can see that this has less noise than the full Monte Carlo approach…

side-by-side comparison of pure Monte Carlo and mixed global/lighting supersampling

You might need to stare at this for a while, but trust me that it’s definitely less noisy on the left.

The second, far more complex, approach would be to store a jitter grid between rays and use this to do grid sampling for the global illumination alongside the standard depth-of-field supersampling. Sadly, the code necessary to cache a jitter grid between separate intersection calls is pretty heavy and involves passing a bunch more state around, which will impact raytracing speed. So I’ll probably skip that approach.

Next up will be scene caching of the lighting contribution. Photon mapping might be worth a bash, maybe path tracing offers a route here, and perhaps I’ll get onto the Metropolis algorithm one day. Bed time for me…