# Importance Sampling

A little while ago I did a short piece on my attempts to get global lighting working. To be honest I was pretty disappointed with the results, so I've continued to look into alternative algorithms, and this evening I've had a bit of a breakthrough.

To recap then, the basic lighting model I’m trying to simulate is a hemispherical light that provides the same amount of illumination from all directions. To simulate this lighting model I cast random rays from the point I’m trying to light into the environment. If they hit an object before reaching the hemisphere then there’s no light contribution from that ray. Otherwise I take the ray, compute the dot product with the surface normal, and multiply this by the hemisphere colour.

That was quite a lot of words, so here's the algorithm:

```
for (i = 0; i < samples; i++) {
    ray = GenerateRandomRay();
    if (!Scene.Intersect(ray)) {          // the ray reaches the sky unoccluded
        diffuse += GetDot(ray, normal);
    }
}
diffuse /= samples;
```

And here’s the result if I use 64 samples per pixel.

64 random hemisphere samples
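For reference, here's one way the `GenerateRandomRay` step could work. The original helper isn't shown in the post, so the method here (rejection sampling inside the unit sphere, then flipping into the upper hemisphere) is just my assumption of a typical implementation, written as a Python sketch:

```python
import math
import random

def generate_random_ray():
    """Return a uniformly distributed unit direction on the upper hemisphere
    (z >= 0, i.e. around a surface normal of (0, 0, 1)). Uses rejection
    sampling inside the unit sphere; the post's actual helper may differ."""
    while True:
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        z = random.uniform(-1.0, 1.0)
        r2 = x * x + y * y + z * z
        if 0.0 < r2 <= 1.0:               # inside the unit sphere, not at origin
            r = math.sqrt(r2)
            return (x / r, y / r, abs(z) / r)   # flip into the upper hemisphere

# The average cos(theta) over uniform hemisphere samples tends towards 0.5,
# which hints at why plain random sampling wastes effort on shallow rays.
rays = [generate_random_ray() for _ in range(100_000)]
mean_cos = sum(z for _, _, z in rays) / len(rays)
```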

The next thing I tried to reduce the noise was to use a sample grid instead of randomly casting 64 rays into the world. In a nutshell, imagine partitioning the visible sky into 64 separate areas. The new algorithm casts exactly one ray into each of the 64 areas; the lighting calculation stays the same as above.
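In Python terms, that grid sampling might look like the sketch below. The post doesn't spell out the partitioning, so the 8×8 split over the two hemisphere angles (stratifying over cos(theta) so every cell covers equal solid angle) is my guess at one reasonable scheme:

```python
import math
import random

def stratified_hemisphere_samples(grid=8):
    """Partition the hemisphere into grid*grid cells over (cos(theta), phi)
    and cast one jittered ray per cell -- 64 rays for an 8x8 grid. Cells of
    equal size in (cos(theta), phi) cover equal solid angle."""
    rays = []
    for i in range(grid):
        for j in range(grid):
            # Jittered position inside cell (i, j).
            u = (i + random.random()) / grid       # cos(theta) in [0, 1)
            v = (j + random.random()) / grid       # maps to phi in [0, 2*pi)
            cos_theta = u
            sin_theta = math.sqrt(1.0 - u * u)
            phi = 2.0 * math.pi * v
            rays.append((sin_theta * math.cos(phi),
                         sin_theta * math.sin(phi),
                         cos_theta))
    return rays

samples = stratified_hemisphere_samples()          # 64 jittered directions
```

Each ray still gets the same dot-product weighting as before; only the way the directions are chosen changes.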

Now I get a less noisy image…

64 grid hemisphere samples

You’ll notice that this is considerably less noisy, although it’s still going to take an awful lot of samples to get this looking good.

So now I've come across importance sampling, and the basic idea is this: rays cast at shallow angles to the surface normal add far less light to the overall diffuse calculation than rays from directly above. So rather than spend hundreds of rays on shallow angles, why not concentrate the samples on the more important directions?

The algorithm for this is not massively dissimilar to the one above, but this time we use the lighting contribution to decide whether a ray is worth a shadow test at all.

```
int i = 0;
while (i < samples) {
    ray = GenerateRandomRay();
    diffuseContribution = GetDot(ray, normal);
    // Accept the ray with probability proportional to its contribution.
    if (diffuseContribution > random.GetRand()) {
        i++;
        if (!Scene.Intersect(ray)) {  // unoccluded rays now count in full
            diffuse++;
        }
    }
}
diffuse /= samples;
```
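Here's a small Python sketch of that rejection step, stripped of the scene intersection so the sampling itself can be sanity-checked on its own (the helper names are mine, not the renderer's):

```python
import math
import random

def cosine_weighted_ray():
    """Rejection-sample a hemisphere direction with probability proportional
    to cos(theta): draw a uniform direction, then keep it only if its dot
    product with the normal (0, 0, 1) beats a uniform random number."""
    while True:
        # Uniform direction on the upper hemisphere via sphere rejection.
        x = random.uniform(-1.0, 1.0)
        y = random.uniform(-1.0, 1.0)
        z = random.uniform(-1.0, 1.0)
        r2 = x * x + y * y + z * z
        if not (0.0 < r2 <= 1.0):
            continue
        r = math.sqrt(r2)
        x, y, z = x / r, y / r, abs(z) / r
        # Importance test: accept with probability cos(theta).
        if z > random.random():
            return (x, y, z)

# Under a cos-weighted distribution the mean of cos(theta) is 2/3, versus
# 1/2 for uniform sampling -- the shallow, low-value rays are mostly gone.
mean_cos = sum(cosine_weighted_ray()[2] for _ in range(100_000)) / 100_000
```

Because the cosine weighting now lives in the sampling distribution itself, each accepted unoccluded ray counts in full rather than being multiplied by the dot product again.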

Using this subtly modified algorithm we get the following result with 64 samples.

64 important hemisphere samples

This is considerably better than 64 random samples as shown above, and we're now well within range of getting very decent smooth results with more samples. For example, let's ramp up to 512 samples…

512 important hemisphere samples

So finally I’m happy with the global lighting solution. One thing I’m not entirely sure about is whether I need to square the diffuse term

`diffuse = diffuse * diffuse;`

before doing the importance test. If I square the diffuse component then I get the following.

512 important hemisphere samples + squared diffuse

I'm really not sure which is the right term, but the main difference in this image is that the shadows under the spheres are darker relative to the lighting on the flat surfaces.
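One way to see what squaring does: with the plain test a ray is kept with probability cos(theta), while with the squared test it's cos²(theta), which pushes the samples even harder towards the normal and starves the shallow directions. A quick Python check of the two acceptance rules (my own illustration, not the renderer's code) shows the accepted rays clustering tighter around the normal:

```python
import random

def mean_cos(power, n=200_000):
    """Mean cos(theta) of rays accepted with probability cos(theta)**power.
    For a uniform hemisphere direction, cos(theta) is itself uniform on
    [0, 1], so we can draw it directly without building full 3D vectors."""
    total, kept = 0.0, 0
    while kept < n:
        z = random.random()             # cos(theta) of a uniform hemisphere ray
        if z ** power > random.random():
            total += z
            kept += 1
    return total / kept

plain = mean_cos(1)    # about 2/3: cos-weighted sampling, as in the loop above
squared = mean_cos(2)  # about 3/4: cos^2-weighted, a noticeably tighter lobe
```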

And for a final render I had a bit of fun with some procedural textures and got this result:

Importance sampling with procedural textures and depth of field

As ever, let me know if I went too quickly, happy to provide more detail.