2D Samplers

Today I’m covering a pretty deep technical subject, but for once the maths is straightforward and it’s not even in 3D! The topic I’m discussing is how to distribute random numbers in such a way that they sample a 2D function evenly regardless of the number of samples taken.

The easiest way to illustrate the importance of sampling algorithms is to discuss the most approachable use of sampling in CG: antialiasing. Antialiasing is a fundamental part of image synthesis and is used in both realtime and offline rendering to ensure that rendered images aren’t full of “jaggies”. The basic principle is that if you’re rendering a scene using a single sample per pixel then it’s easy to end up with boundaries where each pixel is either entirely inside or entirely outside an object. If that object’s colour contrasts strongly with the background then you’ll see an obvious jagged line in the synthetic image. To get around this, modern renderers calculate values for subpixels and combine them together to give an average value for the pixel colour.
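As a rough sketch of that idea (the traceRay call and Colour type here are placeholders, not wooscripter’s actual API), averaging jittered subpixel samples looks something like this:

```cpp
#include <random>

// Hypothetical colour type and ray-trace entry point; not wooscripter's real interface.
struct Colour { float r, g, b; };
Colour traceRay(float imageX, float imageY);

// Average numSamples jittered sub-pixel samples to produce one antialiased pixel value.
Colour shadePixel(int px, int py, int numSamples, std::mt19937& rng)
{
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    Colour sum{0.0f, 0.0f, 0.0f};
    for (int s = 0; s < numSamples; ++s)
    {
        // Offset the ray by a random sub-pixel position in [0,1)^2.
        Colour c = traceRay(px + jitter(rng), py + jitter(rng));
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return {sum.r / numSamples, sum.g / numSamples, sum.b / numSamples};
}
```

How well this average converges depends entirely on how those sub-pixel positions are distributed, which is what the rest of this article is about.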

Let’s take a quick look at an example of antialiasing in action. This first image is a test render of a simple checkerboard pattern. The render was done at 1/4 resolution and then zoomed up to 480 pixels wide using a point filter.

antialiased checkerboard

You can see that edges are a little jaggy but that’s just from zooming it up. The result if we only render a single sample per pixel is far worse…

aliased image

The aim of a good 2D sampling algorithm is to get close to the reference image with the minimum number of samples, and because wooscripter is an iterative raytracer I’d like the image quality to increase with every pass of the raytracer. If we can solve this problem, then the same principle can be applied to many other places in the raytracer that use this random sampling method, such as glossy surfaces, soft shadows, etc. Broadly speaking this solution is required for Monte Carlo methods, which is what we’re doing when we randomly sample and average the result.

Let’s take a single pixel from the image above and start to look at how we can distribute subsamples to give a good approximation of the average value. The following image shows a typical sampling problem we’d have in the checkerboard image, where the boundary of a black tile meets the boundary of a white tile. The image at the bottom left shows the pixel we’re sampling. The dots at the top left show a typical 256×256 plot of the first 256 random samples for this algorithm.

Completely random sampler

There are two other elements of this image that are interesting if we’re trying to evaluate sampling functions. To work out how good a sampling function is, I generate 1024 sequences of 256 2D vectors. These samples are then taken from the pixel image and the running average is measured as each sample is added. The red line shows how the average error in the estimate decreases as we add more samples. The text in the middle shows the exact error amount for the 1st, 4th, 16th, 64th and 256th samples.
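For reference, a minimal sketch of that measurement loop (samplePixelImage and referenceAverage are hypothetical helpers standing in for the test-pixel lookup and its true average value):

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Hypothetical helpers: samplePixelImage returns the black/white test pixel's value
// at a 2D position in [0,1)^2, referenceAverage is that pixel's true average value.
float samplePixelImage(float x, float y);
extern const float referenceAverage;

// Average the estimate's error at each sample count over many independent sequences.
std::vector<float> measureError(const std::vector<std::vector<std::pair<float, float>>>& sequences)
{
    const int numSamples = 256;
    std::vector<float> avgError(numSamples, 0.0f);
    for (const auto& seq : sequences)              // e.g. 1024 independent sequences
    {
        float sum = 0.0f;
        for (int i = 0; i < numSamples; ++i)
        {
            sum += samplePixelImage(seq[i].first, seq[i].second);
            float estimate = sum / float(i + 1);   // running average after i+1 samples
            avgError[i] += std::fabs(estimate - referenceAverage);
        }
    }
    for (float& e : avgError)
        e /= float(sequences.size());
    return avgError;
}
```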

This very primitive sampler shows a good general decrease in error, but by visual inspection you can see that the random distribution of points has left a number of large gaps, while in other places multiple samples fall very close together. This is going to impact the accuracy of our estimate.

I’m going to cover three approaches to reducing error in the rest of this article.

Fill in the gaps

Filling in the gaps is a simple modification to the vanilla sampler. Instead of just picking a random sample point, let’s generate two candidate sample points, and then pick the one that is furthest away from the existing samples.
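A minimal sketch of this best-candidate idea (the names are illustrative and don’t match RandomSampler.cpp exactly):

```cpp
#include <random>
#include <vector>

struct Sample { float x, y; };

// "Fill in the gap": for each new sample, try numCandidates random positions and
// keep the one whose nearest existing sample is furthest away.
std::vector<Sample> fillTheGap(int numSamples, int numCandidates, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    std::vector<Sample> samples;
    for (int i = 0; i < numSamples; ++i)
    {
        Sample best{0.0f, 0.0f};
        float bestDist = -1.0f;
        for (int c = 0; c < numCandidates; ++c)
        {
            Sample candidate{uniform(rng), uniform(rng)};
            // Squared distance from this candidate to its nearest existing sample.
            float nearest = 1e30f;
            for (const Sample& s : samples)
            {
                float dx = candidate.x - s.x, dy = candidate.y - s.y;
                float d = dx * dx + dy * dy;
                if (d < nearest) nearest = d;
            }
            if (nearest > bestDist)
            {
                bestDist = nearest;
                best = candidate;
            }
        }
        samples.push_back(best);
    }
    return samples;
}
```

Bumping numCandidates from 2 up to 4 or 64 gives the variants discussed below; the inner loop over every existing sample is also why generation time climbs so quickly as the candidate count grows.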

Using this simple algorithm we now get the following results for our sampler.

Fill the gap, 2 candidates

At the top you can see the new distribution of samples to the right of the old distribution. It’s noticeable that while there are still some holes in the distribution, we no longer have as many instances of neighbouring sample points that are very close together.

On the noise chart, the green line shows our new sampling algorithm while the red line shows the original pure random sampler. We can see straight away that the amount of error in the estimate is significantly less, somewhere around 30% lower looking at the numbers in the middle panel. Generation time has rocketed up to two seconds though, using my crudely optimised approach. We can take this further and use 4 candidate points per new sample, which gives us the following blue line.

Fill in the gap, 4 candidates

This is showing a really decent improvement, with the amount of error in the estimate now halved for the same number of samples. What happens if we go even further and use 64 candidate points?

Fill in the gap, 64 candidates

Well now we seem to have started to hit the limits of this algorithm, and the 60-second generation time is getting pretty damn expensive in terms of performance. That said, the results are significantly better than the vanilla sampling algorithm. To put this in perspective, the 256th sample using the naive approach leaves an error of 0.023. With the 64 candidate “fill in the gap” approach, we reach the same level of error in ~40 samples. That’s roughly six times fewer samples for the same level of noise!

Moving on from “fill in the gap”, let’s take a look at another couple of sampling algorithms that are mentioned frequently in the literature.

Jitter Grids

The principle behind a jitter grid is that we’ll evenly distribute our samples by applying an imaginary grid to our sampler and limiting it to one sample per grid cell. For 256 samples we can use a simple 16×16 grid for the constraint. The result is interesting: the eventual level of noise is even better than the best “fill in the gap” strategy, but the rate of improvement is far slower than with the fill in the gap approaches.
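A sketch of the basic jitter grid; here the cells are simply visited in scanline order (my assumption, but it illustrates why the incremental estimate improves slowly until the whole grid has been covered):

```cpp
#include <random>
#include <vector>

struct Sample { float x, y; };

// One random sample inside each cell of a gridSize x gridSize grid
// (gridSize = 16 gives the 256 samples used in the article).
std::vector<Sample> jitterGrid(int gridSize, std::mt19937& rng)
{
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    std::vector<Sample> samples;
    for (int gy = 0; gy < gridSize; ++gy)
        for (int gx = 0; gx < gridSize; ++gx)
            samples.push_back({(gx + jitter(rng)) / gridSize,
                               (gy + jitter(rng)) / gridSize});
    return samples;
}
```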

jitter, 16×16

We can blend a jitter solution with the “fill in the gap” principle to improve the convergence rate, but there are better ways of jittering samples which can achieve even better results.

First off, let’s consider hierarchical jitter. In this case we start by sampling a 2×2 grid (4 samples), and then move up to a 4×4 grid (16 samples), followed by an 8×8 grid (64 samples) and finally a 16×16 grid (256 samples).

Hierarchical jitter

This algorithm reaches far better estimates on the 4, 16, 64 and 256 sample boundaries as shown above, but note that our error actually increases as we add samples above these boundary points. This is an entirely unwanted artefact: in those regions, adding more samples makes our estimate less accurate.

How do we get around this so that we can consistently improve the quality of the estimate? My first thought as to why the error increases was that we’re randomly distributing the new samples and hence introducing unwanted bias into the sampling. To improve this, why not simply resample a 2×2 block at a time, which should lead to an improved estimate? This seems logical, but sadly the new result just skews the error levels even more!

Grid jitter

There is a stateful way to improve this algorithm (downweight the new samples by a factor of 4 each time you split a square), but it makes the accumulation slightly more complex and I think I’ll just keep this around for an importance sampling article one day in the future!

However there is a neat way to keep the error going downwards, and that is to continually fit each new set of 4 samples to the 2×2 distribution grid, each new set of 16 samples to the 4×4 grid, and so on. Effectively we’re running all grid constraints simultaneously for every sample generated by the algorithm. This gives a very solid result…
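One way to implement this, sketched below with simple rejection sampling (the real RandomSampler.cpp may well do it differently), is to track which cells of each grid have been used in the current block of samples and only accept candidates that satisfy every level at once:

```cpp
#include <random>
#include <vector>

struct Sample { float x, y; };

// Iterative jitter sketch: every run of 4 samples covers the 2x2 cells, every run
// of 16 covers the 4x4 cells, and so on up to 16x16 over the full 256 samples.
std::vector<Sample> iterativeJitter(std::mt19937& rng)
{
    const int gridSizes[4] = {2, 4, 8, 16};
    std::vector<bool> used[4];
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    std::vector<Sample> samples;

    for (int i = 0; i < 256; ++i)
    {
        // Reset any level whose block of gridSize^2 samples has just completed.
        for (int level = 0; level < 4; ++level)
        {
            int blockSize = gridSizes[level] * gridSizes[level];
            if (i % blockSize == 0)
                used[level].assign(blockSize, false);
        }
        // Propose random points until one lands in a free cell at every level.
        while (true)
        {
            Sample s{uniform(rng), uniform(rng)};
            bool ok = true;
            for (int level = 0; level < 4 && ok; ++level)
            {
                int g = gridSizes[level];
                if (used[level][int(s.y * g) * g + int(s.x * g)]) ok = false;
            }
            if (ok)
            {
                for (int level = 0; level < 4; ++level)
                {
                    int g = gridSizes[level];
                    used[level][int(s.y * g) * g + int(s.x * g)] = true;
                }
                samples.push_back(s);
                break;
            }
        }
    }
    return samples;
}
```

By construction there is always at least one free cell at every level for the next sample, so the rejection loop does terminate, although it slows down towards the end of each block.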

Iterative Jitter

Poisson Sampling

The final approach I’ll discuss is Poisson sampling, often referred to as “blue noise” in the literature for a reason that I have been unable to discover…

To generate basic Poisson noise we use an algorithm called Poisson disc sampling. In a nutshell, we add sample positions sequentially while ensuring that two samples are never within a specified distance of each other. There are lots of ways to optimise this algorithm using exotic techniques like Voronoi diagrams, but we’re only generating 256 2D samples so there’s no need to overcomplicate it.
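A plain dart-throwing sketch of this (parameter names are illustrative; too large a minDist for the requested sample count would make it loop forever):

```cpp
#include <random>
#include <vector>

struct Sample { float x, y; };

// Poisson disc sampling by dart throwing: accept a random candidate only if it is
// at least minDist away from every sample accepted so far.
std::vector<Sample> poissonDisc(int numSamples, float minDist, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    std::vector<Sample> samples;
    while (int(samples.size()) < numSamples)
    {
        Sample candidate{uniform(rng), uniform(rng)};
        bool tooClose = false;
        for (const Sample& s : samples)
        {
            float dx = candidate.x - s.x, dy = candidate.y - s.y;
            if (dx * dx + dy * dy < minDist * minDist) { tooClose = true; break; }
        }
        if (!tooClose)
            samples.push_back(candidate);
    }
    return samples;
}
```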

The vanilla Poisson disc sampler does a decent job of producing a fair-quality result which incrementally improves, although once again, the rate of improvement is slower than some of the approaches we’ve already discussed above.

Poisson sampling

It’s easy to see that, as samples are added, the fixed distance constraint is too loose to ensure an even distribution at low numbers of samples. However, there is a way round this problem: we can use relaxation techniques on the distance constraint. For example, drop the first sample on the grid and start our relaxation distance at 0.5. Generate additional samples until one is found that’s further than 0.5 from the existing sample points. If we test a predetermined number of candidates without success (100 in this example), we reduce the constraint and try again.
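A sketch of the relaxing variant, following the description above (the shrink factor of 0.75 is my own guess rather than a value from the article):

```cpp
#include <random>
#include <vector>

struct Sample { float x, y; };

// Relaxing Poisson: start with a generous distance constraint and shrink it
// whenever maxAttempts candidates in a row fail to satisfy it.
std::vector<Sample> relaxingPoisson(int numSamples, std::mt19937& rng)
{
    std::uniform_real_distribution<float> uniform(0.0f, 1.0f);
    std::vector<Sample> samples;
    float minDist = 0.5f;            // initial relaxation distance from the article
    const int maxAttempts = 100;     // candidates to try before relaxing the constraint

    samples.push_back({uniform(rng), uniform(rng)});   // drop the first sample anywhere
    while (int(samples.size()) < numSamples)
    {
        bool placed = false;
        for (int attempt = 0; attempt < maxAttempts && !placed; ++attempt)
        {
            Sample candidate{uniform(rng), uniform(rng)};
            bool tooClose = false;
            for (const Sample& s : samples)
            {
                float dx = candidate.x - s.x, dy = candidate.y - s.y;
                if (dx * dx + dy * dy < minDist * minDist) { tooClose = true; break; }
            }
            if (!tooClose) { samples.push_back(candidate); placed = true; }
        }
        if (!placed)
            minDist *= 0.75f;        // relax the constraint and try again
    }
    return samples;
}
```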

Relaxing Poisson

This results in a rapid decrease in noise without any of the power-of-2 aliasing we see in the jittered approaches discussed above.

To make sure that these results aren’t too skewed, I’ve also run different subpixel patterns for each of the best approaches above (64 candidate fill the gap [green], relaxing Poisson [blue], iterative jitter [orange]) with the following results.

Best algorithm comparison

Note how the non-power-of-2, axis-aligned grid actually causes trouble for the jitter algorithm, but on the other two test patterns the jitter algorithm outperforms the other two approaches. This makes it a fairly tough call on knowing which to choose. I’ve added the generation times at the top: 64 candidate fill the gap takes a minute to generate, the relaxing Poisson approach takes 7.8 seconds, and the iterative jitter algorithm takes a mere 0.2s. So at this point I’d be leaning towards the jitter approach.

The real question though is how do these algorithms perform in real life? The next image shows a simple box render using glossy reflections, soft shadows and area lighting. I’ve zoomed this in by a factor of two and anti-aliasing is off!

Real world noise comparison

The top image is clearly far noisier, and this is indeed the default random sampler. The other two are closer to each other in terms of quality, although for my money I’d choose the middle of the three. The middle one is the relaxing Poisson filter, so that’s the one for me! Now I just need to work out a way of precaching this for wooscripter so you don’t need to generate 500K samples every time the application loads!!

If you’d like to investigate these algorithms and others in more detail, then you can download the source code for my noise comparison app on github at https://github.com/dom767/wootracer. The app is called NoiseAnalysis, and the samplers are in RandomSampler.cpp. Any other suggestions for iterative sampling algorithms please post away! :)
