# Reflections

So there’s quite a lot that I haven’t got round to discussing yet, including some of the very basic things that take a raytracer from a lone-billiard-ball simulator to a system capable of rendering many billiard balls in interesting ways.

The most obvious big win that you get with a raytracer is the ability to calculate reflections within the scene.

So far we’ve talked about light rays which are cast from the camera into the scene. When they hit an object we work out the normal to the surface and do some basic lighting calculations. That gets us as far as this…

Image of raytraced spheres with no reflection
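As a quick aside, those “basic lighting calculations” can be as simple as Lambertian (cosine) shading. Here’s a minimal sketch in Python; the helper names, tuple-based vectors, and single grey-value brightness are my own illustration rather than code from this series:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light, surface_brightness):
    # Diffuse shading: brightness scales with the cosine of the angle
    # between the surface normal and the direction to the light,
    # clamped at zero so surfaces facing away from the light go black.
    return max(0.0, dot(normal, to_light)) * surface_brightness

# A surface facing the light head-on gets full brightness:
print(lambert((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), 0.8))  # -> 0.8
```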

To make things look a little bit prettier we can also simulate a mirror surface and see what the eye-ray will hit next. To do this we calculate the reflection vector at the surface by mirroring the incident eye-ray around the normal. Don’t worry if that sounds complicated, the mathematics are really very simple indeed. We have two vectors so far, the incident vector and the normal vector. To calculate the reflection vector we do this…

reflection = incident - (incident.normal * 2 * normal)

The incident.normal is another instance of the lovely dot product function introduced in the last article. The derivation of this formula is explained in detail on this page.
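In code the whole thing is a one-liner wrapped around that dot product. A minimal sketch in Python, assuming unit-length normals and tuples as vectors (both my choices, not anything from this series):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(incident, normal):
    # Mirror the incident direction about the (unit-length) surface normal:
    # R = I - 2 (I . N) N
    d = dot(incident, normal)
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

# A ray heading straight down bounces straight back up
# off an upward-facing surface:
print(reflect((0.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # -> (0.0, 1.0, 0.0)
```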

Now that we’ve calculated the direction of the reflection vector, we fire another ray into the scene. This time, instead of starting at the eye, we start at the collision point and follow the reflection.

There are a couple of gotchas that you have to solve before you get good results. Firstly, thanks to floating-point error, there’s a good chance that your raytracer will calculate a collision point that is slightly *inside* the sphere we just hit. If that happens you’re likely to immediately count yourself as shadowed (a black reflection), and you’ll probably send the next reflection ray off inside the sphere too. The usual fix is to nudge the ray’s origin a tiny distance along the surface normal, or to ignore any intersection closer than some small epsilon.

And that’s the second thing to watch out for. It’s quite easy to calculate reflections forever, or at least until the computer stops working, which just means that your raytracer will never finish and the CPU will sit there burning away. So the next thing to do is to bound the recursion. Keep a count of how many reflections we’ve traced: each time we calculate a reflection we add one to the count and pass it on, giving up once we reach a maximum depth. We can also track how much each bounce can still contribute to the final pixel; once a reflection’s brightness falls below 0.1% of the colour value, we stop calculating reflections.
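Putting both fixes together, the recursive trace might look something like the sketch below. The scene interface (a `nearest_hit` that returns point, normal, local colour, and reflectivity), the `MirrorBox` test scene, and the single-grey-value colours are all my inventions to make the sketch runnable; they aren’t the article’s actual code.

```python
EPSILON = 1e-4            # nudge distance to avoid self-intersection
MAX_DEPTH = 10            # hard cap on the number of bounces
MIN_CONTRIBUTION = 0.001  # the 0.1% brightness cutoff

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def trace(scene, origin, direction, depth=0, contribution=1.0):
    """Return a grey value for one ray, recursing along mirror reflections."""
    if depth > MAX_DEPTH or contribution < MIN_CONTRIBUTION:
        return 0.0  # too deep or too faint to matter
    hit = scene.nearest_hit(origin, direction)
    if hit is None:
        return scene.background
    point, normal, local_colour, reflectivity = hit
    colour = (1.0 - reflectivity) * local_colour
    if reflectivity > 0.0:
        d = dot(direction, normal)
        reflected = tuple(v - 2.0 * d * n for v, n in zip(direction, normal))
        # Start the new ray just off the surface so we don't
        # immediately re-hit the point we're standing on.
        start = tuple(p + EPSILON * n for p, n in zip(point, normal))
        colour += reflectivity * trace(scene, start, reflected,
                                       depth + 1, contribution * reflectivity)
    return colour

class MirrorBox:
    """Toy scene: two parallel mirrors at y=0 and y=1, both 90% reflective."""
    background = 0.0
    reflectivity = 0.9

    def nearest_hit(self, origin, direction):
        dy = direction[1]
        if dy == 0.0:
            return None  # travelling parallel to the mirrors
        # Pick whichever mirror the ray is heading towards.
        plane_y, normal = ((0.0, (0.0, 1.0, 0.0)) if dy < 0
                           else (1.0, (0.0, -1.0, 0.0)))
        t = (plane_y - origin[1]) / dy
        point = tuple(o + t * d for o, d in zip(origin, direction))
        return point, normal, 1.0, self.reflectivity
```

Bouncing a ray between the two mirrors would recurse forever without the guards; here it stops cleanly at the depth cap, and duller surfaces get cut off even sooner by the 0.1% contribution test.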

Now that we’ve dealt with those two unpleasant gotchas, we should be able to turn our earlier scene into something like this…

Image of raytraced spheres with reflections enabled

I’ll admit it’s not exactly going to blow anyone’s mind, but when I first saw reflections in a raytracer I was pretty impressed. Showing my age?

Next up I’ll talk about floating point frame buffers and exposure functions.