The Cornell box is a simple geometric scene used to test raytracer accuracy. I finally got round to setting up my own Cornell box so I can start to test global lighting solutions.
The original Cornell Box was developed by Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, and Bennett Battaile at Cornell University in 1983/1984. It’s a simple scene which can be photographed using a real camera, and then compared to the output from a raytracing engine.
The basic properties are:
- A white floor
- A white ceiling
- A white back wall
- A white front wall
- A red left wall
- A green right wall
- A light source (usually a cuboid/square light source on the ceiling)
For my Cornell box I’ve left out the front wall and the light source so I can play around with different scenes. And also because I don’t yet have a cuboid/square light source for the ceiling!
The other thing I didn’t have before working on the Cornell box was a square primitive, so this was a good excuse to add an additional raytracing primitive to the core engine.
Here’s my simple code for the ray/square intersection test.
```cpp
// fast hit test: cull rays approaching the square from behind
if (mNormal.Dot(rRayContext.m_Ray.GetDirection()) > 0)
    return false;

// distance along the ray to the square's plane
float a = -mNormal.Dot(rRayContext.m_Ray.GetStart()) / mNormal.Dot(rRayContext.m_Ray.GetDirection());

// ray behind the origin?
if (a <= 0)
    return false;

out_Response.mHitPosition = rRayContext.m_Ray.GetStart() + rRayContext.m_Ray.GetDirection() * a;

// square test: inside the unit square centred at the origin?
if (out_Response.mHitPosition.x < -0.5f || out_Response.mHitPosition.x > 0.5f ||
    out_Response.mHitPosition.y < -0.5f || out_Response.mHitPosition.y > 0.5f)
    return false;

// set the normal
out_Response.mNormal = mNormal;
return true;
```
Nice and simple. I’ve already written a generalised transformation engine as detailed here, so no need to worry about anything but a unit square centred at the origin.
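If you want to play with the maths outside an engine, the same test looks like this as a standalone sketch. The struct and function names here are hypothetical, plain stand-ins rather than my engine's actual classes; the square sits in the XY plane with its normal along +Z.

```cpp
#include <cmath>

// Minimal standalone ray/unit-square intersection (illustrative names only).
// The square is the unit square centred at the origin in the z = 0 plane,
// with normal (0, 0, 1).
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true on a hit; writes the hit position and the distance along the ray.
bool HitUnitSquare(const Vec3& start, const Vec3& dir, Vec3& hit, double& t)
{
    const Vec3 normal{0.0, 0.0, 1.0};
    const double denom = dot(normal, dir);
    if (denom >= 0.0)                   // back-facing or parallel to the plane
        return false;
    t = -dot(normal, start) / denom;    // solve for the z = 0 plane
    if (t <= 0.0)                       // plane is behind the ray origin
        return false;
    hit = {start.x + dir.x * t, start.y + dir.y * t, start.z + dir.z * t};
    return hit.x > -0.5 && hit.x < 0.5 && hit.y > -0.5 && hit.y < 0.5;
}
```

In the engine itself the ray is first transformed into the square's object space, which is why this one unit-square test is all that's needed for every wall.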
Once I’d put this together, I built the basic Cornell box out of squares, whacked a couple of spheres and a light inside, and hey presto, Cornell render no. 1…
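For anyone curious how five unit squares become a box, here's a self-contained sketch of the wall placements. The types and names are illustrative, not my engine's scene format, and I've used a 1x1x1 box centred at the origin for simplicity:

```cpp
#include <cstddef>

// Sketch of the five Cornell walls (front wall omitted, as in my scene),
// each a unit square rotated to face inward and translated half a unit
// along its axis. Illustrative data layout, not the engine's actual API.
struct Vec3   { double x, y, z; };
struct Colour { double r, g, b; };

struct Wall {
    Vec3   position;   // centre of the square in world space
    Vec3   normal;     // points into the box interior
    Colour colour;
};

const Colour kWhite{1, 1, 1}, kRed{1, 0, 0}, kGreen{0, 1, 0};

const Wall kCornellWalls[] = {
    {{ 0.0, -0.5,  0.0}, { 0,  1,  0}, kWhite},  // floor
    {{ 0.0,  0.5,  0.0}, { 0, -1,  0}, kWhite},  // ceiling
    {{ 0.0,  0.0, -0.5}, { 0,  0,  1}, kWhite},  // back wall
    {{-0.5,  0.0,  0.0}, { 1,  0,  0}, kRed},    // left wall
    {{ 0.5,  0.0,  0.0}, {-1,  0,  0}, kGreen},  // right wall
};
```

Each entry maps straight onto a transform for the unit square: a rotation that aligns +Z with the wall's normal, then a translation to the wall's position.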
And to show just how far I’ve got to go, I’ve also rendered a version of the scene with the very basic global lighting algorithm I’d developed to learn about importance sampling.
The only area light source available to me at the moment is my omnidirectional light source, which casts rays into the scene from all directions. Most of these end up hitting the outside of the box, and that’s why this image is so noisy. I had to cast hundreds of rays just to get 50-60 of them to land inside the box.
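To put a rough number on that waste, here's a standalone sketch (hypothetical, nothing to do with my engine's light code) that treats "directions which enter the box" as a cone and counts how many uniformly distributed directions land inside it:

```cpp
#include <cmath>
#include <random>

// Draw uniform directions on the sphere and count how many fall inside a
// cone of half-angle 'halfAngle', a stand-in for the directions that
// actually reach the box interior. The expected fraction is the cone's
// share of the sphere's solid angle: (1 - cos(halfAngle)) / 2.
double FractionInsideCone(double halfAngle, int samples, unsigned seed)
{
    std::mt19937 rng(seed);
    // The z-coordinate of a uniform direction on the sphere is itself
    // uniform in [-1, 1] (Archimedes' hat-box theorem), which is all the
    // cone test needs.
    std::uniform_real_distribution<double> zDist(-1.0, 1.0);
    const double cosLimit = std::cos(halfAngle);
    int inside = 0;
    for (int i = 0; i < samples; ++i)
        if (zDist(rng) > cosLimit)      // cone is centred on the +z axis
            ++inside;
    return static_cast<double>(inside) / samples;
}
```

Even a generous 60° half-angle only catches about a quarter of the rays; a real box opening subtends far less, which is why so many hundreds of rays were needed for 50-60 useful hits.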
The other thing the sharp-eyed may have noticed is that there’s no colour transfer from the red/green walls to the white back wall. Diffuse-to-diffuse light transfer is one of the hardest things to get right in a computer render, because light leaving a diffuse surface scatters over the whole hemisphere, so there’s an enormous number of paths by which it can reach another diffuse surface. It can be done though, and here’s a POV-Ray image showing exactly that.
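When I do get to diffuse interreflection, those hemisphere directions will need sampling, and the standard trick for Lambertian (ideal diffuse) surfaces is cosine-weighted sampling. Here's a minimal sketch via Malley's method (sample the unit disk uniformly, project up onto the hemisphere); names and types are illustrative, not my engine's:

```cpp
#include <algorithm>
#include <cmath>

// Cosine-weighted hemisphere sampling via Malley's method. Directions near
// the surface normal (+z here) are sampled more often, matching the
// Lambertian cosine falloff and cutting variance in diffuse bounce estimates.
struct Vec3 { double x, y, z; };

Vec3 CosineSampleHemisphere(double u1, double u2)  // u1, u2 uniform in [0, 1)
{
    const double kPi = 3.14159265358979323846;
    const double r   = std::sqrt(u1);              // radius on the unit disk
    const double phi = 2.0 * kPi * u2;             // azimuth angle
    return { r * std::cos(phi),
             r * std::sin(phi),
             std::sqrt(std::max(0.0, 1.0 - u1)) }; // z = cos(theta)
}
```

A nice property: the z-component of these samples averages 2/3, exactly the mean cosine of a Lambertian lobe, so no extra weighting is needed in the estimator.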
Note the red and green sides of the cubes visible in the render. My raytracer isn’t going to do that for quite some time because I’ve got other fish to fry first. Like a smaller area light source, the sphere light, which I’m already working on. Check back soon for a detailed article.
So often I finish an article, read through it, and then have a eureka moment. For this article the eureka moment was when I started to wonder why the Cornell render had such subdued colours. Well, for my Cornell box I used:
White : 1, 1, 1
Red : 1, 0, 0
Green : 0, 1, 0
Turns out that a Cornell scene definition I found online, attributed to Chris Wyman, uses the following colours instead:
White : 0.76, 0.75, 0.5
Red : 0.63, 0.06, 0.04
Green : 0.15, 0.48, 0.09
Which gives quite a different result even in my own system:
So for now that’s it; I guess next I’ll be looking at why exactly the ceiling is black in the corners!