I started my initial investigation into rendering 3D Mandelbrot fractals a few weeks ago, and one of the first things I encountered was the theory of distance estimation functions. Needless to say I got completely sidetracked and have now spent a couple of weeks researching this branch of raytracing and building functionality into Wooscripter to support it.

To understand the value of a distance estimation function you have to start with a simple idea called raymarching. Raymarching is a basic way of working out where the surface of an object is. Imagine that we have a function inside() which tells us whether or not a point is inside an object. To find the surface of an object we can simply step along a ray, testing whether inside() returns true. As soon as it does, we know we’ve found the boundary of the object.

As an example, imagine a sphere. We can compute exact intersections by solving a standard quadratic equation, but we can also easily write a method that tells us whether we're inside the surface. The simple version of this function is:

bool inside(Vec point) { return (mag(point - centre) < radius); }

Now let's take a ray and march it up to the surface of the sphere:

Vec position = ray.start;
while (!inside(position)) {
  position += 0.1 * ray.direction;
}
return position; // this is close to the surface of the object

A proper raymarching algorithm will probably finish with a binary search to pin down the surface position more accurately, but you get the idea. One weakness of this approach is the 0.1 step size sitting inside the formula, which is a bit of a guess right now. If we make the step too big we might step right across the sphere; if we make it too small we'll spend ages checking points miles away from it.
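That binary-search refinement can be sketched as a simple bisection between the last point outside the object and the first point inside it. This is my own Python sketch of the idea, not Wootracer's code; the function names are invented:

```python
def refine(inside, outside_pt, inside_pt, iterations=16):
    """Bisect between a point known to be outside the object and one
    known to be inside, homing in on the surface crossing."""
    for _ in range(iterations):
        mid = tuple((a + b) / 2 for a, b in zip(outside_pt, inside_pt))
        if inside(mid):
            inside_pt = mid      # crossing is between outside_pt and mid
        else:
            outside_pt = mid     # crossing is between mid and inside_pt
    return inside_pt
```

Each iteration halves the uncertainty, so sixteen iterations shrink a 0.1 step down to sub-micron accuracy.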

The way to solve this is to use a different function to estimate how far the surface is from the current position, and use this as the multiplier inside our raymarching algorithm. This function is often referred to as a distance estimation function, and should return a lower bound on the distance from the point to the object.

With a sphere the distance estimation function is very simple indeed. We simply take the distance of the point from the centre of the sphere and subtract the sphere radius. If the ray is pointing straight at the centre of the sphere then this will take us directly to the surface. If the ray is grazing the sphere, then this will step us a lot closer, but still leave us with a gap to the surface.

Our raymarching algorithm has now become

Vec position = ray.start;
while (!inside(position)) {
  position += DE(position) * ray.direction;
}
return position; // this is close to the surface of the object

float DE(Vec position) { return mag(position - centre) - radius; }

There's one further thing we can do to simplify this. Right now we have an inside() function, but why bother with it when we could just do a proximity test instead? To do that, our loop simply terminates when the DE function returns a small value.

Vec position = ray.start;
float de;
do {
  de = DE(position);
  position += de * ray.direction;
} while (de > 0.01);
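Here's a minimal runnable version of that loop for the sphere, to show the whole thing hanging together. This is a sketch under my own assumptions, not Wootracer's actual code; the names sphere_de and raymarch, and the step cap, are mine:

```python
import math

def sphere_de(p, centre=(0.0, 0.0, 0.0), radius=1.0):
    # distance estimate: distance from the point to the centre, minus the radius
    return math.dist(p, centre) - radius

def raymarch(de, start, direction, epsilon=0.01, max_steps=100):
    position = list(start)
    for _ in range(max_steps):
        d = de(tuple(position))
        if d < epsilon:
            return tuple(position)  # close to the surface of the object
        # step along the ray by the estimated distance
        for i in range(3):
            position[i] += d * direction[i]
    return None  # ray missed the object entirely
```

Marching from (0, 0, -5) straight at a unit sphere at the origin, the very first step lands on the surface at z = -1, exactly as described above for a ray aimed at the centre.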

There is one more thing that we need before we can render the object, and that's to calculate the normal at the point where we touch the sphere. It turns out this is far simpler than you'd imagine. The easy way to do it is to calculate the gradient of the distance estimation function at the point in question. The gradient points directly away from the surface, which is exactly the normal we're after.

const float normalTweak = mMinimumDistance * 0.01f;
out_Response.mNormal = Vec(
  DE(p + Vec(normalTweak, 0, 0)) - DE(p - Vec(normalTweak, 0, 0)),
  DE(p + Vec(0, normalTweak, 0)) - DE(p - Vec(0, normalTweak, 0)),
  DE(p + Vec(0, 0, normalTweak)) - DE(p - Vec(0, 0, normalTweak)));
out_Response.mNormal.Normalise();
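The same central-difference trick, sketched in Python so we can sanity-check that the gradient really does line up with the sphere's normal (the names here are mine, not Wootracer's):

```python
import math

def sphere_de(p):
    # unit sphere at the origin
    return math.dist(p, (0.0, 0.0, 0.0)) - 1.0

def estimate_normal(de, p, eps=1e-4):
    # central differences of the DE approximate its gradient,
    # which points along the surface normal
    n = [de((p[0] + eps, p[1], p[2])) - de((p[0] - eps, p[1], p[2])),
         de((p[0], p[1] + eps, p[2])) - de((p[0], p[1] - eps, p[2])),
         de((p[0], p[1], p[2] + eps)) - de((p[0], p[1], p[2] - eps))]
    m = math.sqrt(sum(c * c for c in n))
    return tuple(c / m for c in n)
```

For a sphere the normal should be purely radial: at the point (0, 0, -1) the estimated normal comes out as (0, 0, -1), pointing straight back at the camera.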

And to prove this principle works here’s a raytraced scene with two spheres. The red sphere is rendered using a traditional approach, and the green sphere is rendered using a distance estimation function.

Can you tell the difference? Me neither.

Just for comparison's sake I've done a character count on my standard sphere intersection routine and the sphere distance estimation function. The sphere intersection takes 1321 characters. The sphere DE is 106 characters. That's over ten times smaller, which makes it far faster for me to add additional primitives as DE functions.

So Wootracer… how does it support distance estimation functions? In a nutshell, I've created a DE scripting language that lets you build up more complex distance fields. Distance fields are added to the scene using call(distance), while the distance estimation function is set using distancefunction("sphere(pos, vec(0,0,0), 1.0)"). sphere isn't the only function supported; I've also added…

min(mFunc1, mFunc2)
smin(mFunc1, mFunc2, mK)
smin3(mFunc1, mFunc2, mFunc3, mK)
max(mFunc1, mFunc2)
neg(mVal)
mag(mVec)
sub(mFunc1, mFunc2)
float(mRawFloat)
mandelbox(mPos, mIterations, mScale)
box(mPos, mDimensions, mOffset)
torus(mPos, mR1, mR2)
getx(mPos)
gety(mPos)
getz(mPos)
mul(mArg, mMul)
pos()
vec(mX, mY, mZ)
subv(mX, mX)
rotx(mPos, mRot)
roty(mPos, mRot)
rotz(mPos, mRot)
repx(mPos, mRep1)
repy(mPos, mRep1)
repz(mPos, mRep1)
repxyz(mPos, mRep1)

These are supported within a simple box from -1 to 1 on all three axes. Note that no operators are supported in DE expressions, so if you want to negate a value you need to use neg() rather than simply adding a minus sign in front. Arithmetic uses simple functions like mul() or sub().
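The post doesn't show what smin does internally, so as an assumption, here is the widely used polynomial "smooth minimum" for blending two distance fields, with k controlling the blend radius; Wootracer's smin may use a different formulation:

```python
def smin(a, b, k):
    # polynomial smooth minimum: behaves like min() when the two
    # distances differ by more than k, and rounds off the crease
    # between the two surfaces when they are within k of each other
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25
```

Plain min() unions two distance fields with a hard crease where they meet; smin() replaces that crease with a smooth fillet, which is what makes blobby, organic-looking shapes possible. Presumably smin3 applies the same blend across three fields, but that's a guess.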

So what can you do with these simple additional functions? Well, we've done the sphere; what else is possible?
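As a taste, here are sketches of two of the building blocks listed above: a torus distance estimator and repx-style repetition along one axis. These are common formulations of these functions, assumed rather than taken from Wootracer's source:

```python
import math

def torus_de(p, r1, r2):
    # distance to a torus lying in the xz-plane:
    # r1 = radius of the ring, r2 = radius of the tube
    q = (math.hypot(p[0], p[2]) - r1, p[1])
    return math.hypot(q[0], q[1]) - r2

def repx(p, rep):
    # one common repetition variant: fold x into a cell of width rep,
    # so a single DE evaluation covers infinitely many copies
    x = (p[0] % rep) - rep * 0.5
    return (x, p[1], p[2])
```

Feeding the output of repx into torus_de gives an infinite row of tori from a single primitive, which is the kind of thing that makes distance fields so much more expressive than explicit intersection routines.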

Next up I'll talk about folding and cutting space, and then shortly afterwards we'll be into the 3D Mandelbrots!

Nice. I was wondering when you'd add DE support.

Just getting started!