Ray Tracing using C++
===

In order to prepare for Studio 1, I have been working on a C++ ray tracer. This post is a walkthrough of the progress I have made so far. All of the images above were created using my program. The long-term goal is a fully functional volumetric renderer written in C++.

Writing Images
---

Before I cast my first ray, I had to write some code that would write out an image. Images are stored in computers as a list of pixels, with each pixel containing three values for the red, green, and blue color channels. I created a Pixel class that simply holds three floats and an Image class that holds an array of Pixel objects. The Image class also knows the width and height of the image, so the length of the array of Pixels is equal to width * height. The Pixels are stored in the array starting at the top left of the image. Since we are representing a 2D image with a flat 1D array, the x and y coordinates of a pixel are equal to index % width and index / width, respectively.
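A minimal sketch of that layout might look like this; the class names follow the text, but the exact members and accessors are assumptions:

```cpp
#include <cstddef>
#include <vector>

// Three float color channels per pixel, as described above.
struct Pixel {
    float r = 0.0f, g = 0.0f, b = 0.0f;
};

class Image {
public:
    Image(std::size_t width, std::size_t height)
        : m_width(width), m_height(height), m_pixels(width * height) {}

    // Pixels are stored row by row starting at the top left,
    // so coordinate (x, y) maps to index y * width + x.
    Pixel&       at(std::size_t x, std::size_t y)       { return m_pixels[y * m_width + x]; }
    const Pixel& at(std::size_t x, std::size_t y) const { return m_pixels[y * m_width + x]; }

    std::size_t width()  const { return m_width; }
    std::size_t height() const { return m_height; }

private:
    std::size_t m_width, m_height;
    std::vector<Pixel> m_pixels;
};
```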

While there are many libraries floating around the internet for writing out PNGs, TIFFs, or whatever else your heart desires, I opted to simply write out the raw pixel values as a raw image file, at least for the time being. That means the file contains no additional header information about the bit depth or dimensions of the image, so in order to view the image properly, the user must already know this information. To do this, I create a new character array and then store the r, g, b values of each pixel as individual characters. Note that this creates an interleaved image, where the r, g, b values for each pixel are written out together, as opposed to a planar image, where all the r values are written, then the g values, then the b values. Also note that the image has a bit depth of 8, which is why I am multiplying the r, g, b values by 255 and masking with 0xFF.
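The conversion step can be sketched as follows; the function names are assumptions, and I've added a clamp so out-of-range values don't wrap:

```cpp
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <vector>

// Scale each float channel in [0, 1] to 8 bits, masking with 0xFF.
// The channels stay interleaved: R, G, B bytes for each pixel together.
inline std::vector<std::uint8_t> packPixels(const std::vector<float>& rgb) {
    std::vector<std::uint8_t> bytes;
    bytes.reserve(rgb.size());
    for (float v : rgb) {
        int scaled = static_cast<int>(std::clamp(v, 0.0f, 1.0f) * 255.0f);
        bytes.push_back(static_cast<std::uint8_t>(scaled & 0xFF));
    }
    return bytes;
}

// Write the raw bytes with no header; the viewer must know the
// dimensions and bit depth.
inline void writeRaw(const char* path, const std::vector<std::uint8_t>& bytes) {
    std::ofstream out(path, std::ios::binary);
    out.write(reinterpret_cast<const char*>(bytes.data()),
              static_cast<std::streamsize>(bytes.size()));
}
```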

For the actual body of the ray tracer, we will loop through all of the pixels and calculate the coloration of each one. The first step is to convert the pixel coordinates to world coordinates: for instance, while your image may have a width of 1280, you may only be displaying the world coordinates from -5 to 5. This conversion is relatively simple. First, recenter the coordinates so the origin is at the center of the image: subtract half of the width from x, then divide by the width. This will give you a value ranging from -0.5 to 0.5. Then you simply multiply by the world-space width; for instance, a world-space width of 10 will display a range of -5 to 5. Once you have the x and y coordinates of the image centered at the origin, you can offset them based on any translations on your camera. I also multiply the y coordinate by -1 so that the image is written out right side up.
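Here is a sketch of that mapping; the parameter names are assumptions, and the camera offset is left out for clarity:

```cpp
// Recenter so the origin is in the middle of the image, normalize
// to [-0.5, 0.5], then scale by the world-space width.
inline float pixelToWorldX(int px, int imageWidth, float worldWidth) {
    return ((static_cast<float>(px) - imageWidth / 2.0f) / imageWidth) * worldWidth;
}

// Same idea for y, with an extra -1 so the image comes out right
// side up (pixel rows grow downward, world y grows upward).
inline float pixelToWorldY(int py, int imageHeight, float worldHeight) {
    return -((static_cast<float>(py) - imageHeight / 2.0f) / imageHeight) * worldHeight;
}
```

With an image width of 1280 and a world-space width of 10, pixel 0 maps to -5 and pixel 640 maps to 0, matching the range described above.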

I have found that working with coordinates where one unit in world space is equal to one pixel cuts down on banding.

Casting Rays
---

To get started, I used an orthographic projection, which means that all of the camera rays are parallel. This is easily converted to a perspective (conic) projection scheme later on. From each pixel, we shoot a ray and check to see if it intersects any of the objects in the scene. I set up the camera to be located -1000 units in the z direction and to be looking down the z axis, so each ray will start at the point (x, y, -1000) and point in the direction (0, 0, 1). The rays also include a min and max length, which are set to 0 and INFINITY, which has been defined to be the maximum float value.
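The ray setup might look like this; Vec3 is a stand-in for whatever vector type the renderer uses, and the member names are assumptions:

```cpp
#include <limits>

struct Vec3 { float x, y, z; };

// A ray with a valid parameter range [tMin, tMax], as described above.
struct Ray {
    Vec3  origin;
    Vec3  direction;
    float tMin;
    float tMax;
};

// Orthographic camera: every ray starts on the z = -1000 plane and
// points down +z, so all camera rays are parallel.
inline Ray makeOrthographicRay(float worldX, float worldY) {
    Ray r;
    r.origin    = {worldX, worldY, -1000.0f};
    r.direction = {0.0f, 0.0f, 1.0f};
    r.tMin      = 0.0f;
    r.tMax      = std::numeric_limits<float>::max();  // "INFINITY" in the text
    return r;
}
```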

Now we check to see if the rays have intersected any of the objects in our scene. Since ray-sphere intersections are the easiest to calculate, I began there. A sphere can be described by a center point and a radius, so I simply created a Sphere struct to store these values and hard-coded some spheres into the main function.
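The struct and a hard-coded scene can be sketched like this; the Vec3 type and member names are assumptions standing in for the renderer's own, and the sphere values are just placeholders:

```cpp
struct Vec3 { float x, y, z; };

// A sphere is fully described by a center point and a radius.
struct Sphere {
    Vec3  center;
    float radius;
};

// Hard-coded scene, as in the text: a few spheres set up by hand.
static const Sphere kScene[] = {
    {{0.0f, 0.0f, 0.0f}, 100.0f},
    {{250.0f, -50.0f, 40.0f}, 75.0f},
};
```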

Ray-Sphere Intersection
---

First, it simplifies the process to convert the ray to the sphere's object space, which means that the sphere will be centered at (0, 0, 0). A point P will lie on the surface of a sphere of radius R if P·P = R·R. We can substitute our ray, with an origin point O and direction vector D, for the point P, and then find the distance t at which our ray intersects the sphere by setting (O + tD)·(O + tD) = R·R. Expanding this gives us a quadratic equation:

(D·D)t^2 + 2(D·O)t + (O·O - R·R) = 0

which can be solved using the quadratic formula, with A = D·D, B = 2(D·O), and C = O·O - R·R, yielding two possible solutions t0 and t1.

Depending on the values of A, B, and C, the solutions might be imaginary numbers. If this is the case, then the ray does not intersect the sphere. We can easily determine whether the solutions will be imaginary based on whether or not the discriminant, B^2 - 4AC, is less than zero. If it is, then we simply stop right there and move on to the next ray. If the discriminant is non-negative, we calculate the two solutions and store them in variables. In order to cut down on precision errors, t0 and t1 can be calculated as:

q = -(B + sign(B) * sqrt(B^2 - 4AC)) / 2
t0 = q / A
t1 = C / q

If t1 is smaller than t0, we swap the values. Now we check to see if t1 is less than zero. If it is, we know that its intersection is behind the camera, and since t1 is larger than t0, we can conclude that t0 is also behind the camera. Since both intersections are behind the camera, we return false. If t1 is positive, then we check to see if t0 is negative. If t0 is not negative, then we pass it back and return true. Otherwise, we pass back t1 and return true.
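The steps above can be sketched as a single routine; the function and type names are assumptions, and the ray is assumed to already be in the sphere's object space:

```cpp
#include <cmath>
#include <utility>

struct Vec3f { float x, y, z; };

inline float dot(const Vec3f& a, const Vec3f& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns true and writes the nearest non-negative t if the ray
// O + t * D hits a sphere of radius R centered at the origin.
inline bool intersectSphere(const Vec3f& O, const Vec3f& D, float R, float& tHit) {
    float A = dot(D, D);
    float B = 2.0f * dot(D, O);
    float C = dot(O, O) - R * R;

    float discriminant = B * B - 4.0f * A * C;
    if (discriminant < 0.0f) return false;  // imaginary roots: ray misses

    // Numerically stable roots: avoid subtracting nearly equal values.
    float q = (B > 0.0f) ? -0.5f * (B + std::sqrt(discriminant))
                         : -0.5f * (B - std::sqrt(discriminant));
    float t0 = q / A;
    float t1 = C / q;
    if (t1 < t0) std::swap(t0, t1);

    if (t1 < 0.0f) return false;   // both hits behind the camera
    tHit = (t0 < 0.0f) ? t1 : t0;  // t0 negative: camera is inside the sphere
    return true;
}
```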

In our main loop, we iterate through all of the spheres in our scene, convert our ray to each sphere's object space, and check to see if it intersects the sphere. We keep track of which intersection is the closest to the camera, as well as which sphere the intersection belongs to, so that later we can determine the coloration based on the shading algorithm of that particular sphere. For now, we can simply assign one arbitrary color to pixels where the rays intersected spheres and another to pixels where they didn't.
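The closest-hit bookkeeping can be sketched as follows; the intersect callback stands in for the object-space test described above, and all names are assumptions:

```cpp
#include <cstddef>
#include <limits>

// The closest intersection found so far, and which sphere it belongs to.
struct Hit {
    bool        found;
    float       t;
    std::size_t sphereIndex;
};

// Walk every sphere, keeping the nearest hit. intersect(i, t) is
// expected to convert the ray to sphere i's object space, return true
// on a hit, and write the hit distance to t.
template <typename Intersect>
Hit nearestHit(std::size_t sphereCount, Intersect intersect) {
    Hit best{false, std::numeric_limits<float>::max(), 0};
    for (std::size_t i = 0; i < sphereCount; ++i) {
        float t;
        if (intersect(i, t) && t < best.t) {
            best = Hit{true, t, i};
        }
    }
    return best;
}
```

With the flat-color scheme described above, the pixel is then colored one way if `best.found` is true and another way if it is false.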

Contact: zephmann@gmail.com |