RAYTRACER
Computer Graphics ✶ C++
(Image via NVIDIA Developer)
Ray tracing is a technique for rendering images by simulating how light rays interact with objects in a scene. Unlike traditional rendering methods that approximate lighting, ray tracing follows the paths of individual light rays as they travel through a virtual environment, bounce off surfaces, and pass through transparent objects.
My implementation of the ray tracing algorithm uses the Phong illumination model with three types of light sources (directional lights, point lights, and spot lights) and four primitive shapes (cubes, spheres, cylinders, and cones).
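For reference, this is the standard textbook form of the Phong model: an ambient term plus diffuse and specular contributions summed over the lights (the notation here is the usual one, not identifiers from my code):

```latex
I = k_a i_a + \sum_{m \in \text{lights}} \Big[ k_d \,(\hat{L}_m \cdot \hat{N})\, i_{m,d} \;+\; k_s \,(\hat{R}_m \cdot \hat{V})^{\alpha}\, i_{m,s} \Big]
```

where k_a, k_d, and k_s are the material's ambient, diffuse, and specular coefficients; L̂_m, N̂, R̂_m, and V̂ are the unit vectors toward the light, along the surface normal, along the light's mirror reflection, and toward the viewer; α is the shininess exponent; and the i terms are the light intensities.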
How does this work?
A scene file is loaded, and information about the camera, objects, and lights is parsed (a possible in-memory layout is sketched after this overview).
↓
For each pixel in the image plane, a ray is generated that originates from the camera's position and passes through the pixel (see the ray-generation sketch below).
↓
Each object is checked for an intersection with the ray. Once the intersection closest to the camera is found, the color of the surface at that point is computed using the Phong shading model, taking into account the material properties of the object, the position and intensity of the lights, and the view direction (the shading sketch after this overview covers this step and the two below).
→ To determine whether the intersection point is in shadow, rays are cast from it to each light source. If one of these rays hits another object before reaching the light, the point is in shadow, and its color is adjusted accordingly.
→ If the object is reflective, a reflection ray is generated using the law of reflection, where the angle of incidence equals the angle of reflection. This ray is traced recursively into the scene, and its contribution is added to the pixel's color.
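To make those steps concrete, the sketches below walk through them in C++. First, the kind of data the parser might produce; the field names here are illustrative, not my scene file's actual schema:

```cpp
#include <vector>

// Minimal 3D vector; only what the later sketches need.
struct Vec3 {
    float x = 0, y = 0, z = 0;
};

struct Camera {
    Vec3 position;      // eye point
    Vec3 look, up;      // viewing direction and up vector
    float heightAngle;  // vertical field of view, in radians
};

enum class LightType { Directional, Point, Spot };

struct Light {
    LightType type;
    Vec3 color;
    Vec3 direction;  // used by directional and spot lights
    Vec3 position;   // used by point and spot lights
};

enum class ShapeType { Cube, Sphere, Cylinder, Cone };

struct Material {
    Vec3 ambient, diffuse, specular;
    float shininess = 1.0f;     // Phong specular exponent
    float reflectivity = 0.0f;  // 0 = matte, 1 = perfect mirror
};

struct Shape {
    ShapeType type;
    Material material;
    // A full renderer would also store a transform (e.g., a 4x4 matrix).
};

struct Scene {
    Camera camera;
    std::vector<Light> lights;
    std::vector<Shape> shapes;
};
```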
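Next, generating the ray for a pixel. This is one common formulation, with the image plane sitting one unit in front of the eye; the details depend on how the camera transform is stored:

```cpp
#include <cmath>

// Builds on the Vec3 and Camera types from the sketch above.
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) { return v * (1.0f / std::sqrt(dot(v, v))); }

struct Ray {
    Vec3 origin, direction;
};

// Shoot a ray through the center of pixel (i, j) in a width x height image.
Ray generateRay(const Camera& cam, int i, int j, int width, int height) {
    // Orthonormal camera basis: w points backward, u right, v up.
    Vec3 w = normalize(cam.look * -1.0f);
    Vec3 u = normalize(cross(cam.up, w));
    Vec3 v = cross(w, u);

    // Half-extents of the image plane one unit in front of the camera.
    float halfH = std::tan(cam.heightAngle / 2.0f);
    float halfW = halfH * (float(width) / float(height));

    // Map the pixel center into [-halfW, halfW] x [-halfH, halfH].
    float px = (2.0f * (i + 0.5f) / width - 1.0f) * halfW;
    float py = (1.0f - 2.0f * (j + 0.5f) / height) * halfH;

    // The ray leaves the eye and passes through that point on the plane.
    Vec3 dir = normalize(u * px + v * py + w * -1.0f);
    return Ray{cam.position, dir};
}
```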
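Finally, shading, shadow rays, and reflection can live in one recursive function. The intersect() helper is assumed rather than shown, since it is just the closest-hit loop described above; the Phong terms follow the equation given earlier:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <optional>

// Builds on Vec3, Ray, Scene, Light, Material, and the vector helpers
// from the sketches above.
Vec3 mul(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
float length(Vec3 v) { return std::sqrt(dot(v, v)); }

struct Hit {
    float t;            // distance along the ray
    Vec3 point;         // world-space intersection point
    Vec3 normal;        // unit surface normal at that point
    Material material;
};

// Assumed helper: tests the ray against every shape and returns the
// hit closest to the ray's origin, if any.
std::optional<Hit> intersect(const Scene& scene, const Ray& ray);

// Law of reflection: reflect direction d about unit normal n.
Vec3 reflect(Vec3 d, Vec3 n) { return d - n * (2.0f * dot(d, n)); }

Vec3 shade(const Scene& scene, const Ray& ray, int depth) {
    std::optional<Hit> hit = intersect(scene, ray);
    if (!hit) return Vec3{};  // missed everything: background color

    const Material& m = hit->material;
    Vec3 color = m.ambient;  // ambient term of the Phong model
    Vec3 toEye = normalize(ray.origin - hit->point);

    for (const Light& light : scene.lights) {
        // Direction and distance from the hit point toward this light.
        Vec3 toLight;
        float lightDist;
        if (light.type == LightType::Directional) {
            toLight = normalize(light.direction * -1.0f);
            lightDist = std::numeric_limits<float>::infinity();
        } else {  // point or spot (a spot light would also check its cone angle)
            Vec3 d = light.position - hit->point;
            lightDist = length(d);
            toLight = d * (1.0f / lightDist);
        }

        // Shadow ray, nudged off the surface to avoid "shadow acne".
        Ray shadowRay{hit->point + hit->normal * 1e-4f, toLight};
        std::optional<Hit> blocker = intersect(scene, shadowRay);
        if (blocker && blocker->t < lightDist) continue;  // in shadow

        // Diffuse and specular terms of the Phong model.
        float nDotL = std::max(0.0f, dot(hit->normal, toLight));
        Vec3 r = reflect(toLight * -1.0f, hit->normal);
        float rDotV = std::max(0.0f, dot(r, toEye));
        color = color + mul(m.diffuse, light.color) * nDotL
                      + mul(m.specular, light.color) * std::pow(rDotV, m.shininess);
    }

    // Recursive reflection bounce, capped by depth.
    if (m.reflectivity > 0.0f && depth > 0) {
        Ray bounce{hit->point + hit->normal * 1e-4f,
                   reflect(ray.direction, hit->normal)};
        color = color + shade(scene, bounce, depth - 1) * m.reflectivity;
    }
    return color;
}
```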
Results
Note the key features of this renderer: shadows, reflections, and texture mapping.
What's next?
I'm most interested in how I could optimize my raytracer, and there are endless ways to do that!
Many optimization techniques target the inner loop within the per-pixel loop, where each object is checked for an intersection. I could implement several techniques there, such as:
Bounding Volume Hierarchies (BVHs) to organize all the objects in the scene into one data structure, like Timothy Kay and James Kajiya did (SIGGRAPH 1986); there's a traversal sketch after this list
A variety of spatial subdivision techniques, e.g., grids, octrees, kd-trees
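To give a flavor of the BVH idea, here's a simplified sketch: axis-aligned bounding boxes in a binary tree whose leaves hold shape indices. Construction (say, median splits along the longest axis) is omitted; this shows only the ray/box slab test and the traversal that prunes whole subtrees:

```cpp
#include <algorithm>
#include <vector>

// Builds on the Vec3 and Ray types from the earlier sketches.
float axisOf(Vec3 v, int axis) { return axis == 0 ? v.x : (axis == 1 ? v.y : v.z); }

struct AABB {
    Vec3 min, max;
};

// Slab test: does the ray pass through the box somewhere in [tMin, tMax]?
bool hitAABB(const AABB& box, const Ray& ray, float tMin, float tMax) {
    for (int axis = 0; axis < 3; ++axis) {
        float invD = 1.0f / axisOf(ray.direction, axis);
        float t0 = (axisOf(box.min, axis) - axisOf(ray.origin, axis)) * invD;
        float t1 = (axisOf(box.max, axis) - axisOf(ray.origin, axis)) * invD;
        if (invD < 0.0f) std::swap(t0, t1);
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMax < tMin) return false;  // slab intervals don't overlap: miss
    }
    return true;
}

struct BVHNode {
    AABB bounds;                    // box enclosing everything below this node
    int left = -1, right = -1;      // child indices into the node array
    std::vector<int> shapeIndices;  // filled in only at leaves
};

// Collect the shapes a ray might hit: skip entire subtrees whose boxes
// the ray misses, and only gather candidates from the leaves we reach.
void traverse(const std::vector<BVHNode>& nodes, int nodeIndex,
              const Ray& ray, std::vector<int>& candidates) {
    const BVHNode& node = nodes[nodeIndex];
    if (!hitAABB(node.bounds, ray, 1e-4f, 1e30f)) return;
    if (node.left < 0) {  // leaf node
        candidates.insert(candidates.end(),
                          node.shapeIndices.begin(), node.shapeIndices.end());
        return;
    }
    traverse(nodes, node.left, ray, candidates);
    traverse(nodes, node.right, ray, candidates);
}
```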
There's some hardware parallelism I could implement, too, such as:
basic parallelism on the CPU, where each thread handles a block of pixels (sketched after this list)
single instruction, multiple data (SIMD) parallelism on a modern CPU (like ARM NEON or Intel AVX), to trace sets of rays simultaneously (also sketched below)
parallelism on the GPU (similar to the first bullet, but I'd have access to many more threads in comparison to the CPU)
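Thread-level parallelism is the easiest of the three, because every pixel is independent. Here's a minimal sketch with std::thread, where tracePixel() is an assumed stand-in for the whole pipeline above; I've given each thread an interleaved set of rows rather than one contiguous block, which balances the load when some regions of the image are more expensive than others:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Assumed helper: runs the full generate-ray / intersect / shade pipeline
// for one pixel and returns a packed RGBA value.
std::uint32_t tracePixel(int x, int y, int width, int height);

void renderParallel(std::vector<std::uint32_t>& framebuffer,
                    int width, int height) {
    unsigned numThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;

    // Every pixel is independent, so no locks are needed: the threads
    // write to disjoint rows of the framebuffer.
    for (unsigned t = 0; t < numThreads; ++t) {
        workers.emplace_back([&framebuffer, width, height, numThreads, t] {
            for (int y = int(t); y < height; y += int(numThreads))
                for (int x = 0; x < width; ++x)
                    framebuffer[std::size_t(y) * width + x] =
                        tracePixel(x, y, width, height);
        });
    }
    for (std::thread& w : workers) w.join();
}
```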
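And a taste of the SIMD idea, using Intel AVX/FMA intrinsics to test eight rays against one sphere at once. The rays sit in structure-of-arrays layout so each register holds one coordinate from all eight; this assumes normalized directions and compiling with -mavx -mfma, and an ARM NEON version would be structured the same way with different intrinsics:

```cpp
#include <immintrin.h>

// Eight rays stored structure-of-arrays style: each register holds one
// coordinate from all eight rays. Directions are assumed normalized.
struct RayPacket8 {
    __m256 ox, oy, oz;  // origins
    __m256 dx, dy, dz;  // directions
};

// Test the packet against one sphere (center c, radius r). Solves
// t^2 - 2*b*t + cq = 0 with b = dot(C-O, D) and cq = |C-O|^2 - r^2.
// Writes each lane's hit distance to tOut and returns a hit mask.
__m256 intersectSphere8(const RayPacket8& rays,
                        float cx, float cy, float cz, float r,
                        __m256* tOut) {
    // L = C - O for all eight rays at once.
    __m256 lx = _mm256_sub_ps(_mm256_set1_ps(cx), rays.ox);
    __m256 ly = _mm256_sub_ps(_mm256_set1_ps(cy), rays.oy);
    __m256 lz = _mm256_sub_ps(_mm256_set1_ps(cz), rays.oz);

    // b = dot(L, D) and cq = dot(L, L) - r^2, via fused multiply-adds.
    __m256 b  = _mm256_fmadd_ps(lx, rays.dx,
                _mm256_fmadd_ps(ly, rays.dy, _mm256_mul_ps(lz, rays.dz)));
    __m256 ll = _mm256_fmadd_ps(lx, lx,
                _mm256_fmadd_ps(ly, ly, _mm256_mul_ps(lz, lz)));
    __m256 cq = _mm256_sub_ps(ll, _mm256_set1_ps(r * r));

    // Discriminant and nearer root, computed for all eight lanes.
    __m256 disc = _mm256_fmsub_ps(b, b, cq);  // b*b - cq
    __m256 hasRoot = _mm256_cmp_ps(disc, _mm256_setzero_ps(), _CMP_GE_OQ);
    __m256 t = _mm256_sub_ps(
        b, _mm256_sqrt_ps(_mm256_max_ps(disc, _mm256_setzero_ps())));
    *tOut = t;

    // A lane hits if its discriminant is non-negative and t is in front.
    __m256 inFront = _mm256_cmp_ps(t, _mm256_setzero_ps(), _CMP_GT_OQ);
    return _mm256_and_ps(hasRoot, inFront);
}
```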
I'd love to explore one of these optimization techniques in the future. Or, even better, I could combine multiple techniques and get my offline raytracer up to real-time speed!