Project 4: Illuminate

Github Classroom assignment (same assignment as before)

You can find the algorithm for this project here.

Example scenefiles to render can be found in our scenefiles repository.


In Intersect, you saw a glimpse of what the raytracing algorithm can do. However, our rendered images are still rather plain. So, in Illuminate, you will build upon your previous code by implementing more complex lighting effects, shinier surfaces, shadows, and texture mapping. You're well on your way to having a fully-fledged raytracer!


For this project you will need to extend your raytracer with the following features:

Light Sources

Previously, you implemented the Phong illumination model with directional lights. In this project, you are required to implement the interaction between objects and two new types of light sources: point lights and spot lights. All light sources are parsed from the scenefiles that we provide. Each light source has light intensity values and its own specific parameters.

Point lights and spot lights also have attenuation coefficients, so you will need to account for attenuation in your Phong illumination calculations.

Point Lights

A point light is an infinitesimal point in space that emits light equally in all directions. You can understand its characteristics by imagining a light bulb whose volume is condensed to a single point. This light source's specific parameter is its position in space.

Spot Lights

A spot light is a point in space that emits light in a finite cone of directions. A good example of a spot light in real life is a flashlight.

In this project, you are required to implement spot lights with angular falloff: the effect of light intensity becomes weaker as the direction to the point being illuminated diverges from the direction in which the spot light faces. The figure below illustrates the effect. The boundary of the whole region illuminated by the spotlight is called the outer cone. Within the outer cone, there is a smaller region where light intensity is full; the boundary of this region is called the inner cone. In the region between inner and outer cone, the light intensity gradually decreases as light direction approaches the edge of the outer cone.

Figure 1: Spotlight with its inner and outer cone.

There are many different falloff functions with different effects. For this project, we suggest you use the following function because of its smooth transitions at the boundaries:

$$I_\theta = I_{\text{full}} \cdot \big(1 - \text{falloff}(\theta)\big), \qquad \text{falloff}(\theta) = -2t^3 + 3t^2, \qquad t = \frac{\theta - \theta_{\text{inner}}}{\theta_{\text{outer}} - \theta_{\text{inner}}}$$

for $\theta_{\text{inner}} \le \theta \le \theta_{\text{outer}}$ (the intensity is full inside the inner cone and zero outside the outer cone). The symbols used in this equation are explained in the table below.

| Symbol | What It Represents |
| --- | --- |
| $I_{\text{full}}$ | the full light intensity of the spotlight |
| $I_\theta$ | the light intensity at a given direction |
| $\text{falloff}$ | the falloff function |
| $\theta_{\text{inner}}$ | the angle between the inner cone boundary and the spotlight direction |
| $\theta_{\text{outer}}$ | the angle between the outer cone boundary and the spotlight direction |
| $\theta$ | the angle between the current direction and the spotlight direction |

This light source's specific parameters include its position, direction, angle (angular size of the outer cone, in radians), and penumbra (angular size of the region between the inner and outer cones, in radians).

Adding Attenuation To Phong Lighting

Attenuation is the reduction in the intensity of light as it propagates further from a light source. You are required to add this effect to the Phong illumination model that you have previously implemented. Here is the extended Phong illumination model equation with attenuation; notice that the attenuation factor applies to both the diffuse and specular components:

$$I_\lambda = k_a\,O_{a\lambda} + \sum_{i=1}^{m} f_{att,i}\left(k_d\,O_{d\lambda}\,(\hat{N} \cdot \hat{L}_i) + k_s\,O_{s\lambda}\,(\hat{R}_i \cdot \hat{V})^n\right)$$

Below is a full breakdown of the equation's terms.

| Subscript | What It Represents |
| --- | --- |
| $\lambda$ | A given wavelength (either red, green, or blue). |
| $a$ / $d$ / $s$ | A given component of the Phong illumination model (either ambient, diffuse, or specular). |
| $i$ | A given light source, one of the $m$ lights in the scene. |

| Symbol | What It Represents |
| --- | --- |
| $I$ | The intensity of light. E.g. $I_r$ would represent the intensity of red light, and $I_{b,i}$ is the intensity of blue light for a given light with index $i$. |
| $k$ | A model parameter used to tweak the relative weights of the Phong lighting model's three terms. E.g. $k_a$ would represent the ambient coefficient. |
| $O$ | A material property, which you can think of as the material's "amount of color" for a given illumination component. E.g. $O_{ar}$ represents how red the material is, when counting only ambient illumination. |
| $m$ | The total number of light sources in the scene. |
| $\hat{N}$ | The surface normal at the point of intersection. |
| $\hat{L}_i$ | The normalized direction from the intersection point to light $i$. |
| $\hat{R}_i$ | The reflection of $\hat{L}_i$ about normal $\hat{N}$. |
| $\hat{V}$ | The normalized direction from the intersection point to the camera. |
| $n$ | A different material property. This one's called the specular exponent or shininess; a surface with a higher shininess value will have a narrower specular highlight, i.e. a light reflected on its surface will appear more focused. |
| $f_{att}$ | The attenuation factor. |

The attenuation factor should be calculated according to the equation below, where $d$ is the distance from the intersection point to the light source, and $c_1$, $c_2$, and $c_3$ are attenuation coefficients provided in the function field of SceneLightData:

$$f_{att} = \min\left(1,\; \frac{1}{c_1 + d\,c_2 + d^2\,c_3}\right)$$
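A minimal C++ sketch of this calculation, assuming c1, c2, and c3 are the constant, linear, and quadratic coefficients taken from the light's function field:

```cpp
#include <algorithm>

// Attenuation factor for a point or spot light at the given distance.
// Clamped so that attenuation never amplifies the light.
float attenuation(float distance, float c1, float c2, float c3) {
    float denom = c1 + distance * c2 + distance * distance * c3;
    return std::min(1.f, 1.f / denom);
}
```

Directional lights are at infinity, so no attenuation is applied to them.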


Shadows

Shadows occur when part of a surface is blocked from the light source by other objects. In this project, you are required to take shadows into account when implementing lighting.

Figure 3: An illustration of how shadows work in ray tracing

The key to adding shadows is to judge whether a light source is visible from an intersection point for which you are computing lighting. For this assignment, you are only required to implement "hard shadows," where a surface point completely ignores any contribution from a light source which it cannot see. But be careful to take the characteristics of different types of light sources into account. You may discover more interesting ways to add shadows in the Extra Credit section.
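As a sketch, the visibility test for a point or spot light might look like the following C++. The Vec3 type and the nearestHit traversal callback are stand-ins for your own math and scene-intersection code, not part of the provided codebase:

```cpp
#include <cmath>
#include <limits>

// Minimal vector type for the sketch (a real implementation would use glm).
struct Vec3 { float x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
Vec3 normalize(Vec3 v) { return (1.f / length(v)) * v; }

// Decide whether the light at `lightPos` is visible from surface point `p`.
// `nearestHit` stands in for your scene traversal: it returns the distance to
// the closest intersection along a ray, or infinity if nothing is hit.
bool lightVisible(Vec3 p, Vec3 lightPos, float (*nearestHit)(Vec3, Vec3)) {
    const float kEps = 1e-3f;           // offset to avoid self-intersection
    Vec3 toLight = lightPos - p;
    float tLight = length(toLight);     // occluders beyond the light don't count
    Vec3 dir = normalize(toLight);
    Vec3 origin = p + kEps * dir;       // nudge the origin off the surface
    return nearestHit(origin, dir) >= tLight - kEps;
}
```

For a directional light there is no light position, so the shadow ray is traced toward the (negated) light direction and any intersection at all counts as occlusion.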


Reflection

Smooth, polished surfaces in the real world often exhibit mirror-like reflections. In this project, you are required to implement this effect via recursive raytracing (i.e. tracing a ray in the mirror reflection direction about a surface normal). As such reflections only occur on shiny surfaces, you should only enable reflections for objects which have a nonzero reflective material component.

Figure 4: An illustration of mirror reflection

In scenes with multiple specular surfaces, reflection rays can bounce recursively many (potentially infinitely many) times. To avoid infinite recursion, you should enforce a maximum recursion depth. For reference, the TA demo uses a small fixed maximum depth.
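The mirror reflection direction is obtained by reflecting the incoming ray direction about the surface normal. A minimal C++ sketch, with Vec3 as a stand-in for your own vector type:

```cpp
// Minimal vector type for the sketch.
struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Reflect incoming direction d about unit normal n: r = d - 2(d.n)n.
// Your trace function would then recurse along a ray starting slightly
// off the surface in direction r, scale the recursive result by the
// material's reflective color, and stop once the recursion depth
// exceeds your chosen maximum.
Vec3 reflect(Vec3 d, Vec3 n) {
    return d - 2.f * dot(d, n) * n;
}
```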

Texture Mapping

With texture mapping, you can vary the material properties of an object across its surface. This allows us to simulate objects that have a more complicated appearance than just a solid color.

UV Coordinates

To put a texture onto a surface, each point on the surface must map to a corresponding point in the texture image. These per-point image coordinates are called UV coordinates. For example, on each face of a cube, the UV coordinate system may originate from the left bottom corner and use the left and bottom edges of the face as its axes. You should think about how that applies to other shapes (e.g. cylinders, spheres, cones) on your own.
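For instance, one common convention for a sphere (radius 0.5, centered at the origin in object space) computes UVs from spherical coordinates. This is only a sketch; the exact seam placement and orientation may differ from the reference renderer, and the helper name is hypothetical:

```cpp
#include <cmath>
#include <utility>

// Map an object-space point on a radius-0.5 sphere to (u, v) in [0, 1]^2.
std::pair<float, float> sphereUV(float x, float y, float z) {
    const float kPi = 3.14159265358979f;
    float theta = std::atan2(z, x);                    // longitude in (-pi, pi]
    float u = (theta < 0.f) ? -theta / (2.f * kPi)     // wrap longitude into [0, 1]
                            : 1.f - theta / (2.f * kPi);
    float phi = std::asin(y / 0.5f);                   // latitude in [-pi/2, pi/2]
    float v = phi / kPi + 0.5f;                        // map latitude to [0, 1]
    return {u, v};
}
```

Watch out for the poles (where longitude is undefined) and the seam where u wraps around.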


Blending

Another aspect of texture mapping is how the texture image interacts with the material properties for the surface specified in the scene file. A blend value is supplied as a material property for objects that are texture mapped.

  • If the blend value is 1, then the texture entirely replaces the diffuse color of the object.
  • If the blend value is 0, then the texture will be invisible.
  • If the blend value is between 0 and 1, then you should perform linear interpolation between the diffuse color and the texture. Note: the diffuse color you are blending with should already have been multiplied by the global diffuse coefficient.

All of our scenefiles and textures are in the scenefiles repo. Texture images in the scenefiles are specified by their relative paths, so please don't move any files around within the repo.

Texture Filtering

For this assignment, you only need to use nearest-neighbor for texture filtering to retrieve the color from a texture image at a given UV coordinate. More sophisticated texture filtering approaches can be implemented for extra credit.
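A sketch of the UV-to-pixel mapping for nearest-neighbor filtering, assuming row-major pixel storage with row 0 at the top of the image (your image loader's convention may differ, in which case the vertical flip goes away):

```cpp
#include <algorithm>
#include <cmath>

// Map u, v in [0, 1] to the index of the nearest texel in a row-major
// width x height pixel array. v = 0 is taken to be the bottom of the
// image, hence the (1 - v) flip; clamping handles u = 1 and v = 0 exactly.
int nearestTexel(float u, float v, int width, int height) {
    int c = std::min(static_cast<int>(std::floor(u * width)), width - 1);
    int r = std::min(static_cast<int>(std::floor((1.f - v) * height)), height - 1);
    return r * width + c;
}
```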

Codebase & Design

Your work on this project will extend your code from the Intersect project; there is no additional stencil code. We have provided the data structures for object material and light sources in SceneData.h, which you may already have noticed.

As before, the structure of this project is entirely up to you: you can add new classes, files, etc. However, you should not alter any command line interfaces we have implemented in the code base.

We provide rendered results for all of the scenes in the scenefiles repo at bench/Illuminate. They serve as references for you to check the correctness of your implementation.

You may notice that features like shadows, reflection, and many other extra credit features are part of the config specified in the QSettings file; this means they should be togglable on / off based on the input configuration. We expect your implementation to respect this behavior.

If you implement any additional extra credit feature that is not covered by the template config, make sure to remember to add the additional flag to your RayTracer::Config and document it properly in your README.

Figure 5: Demo scene combining shadows, reflections and textures. (feature_test/shadow_test.xml)


README

Your repo should include a README in Markdown format with the filename README.md. This file should contain basic information about your design choices. You should also include any known bugs and any extra credit you've implemented.

For extra credit, please describe what you've done and point out the related part of your code. If you implement any extra features that require you to add a parameter to QSettings.ini and RayTracer::Config, please also document them accordingly so that the TAs won't miss anything when grading your assignment.


Grading

This assignment is out of 100 points.

  • Point Lights (3 points)
  • Spot Lights (7 points)
  • Attenuation (5 points)
  • Reflection (15 points)
  • Shadows (18 points)
  • Texture Mapping (22 points)
  • Software Engineering, Efficiency, & Stability (25 points)
  • Readme (5 points)

Extra Credit

All of the extra credit options from Intersect are also valid options here (provided you haven't already done them for Intersect). In addition, you can consider the following options (or come up with your own ideas):

  • Texture filtering: nearest-neighbor filtering when retrieving texture values generates unsatisfying results when the texture image resolution and the surface size are very different in scale. You can use more advanced filtering methods to improve the results. For example:
    • Bilinear filtering (2 points)
    • Bicubic filtering (4 points)
  • Refraction (10 points): Simulate the effect of light transmitting through semi-transparent objects. To receive credit for this feature, you must take Snell's law into account and bend the refracted light rays according to the object's index of refraction (you may want to add an index of refraction material parameter to each object).
  • Depth of field (10 points): Rather than use a pinhole camera (i.e. a camera represented by an infinitesimal point in space), implement a camera with a finite-area lens. By sampling different origin locations for rays on this lens, you can create depth-of-field defocus blur in your scene. Keep in mind that with this method, the image plane (i.e. where rays cross through pixels) defines the plane at which the scene is in focus, so you may want to play with pushing this plane farther into your scene than you previously considered.
  • Soft shadows (10 points): Implement a new type of light source with a finite area (this could be as simple as a rectangle sitting somewhere in your scene). Then, when you need to shoot shadow rays toward this light source, sample a random location on the light source for the shadow ray to be traced toward. By sampling different locations on the light source, you can create soft shadows in your scene.

CS 1234/2230 students must attempt at least 10 points of extra credit.

FAQ & Hints

The colors in my rendered images look different from the reference images.

The color from lighting is affected by many factors: surface normals, object materials, light sources, global coefficients, etc. You can try to validate the correctness of most factors by setting breakpoints and checking intermediate variables. You can validate them with manually calculated values in the simplest scenes.

Debugging can be hard. Be patient and creative!

My images have many black dots.

This may be due to shadow rays intersecting their starting surface. Check if your code correctly prevents such erroneous self-intersections.

My ray tracer runs very slowly :(

We've provided some hints on how to address such issues in the Intersect handout.


Submission

Submit your Github repo for this project to the "Project 4: Illuminate (Code)" assignment on Gradescope.