Project 4: Illuminate (Algo Answers)

You can find the handout for this project here. Please skim the handout before your Algo Section!

You may look through the questions below before your section, but we will work through each question together, so make sure to attend section and participate to get full credit!

Raytracing Pipeline

As a group, come up with four or more aspects of the fully fledged raytracing pipeline that were omitted in Intersect, but are required for Illuminate. Discuss how you can use your code from Intersect to implement these components.

There is not necessarily a correct answer! We are looking for a qualitative explanation of your approach. Share your ideas with the TA before moving on.

Answer:

There isn't a single correct answer; any reasonable set of omitted components (e.g. shadows, reflections, and texture mapping, all covered below), along with a qualitative plan for implementing them, is acceptable.

Reflection Equation

Figure 1: Reflection across a normal.

Given an incoming light direction $\mathbf{d}$ (from light source to point of intersection) and surface normal $\mathbf{n}$ (at the point of intersection), the equation for the reflected outgoing ray $\mathbf{r}$ is given by:

$$\mathbf{r} = \mathbf{d} - 2 (\mathbf{d} \cdot \mathbf{n}) \, \mathbf{n}$$

Assuming both $\mathbf{d}$ and $\mathbf{n}$ are unit vectors, discuss why this equation makes sense.

If you get stuck, try drawing out a diagram on the whiteboard with your group! How might you break down $\mathbf{d}$?

Answer:

$(\mathbf{d} \cdot \mathbf{n}) \, \mathbf{n}$ is the projection of $\mathbf{d}$ onto the normal axis. $\mathbf{d}$ minus two of that gives us $\mathbf{r}$, because this reflects (negates) the component of $\mathbf{d}$ that's parallel to $\mathbf{n}$, but leaves the component of $\mathbf{d}$ that is perpendicular to $\mathbf{n}$ unaffected.
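As a concrete sketch, here is the reflection equation in code. This assumes a minimal hand-rolled vector type rather than the glm types you may be using in your actual raytracer:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Reflect the incoming direction d about the unit normal n:
//     r = d - 2 (d . n) n
// Both d and n are assumed to be normalized.
Vec3 reflect(const Vec3& d, const Vec3& n) {
    double s = 2.0 * dot(d, n);
    return { d[0] - s * n[0], d[1] - s * n[1], d[2] - s * n[2] };
}
```

For example, a ray heading straight down, `d = (0, -1, 0)`, reflected about the upward normal `n = (0, 1, 0)`, comes straight back up as `(0, 1, 0)`: the parallel component is negated while the (zero) perpendicular component is untouched.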

Texture Coordinates

In Illuminate, you will be texture-mapping images onto your implicit shapes. One step of this process is taking a given intersection point, with known UV coordinates, and figuring out which pixel of the image we should use.

However, two different coordinate systems are in play here:

  • Image coordinates have their origin at the top left of a width x height pixel grid. These are used for images like our Brush canvas, our Intersect output, and, most relevantly, images we use as textures.
  • Texture (UV) coordinates have their origin at the bottom left of a unit square. These are used on the surfaces of our objects, as inputs to the equations you will write below.

Make sure you are comfortable with the diagram below, then work through the next two questions, where you will define a mapping from UV coordinates to image coordinates.

Figure 2: Image coordinates vs texture (UV) coordinates.

A Single Texture

Suppose we have an image of size $w \times h$, which we'd like to use as a texture.

What is the pixel index $(c, r)$ for a given UV coordinate $(u, v)$? Please use nearest-neighbor texture sampling, and be mindful of your bounds.

Answer:

$$c = \lfloor u \cdot w \rfloor, \qquad r = \lfloor (1 - v) \cdot h \rfloor$$

Note that you'll have to check for cases where $u = 1$ (giving $c = w$) or $v = 0$ (giving $r = h$). In those cases, it's fine to just subtract $1$ from the out-of-bounds index.

Explanation:

We can think of this as a map from continuous UV coordinates $(u, v)$, which range over $[0, 1]$, to integer image indices $c \in \{0, \dots, w - 1\}$ (width) and $r \in \{0, \dots, h - 1\}$ (height).

  • We can simply scale and floor $u$ to obtain $c = \lfloor u \cdot w \rfloor$.
  • However, since UV coordinates use the bottom-left corner as their origin, while images use the top-left corner, we have to "flip" $v$ before scaling and flooring: $r = \lfloor (1 - v) \cdot h \rfloor$.
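The scale-flip-floor steps above, plus the edge-case clamp, might be sketched as follows (`uvToPixel` is a hypothetical helper name, not part of the stencil code):

```cpp
#include <cmath>
#include <utility>

// Map a UV coordinate in [0, 1]^2 to a pixel (col, row) of a w-by-h
// image using nearest-neighbor sampling. Image rows grow downward,
// so v is flipped before scaling and flooring.
std::pair<int, int> uvToPixel(double u, double v, int w, int h) {
    int c = static_cast<int>(std::floor(u * w));
    int r = static_cast<int>(std::floor((1.0 - v) * h));
    // Edge cases: u == 1 gives c == w, and v == 0 gives r == h.
    if (c == w) c = w - 1;
    if (r == h) r = h - 1;
    return {c, r};
}
```

For a 4x4 image, the UV origin `(0, 1)` in image terms is the top-left pixel `(0, 0)`, while `(1, 0)` clamps to the bottom-right pixel `(3, 3)`.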

Repeated Textures

Suppose that we'd now like to repeat the texture $m$ times horizontally and $n$ times vertically.

Now, what is the pixel index $(c, r)$ for a given UV coordinate $(u, v)$? Again, be mindful of the bounds on $c$ and $r$, and be prepared to explain your answers as a group.

Answer:

$$c = \lfloor u \cdot m \cdot w \rfloor \bmod w, \qquad r = \lfloor (1 - v) \cdot n \cdot h \rfloor \bmod h$$

Again, you'll have to check for cases where $u = 1$ (floored index $= mw$) or $v = 0$ (floored index $= nh$). In those cases, it's fine to just subtract $1$ from the out-of-bounds index before taking the modulus.

Explanation:

We can think of this as a map from continuous UV coordinates $(u, v)$, which range over $[0, 1]$, to integer image indices $c \in \{0, \dots, w - 1\}$ (width) and $r \in \{0, \dots, h - 1\}$ (height).

  • See the previous question's answer for an explanation of the scale-flip-floor step.
  • Note that it's necessary to take the moduli of the floored indices ($\bmod\ w$ and $\bmod\ h$) to get the actual pixel indices within the texture image.
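Extending the single-texture sketch with the modulus gives something like the following (again, `uvToPixelRepeated` is a hypothetical helper name):

```cpp
#include <cmath>
#include <utility>

// Map a UV coordinate in [0, 1]^2 to a pixel (col, row) of a w-by-h
// texture repeated m times horizontally and n times vertically,
// using nearest-neighbor sampling.
std::pair<int, int> uvToPixelRepeated(double u, double v, int w, int h,
                                      int m, int n) {
    int c = static_cast<int>(std::floor(u * m * w));
    int r = static_cast<int>(std::floor((1.0 - v) * n * h));
    // At u == 1 (or v == 0) the floored index lands one past the end
    // of the last tile; pull it back before wrapping with the modulus,
    // otherwise it would wrap around to pixel 0 instead of w - 1.
    if (c == m * w) c -= 1;
    if (r == n * h) r -= 1;
    return {c % w, r % h};
}
```

For a 4x4 texture tiled 2x2, `(u, v) = (1, 0)` floors to index 8, gets pulled back to 7, and wraps to pixel 3 in each dimension, which is the bottom-right pixel of the last tile, as expected.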

Shadow Generation

To test if some intersection point $P$ is in shadow relative to some light, you must check if there are any objects blocking that light. This can be done by casting a "shadow ray" from $P$ towards the light, and checking for intersections between that ray and other objects within the scene.

Figure 3: Recursive raytracing for shadows.

Suppose point $P$ is on object $O$. When casting your shadow ray, do you have to check for intersections between that ray and object $O$ itself? Why, or why not?

Does your answer change if we are restricted to the primitives we use in this course? Discuss with your group.

Answer:

Conditionally, yes. If object $O$ is a concave shape, then self-shadowing is possible (i.e. one part of the object could shadow another).

Note 1: while it happens that none of the primitives you're using are concave, you are not obligated to optimize out the shadow ray's intersection test with the original object.

Note 2: you do not want to detect erroneous intersections with the exact point on object $O$ from which the shadow ray starts. To avoid this problem, you must offset the starting point of the shadow ray by a small amount in the direction of the ray.
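The offset trick might look like the sketch below. The intersection query is passed in as a callback because its real signature depends on your scene representation; the epsilon value is a common but arbitrary choice, and all names here are hypothetical:

```cpp
#include <array>
#include <functional>

using Vec3 = std::array<double, 3>;

// Hypothetical query: given a ray origin and (unnormalized) direction,
// report whether any object is hit for t in (0, maxT).
using IntersectFn = std::function<bool(const Vec3&, const Vec3&, double)>;

// Small offset along the shadow ray so it doesn't re-intersect the
// surface it starts on (a common but arbitrary choice of epsilon).
constexpr double kShadowEps = 1e-3;

// P is in shadow relative to a point light at lightPos if the segment
// from P to the light is blocked by anything in the scene.
bool inShadow(const Vec3& P, const Vec3& lightPos,
              const IntersectFn& intersectsAnything) {
    Vec3 dir = { lightPos[0] - P[0],
                 lightPos[1] - P[1],
                 lightPos[2] - P[2] };
    Vec3 origin = { P[0] + kShadowEps * dir[0],
                    P[1] + kShadowEps * dir[1],
                    P[2] + kShadowEps * dir[2] };
    // dir is deliberately left unnormalized so that t in (0, 1) spans
    // exactly the segment from the (offset) origin to the light.
    return intersectsAnything(origin, dir, 1.0);
}
```

Capping `maxT` at 1 matters: an object on the far side of the light intersects the shadow ray's line but should not cast a shadow on $P$.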

Submission

Algo Sections are graded on attendance and participation, so make sure the TAs know you're there!