Project 2: Filter

GitHub Classroom assignment (same assignment as before)

You can find the algo for this project here.

Introduction

In Brush, you modified arrays of pixels to create your own digital images. In Filter, you'll instead be using image processing techniques to manipulate existing images.

To do so, you will be implementing a few common image processing algorithms using convolution to create filters not unlike those you might find in an image-editing application such as Photoshop.

Requirements

You must implement three filters (blur, edge detect, and scale) using convolution with separable kernels.

Blur

Implement a blurring filter that blurs the image using the kernel radius set by the GUI slider. You may use a triangle or Gaussian blur for this, but not a box blur. Remember to normalize your kernel to prevent changes in brightness!

Figure 1: Continuous triangular and Gaussian filters. You will have to discretize, then normalize, these functions to obtain your kernels.

Triangle Filter

Recall from lectures that the triangular filter has a linear falloff. Its analytical formulation is trivial, and is thus left as an exercise to the reader.
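
If you want to sanity-check your own derivation, one common continuous form, assuming a kernel radius $r$, is:

$$f(x) = \begin{cases} 1 - \dfrac{|x|}{r} & \text{if } |x| \le r \\[4pt] 0 & \text{otherwise} \end{cases}$$

Sampling this at integer offsets $x \in \{-r, \dots, r\}$ and dividing by the sum of the samples yields a normalized kernel.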

Gaussian Filter

The Gaussian filter uses the Gaussian function to get its kernel values:

$$G(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{x^2}{2\sigma^2}}$$

$x$ is the distance from the center of the function, and $\sigma$ is the standard deviation. You may be familiar with this function if you've taken a class on statistics: it is simply a normal distribution!

For the best blur effect, given a blur radius of $r$, you should choose $\sigma = r/3$. This is because values further than 3 standard deviations from the center become very small and close to $0$.
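
As one way to turn this into code, here is a minimal sketch of building a normalized 1D Gaussian kernel. The function name and the use of `std::vector<float>` are illustrative, not part of the stencil:

```cpp
#include <cmath>
#include <vector>

// Build a discrete 1D Gaussian kernel of radius r (2r + 1 taps),
// with sigma = r / 3 as suggested above.
std::vector<float> gaussianKernel1D(int r) {
    if (r == 0) return {1.f}; // degenerate case: identity kernel
    float sigma = r / 3.f;
    std::vector<float> kernel(2 * r + 1);
    float sum = 0.f;
    for (int i = -r; i <= r; ++i) {
        // The 1/sqrt(2*pi*sigma^2) constant is omitted; it cancels
        // out during normalization below.
        float value = std::exp(-(i * i) / (2.f * sigma * sigma));
        kernel[i + r] = value;
        sum += value;
    }
    // Normalize so the entries sum to 1, preventing brightness changes.
    for (float &k : kernel) k /= sum;
    return kernel;
}
```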

Edge Detection (Sobel Operator)

Implement an edge detection filter, which highlights areas with high-frequency details (i.e. edges). This can be done by converting the input image to grayscale, convolving with Sobel filters (once for each of x and y), then combining the two outputs.

Convolution with a Sobel filter approximates the derivative of the image signal in the spatial domain in some direction. By convolving our input image $I$ twice, separately, with two Sobel filters each representing the x and y directions, we obtain two images $G_x$ and $G_y$:

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} * I, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} * I$$

These images represent the approximate derivatives of the image in the x and y directions respectively. We can then approximate the magnitude of the gradient of the image as such:

$$G = \sqrt{G_x^2 + G_y^2}$$

Note that the values in $G$ may not be confined to the $[0, 255]$ range, and you will have to clamp them accordingly. Clamping can result in loss of detail; you can tune this by multiplying your final value by the adjustable sensitivity parameter found in settings before clamping.

You do not need to normalize your Sobel kernels. It is neither possible nor desirable to do so, since they sum to zero by design.

Sobel Kernels Separated

Sobel kernels can be separated in this manner:

$$\begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \begin{bmatrix} -1 & 0 & 1 \end{bmatrix}, \qquad \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 1 & 2 & 1 \end{bmatrix}$$
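
Once you have $G_x$ and $G_y$, combining them per pixel might look like the following sketch. The names and the use of float buffers are illustrative, not part of the stencil:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Combine the two Sobel responses into a gradient magnitude image.
// gx and gy hold the (signed, unclamped) results of convolving the
// grayscale image with the x and y Sobel kernels; sensitivity is the
// tunable parameter from settings.
std::vector<std::uint8_t> gradientMagnitude(const std::vector<float> &gx,
                                            const std::vector<float> &gy,
                                            float sensitivity) {
    std::vector<std::uint8_t> out(gx.size());
    for (std::size_t i = 0; i < gx.size(); ++i) {
        float g = sensitivity * std::sqrt(gx[i] * gx[i] + gy[i] * gy[i]);
        // Clamp to the displayable range, as discussed above.
        out[i] = static_cast<std::uint8_t>(std::clamp(g, 0.f, 255.f));
    }
    return out;
}
```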

Scaling

Implement a scaling filter which can scale images both up and down, horizontally and vertically. You must do so by resampling the image. Be sure to pre-filter the image, as discussed in lecture.

Everything you need to know for this implementation has already been covered in great depth during lectures. For more detail, refer back to the Image Processing III lecture.

As with blurring, remember to normalize your kernels to prevent changes in brightness.
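
If it helps, here is a minimal sketch of the backmapping step for one axis, assuming a scale factor `a` and the half-integer pixel-center convention from lecture; the function name is illustrative:

```cpp
// Map output pixel index i back to a continuous source-space
// coordinate, assuming a scale factor a along this axis. Pixel
// centers sit at half-integer positions, hence the 0.5 offsets.
float sourceCenter(int i, float a) {
    return (i + 0.5f) / a - 0.5f;
}
```

You would center your reconstruction filter at this coordinate; when scaling down ($a < 1$), widen the filter's support by a factor of $1/a$ so that it also pre-filters, as discussed in lecture.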

Stencil Code

You already have the stencil code for this assignment; it's in the same repository as the one you worked in for Project 1: Brush. The code for this project is similar to the code you worked with in Lab 3: Convolution, with some minor naming differences.

Loading Images

You can open an image using the "Load Image" button in the Filter menu, which will import the image and display it on the canvas. Once you have loaded in an image, the image data becomes the canvas data. Accessing the canvas data as you have done in previous assignments is then equivalent to indexing into the image's pixels, and you can modify the pixels by modifying the RGBA structs as before. As a reminder, you will need to call Canvas2D's displayImage function to update the canvas after you make changes to the RGBA data vector.

filterImage()

In Canvas2D, there is a function called filterImage() which you will need to fill in. When the Filter button is pressed in the GUI, this function will be called, which should then apply the selected filter to the image loaded into the canvas. As you did in Brush, you can figure out which option has been selected in the GUI using the Settings class.

You should not be doing all of your convolution directly in the filterImage() function. Instead, practice good software engineering by organizing your filter computations into appropriate helper functions/classes.
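
As a rough shape (not the required design), filterImage() might simply dispatch to helpers based on the current settings. The enum values, settings fields, and helper functions below are hypothetical; check the Settings class in your stencil for the real identifiers:

```cpp
// A possible shape for Canvas2D::filterImage(). FILTER_BLUR and
// friends, the settings fields, and the apply* helpers are all
// hypothetical names for illustration only.
void Canvas2D::filterImage() {
    switch (settings.filterType) {
        case FILTER_BLUR:
            applyBlur(m_data, m_width, m_height, settings.blurRadius);
            break;
        case FILTER_EDGE_DETECT:
            applyEdgeDetect(m_data, m_width, m_height,
                            settings.edgeDetectSensitivity);
            break;
        case FILTER_SCALE:
            applyScale(m_data, m_width, m_height,
                       settings.scaleX, settings.scaleY);
            break;
    }
    displayImage(); // refresh the canvas after modifying the RGBA data
}
```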

Design

Think about how to avoid redundancy and inefficiency in your code. Some tasks, such as blurring and edge detection, use a fixed kernel and can therefore share the same underlying convolution code.

Separable Kernels

Recall also that many 2D kernels are separable. A 2D kernel is separable if it can be expressed as the outer product of two 1D vectors. In other words, a kernel is separable if we can get the same result by convolving first with a vertical column vector, and then a horizontal row vector.

For all three required filters, you must use separable kernels. That is, you must perform image convolution by convolving your input with a horizontal, then vertical vector, in succession:

  1. For blurring, the filter functions themselves are the separated kernels;
  2. For edge detection, we have provided the separated kernels; and
  3. For scaling, which does not use a fixed kernel, you can simply scale in the x and y directions independently.
Why do we separate our kernels?

Convolution is a computationally expensive operation.

However, given an $n \times n$ image and a $k \times k$ kernel, using a separated kernel reduces our time complexity from $O(n^2k^2)$ to $O(n^2k)$.
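
To make the two-pass structure concrete, here is a minimal sketch of separable convolution on a single-channel float image, assuming zero handling at the boundaries (substitute whichever edge rule you choose in the next section); all names are illustrative:

```cpp
#include <vector>

// Two-pass separable convolution: horizontal pass with the 1D kernel,
// then a vertical pass over that result. Out-of-range samples are
// simply skipped here (effectively zero padding). For non-symmetric
// kernels (e.g. Sobel), remember that true convolution flips the
// kernel relative to correlation.
std::vector<float> convolveSeparable(const std::vector<float> &img,
                                     int width, int height,
                                     const std::vector<float> &kernel) {
    int r = static_cast<int>(kernel.size()) / 2;
    std::vector<float> tmp(img.size(), 0.f), out(img.size(), 0.f);

    // Horizontal pass: O(n^2 k) instead of O(n^2 k^2).
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            for (int k = -r; k <= r; ++k) {
                int xs = x + k;
                if (xs >= 0 && xs < width)
                    tmp[y * width + x] += kernel[k + r] * img[y * width + xs];
            }

    // Vertical pass over the horizontal result.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            for (int k = -r; k <= r; ++k) {
                int ys = y + k;
                if (ys >= 0 && ys < height)
                    out[y * width + x] += kernel[k + r] * tmp[ys * width + x];
            }
    return out;
}
```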

Image Boundaries

In lab 3, we covered different ways to handle convolution at image boundaries. You may choose any one of these methods to deal with canvas edges.
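
For example, a clamp-to-edge helper (one of the strategies covered in lab 3); the function name is illustrative:

```cpp
#include <algorithm>

// Clamp-to-edge: out-of-range samples reuse the nearest edge pixel.
int clampIndex(int i, int size) {
    return std::clamp(i, 0, size - 1);
}
```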

README

Your repo should already contain a README in Markdown format with the filename README.md, from Project 1: Brush. To this README, please append a section for this project.

This section should contain basic information about your design choices, any known bugs, and any extra credit you've implemented.

If you attempted any extra credit, please describe what you did, and point out the relevant parts of your code.

Grading

This assignment is out of 100 points:

  • 3 Filters (60 points)
    • Blur (18 points)
    • Edge Detect (18 points)
    • Scaling (24 points)
  • General Functionality (10 points)
  • Software Engineering, Efficiency, & Stability (25 points)
  • README (5 points)

Extra Credit

For extra credit we encourage you to try implementing some extra filters of your choice. Below we provide a list of suggestions, though you are also welcome to come up with your own:

  • Median Filtering (3 points): Instead of convolving a kernel as usual, take the median over a sample area (the size of which is adjusted by the radius in settings). Filter the red, green, and blue channels separately. This filter has the effect of removing noise from an image, but also has a blur effect (see the sketch after this list).
  • Tone Mapping (3 points): Modify the dynamic range of an image by mapping the minimum and maximum brightnesses to a new range. You should do this in two ways and compare the results:
    • Linear: Modify the image to automatically stretch the dynamic range between full black and white. In other words, images that are too dark will be automatically brightened. Images that are too bright will become darkened.
    • Non-linear: Apply a gamma correction to the image (using the gamma parameter in settings).
  • Chromatic Aberration: Due to dispersion, camera lenses don't perfectly focus all wavelengths of incoming light to the same point. This results in chromatic aberration, an effect where you can sometimes see 'fringes' of different colors in a photographed image. We provide three parameters in settings (lambda1, lambda2, and lambda3) for the amount of distortion along each channel of the image; you may scale or modify these parameters as you see fit.
    • Basic (4 points): intentionally mis-align the different color channels of the image by a small amount to simulate chromatic aberration.
    • Spatially-Aware (8 points): in the real world, chromatic aberration isn't equally-noticeable everywhere in an image; rather, it is most noticeable at boundaries that separate light and dark regions. Can you simulate such a spatially-aware effect?
  • Rotation By Arbitrary Angles (10 points): Be sure to do this the "right" way (using correct backmapping) to avoid ugly aliasing!
  • Bilateral Smoothing (10 points): Blur the image while preserving sharp edges. This filter has a really neat, photo-enhancing effect. Adobe Photoshop implements a bilateral filter in its surface blur tool. There are many online resources about bilateral filtering; one example is the SIGGRAPH 2008 course A Gentle Introduction to Bilateral Filtering and its Applications. The naïve implementation of the bilateral filter is $O(k^2)$ per pixel, which is $O(n^2k^2)$ overall. We'll give you full credit for this feature if you implement the naïve version; you can earn even more extra credit if you implement a faster version ;)
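
For the median filtering suggestion above, here is a minimal sketch of the per-pixel computation for one channel, assuming clamp-to-edge boundary handling; all names are illustrative, not part of the stencil:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Median of the (2*radius + 1)^2 window around (x, y) in a single
// channel; run this once each for R, G, and B.
std::uint8_t medianAt(const std::vector<std::uint8_t> &channel,
                      int width, int height, int x, int y, int radius) {
    std::vector<std::uint8_t> window;
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx) {
            int xs = std::clamp(x + dx, 0, width - 1);
            int ys = std::clamp(y + dy, 0, height - 1);
            window.push_back(channel[ys * width + xs]);
        }
    // nth_element partially sorts just enough to expose the median,
    // which is cheaper than fully sorting the window.
    std::nth_element(window.begin(),
                     window.begin() + window.size() / 2, window.end());
    return window[window.size() / 2];
}
```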

If you come up with your own filter that is not in this list, be sure to check with a TA or the Professor to verify that it is sufficient.

CS 1234/2230 students must attempt at least 10 points of extra credit.

Reference Images

Blur

Figure 2: Cat Image with a Blur Radius of 2 and 5.

Edge Detect

Figure 3: Pinwheel Image with a Sensitivity of 1 and 0.5.

Scale

Figure 4: Among Us image scaled down (x = 0.5, y = 0.2) and scaled up (x = 1.2, y = 1.5).

Median

Figure 5: Capybara Image with a Median Radius of 2 and 5.

Chromatic Aberration

Figure 6: Grid Image with Lambda Settings (1e-7, 1e-5, 1e-6) and Settings (1e-4, 1e-7, 1e-6) (Spatially Aware).

Tone Mapping

Figure 7: Tonemapping with linear function, nonlinear function with gamma = 2, and nonlinear function with gamma = 0.5.

Rotation

Figure 8: Mona Lisa with Rotation of 69 and 142 degrees.

Bilateral Smoothing

Figure 9: Andy Image with a Bilateral Radius of 10 and 50.

Submission

Submit your GitHub repo for this project to the "Project 2: Filter (Code)" assignment on Gradescope.