Part 1: Fun with Filters
In this part, we will build intuitions about 2D convolutions and filtering.
Part 1.1: Finite Difference Operator
Initially, we apply the finite difference operators \(D_x\) and \(D_y\) to compute the partial derivatives of the image. The gradient magnitude \(\sqrt{\left(\frac{\partial I}{\partial x}\right)^2 + \left(\frac{\partial I}{\partial y}\right)^2}\) is then thresholded to produce a binary edge image.
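A minimal sketch of this step (the operator shapes follow the usual convention; the threshold value and function name are placeholders, not necessarily the ones used above):

```python
import numpy as np
from scipy.signal import convolve2d

def finite_difference_edges(im, threshold=0.1):
    """Edge map from finite-difference gradients.
    `im` is a grayscale image as a float array in [0, 1]."""
    Dx = np.array([[1, -1]])           # horizontal finite difference
    Dy = np.array([[1], [-1]])         # vertical finite difference

    dx = convolve2d(im, Dx, mode="same", boundary="symm")   # partial derivative in x
    dy = convolve2d(im, Dy, mode="same", boundary="symm")   # partial derivative in y

    grad_mag = np.sqrt(dx**2 + dy**2)  # gradient magnitude
    return grad_mag > threshold        # binarize into an edge image
```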
Part 1.2: Derivative of Gaussian (DoG) Filter
To address the noise issues observed with the simple finite difference operators, we introduced Gaussian smoothing. A 2D Gaussian filter, constructed by convolving a 1D Gaussian kernel with its transpose, was used to preprocess the image.
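A sketch of the smoothing step (kernel size and sigma are illustrative values, not necessarily the ones used here):

```python
import cv2
from scipy.signal import convolve2d

def gaussian_blur(im, ksize=11, sigma=2.0):
    """Blur a grayscale float image with a 2D Gaussian built from a 1D kernel."""
    g1d = cv2.getGaussianKernel(ksize, sigma)   # (ksize, 1) column vector
    g2d = g1d @ g1d.T                           # outer product == convolving it with its transpose
    return convolve2d(im, g2d, mode="same", boundary="symm")
```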
From the results above, it is evident that preprocessing the image with a Gaussian blur filter gives a smoother, less noisy result, while also rendering the edges thicker and more distinct.
We also created Derivative of Gaussian (DoG) filters by convolving the Gaussian kernel directly with \(D_x\) and \(D_y\). The results below are the same as those obtained above.
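A sketch of how the DoG filters can be built (kernel size and sigma are placeholders):

```python
import cv2
import numpy as np
from scipy.signal import convolve2d

def dog_filters(ksize=11, sigma=2.0):
    """Derivative-of-Gaussian filters: convolve the 2D Gaussian kernel
    with the finite-difference operators."""
    g1d = cv2.getGaussianKernel(ksize, sigma)
    G = g1d @ g1d.T
    Dx = np.array([[1, -1]])
    Dy = np.array([[1], [-1]])
    DoG_x = convolve2d(G, Dx)   # full convolution keeps the whole support
    DoG_y = convolve2d(G, Dy)
    return DoG_x, DoG_y

# By associativity of convolution, applying DoG_x to the image in a single
# convolution matches blurring first and then differentiating,
# up to boundary handling:
#   convolve2d(im, DoG_x) ~= convolve2d(convolve2d(im, G), Dx)
```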
Part 2: Fun with Frequencies!
Part 2.1: Image "Sharpening"
I first applied a Gaussian blur to each color channel. Then to enhance the edges, I added the difference between the original and blurred image back to the original. I used an alpha parameter to control how sharp the image gets and saved the results with different levels of sharpness for comparison.
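A minimal sketch of this unsharp-masking step (the kernel size, sigma, and default alpha are placeholder values; the input is assumed to be a float RGB image in [0, 1]):

```python
import cv2
import numpy as np

def sharpen(im, alpha=1.0, ksize=11, sigma=2.0):
    """Unsharp masking applied to each color channel independently."""
    im = im.astype(np.float64)
    sharpened = np.zeros_like(im)
    for c in range(im.shape[2]):                            # process each color channel
        blurred = cv2.GaussianBlur(im[..., c], (ksize, ksize), sigma)
        detail = im[..., c] - blurred                       # high-frequency residual
        sharpened[..., c] = im[..., c] + alpha * detail     # alpha controls sharpening strength
    return np.clip(sharpened, 0, 1)
```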
Blurring then Sharpening
When I blurred the image first and then sharpened it, the results were not as good as sharpening the original image: much of the detail was lost in the blurring process and could not be recovered by sharpening.
Part 2.2: Hybrid Images
In this part of the assignment, I created hybrid images using the method outlined in the SIGGRAPH 2006 paper by Oliva, Torralba, and Schyns. Hybrid images are designed to change in appearance depending on the viewing distance. The concept is based on the idea that high-frequency details dominate when viewed closely, while from a distance, only the low-frequency, smoother parts of the image are visible. By combining the high-frequency details of one image with the low-frequency content of another, I generated hybrid images that produce different perceptions depending on how far the viewer is from the image.
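A sketch of the core combination step (the cutoff sigmas are illustrative and were tuned per image pair; the two inputs are assumed to be aligned, same-sized float images in [0, 1]):

```python
import cv2
import numpy as np

def hybrid_image(im_low, im_high, sigma_low=8.0, sigma_high=4.0):
    """Low frequencies of one image plus high frequencies of another."""
    low = cv2.GaussianBlur(im_low, (0, 0), sigma_low)                 # low-pass
    high = im_high - cv2.GaussianBlur(im_high, (0, 0), sigma_high)    # high-pass = image - low-pass
    return np.clip(low + high, 0, 1)
```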
Derek and Nutmeg
Golden and Bull
Oski and Happy-death (Favorite)
This is the Bayfield baby mask from the movie Happy Death Day. When I first saw Oski, I thought of this. Sorry! lol
The effect is not very good, probably because the teeth of the mask are too bright.
Frequency Analysis
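A minimal sketch of how the frequency-domain visualizations can be produced, assuming the analysis displays the log magnitude of the 2D Fourier transform of each grayscale image (the small epsilon and colormap are my own choices):

```python
import numpy as np
import matplotlib.pyplot as plt

def show_spectrum(gray, title=""):
    """Log-magnitude Fourier spectrum of a grayscale image."""
    spectrum = np.log(np.abs(np.fft.fftshift(np.fft.fft2(gray))) + 1e-8)
    plt.imshow(spectrum, cmap="gray")
    plt.title(title)
    plt.axis("off")
    plt.show()
```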
Bells and Whistles
I experimented with using color to enhance the effect by applying it to only the high-frequency component, only the low-frequency component, and both. Overall, I think the versions with both components in color look the best.
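A small sketch of how these color variants can be produced, reusing the hybrid_image sketch above (the helper name and the grayscale replication are my own choices):

```python
import cv2
import numpy as np

def hybrid_with_color(im_low, im_high, color_low=True, color_high=True, **kwargs):
    """Optionally convert either input to replicated grayscale before building
    the hybrid, so color can be kept in the low band, the high band, both, or neither."""
    def to_gray3(im):
        g = cv2.cvtColor(im.astype(np.float32), cv2.COLOR_RGB2GRAY)
        return np.repeat(g[..., None], 3, axis=2)   # keep 3 channels so shapes still match
    low_src = im_low if color_low else to_gray3(im_low)
    high_src = im_high if color_high else to_gray3(im_high)
    return hybrid_image(low_src, high_src, **kwargs)
```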
Part 2.3: Gaussian and Laplacian Stacks
Gaussian Stack of Apple
Gaussian Stack of Orange
Laplacian Stack of Apple
Laplacian Stack of Orange
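A sketch of how the stacks above can be built (level count and sigma are placeholders):

```python
import cv2
import numpy as np

def gaussian_stack(im, levels=5, sigma=2.0):
    """Repeatedly blur without downsampling; that is what makes it a stack
    rather than a pyramid."""
    stack = [im.astype(np.float64)]
    for _ in range(levels):
        stack.append(cv2.GaussianBlur(stack[-1], (0, 0), sigma))
    return stack

def laplacian_stack(im, levels=5, sigma=2.0):
    """Each Laplacian level is the difference of adjacent Gaussian levels;
    the last level is the final (most blurred) Gaussian level."""
    g = gaussian_stack(im, levels, sigma)
    return [g[i] - g[i + 1] for i in range(levels)] + [g[-1]]
```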
Blending
I applied Gaussian and Laplacian stacks to the "Oraple" image to recreate the results shown in Figure 3.42 of *Computer Vision: Algorithms and Applications* (2nd Edition) by Richard Szeliski, page 167.
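A sketch of the blending step, reusing the stack functions above (for the oraple, the mask is a vertical step image, 1 on one half and 0 on the other; level count and sigma are illustrative):

```python
import numpy as np

def blend(im1, im2, mask, levels=5, sigma=2.0):
    """Multiresolution blending: combine the Laplacian stacks of the two images,
    weighted at every level by a Gaussian stack of the mask, then sum the levels."""
    l1 = laplacian_stack(im1, levels, sigma)
    l2 = laplacian_stack(im2, levels, sigma)
    gm = gaussian_stack(mask, levels, sigma)
    if mask.ndim < im1.ndim:                      # broadcast a 2D mask over color channels
        gm = [m[..., None] for m in gm]
    blended = [gm[i] * l1[i] + (1 - gm[i]) * l2[i] for i in range(len(l1))]
    return np.clip(sum(blended), 0, 1)
```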
Part 2.4: Multiresolution Blending (a.k.a. the oraple!)
Orange and Apple => Oraple
Deadpool + Wolverine => Wolpool
Banana + Shiba => Shinana (Favorite One)
Reflection
The project was pretty fun! I learned a lot about image filtering and frequency-manipulation techniques. It is pretty cool to create hybrid images and to blend images using Gaussian and Laplacian stacks.