
CSE 152 Introduction to Computer Vision Homework 1 solved


1 Convolutions and Correlations
The file you will work on in this section is filters.m.
1.1 Implementation [10 points]
In this section, you will implement your own convolution and correlation in
MATLAB. You should implement the following two functions in filters.m:
function y = my_conv2d(I, f)
function y = my_corr2d(I, f)
Both of them take an image I and a filter f as inputs and output an image
y that has the same shape as the input image. Your implementation will be
compared with the MATLAB functions conv2 and filter2. Show an input image
and two corresponding output images in your write-up. You are provided with
example images in folder img to test your implementation, but you are free to
include any image you like in your write-up.
Note: The convolution and correlation formulas shown in the slides are not
exactly the same as the built-in functions in MATLAB. You may need to add
an offset to the original image like this:
h[m, n] = \sum_{k, l} f[k, l]\, I[m - k + \Delta,\, n - l + \Delta]
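A minimal sketch of one way to fill in these two functions, assuming zero padding at the borders and an odd-sized filter (one possible approach, not the official solution):

function y = my_corr2d(I, f)
    % Sketch: same-size cross-correlation via zero padding (assumes odd-sized f).
    [fh, fw] = size(f);
    ph = floor(fh / 2);                       % padding that keeps the output the
    pw = floor(fw / 2);                       % same size as the input image
    Ipad = padarray(double(I), [ph pw], 0, 'both');
    y = zeros(size(I));
    for m = 1:size(I, 1)
        for n = 1:size(I, 2)
            patch = Ipad(m:m+fh-1, n:n+fw-1); % window centered on pixel (m, n)
            y(m, n) = sum(sum(patch .* f));   % correlation: no kernel flip
        end
    end
end

function y = my_conv2d(I, f)
    % Sketch: convolution is correlation with the kernel flipped in both axes.
    y = my_corr2d(I, rot90(f, 2));
end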
1.2 Commutative Property [5 points]
Recall that the convolution of an image f : R^2 → R and a kernel h : R^2 → R
is defined as follows:
(f * h)[m, n] = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} f[i, j] \cdot h[m - i, n - j]
Or equivalently,
(f * h)[m, n] = \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} h[i, j] \cdot f[m - i, n - j] = (h * f)[m, n]
Show that this is true (i.e. prove that the convolution operator is commutative:
f ∗ h = h ∗ f). Please answer it in your write-up.
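For reference, one standard change-of-variables argument is sketched below (substituting i' = m − i, j' = n − j); your write-up may of course phrase it differently:

\begin{align*}
(f * h)[m, n] &= \sum_{i=-\infty}^{\infty} \sum_{j=-\infty}^{\infty} f[i, j] \cdot h[m - i, n - j] \\
&= \sum_{i'=-\infty}^{\infty} \sum_{j'=-\infty}^{\infty} f[m - i', n - j'] \cdot h[i', j'] && (i' = m - i,\ j' = n - j) \\
&= (h * f)[m, n].
\end{align*}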
1.3 Linear and Shift Invariance [5 points]
Let f be a function R^2 → R. Consider a system S that maps f to g, where g = f ∗ h
for some kernel h : R^2 → R. Show that S defined by any kernel h is a Linear Shift
Invariant (LSI) system. In other words, for any h, show that S satisfies both of
the following:
• S[a · f_1 + b · f_2] = a · S[f_1] + b · S[f_2]
• If S maps f[m, n] to g[m, n], then S maps f[m − m_0, n − n_0] to g[m − m_0, n − n_0]
Please answer it in your write-up.
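For reference, both properties follow directly from the definition of convolution; a sketch (the substitution i' = i − m_0, j' = j − n_0 handles shift invariance), which your write-up may phrase differently:

\begin{align*}
S[a f_1 + b f_2][m, n] &= \sum_{i, j} \bigl(a f_1[i, j] + b f_2[i, j]\bigr)\, h[m - i, n - j] = a\, S[f_1][m, n] + b\, S[f_2][m, n], \\
S\bigl[f[\cdot - m_0, \cdot - n_0]\bigr][m, n] &= \sum_{i, j} f[i - m_0, j - n_0]\, h[m - i, n - j] \\
&= \sum_{i', j'} f[i', j']\, h[(m - m_0) - i', (n - n_0) - j'] = g[m - m_0, n - n_0].
\end{align*}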
2 Fourier Transform
The file you will work on in this section is frequency.m.
2.1 1D Fourier Series [3 points]
In this question, you are expected to compute the 1D Fourier series to convert an
input function from the spatial domain to the frequency domain by filling out the
corresponding part of frequency.m. The Fourier transform needs to be rearranged by
shifting the zero-frequency component to the center of the array. You are allowed
to call built-in MATLAB functions. Figure 1 shows the example results for the
function cos(2πx/10). Please save your resulting figure for the function
sin(2πx/5) + cos(2πx/10) and include it in your write-up.
Figure 1: 1D Fourier transform of cos(2πx/10)
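A minimal sketch of this computation, assuming a sample spacing dx = 0.1 over the range [−50, 50) (the spacing, range, and variable names are my choices, not part of the handout):

% 1D Fourier transform with the zero-frequency component shifted to the center
dx = 0.1;
x  = -50:dx:50 - dx;                          % sample grid (assumption)
N  = numel(x);
s  = sin(2*pi*x/5) + cos(2*pi*x/10);          % signal from the assignment
S  = fftshift(fft(s));                        % FFT, then center the zero frequency
freq = (-N/2 : N/2 - 1) / (N * dx);           % frequency axis (cycles per unit)
figure; plot(freq, abs(S));
xlabel('frequency'); ylabel('magnitude');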
2.2 2D Fourier Transform [3 points]
Now you will implement the 2D Fourier transform to convert an image from the
spatial domain to the frequency domain by filling out the necessary lines in
frequency.m. The Fourier transform needs to be rearranged by shifting the
zero-frequency component to the center of the array. You are allowed to call
built-in MATLAB functions. Figure 2 shows the example results for the image
dog.jpg. Pick an arbitrary image other than the example one, save the resulting
figure produced by your code, and include it in your write-up.
Figure 2: 2D Fourier Transform for dog.jpg
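A minimal sketch for this part, assuming dog.jpg is an RGB image (variable names are mine):

% 2D Fourier transform with the zero-frequency component shifted to the center
I = im2double(rgb2gray(imread('img/dog.jpg')));
F = fftshift(fft2(I));                        % 2D FFT, zero frequency centered
figure;
subplot(1, 2, 1); imshow(I);                   title('spatial domain');
subplot(1, 2, 2); imshow(log(1 + abs(F)), []); title('log magnitude spectrum');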
2.3 Sampling and Aliasing [8 points]
In the lecture on filters and frequency analysis, we introduced how to subsample
an image and fix aliasing. In this question, you are asked to first downsample an
image, then fix the aliasing issue using the algorithm introduced in class. (See
slides P84-P93 for reference.) The implementation can be done by filling out the
corresponding part of frequency.m. You are not allowed to call imresize()
directly, but you can use other built-in MATLAB functions.
Run your code on zebra.jpg and show the resulting figure in your report.
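A minimal sketch of blur-then-subsample versus naive subsampling (the subsampling factor and Gaussian sigma below are my choices):

% Downsample zebra.jpg, then fix aliasing by low-pass filtering first
I = im2double(rgb2gray(imread('img/zebra.jpg')));
s = 4;                                        % subsampling factor (assumption)
naive   = I(1:s:end, 1:s:end);                % keeps every s-th pixel: aliased
blurred = imgaussfilt(I, s / 2);              % remove high frequencies first
fixed   = blurred(1:s:end, 1:s:end);          % then subsample
figure;
subplot(1, 3, 1); imshow(I);     title('original');
subplot(1, 3, 2); imshow(naive); title('naive subsampling');
subplot(1, 3, 3); imshow(fixed); title('blur, then subsample');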
2.4 Low-pass Filtering and High-pass Filtering [12 points]
In this question, you will implement a procedure to do low-pass filtering and
high-pass filtering on an input image. Both the low-pass filter and the high-pass
filter should be applied in the frequency domain. (See slides P76-P80 for
reference.) One low-pass filter you are recommended to try is a Gaussian filter.
Images after filtering are shown in the spatial domain. You are allowed to call
built-in MATLAB functions to do the Fourier transform or create filters. Run your
code on an image and include the resulting figure in your write-up.
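A minimal sketch of Gaussian low-pass filtering in the frequency domain, with the high-pass result taken as its complement (the input image and the cutoff sigma0 are my choices):

% Low-pass and high-pass filtering in the frequency domain
I = im2double(rgb2gray(imread('img/dog.jpg')));
[H, W] = size(I);
[u, v] = meshgrid(-floor(W/2):ceil(W/2)-1, -floor(H/2):ceil(H/2)-1);
sigma0 = 20;                                  % cutoff, in frequency-plane pixels
G = exp(-(u.^2 + v.^2) / (2 * sigma0^2));     % centered Gaussian low-pass mask
F = fftshift(fft2(I));
lowpass  = real(ifft2(ifftshift(F .* G)));
highpass = real(ifft2(ifftshift(F .* (1 - G))));
figure;
subplot(1, 3, 1); imshow(I);             title('input');
subplot(1, 3, 2); imshow(lowpass);       title('low-pass');
subplot(1, 3, 3); imshow(highpass, []);  title('high-pass');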
3 Local Feature Descriptors and Matching
The file you will work on in this section is local_feature.m.
In this part, you will implement a keypoint detector, feature descriptors, and
feature matching. Keypoint detectors find particularly salient points, such as
corners, in an image, upon which we can apply a feature descriptor. Once we have
descriptors, we can use them to extract local features for each keypoint and
match keypoints between images by measuring feature similarities.
3.1 Harris Corner Detector and Feature Matching [25 points]
1. Complete the implementation of the function harris_corners(I), which takes
an image and outputs a response map that has the same shape as the input
image. The response map is computed according to the corner response
function. (A possible structure for this function is sketched after this list.)
2. Design and implement your own simple feature descriptor. One design
choice you are recommended to try is to build a gradient histogram for
the local patch.
First, you need to complete the function simple_descriptor(patch).
Then, implement describe_keypoints(I, keypoints) to get local feature
descriptors for all keypoints. (simple_descriptor() is called in
describe_keypoints().)
3. Complete the function match_descriptors(desc1, desc2). In this function,
you will match the keypoints according to your descriptors. You can use any
distance function mentioned in class. Run your code on the provided images
img/building_1.jpg and img/building_2.jpg and include the visualization
results in your write-up.
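As referenced in item 1, here is a possible structure for harris_corners (a sketch only; the window size, sigma, and constant k below are my choices):

function R = harris_corners(I)
    % Harris corner response map, same size as the input image
    I = im2double(I);
    k = 0.04;                                 % empirical Harris constant
    [Ix, Iy] = imgradientxy(I);               % image gradients
    w = fspecial('gaussian', 5, 1);           % weighting window (assumption)
    Sxx = imfilter(Ix .* Ix, w);              % smoothed second-moment entries
    Syy = imfilter(Iy .* Iy, w);
    Sxy = imfilter(Ix .* Iy, w);
    detM   = Sxx .* Syy - Sxy.^2;             % determinant of the structure matrix
    traceM = Sxx + Syy;                       % trace of the structure matrix
    R = detM - k * traceM.^2;                 % corner response function
end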
3.2 SURF Features [4 points]
In class, we talked about the SIFT feature descriptor. However, MATLAB does not
have a built-in SIFT function because SIFT is patented. Instead, you can try
another local feature, SURF. In this part, you need to find corresponding points
between two images using SURF features. You are not asked to implement SURF
yourself, so feel free to call any relevant built-in functions. Run your code on
the provided images img/building_1.jpg and img/building_2.jpg
and include the visualization results in your write-up.
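A minimal sketch using the Computer Vision Toolbox SURF functions (rgb2gray assumes the images are RGB):

% Match SURF keypoints between the two building images
I1 = rgb2gray(imread('img/building_1.jpg'));
I2 = rgb2gray(imread('img/building_2.jpg'));
pts1 = detectSURFFeatures(I1);
pts2 = detectSURFFeatures(I2);
[feat1, valid1] = extractFeatures(I1, pts1);
[feat2, valid2] = extractFeatures(I2, pts2);
pairs = matchFeatures(feat1, feat2);          % indices of matching descriptors
matched1 = valid1(pairs(:, 1));
matched2 = valid2(pairs(:, 2));
figure; showMatchedFeatures(I1, I2, matched1, matched2, 'montage');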
4 Filters in Convolutional Neural Networks
The file you will work on in this section is cnn.m.
In this part, you will get a chance to see what a CNN learns on a simple dataset.
You are provided with the MNIST dataset and a simple CNN pretrained on it.
MNIST is a handwritten digit dataset which contains 60,000 training examples
and 10,000 test examples. All of the examples are 28 × 28 grayscale images with
category labels 0 to 9.
Before getting started, make sure that you:
1. Install the MATLAB Deep Learning Toolbox by running the following command in MATLAB:
matlab.addons.supportpackage.internal.explorer.showSupportPackages('ALEXNET','tripwire')
2. Run the first three lines of cnn.m to load the images and labels of the
training and test sets, load the network model, and print out the details of
each layer.
If everything works well, you should see the same results as in Figure 3.
Figure 3: Layer details of the pretrained CNN on MNIST
4.1 Filter Visualization [7 points]
Complete the provided code to visualize the 16 filters in the conv1 layer and
report your visualization results in your write-up. Select two filters and
explain in your report what properties each of those filters picks up.
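One possible way to visualize the conv1 filters (a sketch; the layer index 2 and the variable name net are assumptions that depend on how cnn.m loads the network):

% Visualize the 16 conv1 filters as a 4-by-4 grid
conv1 = net.Layers(2);                        % assumes conv1 is the second layer
W = conv1.Weights;                            % e.g. 5 x 5 x 1 x 16 weight array
figure;
for i = 1:size(W, 4)
    subplot(4, 4, i);
    imagesc(W(:, :, 1, i));                   % show the i-th filter
    axis image off;
end
colormap gray;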
4.2 Network Activation Visualization [3 points]
First, select 5 images you like from the test set. Then use the two filters you
selected in Section 4.1 to extract activations for each image, and show the
visualization results in your write-up.
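A minimal sketch using activations() from the Deep Learning Toolbox (the variable names net and XTest, and the filter indices 3 and 7 standing in for the two filters you selected, are assumptions):

% Show conv1 responses of two selected filters for one test image
img = XTest(:, :, :, 1);                      % one test image (assumption)
A = activations(net, img, 'conv1');           % H x W x 16 response maps
figure;
subplot(1, 3, 1); imshow(img, []);            title('input');
subplot(1, 3, 2); imshow(A(:, :, 3), []);     title('filter 3 response');
subplot(1, 3, 3); imshow(A(:, :, 7), []);     title('filter 7 response');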
4.3 Image Retrieval [14 points]
First, select 3 images you like from the test set. Then, for each test image,
retrieve the 5 most similar images from the first 1000 images in the training
set. In your implementation, use the Euclidean distance between the activations
of the conv3 layer as the similarity metric.
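A minimal sketch of the retrieval step for one query image (the variable names net, XTest, and XTrain are assumptions for whatever cnn.m loads):

% Retrieve the 5 nearest training images under Euclidean distance on conv3
query = XTest(:, :, :, 1);                    % one test image (assumption)
qFeat = activations(net, query, 'conv3', 'OutputAs', 'rows');                       % 1 x D
trainFeat = activations(net, XTrain(:, :, :, 1:1000), 'conv3', 'OutputAs', 'rows'); % 1000 x D
d = sqrt(sum((trainFeat - qFeat).^2, 2));     % distance to each training image
[~, idx] = sort(d);                           % ascending: nearest first
figure;
subplot(1, 6, 1); imshow(query, []); title('query');
for k = 1:5
    subplot(1, 6, k + 1); imshow(XTrain(:, :, :, idx(k)), []);
    title(sprintf('rank %d', k));
end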
5 Questionnaire [1 point]
Approximately how many hours did you spend on this homework? Please answer
it in your write-up.