EE 5111: Estimation Theory Mini Project 1


1 Problem
Consider the following OFDM system model:
$y = XFh + n, \qquad (1)$
where $y \in \mathbb{C}^{512}$ is the vector of observations, $X$ is a $512$-dimensional diagonal matrix of known symbols, $h$ is the $L$-tap time-domain channel vector, $F$ is the $512 \times L$ matrix performing the IDFT [1], and $n$ is complex Gaussian noise with variance $\sigma^2$.
For the following set of experiments, generate a set of random bits and modulate them as QPSK symbols to generate $X$ [2]. $h$ is a multipath Rayleigh fading channel vector with an exponentially decaying power-delay profile $p$, where $p[k] = e^{-\lambda(k-1)}$, $k = 1, 2, \dots, L$. That is, each component of $h$ is $h[k] = \frac{1}{\lVert p \rVert_2}\,(a[k] + i\,b[k])\,p[k]$, where $a[k], b[k] \sim \mathcal{N}(0, \tfrac{1}{2})$, $k = 1, 2, \dots, L$. Here, $\lambda$ is the decay factor (choose $\lambda = 0.2$ for your simulations). Now perform the following experiments on the described problem setup.
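As a reference for the setup above, here is a minimal sketch of how the observation model could be generated in Python/NumPy. The helper names (`qpsk_diag`, `idft_matrix`, `rayleigh_channel`) and the fixed seed are illustrative choices, not prescribed by the assignment.

```python
import numpy as np

N_SC = 512            # number of subcarriers / observations
LAMBDA = 0.2          # decay factor of the power-delay profile
L = 32                # number of channel taps
sigma2 = 0.1          # noise variance (repeat with 0.01)

rng = np.random.default_rng(0)

def qpsk_diag(n, rng):
    """Diagonal matrix of QPSK symbols drawn from {+-1 +- 1j} (footnote [2])."""
    bits = rng.integers(0, 2, size=(n, 2))
    symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)
    return np.diag(symbols)

def idft_matrix(n, L):
    """F(i, j) = exp(j*2*pi*(i-1)*(j-1)/n), i = 1..n, j = 1..L (footnote [1])."""
    i = np.arange(n)[:, None]
    j = np.arange(L)[None, :]
    return np.exp(1j * 2 * np.pi * i * j / n)

def rayleigh_channel(L, lam, rng):
    """L-tap Rayleigh channel with exponentially decaying power-delay profile."""
    p = np.exp(-lam * np.arange(L))                  # p[k] = e^{-lambda (k-1)}
    a = rng.normal(0.0, np.sqrt(0.5), L)
    b = rng.normal(0.0, np.sqrt(0.5), L)
    return (a + 1j * b) * p / np.linalg.norm(p)

X = qpsk_diag(N_SC, rng)
F = idft_matrix(N_SC, L)
h = rayleigh_channel(L, LAMBDA, rng)
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N_SC) + 1j * rng.standard_normal(N_SC))
y = X @ F @ h + n                                    # observation model (1)
```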
1. Estimate $h$ using the least squares method of estimation with $L = 32$ [3].
2. Now, suppose that $h$ is sparse with just 6 non-zero taps. Assuming that you know the non-zero locations, estimate $h$ using least squares with the sparsity information.
3. Next, introduce a guard band of 180 symbols on either side [4], i.e. we now have a reduced number of observations. For this case:
   (a) Repeat (1) and (2) for the above setup.
   (b) Apply regularization and redo least squares. Use various values of $\alpha$ for regularization with $\alpha I$ and compare the estimation results.
   (A minimal sketch covering experiments 1-3 is given after the footnotes below.)
Footnotes:
[1] $F(i, j) = e^{j 2\pi (i-1)(j-1)/512}$; $i = 1, \dots, 512$, $j = 1, \dots, L$.
[2] $X_{i,i} \in \{1 + 1j,\ -1 + 1j,\ 1 - 1j,\ -1 - 1j\}$.
[3] Note that you are dealing with complex data now, and hence the least squares estimate for the model $y = Xb$ is $\hat{b} = (X^H X)^{-1} X^H y$.
[4] Suppress to zero the first and last 180 symbols in $X$.
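Experiments 1-3 amount to ordinary, support-restricted, and ridge-regularized least squares on the model matrix $A = XF$. The following minimal sketch continues the NumPy setup above; the example support set and the specific $\alpha$ values are illustrative choices, not part of the problem statement.

```python
def ls_estimate(A, y):
    """Ordinary least squares (footnote [3]): (A^H A)^{-1} A^H y."""
    return np.linalg.solve(A.conj().T @ A, A.conj().T @ y)

A = X @ F                                            # 512 x L model matrix

# Experiment 1: plain least squares with L = 32
h_hat_ls = ls_estimate(A, y)

# Experiment 2: least squares restricted to the 6 known non-zero taps
# (in the actual experiment, generate h with non-zero entries only on this support)
support = np.array([0, 4, 8, 12, 16, 20])            # placeholder support
h_hat_sparse = np.zeros(L, dtype=complex)
h_hat_sparse[support] = ls_estimate(A[:, support], y)

# Experiment 3: guard band, i.e. zero the first and last 180 diagonal symbols of X (footnote [4])
guard = 180
d = np.diag(X).copy()
d[:guard] = 0
d[-guard:] = 0
Xg = np.diag(d)
Ag = Xg @ F
yg = Ag @ h + n                                      # reduced effective observations

# 3(a): repeat experiments 1 and 2 with Ag, yg
h_hat_guard = ls_estimate(Ag, yg)

# 3(b): ridge-regularized least squares, (A^H A + alpha I)^{-1} A^H y
def ridge_estimate(A, y, alpha):
    M = A.conj().T @ A + alpha * np.eye(A.shape[1])
    return np.linalg.solve(M, A.conj().T @ y)

for alpha in (0.01, 0.1, 1.0):                       # illustrative alpha values
    h_hat_ridge = ridge_estimate(Ag, yg, alpha)
```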
4. Perform least squares estimation of $h$ with the following linear constraints (a sketch of one way to impose them follows below):
   $h[1] = h[2]$
   $h[3] = h[4]$
   $h[5] = h[6]$
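One convenient route for experiment 4 is to reparameterize: write $h = Bg$, where each tied pair of taps shares a single free parameter, and solve ordinary least squares for $g$. The sketch below takes that route (the helper name `constrained_ls` is illustrative); the closed-form solution via Lagrange multipliers is equally valid.

```python
def constrained_ls(A, y, L):
    """Least squares subject to h[1]=h[2], h[3]=h[4], h[5]=h[6] (1-indexed)."""
    tied_pairs = [(0, 1), (2, 3), (4, 5)]            # 0-indexed tap pairs
    B = np.zeros((L, L - len(tied_pairs)))
    col = 0
    tied = set()
    for i, j in tied_pairs:
        B[i, col] = B[j, col] = 1.0                  # both taps share one free parameter
        tied.update((i, j))
        col += 1
    for i in range(L):
        if i not in tied:
            B[i, col] = 1.0                          # untied taps keep their own parameter
            col += 1
    AB = A @ B                                       # reduced model y = (AB) g + n
    g = np.linalg.solve(AB.conj().T @ AB, AB.conj().T @ y)
    return B @ g                                     # constraints hold by construction

h_hat_con = constrained_ls(A, y, L)
```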
For each of the above experiments, you have to compare $E[\hat{h}]$ and $h$, and the theoretical and simulated MSE of estimation, all averaged over 10000 random trials. (Generate different instances of $X$ and $n$ for each trial; a sketch of the trial-averaging loop is given below.) Repeat the experiments for $\sigma^2 \in \{0.1, 0.01\}$ in each case. Plot $\hat{h}$ and $h$ for one trial in each of the above cases.
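The trial-averaging loop could be structured as sketched below for the plain least squares case; the same skeleton applies to the other experiments. The theoretical MSE used here is the standard least squares expression $\sigma^2\,\mathrm{tr}\big((A^H A)^{-1}\big)$, averaged over trials since $X$ is redrawn each time; the plotting step is omitted.

```python
n_trials = 10000
sigma2 = 0.1                                         # repeat with sigma2 = 0.01

h = rayleigh_channel(L, LAMBDA, rng)                 # one channel realization, fixed across trials
h_hat_mean = np.zeros(L, dtype=complex)
mse_sim = 0.0
mse_th = 0.0

for _ in range(n_trials):
    X = qpsk_diag(N_SC, rng)                         # new X and n in every trial
    A = X @ F
    n = np.sqrt(sigma2 / 2) * (rng.standard_normal(N_SC) + 1j * rng.standard_normal(N_SC))
    y = A @ h + n
    h_hat = ls_estimate(A, y)
    h_hat_mean += h_hat / n_trials
    mse_sim += np.sum(np.abs(h_hat - h) ** 2) / n_trials
    mse_th += sigma2 * np.trace(np.linalg.inv(A.conj().T @ A)).real / n_trials

print("max |E[h_hat] - h| :", np.max(np.abs(h_hat_mean - h)))
print("simulated MSE      :", mse_sim)
print("theoretical MSE    :", mse_th)
```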
5. Next, for the scenarios in questions 2 and 3, compare the results with the estimates you get from the following steps:
   - Step 1:
     Algorithm 1: To find the non-zero locations of the sparse vector $h$ (support estimate).
       Input: observation $y$, matrix $A = XF$, sparsity $k_0 = 6$
       Initialize $S^0_{\mathrm{omp}} = \emptyset$, $k = 1$, $r^0 = y$
       for $k \leftarrow 1$ to $k_0$ do
         Identify the next column as $t_k = \arg\max_j \lvert A_j^H r^{k-1} \rvert$
         Expand the current support as $S^k_{\mathrm{omp}} = S^{k-1}_{\mathrm{omp}} \cup \{t_k\}$
         Update the residual: $r^k = [I_{512} - P_k]\,y$, where $P_k = A_{S^k_{\mathrm{omp}}} A^{\dagger}_{S^k_{\mathrm{omp}}}$
         Increment $k \rightarrow k + 1$
       end
       Output: support estimate $\hat{S} = S^{k_0}_{\mathrm{omp}}$
   - Step 2: Now that you know the non-zero locations of $h$, estimate $h$ using least squares.

*In the algorithm, $A_j$ is the $j$-th column of the matrix $A$, $A_S$ denotes the sub-matrix of $A$ formed using the columns indexed by $S$, and $A^{\dagger} = (A^H A)^{-1} A^H$ is the Moore-Penrose pseudo-inverse of $A$. Also, $I_N$ is the $N$-dimensional identity matrix.
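Algorithm 1 is orthogonal matching pursuit (OMP) applied to $A = XF$. A minimal sketch of Steps 1 and 2, reusing the setup above (the function name `omp_support` is illustrative; for the guard-band scenario substitute `Ag` and `yg`):

```python
def omp_support(A, y, k0=6):
    """Algorithm 1: greedy (OMP) estimate of the support of the sparse vector h."""
    support = []
    r = y.copy()                                     # r^0 = y
    for _ in range(k0):
        t = int(np.argmax(np.abs(A.conj().T @ r)))   # t_k = argmax_j |A_j^H r^{k-1}|
        support.append(t)                            # S^k = S^{k-1} U {t_k}
        As = A[:, support]
        P = As @ np.linalg.pinv(As)                  # P_k = A_S A_S^dagger
        r = y - P @ y                                # r^k = (I - P_k) y
    return np.array(sorted(support))

# Step 2: least squares restricted to the estimated support
S_hat = omp_support(A, y, k0=6)
h_hat_omp = np.zeros(L, dtype=complex)
h_hat_omp[S_hat] = ls_estimate(A[:, S_hat], y)
```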
2 Submission
You are required to submit this problem no later than Feb 17th, 2020. Submission is by showing the code of your three-member team to the respective TA and explaining your results.