CM 146 Problem Set 5: Boosting, Unsupervised Learning

1 AdaBoost [5 pts]
In the lecture on ensemble methods, we said that in iteration t, AdaBoost picks the pair $(h_t, \beta_t)$ that minimizes the objective:

$$(h_t^*(x), \beta_t^*) = \arg\min_{h_t, \beta_t} \sum_n w_t(n) \exp(-\beta_t y_n h_t(x_n))$$
$$= \arg\min_{h_t, \beta_t} \left[ e^{-\beta_t} \sum_n w_t(n) + \left(e^{\beta_t} - e^{-\beta_t}\right) \sum_n w_t(n)\, I[y_n \neq h_t(x_n)] \right]$$

We define the weighted misclassification error at time t to be $\epsilon_t = \sum_n w_t(n)\, I[y_n \neq h_t(x_n)]$. The weights are normalized so that $\sum_n w_t(n) = 1$.
(a) Take the derivative of the above objective with respect to $\beta_t$ and set it to zero to solve for $\beta_t$, obtaining the update rule for $\beta_t$.
(b) Suppose the training set is linearly separable, and we use a hard-margin linear support vector
machine (no slack) as a base classifier. In the first boosting iteration, what would the resulting
β1 be?
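As a numerical sanity check on part (a), the objective above can be evaluated as a function of $\beta_t$ for a fixed weighted error. The sketch below assumes the standard AdaBoost step size $\beta_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t}$ (which the derivation in part (a) should produce) and verifies that it beats nearby values of $\beta$:

```python
import math

def boost_objective(beta, eps):
    # Exponential-loss objective as a function of beta for a fixed weak
    # learner with weighted error eps (weights normalized to sum to 1):
    # e^{-beta} (1 - eps) + e^{beta} eps
    return math.exp(-beta) * (1.0 - eps) + math.exp(beta) * eps

def closed_form_beta(eps):
    # Standard AdaBoost step size, obtained by zeroing the derivative.
    return 0.5 * math.log((1.0 - eps) / eps)

eps = 0.2
beta_star = closed_form_beta(eps)
# The closed form should do at least as well as nearby values of beta.
nearby = [beta_star + d for d in (-0.1, -0.01, 0.01, 0.1)]
assert all(boost_objective(beta_star, eps) <= boost_objective(b, eps)
           for b in nearby)
```

Note that as `eps` approaches 0 the closed form diverges, which is worth keeping in mind for part (b), where the base classifier is trained on linearly separable data.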
2 K-means for single dimensional data [5 pts]
In this problem, we will work through K-means for single-dimensional data.
(a) Consider the case where K = 3 and we have 4 data points x1 = 1, x2 = 2, x3 = 5, x4 = 7.
What is the optimal clustering for this data? What is the corresponding value of the k-means objective?
(Parts of this assignment are adapted from course material by Jenna Wiens (UMich) and Tommi Jaakkola (MIT).)
(b) One might be tempted to think that Lloyd’s algorithm is guaranteed to converge to the
global minimum when d = 1. Show that there exists a suboptimal cluster assignment (i.e.,
initialization) for the data in the above part that Lloyd’s algorithm will not be able to improve
(to get full credit, you need to show the assignment, show why it is suboptimal and explain
why it will not be improved).
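To experiment with part (b), Lloyd's algorithm for one-dimensional data can be sketched as below. The two initializations shown are illustrative choices, not the intended answer; the point is that different starting centers can converge to different objective values even when d = 1:

```python
def lloyd_1d(points, centers, iters=100):
    # One-dimensional Lloyd's algorithm: alternate between assigning each
    # point to its nearest center and moving each center to its cluster mean.
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in points:
            j = min(range(len(centers)), key=lambda k: (x - centers[k]) ** 2)
            clusters[j].append(x)
        new_centers = [sum(c) / len(c) if c else centers[k]
                       for k, c in enumerate(clusters)]
        if new_centers == centers:  # fixed point reached
            break
        centers = new_centers
    # k-means objective: sum of squared distances to nearest center
    obj = sum(min((x - c) ** 2 for c in centers) for x in points)
    return centers, obj

data = [1, 2, 5, 7]
# Two initializations that converge to different objective values,
# showing Lloyd's algorithm only finds a local optimum.
c1, o1 = lloyd_1d(data, [1.0, 2.0, 6.0])
c2, o2 = lloyd_1d(data, [1.5, 5.0, 7.0])
```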
3 Gaussian Mixture Models [8 pts]
We would like to cluster data $\{x_1, \ldots, x_N\}$, $x_n \in \mathbb{R}^d$, using a Gaussian Mixture Model (GMM) with K mixture components. To do this, we need to estimate the parameters θ of the GMM, i.e., we need to set the values $\theta = \{\omega_k, \mu_k, \Sigma_k\}_{k=1}^K$, where $\omega_k$ is the mixture weight associated with mixture component k, and $\mu_k$ and $\Sigma_k$ denote the mean and the covariance matrix of the Gaussian distribution associated with mixture component k.
If we knew which cluster each sample $x_n$ belongs to (i.e., if we had complete data), we showed in the lecture on Clustering that the log-likelihood l takes the form below, and we can compute the maximum likelihood estimate (MLE) of all the parameters:

$$l(\theta) = \sum_{n=1}^{N} \log p(x_n, z_n) = \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_{nk} \log \omega_k + \sum_{n=1}^{N} \sum_{k=1}^{K} \gamma_{nk} \log N(x_n \mid \mu_k, \Sigma_k)$$

where $\gamma_{nk} = I[z_n = k]$.
Since we do not have complete data, we use the EM algorithm. The EM algorithm works by
iterating between setting each γnk to the posterior probability p(zn = k|xn) (step 1 on slide 26
of the lecture on Clustering) and then using γnk to find the value of θ that maximizes l (step 2
on slide 26). We will now derive updates for one of the parameters, i.e., $\mu_j$ (the mean parameter associated with mixture component j).
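Step 1 above (computing the posterior responsibilities $\gamma_{nk}$) can be sketched for one-dimensional data as follows; the function names and the scalar setting are illustrative assumptions, not part of the assignment:

```python
import math

def normal_pdf(x, mu, sigma):
    # Density of N(x | mu, sigma^2) for scalar x.
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def e_step(x, omegas, mus, sigmas):
    # gamma_k = omega_k N(x | mu_k, sigma_k^2) / sum_j omega_j N(x | mu_j, sigma_j^2)
    joint = [w * normal_pdf(x, m, s) for w, m, s in zip(omegas, mus, sigmas)]
    total = sum(joint)
    return [j / total for j in joint]

# Responsibilities for a point equidistant from two equal-weight components.
gamma = e_step(10.0, [0.5, 0.5], [5.0, 15.0], [1.0, 1.0])
```

By construction the responsibilities for each point sum to one, which is what makes them valid soft assignments.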
(a) To maximize l, compute $\nabla_{\mu_j} l(\theta)$: the gradient of $l(\theta)$ with respect to $\mu_j$.
(b) Set the gradient to zero and solve for $\mu_j$ to show that $\mu_j = \frac{\sum_n \gamma_{nj} x_n}{\sum_n \gamma_{nj}}$.
(c) Suppose that we are fitting a GMM to data using K = 2 components. We have N = 5
samples in our training data with xn, n ∈ {1, . . . , N} equal to: {5, 15, 25, 30, 40}.
We use the EM algorithm to find the maximum likelihood estimates for the model parameters,
which are the mixing weights for the two components, ω1 and ω2, and the means for the two
components, µ1 and µ2. The standard deviations for the two components are fixed at 1.
Suppose that at the end of step 1 of iteration 5 of the EM algorithm, the soft assignments γnk
for the five data items are as shown in Table 1.
n   γn1   γn2
1   0.2   0.8
2   0.2   0.8
3   0.8   0.2
4   0.9   0.1
5   0.9   0.1

Table 1: The entry in row n and column k of the table corresponds to γnk.
What are the updated values of the parameters ω1, ω2, µ1, and µ2 at the end of step 2 of the EM algorithm?
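The step-2 (M-step) updates follow the formulas from parts (a) and (b), together with the standard mixing-weight update $\omega_k = \frac{1}{N}\sum_n \gamma_{nk}$. A minimal sketch, run on hypothetical soft assignments rather than the values from Table 1:

```python
def m_step(xs, gamma):
    # M-step updates for mixing weights and means given soft assignments
    # gamma[n][k]:
    #   omega_k = (1/N) sum_n gamma_nk
    #   mu_k    = sum_n gamma_nk x_n / sum_n gamma_nk
    N, K = len(xs), len(gamma[0])
    Nk = [sum(gamma[n][k] for n in range(N)) for k in range(K)]
    omegas = [Nk[k] / N for k in range(K)]
    mus = [sum(gamma[n][k] * xs[n] for n in range(N)) / Nk[k] for k in range(K)]
    return omegas, mus

# Hypothetical example: two points, two components.
omegas, mus = m_step([0.0, 10.0], [[0.9, 0.1], [0.1, 0.9]])
```

Substituting the data points and the Table 1 responsibilities into the same function gives the answer to part (c).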