CS6140 Asmt 6: Graphs

Overview
In this assignment you will explore different approaches to analyzing Markov chains.
You will use one data set for this assignment:
• http://www.cs.utah.edu/~jeffp/teaching/cs5140/A6/M.dat
This data set is in matrix format and can be loaded into MATLAB or Octave. Calling
load filename (for instance load M.dat)
puts the data in the file into memory; in the example above, this is the matrix M. You can then
display this matrix by typing M
As usual, it is highly recommended that you use LaTeX for this assignment. If you do not, you may
lose points if your assignment is difficult to read or hard to follow. Find a sample form in this directory:
http://www.cs.utah.edu/~jeffp/teaching/latex/
1 Finding q∗ (50 points)
We will consider four ways to find q* = M^t q_0 as t → ∞.
Matrix Power: Choose some large enough value t, and create M^t. Then apply q* = (M^t) q_0. There are two ways to
create M^t: first, we can just let M^(i+1) = M^i * M, repeating this process t − 1 times. Alternatively
(for simplicity assume t is a power of 2), in log_2 t steps create M^(2i) = M^i * M^i.
State Propagation: Iterate q_(i+1) = M * q_i for some large enough number t of iterations.
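State propagation replaces each matrix-matrix product with a cheaper matrix-vector product. A minimal NumPy sketch, again using a made-up 3-state column-stochastic matrix in place of M.dat:

```python
import numpy as np

# Hypothetical column-stochastic matrix (stand-in for M.dat).
M = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

def state_propagation(M, q0, t):
    """Iterate q <- M q for t steps; each step is O(n^2) vs O(n^3) for a matrix product."""
    q = q0.copy()
    for _ in range(t):
        q = M @ q
    return q

q_star = state_propagation(M, np.array([1.0, 0.0, 0.0]), 512)
```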
Random Walk: Start with a fixed state q_0 = [0, 0, ..., 1, ..., 0, 0]^T where there is a 1 only at the ith entry, and
then transition to a new state with a 1 only in the i'th entry by choosing the new location i' proportional
to the values in the ith column of M. Iterate this some large number t_0 of steps to get state q'_0. (This
is the burn-in period.)
Now take t new steps starting at q'_0 and record the location after each step. Keep track of how many
times you have recorded each location and estimate q* as the normalized version (recall ||q*||_1 = 1)
of the vector of these counts.
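The walk above can be simulated directly by sampling each transition from the current state's column of M. A NumPy sketch (the matrix M is again a hypothetical stand-in for M.dat, and the step counts here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical column-stochastic matrix (stand-in for M.dat).
M = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

def random_walk(M, start, t0, t, rng):
    """Burn in for t0 steps, then record t visited states and normalize the counts."""
    n = M.shape[0]
    state = start
    for _ in range(t0):                       # burn-in: these states are discarded
        state = rng.choice(n, p=M[:, state])  # next state ~ column `state` of M
    counts = np.zeros(n)
    for _ in range(t):
        state = rng.choice(n, p=M[:, state])
        counts[state] += 1
    return counts / counts.sum()              # normalize so ||q*||_1 = 1

q_est = random_walk(M, start=0, t0=50, t=5000, rng=rng)
```

Unlike the other methods, this gives only a noisy estimate of q*, with error shrinking as t grows.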
Eigen-Analysis: Compute eig(M) and take the first eigenvector after it has been normalized.
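In NumPy the eig call and the normalization look as follows; this is a sketch with the same hypothetical stand-in matrix, and it normalizes by the L1 norm so the entries form a distribution.

```python
import numpy as np

# Hypothetical column-stochastic matrix (stand-in for M.dat).
M = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

vals, vecs = np.linalg.eig(M)
i = np.argmax(vals.real)      # leading eigenvalue of a stochastic matrix is 1
v = vecs[:, i].real
q_star = v / v.sum()          # rescale so ||q*||_1 = 1 (also fixes the sign)
```

Note that eig returns unit-L2-norm eigenvectors of arbitrary sign, so the rescaling step is what makes the result a probability vector.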
A (20 points): Run each method (with t = 512, q_0 = [1, 0, 0, ..., 0]^T, and t_0 = 50 when needed) and
report the answers.
B (10 points): Rerun the Matrix Power and State Propagation techniques with q_0 = [0.1, 0.1, ..., 0.1]^T.
What value of t is required to get as close to the true answer as with the original initial state?
C (16 points): Explain at least one Pro and one Con of each approach. The Pro should explain a situation
when it is the best option to use. The Con should explain why another approach may be better for some
situation.
D (4 points): Is the Markov chain ergodic? Explain why or why not.
CS 6140 Data Mining; Spring 2014 Instructor: Jeff M. Phillips, University of Utah
2 BONUS: Taxation (5 points)
Repeat the trials in part 1.A above using taxation β = 0.85 so at each step, with probability 1 − β, any
state jumps to a random node. It is useful to see how the outcome changes with respect to the results from
Question 1. Recall that this output is the PageRank vector of the graph represented by M.
Briefly explain (no more than 2 sentences) what you needed to do in order to alter the process in question
1 to apply this taxation.
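As a sketch of one way to apply taxation to the State Propagation method: each iterate becomes a mixture of a step through M (weight β) and a uniform jump (weight 1 − β). Again in NumPy, with the same hypothetical stand-in for M.dat:

```python
import numpy as np

# Hypothetical column-stochastic matrix (stand-in for M.dat).
M = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

def pagerank(M, q0, t, beta=0.85):
    """State propagation with taxation: with probability 1 - beta, jump uniformly."""
    n = M.shape[0]
    q = q0.copy()
    for _ in range(t):
        q = beta * (M @ q) + (1 - beta) / n   # teleport term keeps ||q||_1 = 1
    return q

q_pr = pagerank(M, np.array([1.0, 0.0, 0.0]), 512)
```

Equivalently, one can run the original process on the taxed matrix β M + ((1 − β)/n) 11^T.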