Assignment 2: Search Problems in AI


CS 440: INTRODUCTION TO ARTIFICIAL INTELLIGENCE
Problem 1 (10 points): Trace the operation of A∗ search (the tree version) applied to the problem of getting to Bucharest from Lugoj using the straight-line distance heuristic. That is, show the sequence of nodes that the algorithm will consider and the f, g, and h score for each node. You don’t need to draw the graph; just write down a sequence of (city, f(city), g(city), h(city)) tuples in the order in which the nodes are expanded.
[Road-map figure, not reproduced in this text version: it shows the Romanian cities used in this assignment and the road distances between neighbouring cities.]
Figure 1: A simplified road map of part of Romania indicating distances between different cities.
The following figure and discussion are excerpted from the textbook, Section 3.5 (Informed (Heuristic) Search Strategies):
Arad 366            Mehadia 241
Bucharest 0         Neamt 234
Craiova 160         Oradea 380
Drobeta 242         Pitesti 100
Eforie 161          Rimnicu Vilcea 193
Fagaras 176         Sibiu 253
Giurgiu 77          Timisoara 329
Hirsova 151         Urziceni 80
Iasi 226            Vaslui 199
Lugoj 244           Zerind 374
Figure 2: Values of hSLD—straight-line distances to Bucharest (Figure 3.22 in the textbook).
For the problem of getting from Arad to Bucharest, greedy best-first search using hSLD finds a solution without ever expanding a node that is not on the solution path; hence, its search cost is minimal. It is not optimal, however: the path via Sibiu and Fagaras to Bucharest is 32 kilometers longer than the path through Rimnicu Vilcea and Pitesti. This shows why the algorithm is called “greedy”—at each step it tries to get as close to the goal as it can.
Greedy best-first tree search is also incomplete even in a finite state space, much like depth-first search. Consider the problem of getting from Iasi to Fagaras. The heuristic suggests that Neamt be expanded first because it is closest to Fagaras, but it is a dead end. The solution is to go first to Vaslui—a step that is actually farther from the goal according to the heuristic—and then to continue to Urziceni, Bucharest, and Fagaras. The algorithm will never find this solution, however, because expanding Neamt puts Iasi back into the frontier, Iasi is closer to Fagaras than Vaslui is, and so Iasi will be expanded again, leading to an infinite loop. (The graph search version is complete in finite spaces, but not in infinite ones.) The worst-case time and space complexity for the tree version is O(b^m), where m is the maximum depth of the search space. With a good heuristic function, however, the complexity can be reduced substantially. The amount of the reduction depends on the particular problem and on the quality of the heuristic.
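To make the Iasi/Neamt loop concrete, here is a minimal Python sketch of greedy best-first tree search run on a small fragment of the map around Iasi. The neighbour lists follow the road map, but the heuristic values are made-up stand-ins for straight-line distance to Fagaras (those values are not given in the figures above); they are chosen only so that Neamt looks closer to Fagaras than Iasi, and Iasi closer than Vaslui, which is all the loop needs.

def greedy_best_first_tree_search(start, goal, successors, h, max_expansions=10):
    """Greedy best-first TREE search: always expand the frontier node with the
    lowest h(n).  There is no explored set, so already-visited states can be
    regenerated and re-expanded, which is what causes the loop."""
    frontier = [(h(start), start)]
    for _ in range(max_expansions):           # cap so the demo terminates
        if not frontier:
            return None
        frontier.sort()                       # smallest h first
        hval, state = frontier.pop(0)
        print("expand", state, "h =", hval)
        if state == goal:
            return state
        frontier += [(h(s), s) for s in successors(state)]
    return None                               # gave up after max_expansions expansions

# Fragment of the map around Iasi (neighbour lists only; greedy search ignores
# step costs).  The h values are MADE-UP stand-ins for straight-line distance
# to Fagaras, not taken from the figures.
neighbours = {"Iasi": ["Neamt", "Vaslui"], "Neamt": ["Iasi"],
              "Vaslui": ["Iasi", "Urziceni"], "Urziceni": ["Vaslui", "Bucharest"]}
h_to_fagaras = {"Iasi": 150, "Neamt": 120, "Vaslui": 200, "Urziceni": 170, "Bucharest": 180}

greedy_best_first_tree_search("Iasi", "Fagaras",
                              lambda s: neighbours.get(s, []),
                              lambda s: h_to_fagaras.get(s, 0))
# Expansions alternate Iasi, Neamt, Iasi, Neamt, ... and Fagaras is never reached.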
3.5.2 A* search: Minimizing the total estimated solution cost
The most widely known form of best-first search is called A∗ search (pronounced “A-star search”). It evaluates nodes by combining g(n), the cost to reach the node, and h(n), the cost to get from the node to the goal:
f(n) = g(n) + h(n).
Since g(n) gives the path cost from the start node to node n, and h(n) is the estimated cost of the cheapest path from n to the goal, we have
f(n) = estimated cost of the cheapest solution through n.
Thus, if we are trying to find the cheapest solution, a reasonable thing to try first is the node with the lowest value of g(n) + h(n). It turns out that this strategy is more than just reasonable: provided that the heuristic function h(n) satisfies certain conditions, A∗ search is both complete and optimal.
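For reference on Problem 1, here is a minimal Python sketch of A∗ tree search that prints a (city, f, g, h) line each time a node is expanded. It assumes you supply a successors(city) function built from the road map in Figure 1 and an h(city) function built from the straight-line distances in Figure 2; all names here are illustrative, and the hand-written trace is still what the problem asks for.

import heapq
import itertools

def astar_tree_search(start, goal, successors, h):
    """A* TREE search: expand the frontier node with the lowest
    f(n) = g(n) + h(n).  successors(state) yields (next_state, step_cost)
    pairs; h(state) is the heuristic estimate of the cost to the goal."""
    counter = itertools.count()        # tie-breaker so the heap never compares states
    frontier = [(h(start), next(counter), start, 0, [start])]
    while frontier:
        f, _, state, g, path = heapq.heappop(frontier)
        print(f"expand {state}: f={f}, g={g}, h={f - g}")
        if state == goal:              # goal test on expansion, as A* requires
            return path
        for nxt, cost in successors(state):
            g2 = g + cost
            heapq.heappush(frontier, (g2 + h(nxt), next(counter), nxt, g2, path + [nxt]))
    return None

# e.g. astar_tree_search("Lugoj", "Bucharest", successors, h)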
Problem 2 (10 points): Consider a state space where the start state is number 1 and each state k has two successors: numbers 2k and 2k + 1.
(a) Suppose the goal state is 11. List the order in which states will be visited for breadth-first search, depth-limited search with limit 3, and iterative deepening search.
(b) How well would bidirectional search work on this problem? List the order in which states will be visited. What is the branching factor in each direction of the bidirectional search?
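The successor structure is small enough to mock up in a few lines of Python; the sketch below (the names successors and bfs_order are just illustrative) can be used to sanity-check the breadth-first part of the answer, with the goal test applied when a state is taken off the frontier. The depth-limited, iterative-deepening, and bidirectional orders still have to be worked out by hand.

from collections import deque

def successors(k):
    """Each state k has exactly two successors: 2k and 2k + 1."""
    return [2 * k, 2 * k + 1]

def bfs_order(start, goal):
    """Return the states in the order breadth-first search visits them."""
    order = []
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        order.append(state)
        if state == goal:
            return order
        frontier.extend(successors(state))
    return order

print(bfs_order(1, 11))   # visits 1, 2, 3, ... up to the goal state 11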
Problem 3 (5 points): Which of the following statements are correct and which ones are wrong?
(a) Breadth-first search is a special case of uniform-cost search.
(b) Depth-first search is a special case of best-first tree search.
(c) Uniform-cost search is a special case of A∗ search.
(d) Depth-first graph search is guaranteed to return an optimal solution.
(e) Breadth-first graph search is guaranteed to return an optimal solution.
(f) Uniform-cost graph search is guaranteed to return an optimal solution.
(g) A∗ graph search is guaranteed to return an optimal solution if the heuristic is consistent.
(h) A∗ graph search is guaranteed to expand no more nodes than depth-first graph search if the heuristic is consistent.
(i) A∗ graph search is guaranteed to expand no more nodes than uniform-cost graph search if the heuristic is consistent.
Problem 4 (2 points): Iterative deepening is sometimes used as an alternative to breadth-first search. Give one advantage of iterative deepening over BFS, and give one disadvantage of iterative deepening as compared with BFS. Be concise and specific.
Problem 5 (10 points): Prove that if a heuristic is consistent, it must be admissible. Construct an example of an admissible
heuristic that is not consistent. (Hint: you can draw a small graph of 3 nodes and write arbitrary cost and heuristic values
so that the heuristic is admissible but not consistent).
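For reference, the standard definitions are as follows, where h*(n) denotes the true cost of the cheapest path from n to a goal and c(n, a, n') is the step cost of reaching successor n' from n via action a:

Admissible: h(n) ≤ h*(n) for every node n.
Consistent: h(n) ≤ c(n, a, n') + h(n') for every node n and every successor n' of n generated by any action a (with h(G) = 0 for every goal G).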
Problem 6 (3 points): In a Constraint Satisfaction Problem (CSP) search, explain why it is a good heuristic to choose the
variable that is most constrained but the value that is least constraining.
Problem 7 (10 points): Consider the following game tree, where the first move is made by the MAX player and the second
move is made by the MIN player.
(a) What is the best move for the MAX player using the minimax procedure?
(b) Perform a left-to-right (left branch first, then right branch) alpha-beta pruning on the tree. That is, draw only the parts
of the tree that are visited and don’t draw branches that are cut off (no need to show the alpha or beta values).
(c) Do the same thing as in the previous question, but with a right-to-left ordering of the actions. Discuss why different
pruning occurs.
[Game-tree figure for Problem 7, not reproduced in this text version: node labels A (root), B, C, D, E, F, G, H, I, J, K, L, M, N appear on successive levels, with leaf values 3, 5, 0, 7, 5, 7, 8, 4; the exact branch structure is shown only in the original figure. Taken from midterm review slides by Zoran Duric (CS Dept., GMU), October 7, 2008.]
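Since the exact game tree from the figure is not reproduced above, here is a generic Python sketch of minimax with alpha-beta pruning on a tree encoded as nested lists; the small two-ply tree in the example is purely hypothetical, not the one from the figure. Running it with the children in left-to-right and then right-to-left order shows how move ordering changes which leaves are ever evaluated, which is the point of parts (b) and (c).

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf"), visited=None):
    """Minimax with alpha-beta pruning.  A leaf is a number; an internal node
    is a list of children.  `visited` records the leaves actually evaluated,
    so two runs with different child orderings can be compared directly."""
    if visited is None:
        visited = []
    if not isinstance(node, list):              # leaf: return its utility
        visited.append(node)
        return node, visited
    value = float("-inf") if maximizing else float("inf")
    for child in node:
        v, _ = alphabeta(child, not maximizing, alpha, beta, visited)
        value = max(value, v) if maximizing else min(value, v)
        if maximizing:
            alpha = max(alpha, value)
        else:
            beta = min(beta, value)
        if alpha >= beta:                       # cutoff: remaining siblings are pruned
            break
    return value, visited

# Hypothetical two-ply tree (MAX at the root, MIN below) -- NOT the figure's tree.
tree = [[3, 5], [0, 7], [5, 8]]
print(alphabeta(tree, True))                                     # left-to-right
print(alphabeta([branch[::-1] for branch in tree[::-1]], True))  # right-to-left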
Problem 8 (10 points): Which of the following are admissible, given admissible heuristics h1, h2? Which of the following
are consistent, given consistent heuristics h1, h2? Justify your answer.
a) h(n) = min{h1(n), h2(n)}
b) h(n) = wh1(n) + (1 − w)h2(n), where 0 ≤ w ≤ 1
c) h(n) = max{h1(n), h2(n)}
Which one of these three new heuristics (a, b or c) would you prefer to use for A*? Justify your answer.
Problem 9 (10 points): Simulated annealing is an extension of hill climbing that uses randomness to avoid getting stuck in local maxima and on plateaux. A minimal sketch of the basic algorithm is included after part f) for reference.
a) For what types of problems will hill climbing work better than simulated annealing? In other words, when is the
random part of simulated annealing not necessary?
b) For what types of problems will randomly guessing the state work just as well as simulated annealing? In other
words, when is the hill-climbing part of simulated annealing not necessary?
c) Reasoning from your answers to parts (a) and (b) above, for what types of problems is simulated annealing a useful
technique? In other terms, what assumptions about the shape of the value function are implicit in the design of
simulated annealing?
d) As defined in your textbook, simulated annealing returns the current state when the end of the annealing schedule is
reached and if the annealing schedule is slow enough. Given that we know the value (measure of goodness) of each
state we visit, is there anything smarter we could do?
e) Simulated annealing requires a very small amount of memory, just enough to store two states: the current state and the proposed next state. Suppose we had enough memory to hold two million states. Propose a modification to simulated annealing that makes productive use of the additional memory. In particular, suggest something that will likely perform better than just running simulated annealing a million times consecutively with random restarts. [Note: there are multiple correct answers here.]
f) Gradient ascent search is prone to local optima just like hill climbing. Describe how you might adapt the use of randomness in simulated annealing to help gradient ascent search avoid getting trapped in local maxima.
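For reference when answering parts a) through f), here is a minimal Python sketch of basic simulated annealing for a maximization problem, in the spirit of the textbook formulation; the value function, neighbour function, and cooling schedule used in the toy example are arbitrary placeholders.

import math
import random

def simulated_annealing(initial, value, random_neighbour, schedule, max_steps=10000):
    """Basic simulated annealing for MAXIMIZING `value`.  random_neighbour(state)
    returns a random successor; schedule(t) gives the temperature at step t
    (a non-positive temperature ends the search).  Downhill moves are accepted
    with probability exp(delta / T)."""
    current = initial
    for t in range(1, max_steps + 1):
        T = schedule(t)
        if T <= 0:
            break
        candidate = random_neighbour(current)
        delta = value(candidate) - value(current)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = candidate
    return current

# Toy usage: maximize an arbitrary bumpy function over the integers 0..100.
value = lambda x: -((x - 70) ** 2) + 20 * math.sin(x)
neighbour = lambda x: min(100, max(0, x + random.choice([-1, 1])))
schedule = lambda t: 10.0 * (0.99 ** t)          # simple geometric cooling
print(simulated_annealing(random.randint(0, 100), value, neighbour, schedule))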