Informed Search
Dr. Amit Kumar, Dept of CSE, JUET, Guna
INFORMED SEARCH
• All the previous searches have been blind searches: they make no use of any knowledge of the problem.
• When more information than the initial state, the operators, and the goal test is available, the size of the search space can usually be constrained.
• HEURISTIC INFORMATION:
Information about the problem (the nature of the states, the cost of transforming from one state to another, the promise of taking a certain path, and the characteristics of the goals) can sometimes be used to help guide the search more efficiently.
This information is supplied in the form of a heuristic evaluation function f(n, g), a function of the nodes n and/or the goals g.
• Heuristics help to reduce the number of alternatives from an exponential number to a polynomial number and, thereby, obtain a solution in a tolerable amount of time.
Heuristics
• A heuristic is a rule of thumb for deciding which choice
might be best
• There is no general theory for finding heuristics,
because every problem is different
• Choice of heuristics depends on knowledge of the
problem space
• An informed guess of the next step to be taken in
solving a problem would prune the search space
• A heuristic may find a sub-optimal solution or fail to
find a solution since it uses limited information
• In search algorithms, heuristic refers to a function that
provides an estimate of solution cost
The notion of Heuristics
• Heuristics use domain specific knowledge to
estimate the quality or potential of partial
solutions.
• Example:
• Manhattan distance heuristic for 8 puzzle.
• Minimum Spanning Tree heuristic for TSP.
The 8-puzzle
• Heuristic Fn-1: Misplaced Tiles Heuristic — the number of tiles out of place.
• The first picture shows the current state n, and the second picture the goal state.
• h(n) = 5 because tiles 2, 8, 1, 6 and 7 are out of place.
• Heuristic Fn-2: Manhattan Distance Heuristic — another heuristic for the 8-puzzle. It sums the distances by which the tiles are out of place. The distance of a tile is the sum of the differences in its x-position and y-position.
• For the above example, using the Manhattan distance heuristic,
• h(n) = 1 + 1 + 0 + 0 + 0 + 1 + 1 + 2 = 6
• Each tile will have to be moved at least that many times to get it to where it belongs.
• Suppose, from a given position, we try every possible single move (there can be up to four of them), and pick the move with the smallest sum.
• This is a reasonable heuristic for solving the 8-puzzle.
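To make the two heuristics concrete, here is a minimal Python sketch (not from the slides); the flat-tuple board representation and the example state are assumptions made for illustration.

```python
# Hedged sketch: 8-puzzle heuristics. A state is a flat tuple of 9 entries,
# read row by row, with 0 denoting the blank. The goal layout is an assumption.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    """h1: number of non-blank tiles that are not in their goal position."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal=GOAL):
    """h2: sum over tiles of |dx| + |dy| between current and goal positions."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        x1, y1 = idx % 3, idx // 3
        x2, y2 = goal_idx % 3, goal_idx // 3
        total += abs(x1 - x2) + abs(y1 - y2)
    return total

if __name__ == "__main__":
    example = (2, 8, 3, 1, 6, 4, 7, 0, 5)   # an arbitrary example state
    print(misplaced_tiles(example), manhattan_distance(example))
```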
The Informed Search Problem
• Given [S, s, O, G, h] where
– S is the (implicitly specified) set of states.
– s is the start state.
– O is the set of state transition operators, each having some cost.
– G is the set of goal states.
– h() is a heuristic function estimating the distance to a goal.
• To find:
– A minimum-cost sequence of transitions to a goal state.
Hill Climbing
• Hill climbing is a variant of generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space.
• Feedback is provided in terms of a heuristic function.
HILL CLIMBING SEARCH
1. Evaluate the initial state. If it is a goal state then return it and quit.
Otherwise, continue with the initial state as the current state.
2. Loop until a solution is found or there are no new operators left to be
applied in the current state:
- Select an operator that has not yet been applied to the current state
and apply it to produce a new state.
- Evaluate the new state:
* if it is a goal state, then return it and quit
* if it is not a goal state but better than the current state, then make the
new state the current state
* if it is not better than the current state, then continue in the loop
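A compact Python rendering of the loop above; `successors`, `value` and `is_goal` are placeholder functions for the problem-specific operators, heuristic evaluation and goal test, so this is a hedged sketch rather than a drop-in implementation.

```python
import random

def hill_climbing(start, successors, value, is_goal):
    """Simple hill climbing: try operators one at a time and move to the first
    new state that is strictly better than the current one."""
    current = start
    while True:
        if is_goal(current):
            return current
        untried = list(successors(current))        # states reachable by one operator
        random.shuffle(untried)                    # no preferred operator order
        moved = False
        for candidate in untried:
            if is_goal(candidate):
                return candidate
            if value(candidate) > value(current):  # strictly better -> accept immediately
                current = candidate
                moved = True
                break
        if not moved:                              # no operator improves: stop (possibly a local max)
            return current
```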
• It is simply a loop that continually moves in the direction of increasing value.
• The algorithm does not maintain a search tree, so the node structure need only record the state and its elevation, which we denote by VALUE.
• It terminates when it reaches a “peak” where no neighbor has a higher value.
• When there is more than one best successor to choose from, the algorithm can select among them at random.
Hill Climbing Example
• Goal states: H and K
• Local minimum: A -> F -> G
• Solutions:
– A, J, K
– A, F, E, I, K
Steepest-Ascent Hill Climbing
(Gradient Search)
• Considers all the moves from the current state.
• Selects the best one as the next state.
Steepest-Ascent Hill Climbing
(Gradient Search)
1. Evaluate the initial state. If it is a goal state then return it and quit. Otherwise,
continue with the initial state as the current state.
2. Loop until a solution is found or a complete iteration produces no change to the current
state:
- Let SUCC be a state such that any possible successor of the
current state will be better than SUCC (the worst state).
- For each operator that applies to the current state do:
* Apply the operator and generate a new state.
* Evaluate the new state:
* if it is a goal state, then return it and quit
* if it is not a goal state, compare it to SUCC:
if it is better, then set SUCC to this state;
if it is not better, leave SUCC alone.
* If SUCC is better than the current state, then set the current state to SUCC.
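The same skeleton adapted to the steepest-ascent variant: all successors are generated and only the best one (SUCC) is compared with the current state. Again a hedged sketch with placeholder `successors`/`value`/`is_goal` functions.

```python
def steepest_ascent(start, successors, value, is_goal):
    """Steepest-ascent hill climbing: evaluate every successor, keep the best
    one (SUCC), and move only if it improves on the current state."""
    current = start
    if is_goal(current):
        return current
    while True:
        best_succ, best_val = None, float("-inf")   # SUCC starts as "worse than anything"
        for candidate in successors(current):
            if is_goal(candidate):
                return candidate
            v = value(candidate)
            if v > best_val:
                best_succ, best_val = candidate, v
        if best_succ is None or best_val <= value(current):
            return current                           # no improving move: stop
        current = best_succ
```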
DRAWBACKS
– Local maxima: A local maximum is a peak that is higher than each of its neighboring states, but lower than the global maximum. Once a local maximum peak is reached, the algorithm halts even though the solution may be far from satisfactory.
– Plateau: A plateau is an area of the state-space landscape where the evaluation function is flat. It can be a flat local maximum, from which no uphill exit exists. The search will conduct a random walk.
– Ridges: A ridge is a special kind of local maximum. It is an area of the search space that is higher than the surrounding areas and that itself has a slope (which one would like to climb). But the orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse a ridge by single moves.
Hill Climbing: Disadvantages — Ways Out
• Local maximum: backtrack to some earlier node and try going in a different direction.
• Plateau: make a big jump in some direction to try to reach a new section of the search space.
• Ridge: apply two or more rules before doing the test, i.e., move in several directions at once.
• Hill climbing is a local method:
it decides what to do next by looking only at the “immediate” consequences of its choices.
• Global information might be encoded in heuristic functions.
Hill Climbing: Disadvantages
Blocks World example (figure): a Start configuration and a Goal configuration of the blocks A, B, C and D.
Hill Climbing: Disadvantages
Blocks World, Start and Goal configurations (figure), evaluated with a local heuristic:
+1 for each block that is resting on the thing it is supposed to be resting on;
-1 for each block that is resting on a wrong thing.
Under this heuristic the Start state scores 0 and the Goal state scores 4. A sketch of this heuristic follows below.
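A small sketch of the local heuristic. The state representation (a list of stacks, each listed bottom to top) and the single-stack goal A-B-C-D are assumptions made for illustration; they are not given on the slides.

```python
def below(block, stacks):
    """Return what a block rests on: another block or the 'table'."""
    for stack in stacks:
        if block in stack:
            i = stack.index(block)
            return stack[i - 1] if i > 0 else "table"
    raise ValueError(block)

def local_heuristic(stacks, goal_stacks):
    """+1 for each block resting on the thing it should rest on, -1 otherwise."""
    score = 0
    for stack in stacks:
        for block in stack:
            score += 1 if below(block, stacks) == below(block, goal_stacks) else -1
    return score

GOAL = [["A", "B", "C", "D"]]          # assumed goal: A on the table, D on top
print(local_heuristic(GOAL, GOAL))     # 4: every block correctly supported
```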
Hill Climbing: Disadvantages
(figure) The Start state (local heuristic value 0) and the state reached after one move (local heuristic value 2).
Hill Climbing: Disadvantages
(figure) The three successors of the current state all have local heuristic value 0, while the current state has value 2, so hill climbing halts at this local maximum even though the Goal (value 4) has not been reached.
Hill Climbing: Disadvantages
Blocks World, Start and Goal configurations (figure), evaluated with a global heuristic:
For each block that has the correct support structure: +1 to every block in the support structure.
For each block that has a wrong support structure: -1 to every block in the support structure.
Under this heuristic the Start state scores -6 and the Goal state scores 6. A sketch of this heuristic follows below.
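Extending the sketch above to the global heuristic: each block contributes +1 or -1 to every block beneath it, depending on whether its whole support structure matches the goal. The same assumed list-of-stacks representation is used.

```python
def support(block, stacks):
    """The complete support structure of a block: everything beneath it, bottom first."""
    for stack in stacks:
        if block in stack:
            return stack[:stack.index(block)]
    raise ValueError(block)

def global_heuristic(stacks, goal_stacks):
    """For each block with a correct support structure, +1 per block in that
    structure; for each block with a wrong one, -1 per block in it."""
    score = 0
    for stack in stacks:
        for block in stack:
            sup = support(block, stacks)
            score += len(sup) if sup == support(block, goal_stacks) else -len(sup)
    return score

GOAL = [["A", "B", "C", "D"]]
print(global_heuristic(GOAL, GOAL))    # 6 = 1 (B) + 2 (C) + 3 (D), matching the slide
```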
Hill Climbing: Disadvantages
(figure) Under the global heuristic the same four states score -6, -2, -1 and -3. Unlike with the local heuristic, some successors now score better than the current state, so the search is no longer stuck: the global heuristic rewards moves that dismantle wrong support structures.
One solution with Global heuristic
Hill Climbing: Conclusion
• Can be very inefficient in a large, rough
problem space.
• Global heuristic may have to pay for
computational complexity.
• Often useful when combined with other
methods, getting it started right in the right
general neighbourhood.
Dr. Amit Kumar, Dept of CSE, JUET, Guna
Simulated annealing search
• A hill-climbing algorithm that never makes “downhill” moves towards states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum.
• Instead of starting again randomly when stuck on a local maximum, we could allow the search to take some downhill steps to escape the local maximum. This is the idea of simulated annealing.
• The innermost loop of simulated annealing is quite similar to hill climbing. Instead of picking the best move, however, it picks a random move.
• This lowers the chances of getting caught at a local maximum, a plateau, or a ridge.
Simulated Annealing
• A variation of hill climbing in which, at the
beginning of the process, some downhill
moves may be made.
• To do enough exploration of the whole space
early on, so that the final solution is relatively
insensitive to the starting state.
Simulated Annealing
Physical Annealing
• Physical substances are melted and then gradually cooled until some solid state is reached.
• The goal is to produce a minimal-energy state.
• Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal will be attained.
• Nevertheless, there is some probability of a transition to a higher-energy state: e^(-ΔE/T).
Simulated Annealing
1. Evaluate the initial state. If it is a goal state then return it and quit. Otherwise,
continue with the initial state as the current state.
2. Initialize best-so-far to the current state.
3. Initialize T according to the annealing schedule.
4. Loop until a solution is found or there are no new operators left to be applied in the
current state:
1. Select an operator that has not yet been applied to the current state and apply it to
produce a new state.
2. Evaluate the new state and compute
ΔE = Val(current state) - Val(new state)
* if the new state is a goal state, then return it and quit
* if it is not a goal state but better than the current state, then make the new state the
current state
* if it is not better than the current state, then make the new state the current state
with probability p' = e^(-ΔE/T). This step is usually implemented by invoking a
random number generator to produce a number in the range [0, 1]; if the number is
less than p', the move is accepted, otherwise do nothing.
3. Revise T as necessary according to the annealing schedule.
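A minimal Python sketch of the loop above; the linear cooling schedule, the step bound, and the `successors`/`value`/`is_goal` placeholders are illustrative assumptions.

```python
import math
import random

def simulated_annealing(start, successors, value, is_goal,
                        t_start=40.0, t_min=1.0, max_steps=10_000):
    """Follows the slide's loop (with a step bound added for safety): better moves
    are always accepted, worse moves with probability exp(-dE/T), where
    dE = value(current) - value(new) and a higher value means a better state."""
    current = best_so_far = start
    T = t_start
    for _ in range(max_steps):
        if is_goal(current):
            return current
        candidates = list(successors(current))
        if not candidates:
            break
        new = random.choice(candidates)              # pick one move at random
        dE = value(current) - value(new)             # positive if the new state is worse
        if dE <= 0 or random.random() < math.exp(-dE / T):
            current = new                            # accept better, or worse with prob. e^(-dE/T)
        if value(current) > value(best_so_far):
            best_so_far = current
        T = max(t_min, T - 1)                        # simple linear annealing schedule
    return best_so_far
```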
Simulated Annealing Example
Place: where each tile I should go. Place(i)=i.
Position: where it is at any moment.
Energy: sum(distance(i, position(i))), for i=1,8. Initial State
Energy(solution) = 0
Random neighbor: from each state there are at most 4 possible moves. Choose one randomly.
T = temperature. For example, we start with T=40, and at every iteration we decrease it by 1.
If T=1 then we stop decreasing it.
Energy = (2-1)+(6-2)+(7-3)+(4-1)+(8-5) +(6-4)+(7-3)+(9-8) = 1 + 4 + 4 + 3 + 3 + 2 + 4 +
1 = 22
Following are the neighbors
Suppose T = 40. Then the probability of the first neighbor is e-(25-22)/40 = 0.9277 = 92.77%Dr. Amit Kumar, Dept of CSE, JUET, Guna
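A quick check of that number (assuming, as in the example, that the chosen neighbour has energy 25 while the current state has energy 22):

```python
import math
# Worse neighbour: energy rises from 22 to 25, so dE = 3 at temperature T = 40.
print(math.exp(-(25 - 22) / 40))   # ~0.9277, i.e. accepted with probability ~92.8%
```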
Best-First Search
• Depth-first search: a solution can be found without all competing branches having to be expanded.
• Breadth-first search: the search does not get trapped on dead-end paths.
 Best-first search combines the two: follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one.
Best-First Search
(figure) Successive stages of best-first search: A is expanded to B(3), C(5) and D(1); D, the most promising node, is expanded to E(4) and F(6); B (now the best with 3) is expanded to G(6) and H(5); then E (4) is expanded to I(2) and J(1). At each step the open node with the lowest heuristic value is chosen next.
Best-first Vs Hill Climbing
Hill climbing
• In hill climbing, one move is selected and all the others are rejected and never reconsidered.
• It stops if there are no successor states with better values than the current state.
Best-first search
• One move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising.
• The best available state is selected, even if that state has a value that is lower than the value of the state that was just explored.
Best-First Search
• To retain the path information, each node will contain, in addition to a description of the problem state it represents,
– an indication of how promising it is,
– a parent link that points back to the best node from which it came,
– a list of the nodes that were generated from it.
• Once the goal is found, the parent links make it possible to recover the path to the goal.
• The list of successors makes it possible, if a better path is found to an already existing node, to propagate the improvement down to its successors.
• We will call a graph of this sort an OR graph, since each of its branches represents an alternative problem-solving path.
• To implement such a graph-search procedure, we need two lists of nodes:
• OPEN: nodes that have been generated but have not yet been examined. This is organized as a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function.
• CLOSED: nodes that have already been examined. It is used, whenever a new node is generated, to check whether it has been generated before.
Best-First Search
Algorithm
1. Start with OPEN containing just the initial state.
2. Until a goal is found or there are no nodes left on OPEN do:
I. Pick the best node on OPEN.
II. Generate its successors.
III. For each successor do:
a. If it has not been generated before, evaluate it, add it to
OPEN, and record its parent.
b. If it has been generated before, change the parent if this
new path is better than the previous one. In that case,
update the cost of getting to this node and to any
successors that this node may already have.
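A hedged Python sketch of this algorithm using a priority queue for OPEN, keyed by a caller-supplied evaluation function f (with f = h it behaves as greedy best-first search; with f = g + h as A*). The graph/successor interface is an assumption, and step III(b) — re-parenting when a better path to an existing node is found — is omitted for brevity.

```python
import heapq
from itertools import count

def best_first_search(start, successors, f, is_goal):
    """Repeatedly expand the OPEN node with the best (lowest) f value.
    `successors(state)` yields next states; `f(state)` is the evaluation function."""
    tie = count()                              # tie-breaker so the heap never compares states
    open_heap = [(f(start), next(tie), start)]
    parent = {start: None}                     # also acts as the "generated before" check
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if is_goal(node):
            path = []
            while node is not None:            # follow parent links back to the start
                path.append(node)
                node = parent[node]
            return list(reversed(path))
        for succ in successors(node):
            if succ not in parent:             # evaluate and record new nodes only
                parent[succ] = node
                heapq.heappush(open_heap, (f(succ), next(tie), succ))
    return None
```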
Best-First Search
• Evaluation function: f(n) = g(n) + h(n)
 f(n) measures the value of the current state (its “goodness”).
 g(n) = the exact cost of the cheapest path from the initial state to node n.
 h(n) = the estimated cost of the cheapest path from node n to a goal state.
Traditional informed search strategies
• Greedy search:
f(n) = h(n), i.e. g(n) = 0: f is just the estimated cost of the cheapest path from node n to a goal state.
Neither optimal nor complete.
• Uniform-cost search:
f(n) = g(n), i.e. h(n) = 0: f is the cost of the cheapest path from the initial state to node n.
Optimal and complete, but very inefficient.
• A* search:
– f(n) = g(n) + h(n); the search uses an “admissible” heuristic and takes into account the cost so far, g(n).
– A heuristic function f(n) = g(n) + h(n) is admissible if h(n) never overestimates the cost to reach the goal.
– Then f(n) never overestimates the true cost of reaching the goal state through node n.
– A* always returns the optimal solution path.
Greedy Best First Search
f(n) = h(n) = straight-line distance heuristic
State   Heuristic h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0
(figure) Route-finding graph with Start = A and Goal = I; edge costs: A-B 75, A-C 118, A-E 140, C-D 111, E-F 99, E-G 80, F-I 211, G-H 97, H-I 101.
Greedy Search: Tree Search
(figures) Step-by-step expansion on the graph above, always expanding the open node with the smallest h value:
1. Expand A: B [374], C [329], E [253].
2. Expand E (253): A [366], F [178], G [193].
3. Expand F (178): E [253], I [0].
4. I is the goal, so the search stops.
Resulting path: dist(A-E-F-I) = 140 + 99 + 211 = 450.
Greedy Best First Search: Optimal? NO
f(n) = h(n) = straight-line distance heuristic (same graph and table as above)
Greedy search returns A-E-F-I with cost 450, but the optimal path is
dist(A-E-G-H-I) = 140 + 80 + 97 + 101 = 418,
so greedy best-first search is not optimal.
Greedy Best First Search: Complete?
Now modify the heuristic so that h(C) = 250 (marked ** on the slide); the graph and all other h values are unchanged.
Greedy tree search expands A: B [374], C [250], E [253]; C is now the most promising node.
Expanding C (250) gives D [244]; expanding D generates C [250] again; then D again, and so on — an infinite branch.
Greedy search can therefore fail to terminate: it is not complete (without systematic checking of repeated states).
Greedy Search: Time and Space Complexity?
• Greedy search is not optimal.
• Greedy search is incomplete without systematic checking of repeated states.
• In the worst case, the time and space complexity of greedy search are both O(b^m),
where b is the branching factor and m the maximum path length.
Informed Search Strategies: A* Search
Evaluation function: f(n) = g(n) + h(n)
A* (A Star)
• Greedy Search minimizes a heuristic h(n) which is an
estimated cost from a node n to the goal state. However,
although greedy search can considerably cut the search
time (efficient), it is neither optimal nor complete.
• Uniform Cost Search minimizes the cost g(n) from the
initial state to n. UCS is optimal and complete but not
efficient.
• New Strategy: Combine Greedy Search and UCS to get an
efficient algorithm which is complete and optimal.
A* (A Star)
• A* uses a heuristic function which combines g(n) and h(n)
f(n) = g(n) + h(n)
• g(n) is the exact cost to reach node n from the initial state.
Cost so far up to node n.
• h(n) is an estimation of the remaining cost to reach the
goal.
Algorithm
1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h' value to whatever it is, and its f' value to h' + 0, or h'. Set CLOSED to the empty list.
2. Until a goal node is found, repeat the following procedure:
– If there are no nodes on OPEN, report failure.
– Else, pick the node on OPEN with the lowest f' value. Call it BESTNODE. Remove it from OPEN. Place it on CLOSED.
• See if BESTNODE is a goal node. If so, exit and report a solution (either BESTNODE itself, if all we want is the node, or the path that has been created between the initial state and BESTNODE, if we are interested in the path).
• Else, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet. (First we need to see if any of them have already been generated.) For each such SUCCESSOR, do the following:
a) Set SUCCESSOR to point back to BESTNODE. These backwards links will make it possible to recover the path once a solution is found.
b) Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR.
c) See if SUCCESSOR is the same as any node on OPEN (i.e., it has already been generated but not processed).
– If so, call that node OLD. Since this node already exists in the graph, we can throw SUCCESSOR away and add OLD to the list of BESTNODE's successors. Now we must decide whether OLD's parent link should be reset to point to BESTNODE. It should be if the path we have just found to SUCCESSOR is cheaper than the current best path to OLD (since SUCCESSOR and OLD are really the same node). So see whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE by comparing their g values. If OLD is cheaper (or just as cheap), then we need do nothing. If SUCCESSOR is cheaper, then reset OLD's parent link to point to BESTNODE, record the new cheaper path in g(OLD), and update f'(OLD).
d) If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE's successors.
– Check whether the new path or the old path is better, just as in step 2(c), and set the parent link and the g and f' values appropriately. If we have just found a better path to OLD, we must propagate the improvement to OLD's successors. This is a bit tricky. OLD points to its successors. Each successor in turn points to its successors, and so forth, until each branch terminates with a node that either is still on OPEN or has no successors. So to propagate the new cost downward, do a depth-first traversal of the tree starting at OLD, changing each node's g value (and thus also its f' value), terminating each branch when you reach either a node with no successors or a node to which an equivalent or better path has already been found.
e) If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN and add it to the list of BESTNODE's successors. Compute f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR).
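The bookkeeping above (OPEN/CLOSED, parent links, re-opening nodes when a cheaper path appears) can be compressed into a short Python sketch. The successor interface (pairs of next state and step cost) is an assumption, and instead of the depth-first cost propagation of step (d), an improved node is simply pushed onto OPEN again with its new f value.

```python
import heapq
from itertools import count

def a_star(start, successors, h, is_goal):
    """A*: f(n) = g(n) + h(n). `successors(s)` yields (next_state, step_cost) pairs."""
    tie = count()
    g = {start: 0.0}                   # best known cost from the start to each node
    parent = {start: None}
    open_heap = [(h(start), next(tie), start)]
    closed = set()
    while open_heap:
        f, _, node = heapq.heappop(open_heap)
        if node in closed:
            continue                   # stale queue entry for an already-expanded node
        if is_goal(node):
            cost = g[node]
            path = []
            while node is not None:    # recover the path via the parent links
                path.append(node)
                node = parent[node]
            return list(reversed(path)), cost
        closed.add(node)
        for succ, step in successors(node):
            new_g = g[node] + step
            if succ not in g or new_g < g[succ]:   # first visit, or a cheaper path found
                g[succ] = new_g
                parent[succ] = node
                closed.discard(succ)               # re-open it if it was already expanded
                heapq.heappush(open_heap, (new_g + h(succ), next(tie), succ))
    return None, float("inf")
```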
A* (A Star)
(figure) For a node n on a path from the start to the goal: g(n) is the cost already incurred from the start to n, h(n) is the estimated remaining cost from n to the goal, and
f(n) = g(n) + h(n)
A* Search
f(n) = g(n) + h(n)
g(n) is the exact cost to reach node n from the initial state.
State   Heuristic h(n)
A       366
B       374
C       329
D       244
E       253
F       178
G       193
H       98
I       0
(figure: the same route-finding graph as before, with Start = A and Goal = I)
A* Search: Tree Search
(figures) Step-by-step expansion, always expanding the open node with the smallest f = g + h:
1. Expand A: B [449] (g = 75, h = 374), C [447] (g = 118, h = 329), E [393] (g = 140, h = 253).
2. Expand E [393]: F [417] (g = 239, h = 178), G [413] (g = 220, h = 193).
3. Expand G [413]: H [415] (g = 317, h = 98).
4. Expand H [415]: I [418] (g = 418, h = 0) — a goal node is generated but not yet selected.
5. Expand F [417]: a second path to I with f = 450 (g = 450), which is worse and is ignored.
6. The best open node is now I [418]; it is a goal, so the search terminates.
Result: A* returns A-E-G-H-I with cost 418 — the optimal path, unlike greedy best-first search. A combined code sketch of both searches on this graph follows below.
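Putting the two searches side by side on the graph used in these slides (edge costs and straight-line-distance values as read off the figures above): greedy best-first follows A-E-F-I at cost 450, while A* finds the cheaper A-E-G-H-I at cost 418. This sketch reuses the `best_first_search` and `a_star` functions defined earlier; in a single script they would need to be defined or imported first.

```python
GRAPH = {                                   # edge costs read off the slides
    "A": {"B": 75, "C": 118, "E": 140},
    "B": {"A": 75},
    "C": {"A": 118, "D": 111},
    "D": {"C": 111},
    "E": {"A": 140, "F": 99, "G": 80},
    "F": {"E": 99, "I": 211},
    "G": {"E": 80, "H": 97},
    "H": {"G": 97, "I": 101},
    "I": {"F": 211, "H": 101},
}
H = {"A": 366, "B": 374, "C": 329, "D": 244, "E": 253,
     "F": 178, "G": 193, "H": 98, "I": 0}   # straight-line distances to the goal I

greedy_path = best_first_search(
    "A",
    successors=lambda s: GRAPH[s].keys(),
    f=lambda s: H[s],                       # greedy: f(n) = h(n)
    is_goal=lambda s: s == "I",
)
astar_path, astar_cost = a_star(
    "A",
    successors=lambda s: GRAPH[s].items(),  # (neighbour, step cost) pairs
    h=lambda s: H[s],
    is_goal=lambda s: s == "I",
)
print(greedy_path)                 # ['A', 'E', 'F', 'I']        -> cost 450
print(astar_path, astar_cost)      # ['A', 'E', 'G', 'H', 'I'] 418.0
```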
How good is A*?
• Memory usage depends on the heuristic function.
– If the step cost is constant and h(n) = 0, then A* behaves like breadth-first search.
– If h(n) is a perfect estimator, A* goes straight to the goal.
– ...but if h(n) were perfect, why search?
• The quality of the solution also depends on h(n).
• It can be proved that, if h(n) is optimistic (never overestimates the distance to a goal), then A* will find a best solution (shortest path to a goal).
Estimation of h
(figure) Two cases are contrasted: h' overestimates h versus h' underestimates h.
Conditions for optimality
• Admissibility
– An admissible heuristic never overestimates the cost to reach the goal.
– The straight-line distance hSLD is obviously an admissible heuristic.
• Consistency / Monotonicity
– The admissible heuristic h is consistent (or satisfies the monotone restriction) if for every node N and every successor N' of N:
h(N) ≤ c(N, N') + h(N')   (triangle inequality)
– A consistent heuristic is admissible.
• A* (tree search) is optimal if h is admissible; A* graph search is optimal if h is consistent.
(figure: node N with successor N', edge cost c(N, N'), and estimates h(N), h(N'))
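Both conditions can be checked numerically on an explicit graph. The sketch below reuses the `GRAPH` and `H` tables defined in the earlier example (an assumption about scope) and computes the true cost-to-go with a simple Dijkstra pass for the admissibility check.

```python
import heapq

def is_consistent(graph, h):
    """h is consistent if h(n) <= c(n, n') + h(n') for every edge (n, n')."""
    return all(h[n] <= cost + h[m]
               for n, edges in graph.items()
               for m, cost in edges.items())

def true_goal_distances(graph, goal):
    """Dijkstra from the goal over the (undirected) graph: exact cost-to-go h*(n)."""
    dist = {goal: 0}
    heap = [(0, goal)]
    while heap:
        d, n = heapq.heappop(heap)
        if d > dist.get(n, float("inf")):
            continue
        for m, cost in graph[n].items():
            if d + cost < dist.get(m, float("inf")):
                dist[m] = d + cost
                heapq.heappush(heap, (d + cost, m))
    return dist

h_star = true_goal_distances(GRAPH, "I")
print(is_consistent(GRAPH, H))                      # consistency check
print(all(H[n] <= h_star[n] for n in GRAPH))        # admissibility: h never overestimates
```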
The Effect of Heuristic Accuracy on Performance
An Example: 8-puzzle
Heuristic Evaluation
Admissible heuristics
E.g., for the 8-puzzle:
• h1(n) = number of misplaced tiles
• h2(n) = total Manhattan distance
(i.e., the number of squares from the desired location of each tile)
For the start state S shown in the figure:
• h1(S) = 8
• h2(S) = 3 + 1 + 2 + 2 + 2 + 3 + 3 + 2 = 18
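Reusing the two 8-puzzle heuristic functions sketched earlier, these values can be checked directly. The state S is not drawn in this transcript; the configuration below is a reconstruction that matches the quoted values h1 = 8 and h2 = 18 (the standard textbook example), so treat it as an assumption.

```python
# Reconstructed example (flat row-major tuples, 0 = blank); matches h1 = 8, h2 = 18.
S    = (7, 2, 4, 5, 0, 6, 8, 3, 1)
GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)

print(misplaced_tiles(S, GOAL))       # h1(S) = 8
print(manhattan_distance(S, GOAL))    # h2(S) = 18
```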
Dominance
• If h2(n) ≥ h1(n) for all n (both admissible), then h2 dominates h1.
• h2 is better for search. Why?
• Typical search costs (average number of nodes expanded):
• For a solution at depth 12: A*(h1) = 227 nodes, A*(h2) = 73 nodes.
• For a solution at depth 24: A*(h1) = 39,135 nodes, A*(h2) = 1,641 nodes.
Self Study
Problem Reduction
AND-OR Graphs
(figure) Goal: Acquire TV set — decomposed into an OR of two alternatives: Steal TV set, or (Earn some money AND Buy TV set).
Algorithm AO* (Martelli & Montanari 1973, Nilsson 1980)
Problem Reduction: AO*
(figures) Successive snapshots of AO* on an AND-OR graph rooted at A: cost estimates at the leaves (B, C, D and later E, F, G, H) are propagated back up through the AND and OR arcs, and the revised estimates at A determine which partial solution graph is expanded next.
When to Use Search Techniques
• The search space is small, and
– there are no other available techniques, or
– it is not worth the effort to develop a more efficient technique.
• The search space is large, and
– there are no other available techniques, and
– there exist “good” heuristics.
Conclusions
• Frustration with uninformed search led to the idea of using domain-specific knowledge in a search, so that one can intelligently explore only the relevant part of the search space that has a good chance of containing the goal state. These new techniques are called informed (heuristic) search strategies.
• Even though heuristics improve the performance of informed search algorithms, they are still time consuming, especially for large problem instances.

Weitere ähnliche Inhalte

Was ist angesagt?

Artificial Intelligence -- Search Algorithms
Artificial Intelligence-- Search Algorithms Artificial Intelligence-- Search Algorithms
Artificial Intelligence -- Search Algorithms Syed Ahmed
 
Control Strategies in AI
Control Strategies in AIControl Strategies in AI
Control Strategies in AIAmey Kerkar
 
Problem solving agents
Problem solving agentsProblem solving agents
Problem solving agentsMegha Sharma
 
Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}FellowBuddy.com
 
AI_Session 7 Greedy Best first search algorithm.pptx
AI_Session 7 Greedy Best first search algorithm.pptxAI_Session 7 Greedy Best first search algorithm.pptx
AI_Session 7 Greedy Best first search algorithm.pptxAsst.prof M.Gokilavani
 
Propositional logic
Propositional logicPropositional logic
Propositional logicRushdi Shams
 
State space search
State space searchState space search
State space searchchauhankapil
 
Problem reduction AND OR GRAPH & AO* algorithm.ppt
Problem reduction AND OR GRAPH & AO* algorithm.pptProblem reduction AND OR GRAPH & AO* algorithm.ppt
Problem reduction AND OR GRAPH & AO* algorithm.pptarunsingh660
 
I.INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III...
I.INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III...I.INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III...
I.INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III...vikas dhakane
 
AI_Session 10 Local search in continious space.pptx
AI_Session 10 Local search in continious space.pptxAI_Session 10 Local search in continious space.pptx
AI_Session 10 Local search in continious space.pptxAsst.prof M.Gokilavani
 
Forms of learning in ai
Forms of learning in aiForms of learning in ai
Forms of learning in aiRobert Antony
 
Hill Climbing Algorithm in Artificial Intelligence
Hill Climbing Algorithm in Artificial IntelligenceHill Climbing Algorithm in Artificial Intelligence
Hill Climbing Algorithm in Artificial IntelligenceBharat Bhushan
 
AI 7 | Constraint Satisfaction Problem
AI 7 | Constraint Satisfaction ProblemAI 7 | Constraint Satisfaction Problem
AI 7 | Constraint Satisfaction ProblemMohammad Imam Hossain
 
Knowledge representation in AI
Knowledge representation in AIKnowledge representation in AI
Knowledge representation in AIVishal Singh
 
Solving problems by searching
Solving problems by searchingSolving problems by searching
Solving problems by searchingLuigi Ceccaroni
 
Problem solving in Artificial Intelligence.pptx
Problem solving in Artificial Intelligence.pptxProblem solving in Artificial Intelligence.pptx
Problem solving in Artificial Intelligence.pptxkitsenthilkumarcse
 
Simulated Annealing
Simulated AnnealingSimulated Annealing
Simulated AnnealingJoy Dutta
 

Was ist angesagt? (20)

Artificial Intelligence -- Search Algorithms
Artificial Intelligence-- Search Algorithms Artificial Intelligence-- Search Algorithms
Artificial Intelligence -- Search Algorithms
 
Control Strategies in AI
Control Strategies in AIControl Strategies in AI
Control Strategies in AI
 
Problem solving agents
Problem solving agentsProblem solving agents
Problem solving agents
 
Hill climbing
Hill climbingHill climbing
Hill climbing
 
Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}Heuristic Search Techniques {Artificial Intelligence}
Heuristic Search Techniques {Artificial Intelligence}
 
AI_Session 7 Greedy Best first search algorithm.pptx
AI_Session 7 Greedy Best first search algorithm.pptxAI_Session 7 Greedy Best first search algorithm.pptx
AI_Session 7 Greedy Best first search algorithm.pptx
 
Problems, Problem spaces and Search
Problems, Problem spaces and SearchProblems, Problem spaces and Search
Problems, Problem spaces and Search
 
search strategies in artificial intelligence
search strategies in artificial intelligencesearch strategies in artificial intelligence
search strategies in artificial intelligence
 
Propositional logic
Propositional logicPropositional logic
Propositional logic
 
State space search
State space searchState space search
State space search
 
Problem reduction AND OR GRAPH & AO* algorithm.ppt
Problem reduction AND OR GRAPH & AO* algorithm.pptProblem reduction AND OR GRAPH & AO* algorithm.ppt
Problem reduction AND OR GRAPH & AO* algorithm.ppt
 
I.INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III...
I.INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III...I.INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III...
I.INFORMED SEARCH IN ARTIFICIAL INTELLIGENCE II. HEURISTIC FUNCTION IN AI III...
 
AI_Session 10 Local search in continious space.pptx
AI_Session 10 Local search in continious space.pptxAI_Session 10 Local search in continious space.pptx
AI_Session 10 Local search in continious space.pptx
 
Forms of learning in ai
Forms of learning in aiForms of learning in ai
Forms of learning in ai
 
Hill Climbing Algorithm in Artificial Intelligence
Hill Climbing Algorithm in Artificial IntelligenceHill Climbing Algorithm in Artificial Intelligence
Hill Climbing Algorithm in Artificial Intelligence
 
AI 7 | Constraint Satisfaction Problem
AI 7 | Constraint Satisfaction ProblemAI 7 | Constraint Satisfaction Problem
AI 7 | Constraint Satisfaction Problem
 
Knowledge representation in AI
Knowledge representation in AIKnowledge representation in AI
Knowledge representation in AI
 
Solving problems by searching
Solving problems by searchingSolving problems by searching
Solving problems by searching
 
Problem solving in Artificial Intelligence.pptx
Problem solving in Artificial Intelligence.pptxProblem solving in Artificial Intelligence.pptx
Problem solving in Artificial Intelligence.pptx
 
Simulated Annealing
Simulated AnnealingSimulated Annealing
Simulated Annealing
 

Ähnlich wie Informed search

AI3391 Session 11 Hill climbing algorithm.pptx
AI3391 Session 11 Hill climbing algorithm.pptxAI3391 Session 11 Hill climbing algorithm.pptx
AI3391 Session 11 Hill climbing algorithm.pptxAsst.prof M.Gokilavani
 
Heuristic search
Heuristic searchHeuristic search
Heuristic searchNivethaS35
 
Informed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxInformed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxKirti Verma
 
AI_Session 9 Hill climbing algorithm.pptx
AI_Session 9 Hill climbing algorithm.pptxAI_Session 9 Hill climbing algorithm.pptx
AI_Session 9 Hill climbing algorithm.pptxAsst.prof M.Gokilavani
 
Heuristic search
Heuristic searchHeuristic search
Heuristic searchNivethaS35
 
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxArtificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxCCBProduction
 
Heuristic or informed search
Heuristic or informed searchHeuristic or informed search
Heuristic or informed searchHamzaJaved64
 
24.09.2021 Reinforcement Learning Algorithms.pptx
24.09.2021 Reinforcement Learning Algorithms.pptx24.09.2021 Reinforcement Learning Algorithms.pptx
24.09.2021 Reinforcement Learning Algorithms.pptxManiMaran230751
 
Hill climbing algorithm in artificial intelligence
Hill climbing algorithm in artificial intelligenceHill climbing algorithm in artificial intelligence
Hill climbing algorithm in artificial intelligencesandeep54552
 
Reinforcement learning, Q-Learning
Reinforcement learning, Q-LearningReinforcement learning, Q-Learning
Reinforcement learning, Q-LearningKuppusamy P
 
Intro to Reinforcement Learning
Intro to Reinforcement LearningIntro to Reinforcement Learning
Intro to Reinforcement LearningUtkarsh Garg
 
hill climbing algorithm.pptx
hill climbing algorithm.pptxhill climbing algorithm.pptx
hill climbing algorithm.pptxMghooolMasier
 
Heuristic Search Techniques Unit -II.ppt
Heuristic Search Techniques Unit -II.pptHeuristic Search Techniques Unit -II.ppt
Heuristic Search Techniques Unit -II.pptkarthikaparthasarath
 
Q-Learning Algorithm: A Concise Introduction [Shakeeb A.]
Q-Learning Algorithm: A Concise Introduction [Shakeeb A.]Q-Learning Algorithm: A Concise Introduction [Shakeeb A.]
Q-Learning Algorithm: A Concise Introduction [Shakeeb A.]Shakeeb Ahmad Mohammad Mukhtar
 

Ähnlich wie Informed search (20)

Hill climbing algorithm
Hill climbing algorithmHill climbing algorithm
Hill climbing algorithm
 
AI3391 Session 11 Hill climbing algorithm.pptx
AI3391 Session 11 Hill climbing algorithm.pptxAI3391 Session 11 Hill climbing algorithm.pptx
AI3391 Session 11 Hill climbing algorithm.pptx
 
Heuristic search
Heuristic searchHeuristic search
Heuristic search
 
Informed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptxInformed Search Techniques new kirti L 8.pptx
Informed Search Techniques new kirti L 8.pptx
 
AI_Session 9 Hill climbing algorithm.pptx
AI_Session 9 Hill climbing algorithm.pptxAI_Session 9 Hill climbing algorithm.pptx
AI_Session 9 Hill climbing algorithm.pptx
 
Heuristic search
Heuristic searchHeuristic search
Heuristic search
 
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptxArtificial Intelligence_Anjali_Kumari_26900122059.pptx
Artificial Intelligence_Anjali_Kumari_26900122059.pptx
 
Lec 6 bsc csit
Lec 6 bsc csitLec 6 bsc csit
Lec 6 bsc csit
 
Heuristic or informed search
Heuristic or informed searchHeuristic or informed search
Heuristic or informed search
 
AI 5 | Local Search
AI 5 | Local SearchAI 5 | Local Search
AI 5 | Local Search
 
24.09.2021 Reinforcement Learning Algorithms.pptx
24.09.2021 Reinforcement Learning Algorithms.pptx24.09.2021 Reinforcement Learning Algorithms.pptx
24.09.2021 Reinforcement Learning Algorithms.pptx
 
Hill climbing algorithm in artificial intelligence
Hill climbing algorithm in artificial intelligenceHill climbing algorithm in artificial intelligence
Hill climbing algorithm in artificial intelligence
 
Reinforcement learning, Q-Learning
Reinforcement learning, Q-LearningReinforcement learning, Q-Learning
Reinforcement learning, Q-Learning
 
Intro to Reinforcement Learning
Intro to Reinforcement LearningIntro to Reinforcement Learning
Intro to Reinforcement Learning
 
Uninformed search
Uninformed searchUninformed search
Uninformed search
 
hill climbing algorithm.pptx
hill climbing algorithm.pptxhill climbing algorithm.pptx
hill climbing algorithm.pptx
 
Heuristic Search Techniques Unit -II.ppt
Heuristic Search Techniques Unit -II.pptHeuristic Search Techniques Unit -II.ppt
Heuristic Search Techniques Unit -II.ppt
 
02LocalSearch.pdf
02LocalSearch.pdf02LocalSearch.pdf
02LocalSearch.pdf
 
AI_ppt.pptx
AI_ppt.pptxAI_ppt.pptx
AI_ppt.pptx
 
Q-Learning Algorithm: A Concise Introduction [Shakeeb A.]
Q-Learning Algorithm: A Concise Introduction [Shakeeb A.]Q-Learning Algorithm: A Concise Introduction [Shakeeb A.]
Q-Learning Algorithm: A Concise Introduction [Shakeeb A.]
 

Mehr von Amit Kumar Rathi

Hybrid Systems using Fuzzy, NN and GA (Soft Computing)
Hybrid Systems using Fuzzy, NN and GA (Soft Computing)Hybrid Systems using Fuzzy, NN and GA (Soft Computing)
Hybrid Systems using Fuzzy, NN and GA (Soft Computing)Amit Kumar Rathi
 
Fundamentals of Genetic Algorithms (Soft Computing)
Fundamentals of Genetic Algorithms (Soft Computing)Fundamentals of Genetic Algorithms (Soft Computing)
Fundamentals of Genetic Algorithms (Soft Computing)Amit Kumar Rathi
 
Fuzzy Systems by using fuzzy set (Soft Computing)
Fuzzy Systems by using fuzzy set (Soft Computing)Fuzzy Systems by using fuzzy set (Soft Computing)
Fuzzy Systems by using fuzzy set (Soft Computing)Amit Kumar Rathi
 
Fuzzy Set Theory and Classical Set Theory (Soft Computing)
Fuzzy Set Theory and Classical Set Theory (Soft Computing)Fuzzy Set Theory and Classical Set Theory (Soft Computing)
Fuzzy Set Theory and Classical Set Theory (Soft Computing)Amit Kumar Rathi
 
Associative Memory using NN (Soft Computing)
Associative Memory using NN (Soft Computing)Associative Memory using NN (Soft Computing)
Associative Memory using NN (Soft Computing)Amit Kumar Rathi
 
Back Propagation Network (Soft Computing)
Back Propagation Network (Soft Computing)Back Propagation Network (Soft Computing)
Back Propagation Network (Soft Computing)Amit Kumar Rathi
 
Fundamentals of Neural Network (Soft Computing)
Fundamentals of Neural Network (Soft Computing)Fundamentals of Neural Network (Soft Computing)
Fundamentals of Neural Network (Soft Computing)Amit Kumar Rathi
 
Introduction to Soft Computing (intro to the building blocks of SC)
Introduction to Soft Computing (intro to the building blocks of SC)Introduction to Soft Computing (intro to the building blocks of SC)
Introduction to Soft Computing (intro to the building blocks of SC)Amit Kumar Rathi
 
Sccd and topological sorting
Sccd and topological sortingSccd and topological sorting
Sccd and topological sortingAmit Kumar Rathi
 
Recurrence and master theorem
Recurrence and master theoremRecurrence and master theorem
Recurrence and master theoremAmit Kumar Rathi
 

Mehr von Amit Kumar Rathi (20)

Hybrid Systems using Fuzzy, NN and GA (Soft Computing)
Hybrid Systems using Fuzzy, NN and GA (Soft Computing)Hybrid Systems using Fuzzy, NN and GA (Soft Computing)
Hybrid Systems using Fuzzy, NN and GA (Soft Computing)
 
Fundamentals of Genetic Algorithms (Soft Computing)
Fundamentals of Genetic Algorithms (Soft Computing)Fundamentals of Genetic Algorithms (Soft Computing)
Fundamentals of Genetic Algorithms (Soft Computing)
 
Fuzzy Systems by using fuzzy set (Soft Computing)
Fuzzy Systems by using fuzzy set (Soft Computing)Fuzzy Systems by using fuzzy set (Soft Computing)
Fuzzy Systems by using fuzzy set (Soft Computing)
 
Fuzzy Set Theory and Classical Set Theory (Soft Computing)
Fuzzy Set Theory and Classical Set Theory (Soft Computing)Fuzzy Set Theory and Classical Set Theory (Soft Computing)
Fuzzy Set Theory and Classical Set Theory (Soft Computing)
 
Associative Memory using NN (Soft Computing)
Associative Memory using NN (Soft Computing)Associative Memory using NN (Soft Computing)
Associative Memory using NN (Soft Computing)
 
Back Propagation Network (Soft Computing)
Back Propagation Network (Soft Computing)Back Propagation Network (Soft Computing)
Back Propagation Network (Soft Computing)
 
Fundamentals of Neural Network (Soft Computing)
Fundamentals of Neural Network (Soft Computing)Fundamentals of Neural Network (Soft Computing)
Fundamentals of Neural Network (Soft Computing)
 
Introduction to Soft Computing (intro to the building blocks of SC)
Introduction to Soft Computing (intro to the building blocks of SC)Introduction to Soft Computing (intro to the building blocks of SC)
Introduction to Soft Computing (intro to the building blocks of SC)
 
Topological sorting
Topological sortingTopological sorting
Topological sorting
 
String matching, naive,
String matching, naive,String matching, naive,
String matching, naive,
 
Shortest path algorithms
Shortest path algorithmsShortest path algorithms
Shortest path algorithms
 
Sccd and topological sorting
Sccd and topological sortingSccd and topological sorting
Sccd and topological sorting
 
Red black trees
Red black treesRed black trees
Red black trees
 
Recurrence and master theorem
Recurrence and master theoremRecurrence and master theorem
Recurrence and master theorem
 
Rabin karp string matcher
Rabin karp string matcherRabin karp string matcher
Rabin karp string matcher
 
Minimum spanning tree
Minimum spanning treeMinimum spanning tree
Minimum spanning tree
 
Merge sort analysis
Merge sort analysisMerge sort analysis
Merge sort analysis
 
Loop invarient
Loop invarientLoop invarient
Loop invarient
 
Linear sort
Linear sortLinear sort
Linear sort
 
Heap and heapsort
Heap and heapsortHeap and heapsort
Heap and heapsort
 

Kürzlich hochgeladen

welding defects observed during the welding
welding defects observed during the weldingwelding defects observed during the welding
welding defects observed during the weldingMuhammadUzairLiaqat
 
Industrial Safety Unit-I SAFETY TERMINOLOGIES
Industrial Safety Unit-I SAFETY TERMINOLOGIESIndustrial Safety Unit-I SAFETY TERMINOLOGIES
Industrial Safety Unit-I SAFETY TERMINOLOGIESNarmatha D
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfAsst.prof M.Gokilavani
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catcherssdickerson1
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)Dr SOUNDIRARAJ N
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
Energy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxEnergy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxsiddharthjain2303
 
Input Output Management in Operating System
Input Output Management in Operating SystemInput Output Management in Operating System
Input Output Management in Operating SystemRashmi Bhat
 
Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...121011101441
 
Vishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsVishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsSachinPawar510423
 
NO1 Certified Black Magic Specialist Expert Amil baba in Uae Dubai Abu Dhabi ...
NO1 Certified Black Magic Specialist Expert Amil baba in Uae Dubai Abu Dhabi ...NO1 Certified Black Magic Specialist Expert Amil baba in Uae Dubai Abu Dhabi ...
NO1 Certified Black Magic Specialist Expert Amil baba in Uae Dubai Abu Dhabi ...Amil Baba Dawood bangali
 
Solving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptSolving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptJasonTagapanGulla
 
Mine Environment II Lab_MI10448MI__________.pptx
Mine Environment II Lab_MI10448MI__________.pptxMine Environment II Lab_MI10448MI__________.pptx
Mine Environment II Lab_MI10448MI__________.pptxRomil Mishra
 
home automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadhome automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadaditya806802
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptSAURABHKUMAR892774
 
Indian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptIndian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptMadan Karki
 
Main Memory Management in Operating System
Main Memory Management in Operating SystemMain Memory Management in Operating System
Main Memory Management in Operating SystemRashmi Bhat
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 

Kürzlich hochgeladen (20)

welding defects observed during the welding
welding defects observed during the weldingwelding defects observed during the welding
welding defects observed during the welding
 
Industrial Safety Unit-I SAFETY TERMINOLOGIES
Industrial Safety Unit-I SAFETY TERMINOLOGIESIndustrial Safety Unit-I SAFETY TERMINOLOGIES
Industrial Safety Unit-I SAFETY TERMINOLOGIES
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdfCCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
 
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor CatchersTechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
TechTAC® CFD Report Summary: A Comparison of Two Types of Tubing Anchor Catchers
 
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
UNIT III ANALOG ELECTRONICS (BASIC ELECTRONICS)
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Energy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptxEnergy Awareness training ppt for manufacturing process.pptx
Energy Awareness training ppt for manufacturing process.pptx
 
Input Output Management in Operating System
Input Output Management in Operating SystemInput Output Management in Operating System
Input Output Management in Operating System
 
Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...Instrumentation, measurement and control of bio process parameters ( Temperat...
Instrumentation, measurement and control of bio process parameters ( Temperat...
 
Vishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documentsVishratwadi & Ghorpadi Bridge Tender documents
Vishratwadi & Ghorpadi Bridge Tender documents
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
NO1 Certified Black Magic Specialist Expert Amil baba in Uae Dubai Abu Dhabi ...
NO1 Certified Black Magic Specialist Expert Amil baba in Uae Dubai Abu Dhabi ...NO1 Certified Black Magic Specialist Expert Amil baba in Uae Dubai Abu Dhabi ...
NO1 Certified Black Magic Specialist Expert Amil baba in Uae Dubai Abu Dhabi ...
 
Solving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.pptSolving The Right Triangles PowerPoint 2.ppt
Solving The Right Triangles PowerPoint 2.ppt
 
Mine Environment II Lab_MI10448MI__________.pptx
Mine Environment II Lab_MI10448MI__________.pptxMine Environment II Lab_MI10448MI__________.pptx
Mine Environment II Lab_MI10448MI__________.pptx
 
home automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasadhome automation using Arduino by Aditya Prasad
home automation using Arduino by Aditya Prasad
 
Arduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.pptArduino_CSE ece ppt for working and principal of arduino.ppt
Arduino_CSE ece ppt for working and principal of arduino.ppt
 
Indian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.pptIndian Dairy Industry Present Status and.ppt
Indian Dairy Industry Present Status and.ppt
 
Main Memory Management in Operating System
Main Memory Management in Operating SystemMain Memory Management in Operating System
Main Memory Management in Operating System
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 

Informed search

  • 1. Informed Search Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 2. INFORMED SEARCH • All the previous searches have been blind searches .They make no use of any knowledge of the problem • When more information than the initial state , the operator , and the goal test is available, the size of the search space can usually be constrained. • HEURISTIC INFORMATION: Information about the problem (the nature of the states, the cost of transforming from one state to another , the promise of taking a certain path, and the characteristics of the goals).can sometimes be used to help guide the search more efficiently. Information in form of heuristic evaluation function=f(n,g),a function of the nodes n, and/or the goals g. • They help to reduce the number of alternatives from an exponential number to a polynomial number and , thereby , obtain a solution in a tolerable amount of time. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 3. Heuristics • A heuristic is a rule of thumb for deciding which choice might be best • There is no general theory for finding heuristics, because every problem is different • Choice of heuristics depends on knowledge of the problem space • An informed guess of the next step to be taken in solving a problem would prune the search space • A heuristic may find a sub-optimal solution or fail to find a solution since it uses limited information • In search algorithms, heuristic refers to a function that provides an estimate of solution cost Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 4. The notion of Heuristics • Heuristics use domain specific knowledge to estimate the quality or potential of partial solutions. • Example: • Manhattan distance heuristic for 8 puzzle. • Minimum Spanning Tree heuristic for TSP. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 5. The 8-puzzle • Heuristic Fn-1: Misplaced Tiles Heuristic: h(n) is the number of tiles out of place. • The first picture shows the current state n, and the second picture the goal state. • h(n) = 5 because the tiles 2, 8, 1, 6 and 7 are out of place. • Heuristic Fn-2: Manhattan Distance Heuristic: another heuristic for the 8-puzzle is the Manhattan distance heuristic. It sums the distances by which the tiles are out of place, where the distance of a tile is the sum of the differences in its x-position and y-position. • For the above example, using the Manhattan distance heuristic, h(n) = 1 + 1 + 0 + 0 + 0 + 1 + 1 + 2 = 6. • Each tile will have to be moved at least that many times to get it to where it belongs. • Suppose, from a given position, we try every possible single move (there can be up to four of them), and pick the move with the smallest sum. • This is a reasonable heuristic for solving the 8-puzzle. Dr. Amit Kumar, Dept of CSE, JUET, Guna
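To make the two 8-puzzle heuristics above concrete, here is a minimal illustrative sketch in Python (not part of the original slides); the example state, the goal layout and the function names are assumptions chosen only for demonstration, with 0 standing for the blank square.

# Illustrative sketch of the two 8-puzzle heuristics described on this slide.
# A state is a tuple of 9 integers read row by row; 0 is the blank square.

def misplaced_tiles(state, goal):
    """h1: number of non-blank tiles that are not on their goal square."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan_distance(state, goal):
    """h2: sum over tiles of |dx| + |dy| between current and goal squares."""
    total = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = goal.index(tile)
        total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

if __name__ == "__main__":
    # Hypothetical example, for illustration only.
    goal = (1, 2, 3, 4, 5, 6, 7, 8, 0)
    state = (1, 2, 3, 4, 0, 6, 7, 5, 8)
    print(misplaced_tiles(state, goal))     # 2 (tiles 5 and 8 are off their squares)
    print(manhattan_distance(state, goal))  # 2 (each misplaced tile is one move away)

Either value is a lower bound on the number of moves still needed, which is what makes both functions usable as heuristics.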
  • 6. The Informed Search Problem • Given [S, s, O, G, h] where – S is the (implicitly specified) set of states. – s is the start state. – O is the set of state-transition operators, each having some cost. – G is the set of goal states. – h() is a heuristic function estimating the distance to a goal. • To find: – A minimum-cost sequence of transitions to a goal state. Dr. Amit Kumar, Dept of CSE, JUET, Guna
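As a rough illustration of the tuple [S, s, O, G, h] in code (an assumption of this write-up, not notation from the slides, with S left implicit in the successor function):

# Minimal sketch of the informed-search problem tuple [S, s, O, G, h].
# The names below are illustrative; S is implicit in `successors`.
from dataclasses import dataclass
from typing import Any, Callable, Iterable, Tuple

@dataclass
class InformedSearchProblem:
    start: Any                                                 # s: the start state
    successors: Callable[[Any], Iterable[Tuple[Any, float]]]   # O: yields (next_state, cost) pairs
    is_goal: Callable[[Any], bool]                             # membership test for G
    h: Callable[[Any], float]                                  # heuristic estimate of distance to a goal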
  • 7. Hill Climbing • Hill climbing is a variant of generate-and-test in which feedback from the test procedure is used to help the generator decide which direction to move in the search space. • The feedback is provided in terms of a heuristic function. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 8. HILL CLIMBING SEARCH 1. Evaluate the initial state. If it is a goal state then return it and quit. Otherwise, continue with the initial state as the current state. 2. Loop until a solution is found or there are no new operators left to be applied in the current state: - Select an operator that has not yet been applied to the current state and apply it to produce a new state. - Evaluate the new state: * if it is a goal state, then return it and quit; * if it is not a goal state but is better than the current state, then make the new state the current state; * if it is not better than the current state, then continue in the loop. Dr. Amit Kumar, Dept of CSE, JUET, Guna
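The loop above can be written down directly; the following is a minimal Python sketch (not the slides' code), where successors, value and is_goal are caller-supplied functions and the names are illustrative assumptions.

# Simple hill climbing: move to the first successor that improves on the current state.
def hill_climbing(initial, successors, value, is_goal):
    current = initial
    if is_goal(current):
        return current
    while True:
        improved = False
        for candidate in successors(current):      # operators not yet applied to current
            if is_goal(candidate):
                return candidate
            if value(candidate) > value(current):  # better than the current state: move there
                current = candidate
                improved = True
                break
        if not improved:                           # no operator improves things: stop
            return current                         # possibly only a local maximum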
  • 9. • It is simply a loop that continually moves in the direction of increasing value. • The algorithm does not maintain a search tree, so the node structure need only record the state and its elevation, which we denote by VALUE. • It terminates when it reaches a “peak” where no neighbor has a higher value. • When there is more than one best successor to choose from, the algorithm can select among them at random. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 10. Hill Climbing Example • Goal states: H and K • Local minimum: A → F → G • Solutions: – A, J, K – A, F, E, I, K (the example refers to a search graph shown on the slide) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 11. Steepest-Ascent Hill Climbing (Gradient Search) • Considers all the moves from the current state. • Selects the best one as the next state. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 12. Steepest-Ascent Hill Climbing (Gradient Search) 1. Evaluate the initial state. If it is a goal state then return it and quit. Otherwise, continue with the initial state as the current state. 2. Loop until a solution is found or a complete iteration produces no change to the current state: - Let SUCC be a state such that any possible successor of the current state will be better than SUCC (the worst state). - For each operator that applies to the current state do: * apply the operator and generate a new state; * evaluate the new state: if it is a goal state, then return it and quit; if it is not a goal state, compare it to SUCC: if it is better, set SUCC to this state; if it is not better, leave SUCC alone. - If SUCC is better than the current state, then set the current state to SUCC. Dr. Amit Kumar, Dept of CSE, JUET, Guna
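For contrast with the simple variant, a steepest-ascent sketch scores every successor and moves only if the best of them improves on the current state (again an illustration with assumed function names, not the slides' code):

# Steepest-ascent hill climbing: examine all moves, keep only the best one.
def steepest_ascent(initial, successors, value, is_goal):
    current = initial
    if is_goal(current):
        return current
    while True:
        best, best_val = None, float("-inf")        # SUCC, initialised worse than anything
        for candidate in successors(current):
            if is_goal(candidate):
                return candidate
            v = value(candidate)
            if v > best_val:                        # better than SUCC so far
                best, best_val = candidate, v
        if best is None or best_val <= value(current):
            return current                          # a complete iteration produced no change
        current = best                              # move to the best successor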
  • 13. DRAWBACKS – Local maxima: A local maximum is a peak that is higher than each of its neighboring states, but lower than the global maximum. Once the algorithm reaches a local maximum peak it halts, even though the solution may be far from satisfactory. – Plateau: A plateau is an area of the state-space landscape where the evaluation function is flat. It can be a flat local maximum, from which no uphill exit exists. The search will conduct a random walk. – Ridges: A ridge is a special kind of local maximum. It is an area of the search space that is higher than surrounding areas and that itself has a slope (which one would like to climb). But the orientation of the high region, compared to the set of available moves and the directions in which they move, makes it impossible to traverse a ridge by single moves. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 14. Hill Climbing: Disadvantages, Ways Out • Local maximum: backtrack to some earlier node and try going in a different direction. • Plateau: make a big jump in some direction and try to get to a new section of the search space. • Ridge: apply two or more rules before doing the test, i.e., move in several directions at once. • Hill climbing is a local method: it decides what to do next by looking only at the “immediate” consequences of its choices. • Global information might be encoded in heuristic functions. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 15. Hill Climbing: Disadvantages • Blocks World example: a start configuration and a goal configuration of the four blocks A, B, C and D (diagram). Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 16. Hill Climbing: Disadvantages • Local heuristic: +1 for each block that is resting on the thing it is supposed to be resting on; -1 for each block that is resting on a wrong thing. • (Diagram: under this heuristic the start state scores 0 and the goal state scores 4.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 17. Hill Climbing: Disadvantages • (Diagram: two intermediate states, scoring 0 and 2 under the local heuristic.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 18. Hill Climbing: Disadvantages • (Diagram: the successors of the state scoring 2 all score 0 under the local heuristic, so hill climbing gets stuck.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 19. Hill Climbing: Disadvantages • Global heuristic: for each block that has the correct support structure, +1 to every block in the support structure; for each block that has a wrong support structure, -1 to every block in the support structure. • (Diagram: under this heuristic the start state scores -6 and the goal state scores 6.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 20. Hill Climbing: Disadvantages • (Diagram: the same states evaluated with the global heuristic score -6, -2, -1 and -3, so the search can distinguish between them.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 21. One solution with Global heuristic Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 22. Hill Climbing: Conclusion • Can be very inefficient in a large, rough problem space. • A global heuristic may have to pay for its extra guidance with computational complexity. • Often useful when combined with other methods that get it started in the right general neighbourhood. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 23. Simulated annealing search • A hill-climbing algorithm that never makes “downhill” moves towards states with lower value (or higher cost) is guaranteed to be incomplete, because it can get stuck on a local maximum. • Instead of starting again randomly when stuck on a local maximum, we could allow the search to take some downhill steps to escape the local maximum. This is the idea of simulated annealing. • The innermost loop of simulated annealing is quite similar to hill climbing. Instead of picking the best move, however, it picks a random move. • This lowers the chances of getting caught at a local maximum, a plateau, or a ridge. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 24. Simulated Annealing • A variation of hill climbing in which, at the beginning of the process, some downhill moves may be made. • To do enough exploration of the whole space early on, so that the final solution is relatively insensitive to the starting state. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 25. Simulated Annealing Physical Annealing • Physical substances are melted and then gradually cooled until some solid state is reached. • The goal is to produce a minimal-energy state. • Annealing schedule: if the temperature is lowered sufficiently slowly, then the goal will be attained. • Nevertheless, there is some probability for a transition to a higher energy state: e^(-ΔE/T). Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 26. Simulated Annealing 1. Evaluate the initial state. If it is a goal state then return it and quit. Otherwise, continue with the initial state as the current state. 2. Initialize best-so-far to the current state. 3. Initialize T according to the annealing schedule. 4. Loop until a solution is found or there are no new operators left to be applied in the current state: 1. Select an operator that has not yet been applied to the current state and apply it to produce a new state. 2. Evaluate the new state and compute ΔE = Val(current state) - Val(new state): * if the new state is a goal state, then return it and quit; * if it is not a goal state but is better than the current state, then make the new state the current state; * if it is not better than the current state, then make the new state the current state with probability p' = e^(-ΔE/T). This step is usually implemented by invoking a random number generator to produce a number in the range [0,1]; if the number is less than p', the move is accepted, otherwise do nothing. 3. Revise T as necessary according to the annealing schedule. Dr. Amit Kumar, Dept of CSE, JUET, Guna
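A compact Python sketch of the procedure above, maximising the evaluation function; the cooling schedule, the step limit and the function names are assumptions made for illustration and are not prescribed by the slides.

# Simulated annealing: accept worse moves with probability e^(-ΔE/T).
import math
import random

def simulated_annealing(initial, random_successor, value, is_goal,
                        t_start=40.0, t_min=1.0, cooling=0.99, max_steps=10_000):
    current, best, T = initial, initial, t_start
    for _ in range(max_steps):
        if is_goal(current):
            return current
        candidate = random_successor(current)            # pick a random move
        if candidate is None:
            break                                        # no operators left to apply
        delta_e = value(current) - value(candidate)      # ΔE = Val(current) - Val(new)
        if delta_e <= 0 or random.random() < math.exp(-delta_e / T):
            current = candidate                          # downhill moves pass with prob e^(-ΔE/T)
        if value(current) > value(best):
            best = current                               # remember the best state so far
        T = max(t_min, T * cooling)                      # revise T per the annealing schedule
    return best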
  • 27. Simulated Annealing Example • Place: where each tile i should go; Place(i) = i. • Position: where the tile is at any moment. • Energy: sum of distance(i, position(i)) for i = 1..8. • Initial state: Energy(solution) = 0. • Random neighbor: from each state there are at most 4 possible moves; choose one randomly. • T = temperature. For example, we start with T = 40 and at every iteration we decrease it by 1; if T = 1 then we stop decreasing it. • For the state shown, Energy = (2-1)+(6-2)+(7-3)+(4-1)+(8-5)+(6-4)+(7-3)+(9-8) = 1 + 4 + 4 + 3 + 3 + 2 + 4 + 1 = 22. • The neighbors follow on the slide. Suppose T = 40; then the probability of accepting the first neighbor is e^(-(25-22)/40) = 0.9277 = 92.77%. Dr. Amit Kumar, Dept of CSE, JUET, Guna
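As a quick numerical check of the acceptance probability quoted in the example (the two energies, 22 and 25, and the temperature 40 come from the slide):

import math
print(math.exp(-(25 - 22) / 40))   # 0.9277..., i.e. about 92.77%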
  • 28. Best-First Search • Depth-first search: not all competing branches have to be expanded. • Breadth-first search: does not get trapped on dead-end paths. • Combining the two: follow a single path at a time, but switch paths whenever some competing path looks more promising than the current one. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 29. Best-First Search • (Diagram: successive snapshots of a best-first search tree; at each step the open node with the lowest heuristic value is expanded next.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 30. Best-first vs Hill Climbing • Hill climbing: one move is selected and all the others are rejected and never reconsidered. It stops if there are no successor states with better values than the current state. • Best-first search: one move is selected, but the others are kept around so that they can be revisited later if the selected path becomes less promising. The best available state is selected, even if that state has a value lower than the value of the state that was just explored. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 31. Best-First Search • To get the path information, each node will contain, in addition to a description of the problem state it represents: – an indication of how promising it is, – a parent link that points back to the best node from which it came, – a list of the nodes that were generated from it. • Once the goal is found, the parent link will make it possible to recover the path to the goal. • The list of successors will make it possible, if a better path is found to an already existing node, to propagate the improvement down to its successors. • We will call a graph of this sort an OR graph, since each of its branches represents an alternative problem-solving path. • To implement such a graph-search procedure, we will need two lists of nodes: • OPEN: nodes that have been generated but have not been examined. This is organized as a priority queue in which the elements with the highest priority are those with the most promising value of the heuristic function. • CLOSED: nodes that have already been examined. It is used, whenever a new node is generated, to check whether it has been generated before. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 32. Best-First Search Algorithm 1. Start with OPEN containing just the initial state. 2. Until a goal is found or there are no nodes left on OPEN do: I. Pick the best node on OPEN. II. Generate its successors. III. For each successor do: a. If it has not been generated before, evaluate it, add it to OPEN, and record its parent. b. If it has been generated before, change the parent if this new path is better than the previous one. In that case, update the cost of getting to this node and to any successors that this node may already have. Dr. Amit Kumar, Dept of CSE, JUET, Guna
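The OPEN/CLOSED bookkeeping maps naturally onto a priority queue and a set. The sketch below is a simplified illustration of that idea for greedy best-first search; it omits the re-parenting of step III(b), and the function and variable names are assumptions, not the slides' notation.

# Best-first search sketch: OPEN is a priority queue ordered by the heuristic,
# CLOSED is the set of states already examined.
import heapq
import itertools

def best_first_search(start, successors, h, is_goal):
    tie = itertools.count()                           # tie-breaker so states are never compared
    open_heap = [(h(start), next(tie), start)]        # OPEN, keyed on the heuristic value
    parent = {start: None}                            # parent links, to recover the path
    closed = set()                                    # CLOSED
    while open_heap:
        _, _, state = heapq.heappop(open_heap)        # pick the most promising node on OPEN
        if is_goal(state):
            path = []
            while state is not None:                  # follow parent links back to the start
                path.append(state)
                state = parent[state]
            return path[::-1]
        if state in closed:
            continue
        closed.add(state)
        for succ, _cost in successors(state):
            if succ not in parent:                    # not generated before
                parent[succ] = state
                heapq.heappush(open_heap, (h(succ), next(tie), succ))
    return None                                       # no nodes left on OPEN: failure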
  • 33. Best-First Search • f(n) = g(n) + h(n), where:  f(n) measures the value of the current state (its “goodness”),  h(n) is the estimated cost of the cheapest path from node n to a goal state,  g(n) is the exact cost of the cheapest path from the initial state to node n. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 34. Traditional informed search strategies • Greedy search: f(n) = h(n), i.e. g(n) = 0; f is the estimated cost of the cheapest path from node n to a goal state. Neither optimal nor complete. • Uniform-cost search: f(n) = g(n), i.e. h(n) = 0; f is the cost of the cheapest path found from the initial state to node n. Optimal and complete, but very inefficient. • A* Search: – f(n) = g(n) + h(n); the search uses an “admissible” heuristic, i.e. h(n) never overestimates the cost to reach the goal, while also taking into account the current cost g(n). – Consequently f(n) never over-estimates the true cost of a solution through node n. – Always returns the optimal solution path. Dr. Amit Kumar, Dept of CSE, JUET, Guna
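Seen this way, the three strategies can share one best-first loop (for example, the sketch after slide 32) and differ only in how the evaluation function is built from g and h; the lambda names below are illustrative only.

# Evaluation functions that distinguish the three strategies.
f_greedy       = lambda g, h: h        # greedy best-first: ignore the path cost g
f_uniform_cost = lambda g, h: g        # uniform-cost search: ignore the heuristic h
f_a_star       = lambda g, h: g + h    # A*: exact cost so far plus estimated cost to go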
  • 35. Greedy Best First Search • f(n) = h(n) = straight-line distance heuristic. • Heuristic values: h(A) = 366, h(B) = 374, h(C) = 329, h(D) = 244, h(E) = 253, h(F) = 178, h(G) = 193, h(H) = 98, h(I) = 0. • (Diagram: a road map from Start A to Goal I with edge costs 140, 118, 75, 111, 99, 80, 97, 101 and 211.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 36.-44. Greedy Best First Search • (Slides 36 to 44 repeat the same map and heuristic table, stepping through the greedy best-first expansion one node at a time.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 45.-49. Greedy Search: Tree Search • (Diagrams: the greedy tree search starts at A [366]. A's successors are B [374], C [329] and E [253]; E, with the lowest h, is expanded to F [178], G [193] and A [366]; F is then expanded to I [0], the goal, and E [253].) • The path found is A-E-F-I with dist(A-E-F-I) = 140 + 99 + 211 = 450. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 50. Greedy Best First Search: Optimal? NO • f(n) = h(n) = straight-line distance heuristic. • Greedy search returned the path A-E-F-I (cost 450), but a cheaper path exists: dist(A-E-G-H-I) = 140 + 80 + 97 + 101 = 418. • (Same map and heuristic table as before.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 51. Greedy Best First Search: Optimal? • The same map and straight-line-distance heuristic, except that the heuristic value of C is changed to h(C) = 250 (marked **). Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 52.-56. Greedy Search: Tree Search • (Diagrams: with h(C) = 250, the tree search generates A's successors B [374], C [250] and E [253], then expands C to D [244], then D back to C [250], then C to D again, and so on: an infinite branch.) • Greedy tree search is therefore Not Complete. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 57. Greedy Search: Time and Space Complexity? • Greedy search is not optimal. • Greedy search is incomplete without systematic checking of repeated states. • In the worst case, the time and space complexity of greedy search are both O(b^m), where b is the branching factor and m the maximum path length. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 58. Informed Search Strategies • A* Search, eval-fn: f(n) = g(n) + h(n). Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 59. A* (A Star) • Greedy Search minimizes a heuristic h(n) which is an estimated cost from a node n to the goal state. However, although greedy search can considerably cut the search time (efficient), it is neither optimal nor complete. • Uniform Cost Search minimizes the cost g(n) from the initial state to n. UCS is optimal and complete but not efficient. • New Strategy: Combine Greedy Search and UCS to get an efficient algorithm which is complete and optimal. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 60. A* (A Star) • A* uses a heuristic function which combines g(n) and h(n) f(n) = g(n) + h(n) • g(n) is the exact cost to reach node n from the initial state. Cost so far up to node n. • h(n) is an estimation of the remaining cost to reach the goal. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 61. Algorithm 1. Start with OPEN containing only the initial node. Set that node's g value to 0, its h' value to whatever it is, and its f' value to h' + 0, or h'. Set CLOSED to the empty list. 2. Until a goal node is found, repeat the following procedure: – If there are no nodes on OPEN, report failure. – Else, pick the node on OPEN with the lowest f' value. Call it BESTNODE. Remove it from OPEN. Place it on CLOSED. • See if BESTNODE is a goal node. If so, exit and report a solution (either BESTNODE itself, if all we want is the node, or the path that has been created between the initial state and BESTNODE, if we are interested in the path). • Else, generate the successors of BESTNODE, but do not set BESTNODE to point to them yet. (First we need to see if any of them have already been generated.) For each such SUCCESSOR, do the following: Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 62. a) Set SUCCESSOR to point back to BESTNODE. These backwards links will make it possible to recover the path once a solution is found. b) Compute g(SUCCESSOR) = g(BESTNODE) + the cost of getting from BESTNODE to SUCCESSOR. c) See if SUCCESSOR is the same as any node on OPEN (i.e., it has already been generated but not processed). – If so, call that node OLD. Since this node already exists in the graph, we can throw SUCCESSOR away and add OLD to the list of BESTNODE's successors. Now we must decide whether OLD's parent link should be reset to point to BESTNODE. It should be if the path we have just found to SUCCESSOR is cheaper than the current best path to OLD (since SUCCESSOR and OLD are really the same node). So see whether it is cheaper to get to OLD via its current parent or to SUCCESSOR via BESTNODE by comparing their g values. If OLD is cheaper (or just as cheap), then we need do nothing. If SUCCESSOR is cheaper, then reset OLD's parent link to point to BESTNODE, record the new cheaper path in g(OLD), and update f'(OLD). Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 63. d) If SUCCESSOR was not on OPEN, see if it is on CLOSED. If so, call the node on CLOSED OLD and add OLD to the list of BESTNODE's successors. – Check to see whether the new path or the old path is better, just as in step 2(c), and set the parent link and the g and f' values appropriately. If we have just found a better path to OLD, we must propagate the improvement to OLD's successors. This is a bit tricky. OLD points to its successors. Each successor in turn points to its successors, and so forth, until each branch terminates with a node that either is still on OPEN or has no successors. So to propagate the new cost downward, do a depth-first traversal of the tree starting at OLD, changing each node's g value (and thus also its f' value), terminating each branch when you reach either a node with no successors or a node to which an equivalent or better path has already been found. e) If SUCCESSOR was not already on either OPEN or CLOSED, then put it on OPEN, and add it to the list of BESTNODE's successors. Compute f'(SUCCESSOR) = g(SUCCESSOR) + h'(SUCCESSOR). Dr. Amit Kumar, Dept of CSE, JUET, Guna
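A condensed Python sketch of the OPEN/CLOSED procedure of slides 61 to 63; it keeps the g-value bookkeeping and the parent links, but instead of propagating improved costs downwards (step d) it simply re-inserts improved nodes into OPEN, which is a common simplification rather than the slides' exact procedure, and the names are illustrative.

# A* sketch: f'(n) = g(n) + h'(n); OPEN is a priority queue, CLOSED a set.
import heapq
import itertools

def a_star(start, successors, h, is_goal):
    tie = itertools.count()
    g = {start: 0.0}                                   # best known cost from the start
    parent = {start: None}                             # back links for path recovery
    open_heap = [(h(start), next(tie), start)]         # f'(start) = h'(start) + 0
    closed = set()
    while open_heap:
        _, _, best = heapq.heappop(open_heap)          # BESTNODE: lowest f' on OPEN
        if best in closed:
            continue
        if is_goal(best):
            cost, path, node = g[best], [], best
            while node is not None:                    # follow parent links back
                path.append(node)
                node = parent[node]
            return path[::-1], cost
        closed.add(best)
        for succ, step_cost in successors(best):
            g_new = g[best] + step_cost                # g(SUCCESSOR) = g(BESTNODE) + step cost
            if succ not in g or g_new < g[succ]:       # new node, or a cheaper path was found
                g[succ] = g_new
                parent[succ] = best                    # reset the parent link
                heapq.heappush(open_heap, (g_new + h(succ), next(tie), succ))
    return None, float("inf")                          # OPEN exhausted: report failure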
  • 64. A* (A Star) • (Diagram: a path from the start state to a node n has exact cost g(n); the estimated cost from n on to the goal is h(n); the evaluation is f(n) = g(n) + h(n).) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 65. A* Search • f(n) = g(n) + h(n), where g(n) is the exact cost to reach node n from the initial state. • (Same road map from Start A to Goal I and the same straight-line-distance heuristic table: h(A) = 366, h(B) = 374, h(C) = 329, h(D) = 244, h(E) = 253, h(F) = 178, h(G) = 193, h(H) = 98, h(I) = 0.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 66.-73. A* Search: Tree Search • (Diagrams: A* starts at A and generates C [447] = 118 + 329, E [393] = 140 + 253 and B [449] = 75 + 374. E, with the lowest f, is expanded to F [417] and G [413]; G is expanded to H [415]; H is expanded to the goal I [418]. Since F [417] is still cheaper than 418, F is expanded too, reaching I along a second path with f = [450]; the goal node with f = 418 is then selected, so A* returns the optimal path A-E-G-H-I with cost 418.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 74. How good is A*? • Memory usage depends on the heuristic function. – If h(n) = 0 and every step has the same cost (so g(n) just counts depth), then A* behaves like breadth-first search. – If h(n) is a perfect estimator, A* goes straight to the goal. – ...but if h(n) were perfect, why search? • The quality of the solution also depends on h(n). • It can be proved that, if h(n) is optimistic (never overestimates the distance to a goal), then A* will find a best solution (shortest path to a goal). Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 75. Estimation of h • (Diagrams: two cases, one in which h' overestimates h and one in which h' underestimates h.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 76. Conditions for optimality • Admissibility – An admissible heuristic never overestimates the cost to reach the goal. – The straight-line distance hSLD is obviously an admissible heuristic. • Admissibility / Monotonicity – The admissible heuristic h is consistent (or satisfies the monotone restriction) if for every node N and every successor N' of N: h(N) ≤ c(N,N') + h(N') (triangular inequality). – A consistent heuristic is admissible. • A* is optimal if h is admissible or consistent. • (Diagram: nodes N and N' with edge cost c(N,N') and heuristic values h(N) and h(N').) Dr. Amit Kumar, Dept of CSE, JUET, Guna
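The triangular inequality can be spot-checked mechanically; the snippet below does so for a few values taken from the road-map example used earlier in these slides (the function name and the edge-list layout are assumptions of this note).

# Spot-check consistency: h(N) <= c(N, N') + h(N') for every listed edge.
def is_consistent(h, edges):
    """edges: iterable of (n, n_prime, cost) triples."""
    return all(h[n] <= cost + h[n_prime] for n, n_prime, cost in edges)

if __name__ == "__main__":
    h = {"A": 366, "E": 253, "G": 193, "H": 98, "I": 0}            # from the slides' table
    edges = [("A", "E", 140), ("E", "G", 80), ("G", "H", 97), ("H", "I", 101)]
    edges += [(b, a, c) for a, b, c in edges]                      # check both directions
    print(is_consistent(h, edges))                                 # True for these values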
  • 77. The Effect of Heuristic Accuracy on Performance An Example: 8-puzzle Heuristic Evaluation Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 78. Admissible heuristics E.g., for the 8-puzzle: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile) • h1(S) = ? • h2(S) = ? Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 79. Admissible heuristics E.g., for the 8-puzzle: • h1(n) = number of misplaced tiles • h2(n) = total Manhattan distance (i.e., no. of squares from desired location of each tile) • h1(S) = ? 8 • h2(S) = ? 3+1+2+2+2+3+3+2 = 18 Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 80. Dominance • If h2(n) ≥ h1(n) for all n (both admissible) • then h2 dominates h1 • h2 is better for search Why? • Typical search costs (average number of nodes expanded): • A*(h1) = 227 nodes A*(h2) = 73 nodes • A*(h1) = 39,135 nodes A*(h2) = 1,641 nodes Self Study Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 81. Problem Reduction • AND-OR Graphs • (Diagram: the goal “Acquire TV set” is reduced either to the single goal “Steal TV set” or, via an AND arc, to the pair of goals “Earn some money” and “Buy TV set”.) • Algorithm AO* (Martelli & Montanari 1973, Nilsson 1980). Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 82. Problem Reduction: AO* • (Diagrams: successive steps of AO* on a small AND-OR graph, revising the estimated costs of nodes as their successors are expanded.) Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 83. When to Use Search Techniques • The search space is small, and – there are no other available techniques, or – it is not worth the effort to develop a more efficient technique. • The search space is large, and – there are no other available techniques, and – there exist “good” heuristics. Dr. Amit Kumar, Dept of CSE, JUET, Guna
  • 84. Conclusions • Frustration with uninformed search led to the idea of using domain-specific knowledge in a search so that one can intelligently explore only the relevant part of the search space that has a good chance of containing the goal state. These techniques are called informed (heuristic) search strategies. • Even though heuristics improve the performance of informed search algorithms, the algorithms are still time-consuming, especially for large problem instances. Dr. Amit Kumar, Dept of CSE, JUET, Guna