Heuristic Search

(Where we try to be smarter in how we choose among alternatives)

Dr. Ali Abdallah
(dr_ali_abdallah@hotmail.com)
Search Algorithm (Modified)

INSERT(initial-node, FRINGE)
Repeat:
  If empty(FRINGE) then return failure
  n ← REMOVE(FRINGE)
  s ← STATE(n)
  If GOAL?(s) then return path or goal state
  For every state s’ in SUCCESSORS(s):
    Create a node n’ as a successor of n
    INSERT(n’, FRINGE)
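A minimal runnable rendering of this loop, as a hedged sketch: the initial state, goal test, and successors function are assumptions supplied by the caller, and the FIFO fringe used here is just one possible discipline (a priority queue ordered by f gives the best-first search introduced next).

  from collections import deque

  def tree_search(initial_state, goal, successors):
      """Generic search; a node is a (state, parent-node) pair."""
      fringe = deque([(initial_state, None)])  # INSERT(initial-node, FRINGE)
      while fringe:                            # empty(FRINGE) => failure below
          node = fringe.popleft()              # n <- REMOVE(FRINGE)
          state = node[0]                      # s <- STATE(n)
          if goal(state):                      # GOAL?(s) => return path
              path = []
              while node:
                  path.append(node[0])
                  node = node[1]
              return list(reversed(path))
          for s2 in successors(state):         # expand s
              fringe.append((s2, node))        # INSERT(n', FRINGE)
      return None                              # failure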
Best-First Search
• It exploits the state description to estimate how promising each search node is
• An evaluation function f maps each search node N to a positive real number f(N)
• Traditionally, the smaller f(N), the more promising N
• Best-first search sorts the fringe in increasing f [random order is assumed among nodes with equal values of f]
Best-First Search

“Best” only refers to the value of f, not to the quality of the actual path.
Best-first search does not generate optimal paths in general.
How to construct an evaluation function?
• There are no limitations on f. Any function of your choice is acceptable. But will it help the search algorithm?
• The classical approach is to construct f(N) as an estimator of the cost of a solution path through N
Heuristic Function
• The heuristic function h(N) estimates the distance of STATE(N) to a goal state.
  Its value is independent of the current search tree; it depends only on STATE(N) and the goal test.
• Example:

      5 _ 8        1 2 3
      4 2 1        4 5 6
      7 3 6        7 8 _
    STATE(N)     Goal state

  h1(N) = number of misplaced tiles = 6
Other Examples

      5 _ 8        1 2 3
      4 2 1        4 5 6
      7 3 6        7 8 _
    STATE(N)     Goal state

  h1(N) = number of misplaced tiles = 6

  h2(N) = sum of the (Manhattan) distances of every tile to its goal position
        = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13
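Both heuristics are mechanical to compute. A sketch in Python, under the assumption that a state is a 9-tuple in row-major order with 0 standing for the blank (this encoding is not from the slides):

  GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

  def h1(state, goal=GOAL):
      """Number of misplaced tiles (the blank is not counted)."""
      return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

  def h2(state, goal=GOAL):
      """Sum of the Manhattan distances of every tile to its goal square."""
      total = 0
      for i, tile in enumerate(state):
          if tile == 0:
              continue
          j = goal.index(tile)                       # goal square of this tile
          total += abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
      return total

  # The slide's STATE(N): 5 _ 8 / 4 2 1 / 7 3 6
  state = (5, 0, 8, 4, 2, 1, 7, 3, 6)
  assert h1(state) == 6 and h2(state) == 13          # matches the slide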
Other Examples (Robot Navigation)

[Figure: a robot at position (xN, yN) and the goal at (xg, yg) in the plane]

h1(N) = √[(xN − xg)² + (yN − yg)²]   (Euclidean distance)

h2(N) = |xN − xg| + |yN − yg|        (Manhattan distance)
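The same two distances as plain Python functions, a trivial sketch:

  import math

  def h_euclidean(x, y, xg, yg):
      return math.hypot(x - xg, y - yg)    # sqrt((x-xg)^2 + (y-yg)^2)

  def h_manhattan(x, y, xg, yg):
      return abs(x - xg) + abs(y - yg)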
Classical Evaluation Functions
• h(N): heuristic function [independent of the search tree]
• g(N): cost of the best path found so far between the initial node and N [dependent on the search tree]

• f(N) = h(N)  →  greedy best-first search

• f(N) = g(N) + h(N)
8-Puzzle
f(N) = h(N) = number of misplaced tiles

[Figure: search tree of 8-puzzle boards, each labeled with its h-value, down to a goal board with h = 0]
8-Puzzle
f(N) = g(N) + h(N)
  with h(N) = number of misplaced tiles

[Figure: search tree of 8-puzzle boards labeled with f = g+h: 0+4 at the root; 1+3, 1+5, 1+5; 2+3, 2+3, 2+4; 3+2, 3+3, 3+4, 3+4; 4+1; 5+2; 5+0 at the goal]
8-Puzzle
f(N) = h(N) = Σ distances of tiles to goal

[Figure: search tree of 8-puzzle boards, each labeled with its h-value (sum of Manhattan distances), down to a goal board with h = 0]
Best-First → Efficiency
Local-minimum problem

[Figure: robot navigation example in which greedy best-first search gets trapped in a local minimum of h]

f(N) = h(N) = straight-line distance to the goal
Can we Prove Anything?
• If the state space is finite and we discard nodes that revisit states, the search is complete, but in general is not optimal
• If the state space is finite and we do not discard nodes that revisit states, in general the search is not complete
• If the state space is infinite, in general the search is not complete
Admissible Heuristic
• Let h*(N) be the cost of the optimal path from N to a goal node
• The heuristic function h(N) is admissible if:
      0 ≤ h(N) ≤ h*(N)
• An admissible heuristic function is always optimistic!
• Note: G is a goal node  ⇒  h(G) = 0
8-Puzzle Heuristics

      5 _ 8        1 2 3
      4 2 1        4 5 6
      7 3 6        7 8 _
    STATE(N)     Goal state

  h1(N) = number of misplaced tiles = 6
  is admissible

  h2(N) = sum of the (Manhattan) distances of every tile to its goal position
        = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13
  is admissible
Robot Navigation Heuristics

[Figure: grid world with obstacles, robot, and goal]

Cost of one horizontal/vertical step = 1
Cost of one diagonal step = √2

h1(N) = √[(xN − xg)² + (yN − yg)²] is admissible
h2(N) = |xN − xg| + |yN − yg| is admissible if moving along diagonals is not allowed, and not admissible otherwise
A* Search
(most popular algorithm in AI)
• f(N) = g(N) + h(N), where:
  • g(N) = cost of best path found so far to N
  • h(N) = admissible heuristic function
• for all arcs: 0 < ε ≤ c(N,N’)
• the “modified” search algorithm is used
• Best-first search is then called A* search
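A compact sketch of A* under these definitions, assuming successors(s) yields (s’, step-cost) pairs and h is admissible (both names are assumptions of this sketch, not the slides’ notation). Nodes that revisit states are deliberately not discarded, matching the caveat in Result #1 below:

  import heapq, itertools

  def astar(start, goal, successors, h):
      counter = itertools.count()            # tie-breaker among equal f
      fringe = [(h(start), next(counter), start, 0.0, None)]
      while fringe:
          f, _, state, g, parent = heapq.heappop(fringe)
          if goal(state):                    # reconstruct the path found
              path, node = [], (state, parent)
              while node:
                  path.append(node[0])
                  node = node[1]
              return list(reversed(path)), g
          for s2, cost in successors(state): # f(N') = g(N') + h(N')
              heapq.heappush(fringe, (g + cost + h(s2), next(counter),
                                      s2, g + cost, (state, parent)))
      return None                            # no solution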
Result #1

A* is complete and optimal

[This result holds if nodes revisiting
states are not discarded]
Proof (1/2)
• If a solution exists, A* terminates and returns a solution:
  • For each node N on the fringe, f(N) ≥ d(N)×ε, where d(N) is the depth of N in the tree
  • As long as A* hasn’t terminated, a node K on the fringe lies on a solution path
  • Since each node expansion increases the length of one path, K will eventually be selected for expansion
Proof (2/2)
• Whenever A* chooses to expand a goal node, the path to this node is optimal:
  • C* = h*(initial-node)
  • G’: non-optimal goal node in the fringe
    f(G’) = g(G’) + h(G’) = g(G’) > C*
  • A node K in the fringe lies on an optimal path; since h is admissible:
    f(K) = g(K) + h(K) ≤ C*
  • So, G’ is not selected for expansion
Time Limit Issue
• When a problem has no solution, A* runs for ever if the state space is infinite or states can be revisited an arbitrary number of times (the search tree can grow arbitrarily large). In other cases, it may take a huge amount of time to terminate
• So, in practice, A* must be given a time limit. If it has not found a solution within this limit, it stops. Then there is no way to know if the problem has no solution or A* needed more time to find it
• In the past, when AI systems were “small” and solving a single search problem at a time, this was not too much of a concern. As AI systems become larger, they have to solve a multitude of search problems concurrently. Then, a question arises: What should be the time limit for each of them?
Time Limit Issue

Hence, the usefulness of a simple test, like in the (n²−1)-puzzle, that determines if the goal is reachable.

Unfortunately, such a test rarely exists.
8-Puzzle
f(N) = g(N) + h(N)
  with h(N) = number of misplaced tiles

[Figure: the earlier A* search tree on the 8-puzzle repeated as a recap; boards labeled with f = g+h values 0+4, 1+3, 1+5, 2+3, 2+4, 3+2, 3+3, 3+4, 4+1, 5+2, 5+0]
Robot Navigation

[Figure: grid world with obstacles, robot, and goal]

Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to the goal (not A*)

[Figure: the grid annotated with the Manhattan distance of every free cell to the goal, from 0 at the goal up to 8 at the far corners]
Robot Navigation
f(N) = h(N), with h(N) = Manhattan distance to the goal (not A*)

[Figure: the same annotated grid, now highlighting the cells expanded by greedy best-first search on its way to the goal]
Robot Navigation
f(N) = g(N)+h(N), with h(N) = Manhattan distance to goal (A*)

[Figure: the grid annotated with f = g+h for the expanded cells, e.g. 8+3, 7+4, 6+5, 5+6, 4+7, 3+8, 2+9, 3+10 along the top row and 0+11 at the goal]
How to create an admissible h?
• An admissible heuristic can usually be seen as the cost of an optimal solution to a relaxed problem (one obtained by removing constraints)
• In robot navigation on a grid:
  • The Manhattan distance corresponds to removing the obstacles
  • The Euclidean distance corresponds to removing both the obstacles and the constraint that the robot moves on a grid
• More on this topic later
What to do with revisited states?

[Figure: a small graph with arc costs c = 1, 2, 1, 2, and 100, and heuristic values h = 100, 1, 2, 90, and 0 at the goal; the state with h = 90 can be reached along two paths of different costs]

The heuristic h is clearly admissible.
What to do with revisited states?

[Figure: the same graph during the search; the fringe holds nodes with f = 1+100 and f = 2+1, the state with h = 90 is reached a second time with f = 4+90, and the goal is reached with cost 104]

If we discard this new node, then the search algorithm expands the goal node next and returns a non-optimal solution.
What to do with revisited states?

[Figure: the same graph; keeping both nodes on the revisited state (f = 2+90 and f = 4+90) lets the search reach the goal with the optimal cost 102 instead of 104]

Instead, if we do not discard nodes revisiting states, the search terminates with an optimal solution.
But ...
If we do not discard nodes revisiting states, the size of the search tree can be exponential in the number of visited states.

[Figure: a graph in which each intermediate state is reachable along two arcs of equal cost (1, 1, then 2, 2); the corresponding search tree doubles at every level, ending with eight leaves of cost 4]
• It is not harmful to discard a node revisiting a state if the new path to this state has higher cost than the previous one
• A* remains optimal
• Fortunately, for a large family of admissible heuristics – consistent heuristics – there is a much easier way of dealing with revisited states
Consistent Heuristic
A heuristic h is consistent if
1) for each node N and each child N’ of N:
     h(N) ≤ c(N,N’) + h(N’)
2) for each goal node G:
     h(G) = 0

[Figure: triangle formed by N, N’, and the goal, with sides c(N,N’), h(N’), and h(N): the condition is the triangle inequality]

The heuristic is also said to be monotone.
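The two conditions translate directly into a checker. A hedged helper, assuming the caller can enumerate the arcs (as (N, N’, cost) triples) and the goal nodes of a finite problem:

  def is_consistent(h, arcs, goals):
      """arcs: iterable of (n, n2, cost); goals: iterable of goal nodes."""
      triangle_ok = all(h(n) <= c + h(n2) for n, n2, c in arcs)
      goals_ok = all(h(g) == 0 for g in goals)
      return triangle_ok and goals_ok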
Consistency Violation

If h tells us that N is 100 units from the goal, then moving from N along an arc costing 10 units should not lead to a node N’ that h estimates to be 10 units away from the goal: that would give h(N) = 100 > c(N,N’) + h(N’) = 10 + 10, violating the triangle inequality.

[Figure: the triangle of N, N’, and the goal with c(N,N’) = 10, h(N) = 100, and h(N’) = 10]
Admissibility and Consistency
• A consistent heuristic is also admissible
• An admissible heuristic may not be consistent, but many admissible heuristics are consistent
8-Puzzle

      5 _ 8        1 2 3
      4 2 1        4 5 6
      7 3 6        7 8 _
    STATE(N)        Goal

  h1(N) = number of misplaced tiles
  h2(N) = sum of the (Manhattan) distances of every tile to its goal position
are both consistent.
Robot Navigation

[Figure: grid world with obstacles, robot, and goal]

Cost of one horizontal/vertical step = 1
Cost of one diagonal step = √2

h1(N) = √[(xN − xg)² + (yN − yg)²] is consistent
h2(N) = |xN − xg| + |yN − yg| is consistent if moving along diagonals is not allowed, and not consistent otherwise
Result #2
If h is consistent, then whenever A* expands a node, it has already found an optimal path to this node’s state.

Proof
• Consider a node N and its child N’.
  Since h is consistent: h(N) ≤ c(N,N’) + h(N’)
  f(N) = g(N) + h(N) ≤ g(N) + c(N,N’) + h(N’) = f(N’)
  So, f is non-decreasing along any path.
• If K is selected for expansion, then any other node K’ in the fringe verifies f(K’) ≥ f(K).
  So, if one node K’ lies on another path to the state of K, the cost of this other path is no smaller than the path to K.
Result #2
If h is consistent, then whenever A* expands a node, it has already found an optimal path to this node’s state.

[Figure: two branches of the search tree reach the same state S; the expanded node N holds the optimal path to S, so the node N2 reaching S along the other branch can be discarded]
Revisited States with Consistent Heuristic
• When a node is expanded, store its state into CLOSED
• When a new node N is generated:
  • If STATE(N) is in CLOSED, discard N
  • If there exists a node N’ in the fringe such that STATE(N’) = STATE(N), discard the node – N or N’ – with the largest f
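A sketch of A* with this bookkeeping, again assuming successors(s) yields (s’, step-cost) pairs and h is consistent. Instead of scanning the fringe for duplicates, the copy with the larger f is simply left in the heap and skipped when popped, which has the same effect:

  import heapq, itertools

  def astar_closed(start, goal, successors, h):
      counter = itertools.count()
      fringe = [(h(start), next(counter), 0.0, start)]
      closed = set()                         # states already expanded
      while fringe:
          f, _, g, state = heapq.heappop(fringe)
          if state in closed:
              continue                       # STATE(N) in CLOSED: discard N
          closed.add(state)
          if goal(state):
              return g                       # cost of an optimal path
          for s2, cost in successors(state):
              if s2 not in closed:
                  heapq.heappush(fringe, (g + cost + h(s2), next(counter),
                                          g + cost, s2))
      return None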
Is A* with some consistent heuristic all that we need?

No!
• There are very dumb consistent heuristics

h ≡ 0
• It is consistent (hence, admissible)!
• A* with h ≡ 0 is uniform-cost search
• Breadth-first and uniform-cost are particular cases of A*
Heuristic Accuracy
Let h1 and h2 be two consistent heuristics such that for all nodes N:
      h1(N) ≤ h2(N)
h2 is said to be more accurate (or more informed) than h1.

      5 _ 8        1 2 3
      4 2 1        4 5 6
      7 3 6        7 8 _
    STATE(N)     Goal state

  • h1(N) = number of misplaced tiles
  • h2(N) = sum of distances of every tile to its goal position
  • h2 is more accurate than h1
Result #3
• Let h2 be more accurate than h1
• Let A1* be A* using h1 and A2* be A* using h2
• Whenever a solution exists, all the nodes expanded by A2* are also expanded by A1*
Proof
• C* = h*(initial-node)
• Every node N such that f(N) < C* is eventually expanded
• Every node N such that h(N) < C*−g(N) is eventually expanded
• Since h1(N) ≤ h2(N), every node expanded by A2* is also expanded by A1*
• …except for nodes N such that f(N) = C*
Effective Branching Factor
• It is used as a measure of the effectiveness of a heuristic
• Let n be the total number of nodes expanded by A* for a particular problem and d the depth of the solution
• The effective branching factor b* is defined by n = 1 + b* + (b*)² + ... + (b*)^d
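The defining equation has no closed form for b*, but its right-hand side is increasing in b*, so a numeric solver is enough. A small sketch (not part of the slides), assuming n ≥ d+1 so that a root exists in [1, n]:

  def effective_branching_factor(n, d, tol=1e-6):
      def total(b):                     # 1 + b + b^2 + ... + b^d
          return sum(b ** i for i in range(d + 1))
      lo, hi = 1.0, float(n)            # total() is increasing in b
      while hi - lo > tol:              # bisection on total(b) = n
          mid = (lo + hi) / 2
          if total(mid) < n:
              lo = mid
          else:
              hi = mid
      return (lo + hi) / 2

  # e.g. the table entry below for A1* at d = 12, n = 227 expanded nodes:
  print(round(effective_branching_factor(227, 12), 2))   # ~1.42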
Experimental Results
• 8-puzzle with:
  h1 = number of misplaced tiles
  h2 = sum of distances of tiles to their goal positions
• Random generation of many problem instances
• Average effective branching factors (number of expanded nodes):

  d    IDS                A1*              A2*
  2    2.45               1.79             1.79
  6    2.73               1.34             1.30
  12   2.78 (3,644,035)   1.42 (227)       1.24 (73)
  16   --                 1.45             1.25
  20   --                 1.47             1.27
  24   --                 1.48 (39,135)    1.26 (1,641)
How to create good heuristics?
• By solving relaxed problems at each node
• In the 8-puzzle, the sum of the distances of each tile to its goal position (h2) corresponds to solving 8 simple problems, one per tile, each ignoring all the other tiles:

      5 _ 8        1 2 3
      4 2 1        4 5 6
      7 3 6        7 8 _

  [Figure: the relaxed problem for tile 5 alone: slide the 5 from its current square to its goal square]

• It ignores negative interactions among tiles
Can we do better?
• For example, we could consider two more complex relaxed problems:

      5 _ 8        1 2 3
      4 2 1        4 5 6
      7 3 6        7 8 _

  [Figure: two relaxed subproblems, one board containing only tiles 1–4 and one containing only tiles 5–8, each to be solved to its own goal configuration]

  h = d1234 + d5678   [disjoint pattern heuristic]

• These distances could have been precomputed in a database
Can we do better?

Several order-of-magnitude speedups have been obtained this way for the 15- and 24-puzzle (see R&N).
On Completeness and Optimality
• A* with a consistent heuristic has nice properties: completeness, optimality, no need to revisit states
• Theoretical completeness does not mean “practical” completeness if you must wait too long to get a solution (remember the time limit issue)
• So, if one can’t design an accurate consistent heuristic, it may be better to settle for a non-admissible heuristic that “works well in practice”, even though completeness and optimality are no longer guaranteed
Iterative Deepening A* (IDA*)
• Idea: reduce the memory requirement of A* by applying a cutoff on values of f
• Consistent heuristic h
• Algorithm IDA*:
  1. Initialize cutoff to f(initial-node)
  2. Repeat:
     • Perform depth-first search by expanding all nodes N such that f(N) ≤ cutoff
     • Reset cutoff to smallest value f of non-expanded (leaf) nodes
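A recursive sketch of IDA* under the same assumptions as the A* sketches above (successors(s) yields (s’, step-cost) pairs). The depth-first search returns either a solution path or the smallest f-value that exceeded the cutoff, which becomes the next cutoff:

  import math

  def ida_star(start, goal, successors, h):
      def dfs(state, g, cutoff, path):
          f = g + h(state)
          if f > cutoff:
              return f                      # candidate for the next cutoff
          if goal(state):
              return path                   # solution found
          smallest = math.inf
          for s2, cost in successors(state):
              if s2 in path:                # avoid cycles on the current path
                  continue
              result = dfs(s2, g + cost, cutoff, path + [s2])
              if isinstance(result, list):
                  return result
              smallest = min(smallest, result)
          return smallest

      cutoff = h(start)                     # f(initial-node)
      while True:
          result = dfs(start, 0.0, cutoff, [start])
          if isinstance(result, list):
              return result
          if result == math.inf:
              return None                   # no solution
          cutoff = result                   # reset cutoff to the smallest f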
8-Puzzle
f(N) = g(N) + h(N)
  with h(N) = number of misplaced tiles

[Figure sequence: step-by-step IDA* trace on the 8-puzzle. First pass, cutoff = 4: the depth-first search expands only nodes with f ≤ 4; the leaves with f = 5 and f = 6 are cut off, and the cutoff is reset to 5, the smallest f among the non-expanded leaves. Second pass, cutoff = 5: the tree is re-expanded, now going through nodes with f = 5, while nodes with f = 6 and f = 7 are cut off.]
Advantages/Drawbacks of IDA*
• Advantages:
  • Still complete and optimal
  • Requires less memory than A*
  • Avoids the overhead of sorting the fringe
• Drawbacks:
  • Can’t avoid revisiting states not on the current path
  • Available memory is poorly used
SMA* (Simplified Memory-bounded A*)
• Works like A* until memory is full
• Then SMA* drops the node in the fringe with the largest f value and “backs up” this value to its parent
• When all children of a node N have been dropped, the smallest backed-up value replaces f(N)
• In this way, the root of an erased subtree remembers the best path in that subtree
• SMA* will regenerate this subtree only if all other nodes in the fringe have greater f values
• SMA* can’t completely avoid revisiting states, but it can do a better job at this than IDA*
Local Search
• Light-memory search method
• No search tree; only the current state is represented!
• Only applicable to problems where the path is irrelevant (e.g., 8-queens), unless the path is encoded in the state
• Many similarities with optimization techniques
Steepest Descent

S ← initial state
Repeat:
  S’ ← arg min{h(S’) : S’ ∈ SUCCESSORS(S)}
  if GOAL?(S’) return S’
  if h(S’) < h(S) then S ← S’ else return failure

Similar to:
- hill climbing with –h
- gradient descent over continuous space
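The loop is a few lines of Python; successors and h are caller-supplied assumptions as before:

  def steepest_descent(initial, goal, successors, h):
      S = initial
      while True:
          S2 = min(successors(S), key=h)   # arg min over the successors
          if goal(S2):
              return S2
          if h(S2) < h(S):
              S = S2
          else:
              return None                  # failure: stuck in a local minimum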
Application: 8-Queens
Repeat n times:
 1) Pick an initial state S at random with one queen in each column
 2) Repeat k times:
    – If GOAL?(S) then return S
    – Pick an attacked queen Q at random
    – Move Q within its column to the square where the number of attacking queens is minimum → new S   [min-conflicts heuristic; a code sketch follows the figure]
Return failure

[Figure: two 8×8 boards, before and after one min-conflicts move; each queen’s square is labeled with the number of queens attacking it]
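A hedged sketch of the inner loop (one restart) for n queens, with board[c] giving the row of the queen in column c; the outer random restarts are left to the caller:

  import random

  def conflicts(board, col, row):
      """Number of queens in other columns attacking square (col, row)."""
      return sum(1 for c, r in enumerate(board)
                 if c != col and (r == row or abs(r - row) == abs(c - col)))

  def min_conflicts(n=8, k=1000):
      board = [random.randrange(n) for _ in range(n)]   # one queen per column
      for _ in range(k):
          attacked = [c for c in range(n) if conflicts(board, c, board[c]) > 0]
          if not attacked:
              return board                              # GOAL?(S)
          c = random.choice(attacked)                   # random attacked queen
          # move Q within its column to a minimum-conflicts row
          board[c] = min(range(n), key=lambda r: conflicts(board, c, r))
      return None                                       # failure after k steps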
• Why does it work???
• There are many goal states that are well-distributed over the state space
• If no solution has been found after a few steps, it’s better to start it all over again. Building a search tree would be much less efficient because of the high branching factor
• Running time almost independent of the number of queens
Steepest Descent
may easily get stuck in local minima
• Random restart (as in the n-queens example)
• Monte Carlo descent: chooses probabilistically among the successors
Monte Carlo Descent

S ← initial state
Repeat k times:
  If GOAL?(S) then return S
  S’ ← successor of S picked at random
  if h(S’) ≤ h(S) then S ← S’
  else
    – ∆h = h(S’) − h(S)
    – with probability ~ exp(−∆h/T), where T is called the “temperature”: S ← S’   [Metropolis criterion]
Return failure

Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases it.
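A sketch combining the two: the Metropolis acceptance step plus a simple geometric cooling schedule (the schedule and its constants are illustrative assumptions, not from the slides):

  import math, random

  def simulated_annealing(initial, goal, successors, h,
                          k=10000, T=1.0, cooling=0.999):
      S = initial
      for _ in range(k):
          if goal(S):
              return S
          S2 = random.choice(list(successors(S)))   # random successor
          dh = h(S2) - h(S)
          if dh <= 0 or random.random() < math.exp(-dh / T):
              S = S2                                # Metropolis criterion
          T *= cooling                              # lower the temperature
      return None                                   # failure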
Search problems

[Concept map: search problems divide into blind search and heuristic search (best-first and A*); heuristic search branches into construction of heuristics, variants of A*, and local search]
When to Use Search Techniques?

1) The search space is small, and
   • No other technique is available, or
   • Developing a more efficient technique is not worth the effort

2) The search space is large, and
   • No other technique is available, and
   • There exist “good” heuristics

Weitere ähnliche Inhalte

Was ist angesagt?

Jarrar: Informed Search
Jarrar: Informed Search  Jarrar: Informed Search
Jarrar: Informed Search Mustafa Jarrar
 
Astar algorithm
Astar algorithmAstar algorithm
Astar algorithmShuqing Zhang
 
Heuristic Searching: A* Search
Heuristic Searching: A* SearchHeuristic Searching: A* Search
Heuristic Searching: A* SearchIOSR Journals
 
Lecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmLecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmHema Kashyap
 
Control Strategies in AI
Control Strategies in AIControl Strategies in AI
Control Strategies in AIAmey Kerkar
 
Pathfinding - Part 3: Beyond the basics
Pathfinding - Part 3: Beyond the basicsPathfinding - Part 3: Beyond the basics
Pathfinding - Part 3: Beyond the basicsStavros Vassos
 
M4 heuristics
M4 heuristicsM4 heuristics
M4 heuristicsYasir Khan
 
Solving problems by searching Informed (heuristics) Search
Solving problems by searching Informed (heuristics) SearchSolving problems by searching Informed (heuristics) Search
Solving problems by searching Informed (heuristics) Searchmatele41
 
A star algorithms
A star algorithmsA star algorithms
A star algorithmssandeep54552
 
AI Greedy and A-STAR Search
AI Greedy and A-STAR SearchAI Greedy and A-STAR Search
AI Greedy and A-STAR SearchAndrew Ferlitsch
 
Dfs presentation
Dfs presentationDfs presentation
Dfs presentationAlizay Khan
 
Lecture 21 problem reduction search ao star search
Lecture 21 problem reduction search ao star searchLecture 21 problem reduction search ao star search
Lecture 21 problem reduction search ao star searchHema Kashyap
 
Unit II Problem Solving Methods in AI K.sundar,AP/CSE,VEC
Unit II Problem Solving Methods in AI K.sundar,AP/CSE,VECUnit II Problem Solving Methods in AI K.sundar,AP/CSE,VEC
Unit II Problem Solving Methods in AI K.sundar,AP/CSE,VECsundarKanagaraj1
 
M3 search
M3 searchM3 search
M3 searchYasir Khan
 
Lecture 12 Heuristic Searches
Lecture 12 Heuristic SearchesLecture 12 Heuristic Searches
Lecture 12 Heuristic SearchesHema Kashyap
 
A* Search Algorithm
A* Search AlgorithmA* Search Algorithm
A* Search Algorithmvikas dhakane
 
Analysis and design of algorithms part 4
Analysis and design of algorithms part 4Analysis and design of algorithms part 4
Analysis and design of algorithms part 4Deepak John
 

Was ist angesagt? (20)

Jarrar: Informed Search
Jarrar: Informed Search  Jarrar: Informed Search
Jarrar: Informed Search
 
AI Lesson 05
AI Lesson 05AI Lesson 05
AI Lesson 05
 
A Star Search
A Star SearchA Star Search
A Star Search
 
Astar algorithm
Astar algorithmAstar algorithm
Astar algorithm
 
Heuristic Searching: A* Search
Heuristic Searching: A* SearchHeuristic Searching: A* Search
Heuristic Searching: A* Search
 
Lecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithmLecture 14 Heuristic Search-A star algorithm
Lecture 14 Heuristic Search-A star algorithm
 
Control Strategies in AI
Control Strategies in AIControl Strategies in AI
Control Strategies in AI
 
Pathfinding - Part 3: Beyond the basics
Pathfinding - Part 3: Beyond the basicsPathfinding - Part 3: Beyond the basics
Pathfinding - Part 3: Beyond the basics
 
M4 heuristics
M4 heuristicsM4 heuristics
M4 heuristics
 
Searching Algorithm
Searching AlgorithmSearching Algorithm
Searching Algorithm
 
Solving problems by searching Informed (heuristics) Search
Solving problems by searching Informed (heuristics) SearchSolving problems by searching Informed (heuristics) Search
Solving problems by searching Informed (heuristics) Search
 
A star algorithms
A star algorithmsA star algorithms
A star algorithms
 
AI Greedy and A-STAR Search
AI Greedy and A-STAR SearchAI Greedy and A-STAR Search
AI Greedy and A-STAR Search
 
Dfs presentation
Dfs presentationDfs presentation
Dfs presentation
 
Lecture 21 problem reduction search ao star search
Lecture 21 problem reduction search ao star searchLecture 21 problem reduction search ao star search
Lecture 21 problem reduction search ao star search
 
Unit II Problem Solving Methods in AI K.sundar,AP/CSE,VEC
Unit II Problem Solving Methods in AI K.sundar,AP/CSE,VECUnit II Problem Solving Methods in AI K.sundar,AP/CSE,VEC
Unit II Problem Solving Methods in AI K.sundar,AP/CSE,VEC
 
M3 search
M3 searchM3 search
M3 search
 
Lecture 12 Heuristic Searches
Lecture 12 Heuristic SearchesLecture 12 Heuristic Searches
Lecture 12 Heuristic Searches
 
A* Search Algorithm
A* Search AlgorithmA* Search Algorithm
A* Search Algorithm
 
Analysis and design of algorithms part 4
Analysis and design of algorithms part 4Analysis and design of algorithms part 4
Analysis and design of algorithms part 4
 

Andere mochten auch

Solving problems by searching
Solving problems by searchingSolving problems by searching
Solving problems by searchingLuigi Ceccaroni
 
Informed and Uninformed search Strategies
Informed and Uninformed search StrategiesInformed and Uninformed search Strategies
Informed and Uninformed search StrategiesAmey Kerkar
 
Investigations on Local Search based Hybrid Metaheuristics
Investigations on Local Search based Hybrid MetaheuristicsInvestigations on Local Search based Hybrid Metaheuristics
Investigations on Local Search based Hybrid MetaheuristicsLuca Di Gaspero
 
Artificial Intelligence -- Search Algorithms
Artificial Intelligence-- Search Algorithms Artificial Intelligence-- Search Algorithms
Artificial Intelligence -- Search Algorithms Syed Ahmed
 
Introduction to Artificial Intelligence
Introduction to Artificial Intelligence Introduction to Artificial Intelligence
Introduction to Artificial Intelligence Mustafa Jarrar
 
Jarrar: Un-informed Search
Jarrar: Un-informed SearchJarrar: Un-informed Search
Jarrar: Un-informed SearchMustafa Jarrar
 
Local search algorithms
Local search algorithmsLocal search algorithms
Local search algorithmsbambangsueb
 
Exploring Algorithms
Exploring AlgorithmsExploring Algorithms
Exploring AlgorithmsSri Prasanna
 
Depth First Search, Breadth First Search and Best First Search
Depth First Search, Breadth First Search and Best First SearchDepth First Search, Breadth First Search and Best First Search
Depth First Search, Breadth First Search and Best First SearchAdri Jovin
 
Technology Seminar Handout
Technology Seminar HandoutTechnology Seminar Handout
Technology Seminar HandoutDerecskei Anita
 
Seminar "Technology Enhanced Learning" - Einheit 1
Seminar "Technology Enhanced Learning" - Einheit 1Seminar "Technology Enhanced Learning" - Einheit 1
Seminar "Technology Enhanced Learning" - Einheit 1Martin Ebner
 
simple
simplesimple
simpleavy2cess
 
Travel Plan using Geo-tagged Photos in Geocrowd2013
Travel Plan using Geo-tagged Photos in Geocrowd2013 Travel Plan using Geo-tagged Photos in Geocrowd2013
Travel Plan using Geo-tagged Photos in Geocrowd2013 Arizona State University
 
Heuristics
HeuristicsHeuristics
HeuristicsMoe Tmz
 
Presentation - Bi-directional A-star search
Presentation - Bi-directional A-star searchPresentation - Bi-directional A-star search
Presentation - Bi-directional A-star searchMohammad Saiful Islam
 
Heuristc Search Techniques
Heuristc Search TechniquesHeuristc Search Techniques
Heuristc Search TechniquesJismy .K.Jose
 
Lecture 11 Informed Search
Lecture 11 Informed SearchLecture 11 Informed Search
Lecture 11 Informed SearchHema Kashyap
 

Andere mochten auch (20)

Solving problems by searching
Solving problems by searchingSolving problems by searching
Solving problems by searching
 
Informed and Uninformed search Strategies
Informed and Uninformed search StrategiesInformed and Uninformed search Strategies
Informed and Uninformed search Strategies
 
Hill climbing
Hill climbingHill climbing
Hill climbing
 
Lecture 2
Lecture 2Lecture 2
Lecture 2
 
Investigations on Local Search based Hybrid Metaheuristics
Investigations on Local Search based Hybrid MetaheuristicsInvestigations on Local Search based Hybrid Metaheuristics
Investigations on Local Search based Hybrid Metaheuristics
 
Artificial Intelligence -- Search Algorithms
Artificial Intelligence-- Search Algorithms Artificial Intelligence-- Search Algorithms
Artificial Intelligence -- Search Algorithms
 
Introduction to Artificial Intelligence
Introduction to Artificial Intelligence Introduction to Artificial Intelligence
Introduction to Artificial Intelligence
 
Making Complex Decisions(Artificial Intelligence)
Making Complex Decisions(Artificial Intelligence)Making Complex Decisions(Artificial Intelligence)
Making Complex Decisions(Artificial Intelligence)
 
Jarrar: Un-informed Search
Jarrar: Un-informed SearchJarrar: Un-informed Search
Jarrar: Un-informed Search
 
Local search algorithms
Local search algorithmsLocal search algorithms
Local search algorithms
 
Exploring Algorithms
Exploring AlgorithmsExploring Algorithms
Exploring Algorithms
 
Depth First Search, Breadth First Search and Best First Search
Depth First Search, Breadth First Search and Best First SearchDepth First Search, Breadth First Search and Best First Search
Depth First Search, Breadth First Search and Best First Search
 
Technology Seminar Handout
Technology Seminar HandoutTechnology Seminar Handout
Technology Seminar Handout
 
Seminar "Technology Enhanced Learning" - Einheit 1
Seminar "Technology Enhanced Learning" - Einheit 1Seminar "Technology Enhanced Learning" - Einheit 1
Seminar "Technology Enhanced Learning" - Einheit 1
 
simple
simplesimple
simple
 
Travel Plan using Geo-tagged Photos in Geocrowd2013
Travel Plan using Geo-tagged Photos in Geocrowd2013 Travel Plan using Geo-tagged Photos in Geocrowd2013
Travel Plan using Geo-tagged Photos in Geocrowd2013
 
Heuristics
HeuristicsHeuristics
Heuristics
 
Presentation - Bi-directional A-star search
Presentation - Bi-directional A-star searchPresentation - Bi-directional A-star search
Presentation - Bi-directional A-star search
 
Heuristc Search Techniques
Heuristc Search TechniquesHeuristc Search Techniques
Heuristc Search Techniques
 
Lecture 11 Informed Search
Lecture 11 Informed SearchLecture 11 Informed Search
Lecture 11 Informed Search
 

Ähnlich wie 04 search heuristic

Searching Informed Search.pdf
Searching Informed Search.pdfSearching Informed Search.pdf
Searching Informed Search.pdfDrBashirMSaad
 
Informed-search TECHNIQUES IN ai ml data science
Informed-search TECHNIQUES IN ai ml data scienceInformed-search TECHNIQUES IN ai ml data science
Informed-search TECHNIQUES IN ai ml data sciencedevvpillpersonal
 
Heuristics slides 321
Heuristics slides 321Heuristics slides 321
Heuristics slides 321Moe Tmz
 
Heuristics Search 3249989_slideplayer.ppt
Heuristics Search 3249989_slideplayer.pptHeuristics Search 3249989_slideplayer.ppt
Heuristics Search 3249989_slideplayer.pptTushar Chaudhari
 
2-Heuristic Search.ppt
2-Heuristic Search.ppt2-Heuristic Search.ppt
2-Heuristic Search.pptMIT,Imphal
 
informed_search.pdf
informed_search.pdfinformed_search.pdf
informed_search.pdfSankarTerli
 
shamwari dzerwendo.mmmmmmfmmfmfkksrkrttkt
shamwari dzerwendo.mmmmmmfmmfmfkksrkrttktshamwari dzerwendo.mmmmmmfmmfmfkksrkrttkt
shamwari dzerwendo.mmmmmmfmmfmfkksrkrttktPEACENYAMA1
 
Logarithms
LogarithmsLogarithms
Logarithmssupoteta
 
Attributed Graph Matching of Planar Graphs
Attributed Graph Matching of Planar GraphsAttributed Graph Matching of Planar Graphs
Attributed Graph Matching of Planar GraphsRaĂźl ArlĂ ndez
 
Ch01 basic concepts_nosoluiton
Ch01 basic concepts_nosoluitonCh01 basic concepts_nosoluiton
Ch01 basic concepts_nosoluitonshin
 
module4_dynamic programming_2022.pdf
module4_dynamic programming_2022.pdfmodule4_dynamic programming_2022.pdf
module4_dynamic programming_2022.pdfShiwani Gupta
 
09 logic programming
09 logic programming09 logic programming
09 logic programmingsaru40
 
Jacob's and Vlad's D.E.V. Project - 2012
Jacob's and Vlad's D.E.V. Project - 2012Jacob's and Vlad's D.E.V. Project - 2012
Jacob's and Vlad's D.E.V. Project - 2012Jacob_Evenson
 
2. Fixed Point Iteration.pptx
2. Fixed Point Iteration.pptx2. Fixed Point Iteration.pptx
2. Fixed Point Iteration.pptxsaadhaq6
 
Graph Analytics - From the Whiteboard to Your Toolbox - Sam Lerma
Graph Analytics - From the Whiteboard to Your Toolbox - Sam LermaGraph Analytics - From the Whiteboard to Your Toolbox - Sam Lerma
Graph Analytics - From the Whiteboard to Your Toolbox - Sam LermaPyData
 
Introduction to MatLab programming
Introduction to MatLab programmingIntroduction to MatLab programming
Introduction to MatLab programmingDamian T. Gordon
 
December11 2012
December11 2012December11 2012
December11 2012khyps13
 

Ähnlich wie 04 search heuristic (20)

Searching Informed Search.pdf
Searching Informed Search.pdfSearching Informed Search.pdf
Searching Informed Search.pdf
 
Informed-search TECHNIQUES IN ai ml data science
Informed-search TECHNIQUES IN ai ml data scienceInformed-search TECHNIQUES IN ai ml data science
Informed-search TECHNIQUES IN ai ml data science
 
Heuristics slides 321
Heuristics slides 321Heuristics slides 321
Heuristics slides 321
 
Heuristics Search 3249989_slideplayer.ppt
Heuristics Search 3249989_slideplayer.pptHeuristics Search 3249989_slideplayer.ppt
Heuristics Search 3249989_slideplayer.ppt
 
2-Heuristic Search.ppt
2-Heuristic Search.ppt2-Heuristic Search.ppt
2-Heuristic Search.ppt
 
Search 2
Search 2Search 2
Search 2
 
informed_search.pdf
informed_search.pdfinformed_search.pdf
informed_search.pdf
 
shamwari dzerwendo.mmmmmmfmmfmfkksrkrttkt
shamwari dzerwendo.mmmmmmfmmfmfkksrkrttktshamwari dzerwendo.mmmmmmfmmfmfkksrkrttkt
shamwari dzerwendo.mmmmmmfmmfmfkksrkrttkt
 
1 13f-f
1 13f-f1 13f-f
1 13f-f
 
Logarithms
LogarithmsLogarithms
Logarithms
 
Attributed Graph Matching of Planar Graphs
Attributed Graph Matching of Planar GraphsAttributed Graph Matching of Planar Graphs
Attributed Graph Matching of Planar Graphs
 
Ch01 basic concepts_nosoluiton
Ch01 basic concepts_nosoluitonCh01 basic concepts_nosoluiton
Ch01 basic concepts_nosoluiton
 
module4_dynamic programming_2022.pdf
module4_dynamic programming_2022.pdfmodule4_dynamic programming_2022.pdf
module4_dynamic programming_2022.pdf
 
09 logic programming
09 logic programming09 logic programming
09 logic programming
 
Jacob's and Vlad's D.E.V. Project - 2012
Jacob's and Vlad's D.E.V. Project - 2012Jacob's and Vlad's D.E.V. Project - 2012
Jacob's and Vlad's D.E.V. Project - 2012
 
2. Fixed Point Iteration.pptx
2. Fixed Point Iteration.pptx2. Fixed Point Iteration.pptx
2. Fixed Point Iteration.pptx
 
Graph Analytics - From the Whiteboard to Your Toolbox - Sam Lerma
Graph Analytics - From the Whiteboard to Your Toolbox - Sam LermaGraph Analytics - From the Whiteboard to Your Toolbox - Sam Lerma
Graph Analytics - From the Whiteboard to Your Toolbox - Sam Lerma
 
algorithm Unit 3
algorithm Unit 3algorithm Unit 3
algorithm Unit 3
 
Introduction to MatLab programming
Introduction to MatLab programmingIntroduction to MatLab programming
Introduction to MatLab programming
 
December11 2012
December11 2012December11 2012
December11 2012
 

KĂźrzlich hochgeladen

Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxNirmalaLoungPoorunde1
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfUmakantAnnand
 
Micromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersMicromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersChitralekhaTherkar
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformChameera Dedduwage
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting DataJhengPantaleon
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesFatimaKhan178732
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 

KĂźrzlich hochgeladen (20)

Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Concept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.CompdfConcept of Vouching. B.Com(Hons) /B.Compdf
Concept of Vouching. B.Com(Hons) /B.Compdf
 
Micromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of PowdersMicromeritics - Fundamental and Derived Properties of Powders
Micromeritics - Fundamental and Derived Properties of Powders
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
A Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy ReformA Critique of the Proposed National Education Policy Reform
A Critique of the Proposed National Education Policy Reform
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data_Math 4-Q4 Week 5.pptx Steps in Collecting Data
_Math 4-Q4 Week 5.pptx Steps in Collecting Data
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 

04 search heuristic

  • 1. Heuristic Search (Where we try to be smarter in how we choose among alternatives) Dr. Ali Abdallah (dr_ali_abdallah@hotmail.com)
  • 2. Search Algorithm (Modified)  INSERT(initial-node,FRINGE)  Repeat:  If empty(FRINGE) then return failure  n  REMOVE(FRINGE)  s  STATE(n)  If GOAL?(s) then return path or goal state  For every state s’ in SUCCESSORS(s)  Create a node n’ as a successor of n  INSERT(n’,FRINGE)
  • 3. Best-First Search  It exploits state description to estimate how promising each search node is  An evaluation function f maps each search node N to positive real number f(N)  Traditionally, the smaller f(N), the more promising N  Best-first search sorts the fringe in increasing f [random order is assumed among nodes with equal values of f]
  • 4. Best-First Search “Best” only refers to the value of f, not to the quality of the actual path. Best-first search does not generate optimal paths in general
  • 5. How to construct an evaluation function?  There are no limitations on f. Any function of your choice is acceptable. But will it help the search algorithm?  The classical approach is to construct f(N) as an estimator of the cost of a solution path through N
  • 6. Heuristic Function  The heuristic function h(N) estimates the distance of STATE(N) to a goal state Its value is independent of the current search tree; it depends only on STATE(N) and the goal test  Example: 5 8 1 2 3 4 2 1 4 5 6 7 3 6 7 8 STATE(N) Goal state  h1(N) = number of misplaced tiles = 6
  • 7. Other Examples 5 8 1 2 3 4 2 1 4 5 6 7 3 6 7 8 STATE(N) Goal state  h1(N) = number of misplaced tiles = 6  h2(N) = sum of the (Manhattan) distances of every tile to its goal position = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13
  • 8. Other Examples (Robot Navigation) N yN yg xN xg h1 (N) = (xN -xg ) +(yN -yg ) 2 2 (Euclidean distance) h2(N) = |xN-xg| + |yN-yg| (Manhattan distance)
  • 9. Classical Evaluation Functions  h(N): heuristic function [Independent of search tree]  g(N): cost of the best path found so far between the initial node and N [Dependent on search tree]  f(N) = h(N)  greedy best-first search  f(N) = g(N) + h(N)
  • 10. 8-Puzzle f(N) = h(N) = number of misplaced tiles 3 3 4 5 3 empty 4 2 4 2 1 3 3 4 0 5 4
  • 11. 8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles 3+3 1+5 2+3 3+4 5+2 0+4 3+2 4+1 1+3 2+3 5+0 3+4 1+5 2+4
  • 12. 8-Puzzle f(N) = h(N) = ÎŁ distances of tiles to goal 6 5 2 5 2 1 4 3 0 4 6 5
  • 13. Best-First → Efficiency Local-minimum problem f(N) = h(N) = straight distance to the goal
  • 14. Can we Prove Anything?  If the state space is finite and we discard nodes that revisit states, the search is complete, but in general is not optimal  If the state space is finite and we do not discard nodes that revisit states, in general the search is not complete  If the state space is infinite, in general the search is not complete
  • 15. Admissible Heuristic  Let h*(N) be the cost of the optimal path from N to a goal node  The heuristic function h(N) is admissible if: 0 ≤ h(N) ≤ h*(N)  An admissible heuristic function is always optimistic ! • Note: G is a goal node  h(G) = 0
  • 16. 8-Puzzle Heuristics 5 8 1 2 3 4 2 1 4 5 6 7 3 6 7 8 STATE(N) Goal state  h1(N) = number of misplaced tiles = 6 is admissible  h2(N) = sum of the (Manhattan) distances of every tile to its goal position = 2 + 3 + 0 + 1 + 3 + 0 + 3 + 1 = 13 is admissible
  • 17. Robot Navigation Heuristics Cost of one horizontal/vertical step = 1 Cost of one diagonal step = 2 h1 (N) = (xN -xg )2 +(yN -yg )2 is admissible h2(N) = |xN-xg| + |yN-yg| is admissible if moving along diagonals is not allowed, and not admissible otherwise
  • 18. A* Search (most popular algorithm in AI)  f(N) = g(N) + h(N), where: • g(N) = cost of best path found so far to N • h(N) = admissible heuristic function  for all arcs: 0 < Îľ ≤ c(N,N’)  “modified” search algorithm is used  Best-first search is then called A* search
  • 19. Result #1 A* is complete and optimal [This result holds if nodes revisiting states are not discarded]
  • 20. Proof (1/2) • If a solution exists, A* terminates and returns a solution For each node N on the fringe, f(N)≥d(N)×ε, where d(N) is the depth of N in the tree As long as A* hasn’t terminated, a node K on the fringe lies on a solution path Since each node expansion increases the length of one path, K will eventually be selected for expansion
  • 21. Proof (2/2) • Whenever A* chooses to expand a goal node, the path to this node is optimal C*= h*(initial-node) G’: non-optimal goal node in the fringe f(G’) = g(G’) + h(G’) = g(G’) > C* A node K in the fringe lies on an optimal path: f(K) = g(K) + h(K) < C* So, G’ is not selected for expansion
  • 22. Time Limit Issue  When a problem has no solution, A* runs for ever if the state space is infinite or states can be revisited an arbitrary number of times (the search tree can grow arbitrarily large). In other cases, it may take a huge amount of time to terminate  So, in practice, A* must be given a time limit. If it has not found a solution within this limit, it stops. Then there is no way to know if the problem has no solution or A* needed more time to find it  In the past, when AI systems were “small” and solving a single search problem at a time, this was not too much of a concern. As AI systems become larger, they have to solve a multitude of search problems concurrently. Then, a question arises: What should be the time limit for each of them?
  • 23. Time Limit Issue Hence, the usefulness of a simple test, like in the (2n-1)-puzzle, that determines if the goal is reachable Unfortunately, such a test rarely exists
  • 24. 8-Puzzle f(N) = g(N) + h(N) with h(N) = number of misplaced tiles 3+3 1+5 2+3 3+4 5+2 0+4 3+2 4+1 1+3 2+3 5+0 3+4 1+5 2+4
  • 26. Robot Navigation f(N) = h(N), with h(N) = Manhattan distance to the goal (not A*) 8 7 6 5 4 3 2 3 4 5 6 7 5 4 3 5 6 3 2 1 0 1 2 4 7 6 5 8 7 6 5 4 3 2 3 4 5 6
  • 27. Robot Navigation f(N) = h(N), with h(N) = Manhattan distance to the goal (not A*) 8 7 6 5 4 3 2 3 4 5 6 7 5 4 3 5 6 3 2 1 0 0 1 2 4 7 7 6 5 8 7 6 5 4 3 2 3 4 5 6
  • 28. Robot Navigation f(N) = g(N)+h(N), with h(N) = Manhattan distance to goal (A*) 8+3 7+4 6+5 5+6 4+7 3+8 2+9 3+10 4 8 7 6 5 4 3 2 3 5 6 7+2 7 5+6 4+7 3+8 5 4 3 5 6+1 6 3 2+9 1+10 0+11 1 2 1 0 2 4 7+0 6+1 7 6 5 8+1 7+2 6+3 5+4 4+5 3+6 2+7 3+8 4 8 7 6 5 4 3 2 3 5 6
  • 29. How to create an admissible h?  An admissible heuristic can usually be seen as the cost of an optimal solution to a relaxed problem (one obtained by removing constraints)  In robot navigation on a grid: • The Manhattan distance corresponds to removing the obstacles • The Euclidean distance corresponds to removing both the obstacles and the constraint that the robot moves on a grid  More on this topic later
  • 30. What to do with revisited states? c=1 2 h = 100 1 The heuristic h is 2 1 90 clearly admissible 100 0
  • 31. What to do with revisited states? c=1 2 f = 1+100 2+1 h = 100 1 2 1 90 4+90 100 ? 0 104 If we discard this new node, then the search algorithm expands the goal node next and returns a non-optimal solution
  • 32. What to do with revisited states? 1 2 1+100 2+1 100 1 2 1 90 2+90 4+90 100 0 102 104 Instead, if we do not discard nodes revisiting states, the search terminates with an optimal solution
  • 33. But ... If we do not discard nodes revisiting states, the size of the search tree can be exponential in the number of visited states 1 1 1 1+1 1+1 1 1 1 2+1 2+1 2+1 2+1 2 2 4 4 4 4 4 4 4 4
  • 34.  It is not harmful to discard a node revisiting a state if the new path to this state has higher cost than the previous one  A* remains optimal  Fortunately, for a large family of admissible heuristics – consistent heuristics – there is a much easier way of dealing with revisited states
  • 35. Consistent Heuristic A heuristic h is consistent if 1) for each node N and each child N’ of N: h(N) ≤ c(N,N’) + h(N’) N c(N,N’) N’ h(N) 2) for each goal node G: h(N’) h(G) = 0 (triangle inequality) The heuristic is also said to be monotone
  • 36. Consistency Violation If h tells that N is 100 units from the goal, then moving N from N along an arc c(N,N’) costing 10 units =10 N’ h(N) should not lead to a h(N’) =100 node N’ that h =10 estimates to be 10 (triangle inequality) units away from the goal
  • 37. Admissibility and Consistency  A consistent heuristic is also admissible  An admissible heuristic may not be consistent, but many admissible heuristics are consistent
  • 38. 8-Puzzle 5 8 1 2 3 4 2 1 4 5 6 7 3 6 7 8 STATE(N) Goal  h1(N) = number of misplaced tiles  h2(N) = sum of the (Manhattan) distances of every tile to its goal position are both consistent
  • 39. Robot Navigation Cost of one horizontal/vertical step = 1 Cost of one diagonal step = 2 h1 (N) = (xN -xg )2 +(yN -yg )2 is consistent h2(N) = |xN-xg| + |yN-yg| is consistent if moving along diagonals is not allowed, and not consistent otherwise
  • 40. Result #2 If h is consistent, then whenever A* expands a node, it has already found an optimal path to this node’s state
  • 41. N Proof N’  Consider a node N and its child N’ Since h is consistent: h(N) ≤ c(N,N’)+h(N’) f(N) = g(N)+h(N) ≤ g(N)+c(N,N’)+h(N’) = f(N’) So, f is non-decreasing along any path  If K is selected for expansion, then any other node K’ in the fringe verifies f(K’) ≥ f(K) So, if one node K’ lies on another path to the state of K, the cost of this other path is no smaller than the path to K
  • 42. Result #2 If h is consistent, then whenever A* expands a node, it has already found an optimal path to this node’s state The path to N is the optimal path to S N N1 S S1 N2 can be discarded N2
• 43. Revisited States with Consistent Heuristic
• When a node is expanded, store its state into CLOSED
• When a new node N is generated:
  - If STATE(N) is in CLOSED, discard N
  - If there exists a node N’ in the fringe such that STATE(N’) = STATE(N), discard the node (N or N’) with the largest f
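A minimal A* sketch using this scheme. The problem interface is hypothetical: successors(s) yields (state, step_cost) pairs, goal_test and a consistent h are given, and states are hashable:

    import heapq, itertools

    def astar(start, goal_test, successors, h):
        tie = itertools.count()              # tie-breaker: never compare states
        fringe = [(h(start), next(tie), 0, start, [start])]
        best_g = {start: 0}                  # cheapest g found per state
        closed = set()                       # CLOSED: states already expanded
        while fringe:
            f, _, g, s, path = heapq.heappop(fringe)
            if s in closed:
                continue                     # discard node revisiting a CLOSED state
            if goal_test(s):
                return path
            closed.add(s)                    # consistent h: optimal path to s found
            for s2, c in successors(s):
                g2 = g + c
                if s2 not in closed and g2 < best_g.get(s2, float('inf')):
                    best_g[s2] = g2          # keep only the lower-f duplicate
                    heapq.heappush(fringe, (g2 + h(s2), next(tie), g2, s2, path + [s2]))
        return None                          # fringe exhausted: failure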
  • 44. Is A* with some consistent heuristic all that we need? No !  There are very dumb consistent heuristics
• 45. h ≡ 0
• It is consistent (hence, admissible)!
• A* with h ≡ 0 is uniform-cost search
• Breadth-first and uniform-cost are particular cases of A*
• 46. Heuristic Accuracy
Let h1 and h2 be two consistent heuristics such that for all nodes N: h1(N) ≤ h2(N). Then h2 is said to be more accurate (or more informed) than h1
[figure: STATE(N) and the goal state of the 8-puzzle]
• h1(N) = number of misplaced tiles
• h2(N) = sum of the distances of every tile to its goal position
• h2 is more accurate than h1
  • 47. Result #3  Let h2 be more accurate than h1  Let A1* be A* using h1 and A2* be A* using h2  Whenever a solution exists, all the nodes expanded by A2* are also expanded by A1*
• 48. Proof
• C* = h*(initial-node), the cost of an optimal solution
• Every node N such that f(N) < C* is eventually expanded, i.e., every node N such that h(N) < C* − g(N) is eventually expanded
• Since h1(N) ≤ h2(N), any node with h2(N) < C* − g(N) also satisfies h1(N) < C* − g(N); hence every node expanded by A2* is also expanded by A1*
• …except possibly for nodes N such that f(N) = C*
• 49. Effective Branching Factor
• It is used as a measure of the effectiveness of a heuristic
• Let n be the total number of nodes expanded by A* for a particular problem and d the depth of the solution
• The effective branching factor b* is defined by n = 1 + b* + (b*)² + ... + (b*)^d
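Given n and d, b* can be found numerically. A small sketch (not from the slides), solving the defining equation by bisection:

    def effective_branching_factor(n, d, tol=1e-6):
        # Solve n = 1 + b + b^2 + ... + b^d for b; the sum is increasing in b,
        # so bisection between 1 and n converges.
        lo, hi = 1.0, float(n)
        while hi - lo > tol:
            b = (lo + hi) / 2
            total = sum(b**i for i in range(d + 1))
            lo, hi = (b, hi) if total < n else (lo, b)
        return (lo + hi) / 2

    # Example taken from the table below: A1* expands 227 nodes at d = 12.
    print(round(effective_branching_factor(227, 12), 2))   # prints 1.42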
• 50. Experimental Results
• 8-puzzle with:
  - h1 = number of misplaced tiles
  - h2 = sum of distances of tiles to their goal positions
• Random generation of many problem instances
• Average effective branching factors (number of expanded nodes in parentheses):

    d    IDS                A1*             A2*
    2    2.45               1.79            1.79
    6    2.73               1.34            1.30
    12   2.78 (3,644,035)   1.42 (227)      1.24 (73)
    16   --                 1.45            1.25
    20   --                 1.47            1.27
    24   --                 1.48 (39,135)   1.26 (1,641)
• 51. How to create good heuristics?
• By solving relaxed problems at each node
• In the 8-puzzle, the sum of the distances of each tile to its goal position (h2) corresponds to solving 8 simple problems, one per tile: each tile is moved to its goal square while the others are ignored
[figure: STATE(N), the goal state, and the relaxed single-tile problem for tile 5]
• It ignores negative interactions among tiles
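For concreteness, a small sketch (not from the slides) of h1 and h2 for the 8-puzzle; the representation is an assumption: states are tuples of 9 entries in row-major order, with 0 for the blank:

    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

    def h1(state):
        # Number of misplaced tiles (the blank is not counted).
        return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

    def h2(state):
        # Sum of Manhattan distances of each tile to its goal square:
        # the relaxed problem solved independently for each tile.
        total = 0
        for i, t in enumerate(state):
            if t != 0:
                g = GOAL.index(t)
                total += abs(i // 3 - g // 3) + abs(i % 3 - g % 3)
        return total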
• 52. Can we do better?
• For example, we could consider two more complex relaxed problems:
[figure: STATE(N) and the goal state restricted to tiles 1–4 only, and restricted to tiles 5–8 only]
• h = d1234 + d5678, where d1234 and d5678 are the costs of solving the two relaxed problems [disjoint pattern heuristic]
• These distances could have been precomputed in a database
  • 53. Can we do better? Several order-of-magnitude speedups have been obtained this way for the 15- and 24-puzzle (see R&N)
• 54. On Completeness and Optimality
• A* with a consistent heuristic has nice properties: completeness, optimality, no need to revisit states
• Theoretical completeness does not mean “practical” completeness if you must wait too long to get a solution (remember the time limit issue)
• So, if one can’t design an accurate consistent heuristic, it may be better to settle for a non-admissible heuristic that “works well in practice”, even though completeness and optimality are no longer guaranteed
• 55. Iterative Deepening A* (IDA*)
• Idea: reduce the memory requirement of A* by applying a cutoff on the values of f
• Consistent heuristic h
• Algorithm IDA*:
  1. Initialize the cutoff to f(initial-node)
  2. Repeat:
     - Perform depth-first search, expanding all nodes N such that f(N) ≤ cutoff
     - Reset the cutoff to the smallest value of f among the non-expanded (leaf) nodes
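A compact recursive rendering of this loop, a sketch under the same assumed interface as the A* code above (successors yields (state, cost) pairs; h is consistent):

    def ida_star(start, goal_test, successors, h):
        cutoff = h(start)                        # f(initial-node) with g = 0
        while True:
            next_cutoff = [float('inf')]         # smallest f seen beyond the cutoff
            path = dfs(start, 0, cutoff, next_cutoff,
                       goal_test, successors, h, [start])
            if path is not None:
                return path
            if next_cutoff[0] == float('inf'):
                return None                      # nothing left to expand: failure
            cutoff = next_cutoff[0]              # deepen to the next f-contour

    def dfs(s, g, cutoff, next_cutoff, goal_test, successors, h, path):
        f = g + h(s)
        if f > cutoff:
            next_cutoff[0] = min(next_cutoff[0], f)   # record cut-off leaf value
            return None
        if goal_test(s):
            return path
        for s2, c in successors(s):
            if s2 not in path:                   # avoid cycles on the current path
                r = dfs(s2, g + c, cutoff, next_cutoff,
                        goal_test, successors, h, path + [s2])
                if r is not None:
                    return r
        return None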
• 56.–67. 8-Puzzle: IDA* trace
f(N) = g(N) + h(N) with h(N) = number of misplaced tiles
[figures: successive snapshots of the depth-first search. With cutoff = 4, only nodes with f ≤ 4 are expanded; leaves with f = 5 and 6 are cut off, and the smallest such value, 5, becomes the new cutoff. With cutoff = 5, the search goes deeper, expanding nodes with f = 4 and 5 and cutting off leaves with f = 6 and 7]
• 68. Advantages/Drawbacks of IDA*
• Advantages:
  - Still complete and optimal
  - Requires less memory than A*
  - Avoids the overhead of sorting the fringe
• Drawbacks:
  - Can’t avoid revisiting states that are not on the current path
  - Uses the available memory poorly
• 69. SMA* (Simplified Memory-bounded A*)
• Works like A* until memory is full
• Then SMA* drops the node in the fringe with the largest f value and “backs up” this value to its parent
• When all children of a node N have been dropped, the smallest backed-up value replaces f(N)
• In this way, the root of an erased subtree remembers the best path in that subtree
• SMA* will regenerate this subtree only if all other nodes in the fringe have larger f values
• SMA* can’t completely avoid revisiting states, but it does a better job at this than IDA*
• 70. Local Search
• Light-memory search method
• No search tree; only the current state is represented!
• Only applicable to problems where the path is irrelevant (e.g., 8-queens), unless the path is encoded in the state
• Many similarities with optimization techniques
• 71. Steepest Descent
1. S ← initial state
2. Repeat:
   - S’ ← arg min over S’ ∈ SUCCESSORS(S) of h(S’)
   - if GOAL?(S’) return S’
   - if h(S’) < h(S) then S ← S’ else return failure
Similar to:
- hill climbing with −h
- gradient descent over a continuous space
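A direct transcription of the loop above as a sketch; successors(s) (returning a list of states), goal_test, and h are assumed given by the problem:

    def steepest_descent(s, goal_test, successors, h):
        while True:
            s2 = min(successors(s), key=h)   # best-looking successor
            if goal_test(s2):
                return s2
            if h(s2) < h(s):
                s = s2                       # strict improvement: move there
            else:
                return None                  # local minimum: report failure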
• 72. Application: 8-Queens
Repeat n times:
1) Pick an initial state S at random, with one queen in each column
2) Repeat k times:
   - If GOAL?(S) then return S
   - Pick an attacked queen Q at random
   - Move Q in its column to the square where the number of attacking queens is minimum → new S [min-conflicts heuristic]
3) Return failure
[figure: a board annotated with the number of attacking queens for each square in Q’s column]
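A minimal min-conflicts sketch for n-queens. The representation is an assumption: queens[c] holds the row of the queen in column c:

    import random

    def conflicts(queens, col, row):
        # Queens in other columns attacking square (col, row).
        return sum(1 for c, r in enumerate(queens)
                   if c != col and (r == row or abs(r - row) == abs(c - col)))

    def min_conflicts(n, max_steps=1000):
        queens = [random.randrange(n) for _ in range(n)]   # random initial state
        for _ in range(max_steps):
            attacked = [c for c in range(n) if conflicts(queens, c, queens[c]) > 0]
            if not attacked:
                return queens                # goal: no queen is attacked
            c = random.choice(attacked)      # pick an attacked queen at random
            rows = list(range(n))
            random.shuffle(rows)             # break ties between equal squares randomly
            queens[c] = min(rows, key=lambda r: conflicts(queens, c, r))
        return None                          # caller restarts, as on the slide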
• 73. Why does it work?
• There are many goal states that are well distributed over the state space
• If no solution has been found after a few steps, it’s better to start all over again; building a search tree would be much less efficient because of the high branching factor
• Running time is almost independent of the number of queens
• 74. Steepest descent may easily get stuck in local minima
• Random restart (as in the n-queens example)
• Monte Carlo descent: choose probabilistically among the successors
• 75. Monte Carlo Descent
1. S ← initial state
2. Repeat k times:
   - If GOAL?(S) then return S
   - S’ ← successor of S picked at random
   - if h(S’) ≤ h(S) then S ← S’
   - else:
     Δh = h(S’) − h(S)
     with probability ~ exp(−Δh/T), where T is called the “temperature”: S ← S’ [Metropolis criterion]
3. Return failure
Simulated annealing lowers T over the k iterations: it starts with a large T and slowly decreases it
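A sketch of the Metropolis step, under the same assumed interface as above (successors(s) returns a list of states); the fixed-T loop is Monte Carlo descent, and the final comment notes the annealing variant:

    import math, random

    def monte_carlo_descent(s, goal_test, successors, h, k=10000, T=1.0):
        for _ in range(k):
            if goal_test(s):
                return s
            s2 = random.choice(successors(s))    # successor picked at random
            dh = h(s2) - h(s)
            # Always accept improvements; accept uphill moves with
            # probability exp(-dh/T) [Metropolis criterion].
            if dh <= 0 or random.random() < math.exp(-dh / T):
                s = s2
        return None
    # Simulated annealing: same loop, but T is slowly decreased over the k steps.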
• 76.
• Search problems
• Blind search
• Heuristic search: best-first and A*
• Construction of heuristics
• Variants of A*
• Local search
• 77. When to Use Search Techniques?
1) The search space is small, and
   - No other technique is available, or
   - Developing a more efficient technique is not worth the effort
2) The search space is large, and
   - No other technique is available, and
   - There exist “good” heuristics