Artificial Intelligence 
Chapter 7: Logical Agents 
Michael Scherger 
Department of Computer Science 
Kent State University 
February 20, 2006 AI: Chapter 7: Logical Agents 1
Contents 
• Knowledge Based Agents 
• Wumpus World 
• Logic in General – models and entailment 
• Propositional (Boolean) Logic 
• Equivalence, Validity, Satisfiability 
• Inference Rules and Theorem Proving 
– Forward Chaining 
– Backward Chaining 
– Resolution 
Logical Agents 
• Humans can know “things” and “reason”
– Representation: How are the things stored?
– Reasoning: How is the knowledge used?
• To solve a problem…
• To generate more knowledge…
• Knowledge and reasoning are important to artificial agents because they enable successful behaviors that are difficult to achieve otherwise
– Useful in partially observable environments
• Can benefit from knowledge in very general forms, combining and recombining information
Knowledge-Based Agents 
• The central component of a Knowledge-Based Agent is a Knowledge-Base (KB)
– A set of sentences in a formal language
• Sentences are expressed using a knowledge representation language
• Two generic functions:
– TELL - add new sentences (facts) to the KB
• “Tell it what it needs to know”
– ASK - query what is known from the KB
• “Ask what to do next”
Knowledge-Based Agents 
• The agent must be able to:
– Represent states and actions
– Incorporate new percepts
– Update internal representations of the world
– Deduce hidden properties of the world
– Deduce appropriate actions
(Diagram: the Inference Engine holds domain-independent algorithms; the Knowledge-Base holds domain-specific content)
Knowledge-Based Agents 
• Declarative
– You can build a knowledge-based agent simply by “TELLing” it what it needs to know
• Procedural
– Encode desired behaviors directly as program code
• Minimizing the role of explicit representation and reasoning can result in a much more efficient system
Wumpus World 
• Performance Measure
– Gold +1000, Death -1000
– Step -1, Use arrow -10
• Environment
– Squares adjacent to the Wumpus are smelly
– Squares adjacent to a pit are breezy
– Glitter iff gold is in the same square
– Shooting kills the Wumpus if you are facing it
– Shooting uses up the only arrow
– Grabbing picks up the gold if in the same square
– Releasing drops the gold in the same square
• Actuators
– Left turn, right turn, forward, grab, release, shoot
• Sensors
– Breeze, glitter, and smell
• See pages 197-8 for more details!
Wumpus World 
• Characterization of Wumpus World 
– Observable 
• partial, only local perception 
– Deterministic 
• Yes, outcomes are specified 
– Episodic 
• No, sequential at the level of actions 
– Static 
• Yes, Wumpus and pits do not move 
– Discrete 
• Yes 
– Single Agent 
• Yes 
Wumpus World
(Slides 10-17: a sequence of figures stepping through a sample exploration of the Wumpus World; images not reproduced)
Other Sticky Situations 
• Breeze in (1,2) and (2,1)
– No safe actions
• Smell in (1,1)
– Cannot move
Logic 
• Knowledge bases consist of sentences in a formal language
– Syntax
• Sentences are well formed
– Semantics
• The “meaning” of the sentence
• The truth of each sentence with respect to each possible world (model)
• Example:
x + 2 >= y is a sentence
x2 + y > is not a sentence
x + 2 >= y is true iff x + 2 is no less than y
x + 2 >= y is true in a world where x = 7, y = 1
x + 2 >= y is false in a world where x = 0, y = 6
Logic 
• Entailment means that one thing follows logically from another
α ⊨ β
• α ⊨ β iff in every model in which α is true, β is also true
• if α is true, then β must be true
• the truth of β is “contained” in the truth of α
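Entailment over a finite set of proposition symbols can be decided by enumerating every model; a minimal Python sketch (not from the original slides), with KB and α given as Boolean functions over a model:

```python
from itertools import product

def entails(kb, alpha, symbols):
    """KB |= alpha iff alpha is true in every model in which KB is true.

    kb and alpha map a model (dict symbol -> bool) to True/False;
    symbols lists the proposition symbols to enumerate over."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # a model of KB in which alpha fails
    return True

# "Cleveland won" and "Dallas won" entail "Cleveland won or Dallas won"
kb = lambda m: m["C"] and m["D"]
alpha = lambda m: m["C"] or m["D"]
print(entails(kb, alpha, ["C", "D"]))  # True
```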
Logic 
• Example: 
– A KB containing 
• “Cleveland won” 
• “Dallas won” 
• Entails… 
– “Either Cleveland won or Dallas won” 
• Example: 
x + y = 4 entails 4 = x + y 
Logic 
• A model is a formally structured world with respect to which truth can be evaluated
– m is a model of sentence α if α is true in m
• Then KB ⊨ α iff M(KB) ⊆ M(α)
(Diagram: the set of models M(KB) drawn inside the larger set M(α))
Logic 
• Entailment in Wumpus World
• Situation after detecting nothing in [1,1], moving right, breeze in [2,1]
• Consider possible models for ? assuming only pits
• 3 Boolean choices => 8 possible models
(Slide 24: figure showing the 8 possible models; image not reproduced)
Logic 
• KB = wumpus world rules + observations
• α1 = “[1,2] is safe”; KB ⊨ α1, proved by model checking
Logic 
• KB = wumpus world rules + observations
• α2 = “[2,2] is safe”; KB ⊭ α2, shown by model checking
Logic 
• Inference is the process of deriving a specific sentence from a KB (where the sentence must be entailed by the KB)
– KB ⊢i α means sentence α can be derived from KB by procedure i
• “KBs are a haystack”
– Entailment = needle in haystack
– Inference = finding it
Logic 
• Soundness
– i is sound if…
– whenever KB ⊢i α is true, KB ⊨ α is true
• Completeness
– i is complete if…
– whenever KB ⊨ α is true, KB ⊢i α is true
• If KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world
Propositional Logic 
• AKA Boolean Logic
• False and True
• Proposition symbols P1, P2, etc. are sentences
• NOT: If S1 is a sentence, then ¬S1 is a sentence (negation)
• AND: If S1, S2 are sentences, then S1 ∧ S2 is a sentence (conjunction)
• OR: If S1, S2 are sentences, then S1 ∨ S2 is a sentence (disjunction)
• IMPLIES: If S1, S2 are sentences, then S1 ⇒ S2 is a sentence (implication)
• IFF: If S1, S2 are sentences, then S1 ⇔ S2 is a sentence (biconditional)
Propositional Logic 
P     Q     ¬P    P∧Q   P∨Q   P⇒Q   P⇔Q
False False True  False False True  True
False True  True  False True  True  False
True  False False False True  False False
True  True  False True  True  True  True
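The table can be reproduced and checked mechanically; a small Python sketch (not part of the original slides), defining each connective as a function:

```python
from itertools import product

# The five connectives as Python functions.
# IMPL is material implication: false only when P is true and Q is false.
NOT  = lambda p: not p
AND  = lambda p, q: p and q
OR   = lambda p, q: p or q
IMPL = lambda p, q: (not p) or q
IFF  = lambda p, q: p == q

# Print one row of the truth table per assignment to (P, Q).
for P, Q in product([False, True], repeat=2):
    print(P, Q, NOT(P), AND(P, Q), OR(P, Q), IMPL(P, Q), IFF(P, Q))
```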
Wumpus World Sentences 
• Let Pi,j be True if there is a pit in [i,j]
• Let Bi,j be True if there is a breeze in [i,j]
• ¬P1,1
• ¬B1,1
• B2,1
• “Pits cause breezes in adjacent squares”
B1,1 ⇔ (P1,2 ∨ P2,1)
B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• A square is breezy if and only if there is an adjacent pit
A Simple Knowledge Base 
• R1: ¬P1,1
• R2: B1,1 ⇔ (P1,2 ∨ P2,1)
• R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1)
• R4: ¬B1,1
• R5: B2,1
• KB consists of sentences R1 through R5
• R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5
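This KB is small enough to model-check exhaustively (7 symbols, 128 models); a Python sketch (not from the original slides) showing that R1-R5 entail ¬P1,2 but do not entail ¬P2,2:

```python
from itertools import product

SYMBOLS = ["P11", "P12", "P21", "P22", "P31", "B11", "B21"]

def iff(a, b):
    return a == b

def kb(m):
    """Conjunction R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5 over a model m."""
    return (not m["P11"]                                        # R1
            and iff(m["B11"], m["P12"] or m["P21"])             # R2
            and iff(m["B21"], m["P11"] or m["P22"] or m["P31"]) # R3
            and not m["B11"]                                    # R4
            and m["B21"])                                       # R5

def entails(alpha):
    """True iff alpha holds in every model of the KB."""
    return all(alpha(m)
               for values in product([False, True], repeat=len(SYMBOLS))
               for m in [dict(zip(SYMBOLS, values))]
               if kb(m))

print(entails(lambda m: not m["P12"]))  # True:  [1,2] has no pit
print(entails(lambda m: not m["P22"]))  # False: [2,2] cannot be proved safe
```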
A Simple Knowledge Base 
• Every known inference algorithm for propositional logic has a worst-case complexity that is exponential in the size of the input (propositional entailment is co-NP-complete).
Equivalence, Validity, Satisfiability 
• A sentence is valid if it is true in all models
– e.g. True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B
• Validity is connected to inference via the Deduction Theorem
– KB ⊨ α iff (KB ⇒ α) is valid
• A sentence is satisfiable if it is true in some model
– e.g. A ∨ B, C
• A sentence is unsatisfiable if it is true in no models
– e.g. A ∧ ¬A
• Satisfiability is connected to inference via the following
– KB ⊨ α iff (KB ∧ ¬α) is unsatisfiable
– proof by contradiction
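The last connection can be checked by brute force: KB ⊨ α exactly when KB ∧ ¬α has no satisfying model. A small illustrative sketch:

```python
from itertools import product

def satisfiable(sentence, symbols):
    """Brute-force SAT: does any assignment make the sentence true?"""
    return any(sentence(dict(zip(symbols, v)))
               for v in product([False, True], repeat=len(symbols)))

def entails(kb, alpha, symbols):
    # Proof by contradiction: KB |= alpha iff KB ∧ ¬alpha is unsatisfiable
    return not satisfiable(lambda m: kb(m) and not alpha(m), symbols)

# (A ∨ B) ∧ (¬A ∨ C) entails (B ∨ C)
kb = lambda m: (m["A"] or m["B"]) and (not m["A"] or m["C"])
print(entails(kb, lambda m: m["B"] or m["C"], ["A", "B", "C"]))  # True
```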
Reasoning Patterns 
• Inference Rules
– Patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal.
• Modus Ponens
– Given: S1 ⇒ S2 and S1, derive S2
• And-Elimination
– Given: S1 ∧ S2, derive S1
– Given: S1 ∧ S2, derive S2
• DeMorgan’s Law
– Given: ¬(A ∨ B), derive ¬A ∧ ¬B
– Given: ¬(A ∧ B), derive ¬A ∨ ¬B
Reasoning Patterns 
• And-Elimination
    α ∧ β
    ─────
      α
• From a conjunction, any of the conjuncts can be inferred
• From (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred
• Modus Ponens
    α ⇒ β,  α
    ─────────
        β
• Whenever sentences of the form α ⇒ β and α are given, then sentence β can be inferred
• From (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and (WumpusAhead ∧ WumpusAlive), Shoot can be inferred
Example Proof By Deduction 
• Knowledge
S1: B22 ⇔ (P21 ∨ P23 ∨ P12 ∨ P32) [rule]
S2: ¬B22 [observation]
• Inferences
S3: (B22 ⇒ (P21 ∨ P23 ∨ P12 ∨ P32)) ∧ ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22) [S1, biconditional elim]
S4: ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22) [S3, and elim]
S5: (¬B22 ⇒ ¬(P21 ∨ P23 ∨ P12 ∨ P32)) [S4, contrapositive]
S6: ¬(P21 ∨ P23 ∨ P12 ∨ P32) [S2, S5, MP]
S7: ¬P21 ∧ ¬P23 ∧ ¬P12 ∧ ¬P32 [S6, DeMorgan]
Evaluation of Deductive Inference
• Sound
– Yes, because the inference rules themselves are sound. (This can be proven using a truth table argument.)
• Complete
– If we allow all possible inference rules, we’re searching in an infinite space, hence not complete
– If we limit inference rules, we run the risk of leaving out the necessary one…
• Monotonic
– If we have a proof, adding information to the KB will not invalidate the proof
Resolution 
• Resolution allows a complete inference mechanism (search-based) using only one rule of inference
• Resolution rule:
– Given: P1 ∨ P2 ∨ P3 … ∨ Pn, and ¬P1 ∨ Q1 … ∨ Qm
– Conclude: P2 ∨ P3 … ∨ Pn ∨ Q1 … ∨ Qm
– Complementary literals P1 and ¬P1 “cancel out”
• Why it works:
– Consider 2 cases: P1 is true, and P1 is false
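The rule can be implemented directly on clauses represented as sets of literals; a minimal sketch (not from the original slides), where a literal is a string and "~" marks negation:

```python
def resolve(ci, cj):
    """Return all resolvents of two clauses (frozensets of literals).

    For each literal in ci whose complement appears in cj, the pair of
    complementary literals 'cancels out' and the rest are unioned."""
    resolvents = []
    for lit in ci:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in cj:
            resolvents.append((ci - {lit}) | (cj - {comp}))
    return resolvents

c1 = frozenset({"P21", "P23", "P12", "P32"})
c2 = frozenset({"~P21"})
print(resolve(c1, c2))  # one resolvent: {P23, P12, P32}
```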
Resolution in Wumpus World 
• There is a pit at 2,1 or 2,3 or 1,2 or 3,2
– P21 ∨ P23 ∨ P12 ∨ P32
• There is no pit at 2,1
– ¬P21
• Therefore (by resolution) the pit must be at 2,3 or 1,2 or 3,2
– P23 ∨ P12 ∨ P32
Proof using Resolution 
• To prove a fact P, repeatedly apply resolution until either:
– No new clauses can be added (KB does not entail P)
– The empty clause is derived (KB does entail P)
• This is proof by contradiction: if we prove that KB ∧ ¬P derives a contradiction (the empty clause) and we know KB is true, then ¬P must be false, so P must be true!
• To apply resolution mechanically, facts need to be in Conjunctive Normal Form (CNF)
• To carry out the proof, we need a search mechanism that will enumerate all possible resolutions.
CNF Example 
1. B22 ⇔ (P21 ∨ P23 ∨ P12 ∨ P32)
2. Eliminate ⇔, replacing it with two implications
(B22 ⇒ (P21 ∨ P23 ∨ P12 ∨ P32)) ∧ ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22)
3. Replace each implication (A ⇒ B) by ¬A ∨ B
(¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ (¬(P21 ∨ P23 ∨ P12 ∨ P32) ∨ B22)
4. Move ¬ “inwards” (unnecessary parentheses removed)
(¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ ((¬P21 ∧ ¬P23 ∧ ¬P12 ∧ ¬P32) ∨ B22)
5. Apply the Distributive Law
(¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ (¬P21 ∨ B22) ∧ (¬P23 ∨ B22) ∧ (¬P12 ∨ B22) ∧ (¬P32 ∨ B22)
(The final result has 5 clauses)
Resolution Example 
• Given B22 and ¬P21 and ¬P23 and ¬P32, prove P12
• Add the negated goal ¬P12, then resolve the CNF clause against each unit clause in turn:
• (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ; ¬P12
• (¬B22 ∨ P21 ∨ P23 ∨ P32) ; ¬P21
• (¬B22 ∨ P23 ∨ P32) ; ¬P23
• (¬B22 ∨ P32) ; ¬P32
• (¬B22) ; B22
• [empty clause]
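The derivation above can be run mechanically as a resolution refutation: add the negated goal ¬P12 to the clause set and resolve until the empty clause appears. A brute-force illustrative sketch (quadratic over all clause pairs each round):

```python
def resolve(ci, cj):
    """All resolvents of two clauses; literals are strings, '~' negates."""
    out = []
    for lit in ci:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in cj:
            out.append(frozenset((ci - {lit}) | (cj - {comp})))
    return out

# KB clauses (CNF) plus the negated goal ~P12
clauses = {
    frozenset({"~B22", "P21", "P23", "P12", "P32"}),
    frozenset({"B22"}), frozenset({"~P21"}),
    frozenset({"~P23"}), frozenset({"~P32"}),
    frozenset({"~P12"}),  # negation of the goal
}

derived_empty = False
while not derived_empty:
    new = set()
    for ci in clauses:
        for cj in clauses:
            if ci != cj:
                for r in resolve(ci, cj):
                    if not r:            # empty clause: contradiction found
                        derived_empty = True
                    new.add(r)
    if new <= clauses:  # no progress and no empty clause: not entailed
        break
    clauses |= new

print(derived_empty)  # True: KB entails P12
```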
Evaluation of Resolution 
• Resolution is sound
– Because the resolution rule is truth-preserving in all cases
• Resolution is complete
– Provided a complete search method is used to find the proof, if a proof exists it will be found
– Note: you must know what you’re trying to prove in order to prove it!
• Resolution is exponential
– The number of clauses that we must search grows exponentially…
Horn Clauses 
• A Horn Clause is a CNF clause with at most one positive literal (a clause with exactly one is a definite clause)
– The positive literal is called the head
– The negative literals are called the body
– Prolog: head :- body1, body2, body3 …
– English: “To prove the head, prove body1, …”
– Implication: If (body1 ∧ body2 ∧ …) then head
• Horn Clauses form the basis of forward and backward chaining
• The Prolog language is based on Horn Clauses
• Deciding entailment with Horn Clauses is linear in the size of the knowledge base
Reasoning with Horn Clauses 
• Forward Chaining
– For each new piece of data, generate all new facts, until the desired fact is generated
– Data-directed reasoning
• Backward Chaining
– To prove the goal, find a clause that contains the goal as its head, and prove the body recursively
– (Backtrack when you chose the wrong clause)
– Goal-directed reasoning
Forward Chaining 
• Fire any rule whose premises are satisfied in the KB
• Add its conclusion to the KB, and repeat until the query is found
Forward Chaining 
• AND-OR Graph 
– multiple links joined by an arc indicate conjunction – every link must be proved 
– multiple links without an arc indicate disjunction – any link can be proved 
Forward Chaining
(Slides 51-58: a sequence of figures stepping through the forward chaining example on the AND-OR graph; images not reproduced)
Backward Chaining 
• Idea: work backwards from the query q:
– To prove q by BC,
• Check if q is known already, or
• Prove by BC all premises of some rule concluding q
• Avoid loops
– Check if the new subgoal is already on the goal stack
• Avoid repeated work: check if the new subgoal
– Has already been proved true, or
– Has already failed
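The recursion above can be sketched directly; a minimal illustrative Python version using the same (premises, conclusion) rule format as a forward chainer, with the goal-stack loop check:

```python
def bc_entails(rules, facts, goal, stack=frozenset()):
    """Backward chaining over definite clauses, with loop avoidance.

    rules: list of (premises, conclusion) pairs; facts: known symbols."""
    if goal in facts:
        return True       # goal is known already
    if goal in stack:
        return False      # subgoal already on the goal stack: avoid loops
    for prem, concl in rules:
        # Find a rule whose head is the goal, prove its body recursively
        if concl == goal and all(
                bc_entails(rules, facts, p, stack | {goal}) for p in prem):
            return True
    return False          # no rule worked: backtrack

rules = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"),
         (["A", "P"], "L"), (["A", "B"], "L")]
print(bc_entails(rules, {"A", "B"}, "Q"))  # True
```

A production version would also memoize subgoals that have already been proved or have already failed, per the repeated-work note above.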
Backward Chaining
(Slides 60-70: a sequence of figures stepping through the backward chaining example; images not reproduced)
Forward Chaining vs. Backward Chaining
• Forward Chaining is data driven
– Automatic, unconscious processing
– E.g. object recognition, routine decisions
– May do lots of work that is irrelevant to the goal
• Backward Chaining is goal driven
– Appropriate for problem solving
– E.g. “Where are my keys?”, “How do I start the car?”
• The complexity of BC can be much less than linear in the size of the KB

Weitere ähnliche Inhalte

Was ist angesagt?

AI_ 8 Weak Slot and Filler Structure
AI_ 8 Weak Slot and Filler  StructureAI_ 8 Weak Slot and Filler  Structure
AI_ 8 Weak Slot and Filler StructureKhushali Kathiriya
 
knowledge representation using rules
knowledge representation using rulesknowledge representation using rules
knowledge representation using rulesHarini Balamurugan
 
Inference in First-Order Logic
Inference in First-Order Logic Inference in First-Order Logic
Inference in First-Order Logic Junya Tanaka
 
Resolution method in AI.pptx
Resolution method in AI.pptxResolution method in AI.pptx
Resolution method in AI.pptxAbdullah251975
 
Artificial Intelligence (AI) | Prepositional logic (PL)and first order predic...
Artificial Intelligence (AI) | Prepositional logic (PL)and first order predic...Artificial Intelligence (AI) | Prepositional logic (PL)and first order predic...
Artificial Intelligence (AI) | Prepositional logic (PL)and first order predic...Ashish Duggal
 
Predicate logic_2(Artificial Intelligence)
Predicate logic_2(Artificial Intelligence)Predicate logic_2(Artificial Intelligence)
Predicate logic_2(Artificial Intelligence)SHUBHAM KUMAR GUPTA
 
Unification and Lifting
Unification and LiftingUnification and Lifting
Unification and LiftingMegha Sharma
 
First order predicate logic (fopl)
First order predicate logic (fopl)First order predicate logic (fopl)
First order predicate logic (fopl)chauhankapil
 
Artificial intelligence- Logic Agents
Artificial intelligence- Logic AgentsArtificial intelligence- Logic Agents
Artificial intelligence- Logic AgentsNuruzzaman Milon
 
Uninformed search /Blind search in AI
Uninformed search /Blind search in AIUninformed search /Blind search in AI
Uninformed search /Blind search in AIKirti Verma
 
Prolog example explanation(Family Tree)
Prolog example explanation(Family Tree)Prolog example explanation(Family Tree)
Prolog example explanation(Family Tree)Gayan Geethanjana
 
L03 ai - knowledge representation using logic
L03 ai - knowledge representation using logicL03 ai - knowledge representation using logic
L03 ai - knowledge representation using logicManjula V
 
AI Propositional logic
AI Propositional logicAI Propositional logic
AI Propositional logicSURBHI SAROHA
 

Was ist angesagt? (20)

First order logic
First order logicFirst order logic
First order logic
 
AI_ 8 Weak Slot and Filler Structure
AI_ 8 Weak Slot and Filler  StructureAI_ 8 Weak Slot and Filler  Structure
AI_ 8 Weak Slot and Filler Structure
 
knowledge representation using rules
knowledge representation using rulesknowledge representation using rules
knowledge representation using rules
 
Inference in First-Order Logic
Inference in First-Order Logic Inference in First-Order Logic
Inference in First-Order Logic
 
Resolution method in AI.pptx
Resolution method in AI.pptxResolution method in AI.pptx
Resolution method in AI.pptx
 
Artificial Intelligence (AI) | Prepositional logic (PL)and first order predic...
Artificial Intelligence (AI) | Prepositional logic (PL)and first order predic...Artificial Intelligence (AI) | Prepositional logic (PL)and first order predic...
Artificial Intelligence (AI) | Prepositional logic (PL)and first order predic...
 
N Queens problem
N Queens problemN Queens problem
N Queens problem
 
Predicate logic_2(Artificial Intelligence)
Predicate logic_2(Artificial Intelligence)Predicate logic_2(Artificial Intelligence)
Predicate logic_2(Artificial Intelligence)
 
AI_Session 28 planning graph.pptx
AI_Session 28 planning graph.pptxAI_Session 28 planning graph.pptx
AI_Session 28 planning graph.pptx
 
Unification and Lifting
Unification and LiftingUnification and Lifting
Unification and Lifting
 
First order predicate logic (fopl)
First order predicate logic (fopl)First order predicate logic (fopl)
First order predicate logic (fopl)
 
Artificial intelligence- Logic Agents
Artificial intelligence- Logic AgentsArtificial intelligence- Logic Agents
Artificial intelligence- Logic Agents
 
Uninformed search /Blind search in AI
Uninformed search /Blind search in AIUninformed search /Blind search in AI
Uninformed search /Blind search in AI
 
AI_Session 25 classical planning.pptx
AI_Session 25 classical planning.pptxAI_Session 25 classical planning.pptx
AI_Session 25 classical planning.pptx
 
Prolog example explanation(Family Tree)
Prolog example explanation(Family Tree)Prolog example explanation(Family Tree)
Prolog example explanation(Family Tree)
 
L03 ai - knowledge representation using logic
L03 ai - knowledge representation using logicL03 ai - knowledge representation using logic
L03 ai - knowledge representation using logic
 
AI Propositional logic
AI Propositional logicAI Propositional logic
AI Propositional logic
 
Uncertainty in AI
Uncertainty in AIUncertainty in AI
Uncertainty in AI
 
predicate logic example
predicate logic examplepredicate logic example
predicate logic example
 
Artificial Intelligence - Reasoning in Uncertain Situations
Artificial Intelligence - Reasoning in Uncertain SituationsArtificial Intelligence - Reasoning in Uncertain Situations
Artificial Intelligence - Reasoning in Uncertain Situations
 

Andere mochten auch

FORWARD CHAINING AND BACKWARD CHAINING SYSTEMS IN ARTIFICIAL INTELIGENCE
FORWARD CHAINING AND BACKWARD CHAINING SYSTEMS IN ARTIFICIAL INTELIGENCEFORWARD CHAINING AND BACKWARD CHAINING SYSTEMS IN ARTIFICIAL INTELIGENCE
FORWARD CHAINING AND BACKWARD CHAINING SYSTEMS IN ARTIFICIAL INTELIGENCEJohnLeonard Onwuzuruigbo
 
How expensive a logical agent is
How expensive a logical agent isHow expensive a logical agent is
How expensive a logical agent isRashmika Nawaratne
 
Introduction iii
Introduction iiiIntroduction iii
Introduction iiichandsek666
 
Jarrar: First Order Logic
Jarrar: First Order LogicJarrar: First Order Logic
Jarrar: First Order LogicMustafa Jarrar
 
Logical reasoning
Logical reasoning Logical reasoning
Logical reasoning Slideshare
 
Pronouns and the verb to be
Pronouns and the verb to bePronouns and the verb to be
Pronouns and the verb to beInma Dominguez
 
Propositional And First-Order Logic
Propositional And First-Order LogicPropositional And First-Order Logic
Propositional And First-Order Logicankush_kumar
 
Ppt on unemployment
Ppt on unemploymentPpt on unemployment
Ppt on unemploymentmanav500
 

Andere mochten auch (13)

FORWARD CHAINING AND BACKWARD CHAINING SYSTEMS IN ARTIFICIAL INTELIGENCE
FORWARD CHAINING AND BACKWARD CHAINING SYSTEMS IN ARTIFICIAL INTELIGENCEFORWARD CHAINING AND BACKWARD CHAINING SYSTEMS IN ARTIFICIAL INTELIGENCE
FORWARD CHAINING AND BACKWARD CHAINING SYSTEMS IN ARTIFICIAL INTELIGENCE
 
Logical agent8910
Logical agent8910Logical agent8910
Logical agent8910
 
How expensive a logical agent is
How expensive a logical agent isHow expensive a logical agent is
How expensive a logical agent is
 
Introduction iii
Introduction iiiIntroduction iii
Introduction iii
 
Jarrar: First Order Logic
Jarrar: First Order LogicJarrar: First Order Logic
Jarrar: First Order Logic
 
Logic agent
Logic agentLogic agent
Logic agent
 
Logical reasoning
Logical reasoning Logical reasoning
Logical reasoning
 
Pronouns and the verb to be
Pronouns and the verb to bePronouns and the verb to be
Pronouns and the verb to be
 
Knowledge Based Agent
Knowledge Based AgentKnowledge Based Agent
Knowledge Based Agent
 
AI: Logic in AI 2
AI: Logic in AI 2AI: Logic in AI 2
AI: Logic in AI 2
 
AI: Logic in AI
AI: Logic in AIAI: Logic in AI
AI: Logic in AI
 
Propositional And First-Order Logic
Propositional And First-Order LogicPropositional And First-Order Logic
Propositional And First-Order Logic
 
Ppt on unemployment
Ppt on unemploymentPpt on unemployment
Ppt on unemployment
 

Mehr von Yasir Khan (20)

Lecture 6
Lecture 6Lecture 6
Lecture 6
 
Lecture 4
Lecture 4Lecture 4
Lecture 4
 
Lecture 3
Lecture 3Lecture 3
Lecture 3
 
Lecture 2
Lecture 2Lecture 2
Lecture 2
 
Lec#1
Lec#1Lec#1
Lec#1
 
Ch10 (1)
Ch10 (1)Ch10 (1)
Ch10 (1)
 
Ch09
Ch09Ch09
Ch09
 
Ch05
Ch05Ch05
Ch05
 
Snooping protocols 3
Snooping protocols 3Snooping protocols 3
Snooping protocols 3
 
Snooping 2
Snooping 2Snooping 2
Snooping 2
 
Introduction 1
Introduction 1Introduction 1
Introduction 1
 
Hpc sys
Hpc sysHpc sys
Hpc sys
 
Hpc 6 7
Hpc 6 7Hpc 6 7
Hpc 6 7
 
Hpc 4 5
Hpc 4 5Hpc 4 5
Hpc 4 5
 
Hpc 3
Hpc 3Hpc 3
Hpc 3
 
Hpc 2
Hpc 2Hpc 2
Hpc 2
 
Hpc 1
Hpc 1Hpc 1
Hpc 1
 
Flynns classification
Flynns classificationFlynns classification
Flynns classification
 
Dir based imp_5
Dir based imp_5Dir based imp_5
Dir based imp_5
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language Processing
 

Logical Agents

  • 1. Artificial Intelligence Chapter 7: Logical Agents Michael Scherger Department of Computer Science Kent State University February 20, 2006 AI: Chapter 7: Logical Agents 1
  • 2. Contents • Knowledge Based Agents • Wumpus World • Logic in General – models and entailment • Propositional (Boolean) Logic • Equivalence, Validity, Satisfiability • Inference Rules and Theorem Proving – Forward Chaining – Backward Chaining – Resolution February 20, 2006 AI: Chapter 7: Logical Agents 2
  • 3. Logical Agents • Humans can know “things” and “reason” – Representation: How are the things stored? – Reasoning: How is the knowledge used? • To solve a problem… • To generate more knowledge… • Knowledge and reasoning are important to artificial agents because they enable successful behaviors difficult to achieve otherwise – Useful in partially observable environments • Can benefit from knowledge in very general forms, combining and recombining information February 20, 2006 AI: Chapter 7: Logical Agents 3
  • 4. Knowledge-Based Agents • Central component of a Knowledge-Based Agent is a Knowledge-Base – A set of sentences in a formal language • Sentences are expressed using a knowledge representation language • Two generic functions: – TELL - add new sentences (facts) to the KB • “Tell it what it needs to know” – ASK - query what is known from the KB • “Ask what to do next” February 20, 2006 AI: Chapter 7: Logical Agents 4
  • 5. Knowledge-Based Agents • The agent must be able to: – Represent states and actions – Incorporate new percepts – Update internal representations of the world – Deduce hidden properties of the world – Deduce appropriate actions Inference Engine Knowledge-Base Domain- Independent Algorithms Domain- Specific Content February 20, 2006 AI: Chapter 7: Logical Agents 5
  • 6. Knowledge-Based Agents February 20, 2006 AI: Chapter 7: Logical Agents 6
  • 7. Knowledge-Based Agents • Declarative – You can build a knowledge-based agent simply by “TELLing” it what it needs to know • Procedural – Encode desired behaviors directly as program code • Minimizing the role of explicit representation and reasoning can result in a much more efficient system February 20, 2006 AI: Chapter 7: Logical Agents 7
  • 8. Wumpus World • Performance Measure – Gold +1000, Death – 1000 – Step -1, Use arrow -10 • Environment – Square adjacent to the Wumpus are smelly – Squares adjacent to the pit are breezy – Glitter iff gold is in the same square – Shooting kills Wumpus if you are facing it – Shooting uses up the only arrow – Grabbing picks up the gold if in the same square – Releasing drops the gold in the same square • Actuators – Left turn, right turn, forward, grab, release, shoot • Sensors – Breeze, glitter, and smell • See page 197-8 for more details! February 20, 2006 AI: Chapter 7: Logical Agents 8
  • 9. Wumpus World • Characterization of Wumpus World – Observable • partial, only local perception – Deterministic • Yes, outcomes are specified – Episodic • No, sequential at the level of actions – Static • Yes, Wumpus and pits do not move – Discrete • Yes – Single Agent • Yes February 20, 2006 AI: Chapter 7: Logical Agents 9
  • 10. Wumpus World February 20, 2006 AI: Chapter 7: Logical Agents 10
  • 11. Wumpus World February 20, 2006 AI: Chapter 7: Logical Agents 11
  • 12. Wumpus World February 20, 2006 AI: Chapter 7: Logical Agents 12
  • 13. Wumpus World February 20, 2006 AI: Chapter 7: Logical Agents 13
  • 14. Wumpus World February 20, 2006 AI: Chapter 7: Logical Agents 14
  • 15. Wumpus World February 20, 2006 AI: Chapter 7: Logical Agents 15
  • 16. Wumpus World February 20, 2006 AI: Chapter 7: Logical Agents 16
  • 17. Wumpus World February 20, 2006 AI: Chapter 7: Logical Agents 17
  • 18. Other Sticky Situations • Breeze in (1,2) and (2,1) – No safe actions • Smell in (1,1) – Cannot move February 20, 2006 AI: Chapter 7: Logical Agents 18
  • 19. Logic • Knowledge bases consist of sentences in a formal language – Syntax • Sentences are well formed – Semantics • The “meaning” of the sentence • The truth of each sentence with respect to each possible world (model) • Example: x + 2 >= y is a sentence x2 + y > is not a sentence x + 2 >= y is true iff x + 2 is no less than y x + 2 >= y is true in a world where x = 7, y=1 x + 2 >= y is false in world where x = 0, y =6 February 20, 2006 AI: Chapter 7: Logical Agents 19
  • 20. Logic • Entailment means that one thing follows logically from another a |= b • a |= b iff in every model in which a is true, b is also true • if a is true, then b must be true • the truth of b is “contained” in the truth of a February 20, 2006 AI: Chapter 7: Logical Agents 20
  • 21. Logic • Example: – A KB containing • “Cleveland won” • “Dallas won” • Entails… – “Either Cleveland won or Dallas won” • Example: x + y = 4 entails 4 = x + y February 20, 2006 AI: Chapter 7: Logical Agents 21
  • 22. Logic • A model is a formally structured world with respect to which truth can be evaluated – M is a model of sentence a if a is true in m • Then KB |= a if M(KB) Í M(a) M(a) x x x x x x x x M(x KB) x x x x xx x x x x x x x x x x x x x x x x x x x x x xxx x x x xx x x x xxx x x x x x x x x x x x February 20, 2006 AI: Chapter 7: Logical Agents 22
  • 23. Logic • Entailment in Wumpus World • Situation after detecting nothing in [1,1], moving right, breeze in [2,1] • Consider possible models for ? assuming only pits • 3 Boolean choices => 8 possible models February 20, 2006 AI: Chapter 7: Logical Agents 23
  • 24. Logic February 20, 2006 AI: Chapter 7: Logical Agents 24
  • 25. Logic • KB = wumpus world rules + observations • a1 = “[1,2] is safe”, KB |= a1, proved by model checking February 20, 2006 AI: Chapter 7: Logical Agents 25
  • 26. Logic • KB = wumpus world rules + observations • a2 = “[2,2] is safe”, KB ¬|= a2 proved by model checking February 20, 2006 AI: Chapter 7: Logical Agents 26
  • 27. Logic • Inference is the process of deriving a specific sentence from a KB (where the sentence must be entailed by the KB) – KB |-i a = sentence a can be derived from KB by procedure I • “KB’s are a haystack” – Entailment = needle in haystack – Inference = finding it February 20, 2006 AI: Chapter 7: Logical Agents 27
  • 28. Logic • Soundness – i is sound if… – whenever KB |-i a is true, KB |= a is true • Completeness – i is complete if – whenever KB |= a is true, KB |-i a is true • If KB is true in the real world, then any sentence a derived from KB by a sound inference procedure is also true in the real world February 20, 2006 AI: Chapter 7: Logical Agents 28
  • 29. Propositional Logic • AKA Boolean Logic • False and True • Proposition symbols P1, P2, etc. are sentences • NOT: If S1 is a sentence, then ¬S1 is a sentence (negation) • AND: If S1, S2 are sentences, then S1 ∧ S2 is a sentence (conjunction) • OR: If S1, S2 are sentences, then S1 ∨ S2 is a sentence (disjunction) • IMPLIES: If S1, S2 are sentences, then S1 ⇒ S2 is a sentence (implication) • IFF: If S1, S2 are sentences, then S1 ⇔ S2 is a sentence (biconditional)
  • 30. Propositional Logic
    P      Q      ¬P     P∧Q    P∨Q    P⇒Q    P⇔Q
    False  False  True   False  False  True   True
    False  True   True   False  True   True   False
    True   False  False  False  True   False  False
    True   True   False  True   True   True   True
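  The table can be regenerated mechanically. A small sketch, encoding P ⇒ Q as ¬P ∨ Q and P ⇔ Q as equality of truth values:

  ```python
  from itertools import product

  # Print the truth table for the five connectives.
  header = ["P", "Q", "¬P", "P∧Q", "P∨Q", "P⇒Q", "P⇔Q"]
  print("".join(f"{h:<7}" for h in header))
  for p, q in product([False, True], repeat=2):
      # Implication is material: false only when P is true and Q is false.
      row = [p, q, not p, p and q, p or q, (not p) or q, p == q]
      print("".join(f"{str(v):<7}" for v in row))
  ```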
  • 31. Wumpus World Sentences • Let Pi,j be True if there is a pit in [i,j] • Let Bi,j be True if there is a breeze in [i,j] • ¬P1,1 • ¬B1,1 • B2,1 • “Pits cause breezes in adjacent squares” B1,1 ⇔ (P1,2 ∨ P2,1) B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) • A square is breezy if and only if there is an adjacent pit
  • 32. A Simple Knowledge Base (figure only)
  • 33. A Simple Knowledge Base • R1: ¬P1,1 • R2: B1,1 ⇔ (P1,2 ∨ P2,1) • R3: B2,1 ⇔ (P1,1 ∨ P2,2 ∨ P3,1) • R4: ¬B1,1 • R5: B2,1 • KB consists of sentences R1 through R5 • R1 ∧ R2 ∧ R3 ∧ R4 ∧ R5
  • 34. A Simple Knowledge Base • Every known inference algorithm for propositional logic has a worst-case complexity that is exponential in the size of the input (the entailment problem is co-NP-complete)
  • 35. Equivalence, Validity, Satisfiability (figure only)
  • 36. Equivalence, Validity, Satisfiability • A sentence is valid if it is true in all models – e.g. True, A ∨ ¬A, A ⇒ A, (A ∧ (A ⇒ B)) ⇒ B • Validity is connected to inference via the Deduction Theorem – KB |= a iff (KB ⇒ a) is valid • A sentence is satisfiable if it is true in some model – e.g. A ∨ B, C • A sentence is unsatisfiable if it is true in no models – e.g. A ∧ ¬A • Satisfiability is connected to inference via the following – KB |= a iff (KB ∧ ¬a) is unsatisfiable – proof by contradiction
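  Validity (“true in all models”) and satisfiability (“true in some model”) can both be decided with the same enumeration, and the reduction of entailment to unsatisfiability falls out directly. A sketch, again representing sentences as Python predicates over a model dictionary (an assumption for illustration):

  ```python
  from itertools import product

  def all_models(symbols):
      for vals in product([True, False], repeat=len(symbols)):
          yield dict(zip(symbols, vals))

  def valid(sentence, symbols):        # true in every model
      return all(sentence(m) for m in all_models(symbols))

  def satisfiable(sentence, symbols):  # true in at least one model
      return any(sentence(m) for m in all_models(symbols))

  print(valid(lambda m: m["A"] or not m["A"], ["A"]))           # True:  A ∨ ¬A
  print(satisfiable(lambda m: m["A"] and not m["A"], ["A"]))    # False: A ∧ ¬A

  # KB |= a iff KB ∧ ¬a is unsatisfiable:
  kb = lambda m: m["A"] and (not m["A"] or m["B"])              # A, A ⇒ B
  print(satisfiable(lambda m: kb(m) and not m["B"], ["A", "B"]))  # False, so KB |= B
  ```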
  • 37. Reasoning Patterns • Inference Rules – Patterns of inference that can be applied to derive chains of conclusions that lead to the desired goal. • Modus Ponens – Given: S1 ⇒ S2 and S1, derive S2 • And-Elimination – Given: S1 ∧ S2, derive S1 – Given: S1 ∧ S2, derive S2 • DeMorgan’s Law – Given: ¬(A ∨ B), derive ¬A ∧ ¬B – Given: ¬(A ∧ B), derive ¬A ∨ ¬B
  • 38. Reasoning Patterns • And-Elimination: from a ∧ b, infer a – From a conjunction, any of the conjuncts can be inferred – From (WumpusAhead ∧ WumpusAlive), WumpusAlive can be inferred • Modus Ponens: from a ⇒ b and a, infer b – Whenever sentences of the form a ⇒ b and a are given, sentence b can be inferred – From (WumpusAhead ∧ WumpusAlive) ⇒ Shoot and (WumpusAhead ∧ WumpusAlive), Shoot can be inferred
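  These two rules are easy to mechanize. The nested-tuple encoding of sentences below is a hypothetical representation chosen for the sketch, not part of the slides:

  ```python
  # Sentences as nested tuples: ("=>", a, b), ("and", a, b), or an atom string.
  def modus_ponens(implication, fact):
      """From a => b and a, infer b (None if the rule doesn't apply)."""
      op, a, b = implication
      return b if op == "=>" and a == fact else None

  def and_elim(conjunction):
      """From a and b, infer both conjuncts."""
      op, a, b = conjunction
      return [a, b] if op == "and" else []

  percept = ("and", "WumpusAhead", "WumpusAlive")
  rule = ("=>", ("and", "WumpusAhead", "WumpusAlive"), "Shoot")
  print(and_elim(percept))            # ['WumpusAhead', 'WumpusAlive']
  print(modus_ponens(rule, percept))  # Shoot
  ```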
  • 39. Example Proof By Deduction • Knowledge S1: B22 ⇔ (P21 ∨ P23 ∨ P12 ∨ P32) rule S2: ¬B22 observation • Inferences S3: (B22 ⇒ (P21 ∨ P23 ∨ P12 ∨ P32)) ∧ ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22) [S1, biconditional elim] S4: ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22) [S3, and elim] S5: (¬B22 ⇒ ¬(P21 ∨ P23 ∨ P12 ∨ P32)) [S4, contrapositive] S6: ¬(P21 ∨ P23 ∨ P12 ∨ P32) [S2, S5, MP] S7: ¬P21 ∧ ¬P23 ∧ ¬P12 ∧ ¬P32 [S6, DeMorgan]
  • 40. Evaluation of Deductive Inference • Sound – Yes, because the inference rules themselves are sound. (This can be proven using a truth table argument.) • Complete – If we allow all possible inference rules, we’re searching in an infinite space, hence not complete – If we limit inference rules, we run the risk of leaving out the necessary one… • Monotonic – If we have a proof, adding information to the KB will not invalidate the proof
  • 41. Resolution • Resolution allows a complete inference mechanism (search-based) using only one rule of inference • Resolution rule: – Given: P1 ∨ P2 ∨ P3 … ∨ Pn, and ¬P1 ∨ Q1 … ∨ Qm – Conclude: P2 ∨ P3 … ∨ Pn ∨ Q1 … ∨ Qm • Complementary literals P1 and ¬P1 “cancel out” • Why it works: – Consider 2 cases: P1 is true, and P1 is false
  • 42. Resolution in Wumpus World • There is a pit at 2,1 or 2,3 or 1,2 or 3,2 – P21 ∨ P23 ∨ P12 ∨ P32 • There is no pit at 2,1 – ¬P21 • Therefore (by resolution) the pit must be at 2,3 or 1,2 or 3,2 – P23 ∨ P12 ∨ P32
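  The single resolution step above can be sketched in code. Representing a clause as a frozenset of literal strings, with a leading "~" marking negation, is an assumption made here for the sketch:

  ```python
  def resolve(c1, c2):
      """Return all resolvents of two clauses (frozensets of literals;
      '~' marks a negated literal)."""
      resolvents = []
      for lit in c1:
          complement = lit[1:] if lit.startswith("~") else "~" + lit
          if complement in c2:
              # Drop the complementary pair, keep everything else.
              resolvents.append((c1 - {lit}) | (c2 - {complement}))
      return resolvents

  c1 = frozenset({"P21", "P23", "P12", "P32"})  # pit at 2,1 / 2,3 / 1,2 / 3,2
  c2 = frozenset({"~P21"})                      # no pit at 2,1
  print(resolve(c1, c2))  # the pit is at 2,3, 1,2, or 3,2
  ```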
  • 43. Proof using Resolution • To prove a fact P, repeatedly apply resolution until either: – No new clauses can be added (KB does not entail P) – The empty clause is derived (KB does entail P) • This is proof by contradiction: if we prove that KB ∧ ¬P derives a contradiction (empty clause) and we know KB is true, then ¬P must be false, so P must be true! • To apply resolution mechanically, facts need to be in Conjunctive Normal Form (CNF) • To carry out the proof, we need a search mechanism that will enumerate all possible resolutions.
  • 44. CNF Example 1. Start with: B22 ⇔ (P21 ∨ P23 ∨ P12 ∨ P32) 2. Eliminate ⇔, replacing with two implications: (B22 ⇒ (P21 ∨ P23 ∨ P12 ∨ P32)) ∧ ((P21 ∨ P23 ∨ P12 ∨ P32) ⇒ B22) 3. Replace implication (A ⇒ B) by ¬A ∨ B: (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ (¬(P21 ∨ P23 ∨ P12 ∨ P32) ∨ B22) 4. Move ¬ “inwards” (unnecessary parentheses removed): (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ ((¬P21 ∧ ¬P23 ∧ ¬P12 ∧ ¬P32) ∨ B22) 5. Apply the distributive law: (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ∧ (¬P21 ∨ B22) ∧ (¬P23 ∨ B22) ∧ (¬P12 ∨ B22) ∧ (¬P32 ∨ B22) (Final result has 5 clauses)
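  The conversion steps on this slide can be written as one recursive rewrite. The nested-tuple formula encoding is an assumption made for the sketch; it handles exactly the connectives used above:

  ```python
  # Formulas as nested tuples: ('iff', a, b), ('=>', a, b), ('and', a, b),
  # ('or', a, b), ('not', a), or a proposition-symbol string.
  def to_cnf(f):
      if isinstance(f, str):                 # atom: already CNF
          return f
      op = f[0]
      if op == 'iff':                        # step 2: A <=> B  ~~>  (A=>B) & (B=>A)
          a, b = f[1], f[2]
          return to_cnf(('and', ('=>', a, b), ('=>', b, a)))
      if op == '=>':                         # step 3: A => B  ~~>  ~A | B
          return to_cnf(('or', ('not', f[1]), f[2]))
      if op == 'not':                        # step 4: push ~ inwards (De Morgan)
          g = f[1]
          if isinstance(g, str):
              return f
          if g[0] == 'not':
              return to_cnf(g[1])
          if g[0] == 'and':
              return to_cnf(('or', ('not', g[1]), ('not', g[2])))
          if g[0] == 'or':
              return to_cnf(('and', ('not', g[1]), ('not', g[2])))
          return to_cnf(('not', to_cnf(g)))  # ~(A=>B), ~(A<=>B): rewrite inside first
      a, b = to_cnf(f[1]), to_cnf(f[2])
      if op == 'or':                         # step 5: distribute | over &
          if isinstance(a, tuple) and a[0] == 'and':
              return to_cnf(('and', ('or', a[1], b), ('or', a[2], b)))
          if isinstance(b, tuple) and b[0] == 'and':
              return to_cnf(('and', ('or', a, b[1]), ('or', a, b[2])))
      return (op, a, b)

  print(to_cnf(('=>', 'A', 'B')))  # ('or', ('not', 'A'), 'B')
  ```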
  • 45. Resolution Example • Given B22 and ¬P21 and ¬P23 and ¬P32, prove P12 • (¬B22 ∨ P21 ∨ P23 ∨ P12 ∨ P32) ; ¬P12 • (¬B22 ∨ P21 ∨ P23 ∨ P32) ; ¬P21 • (¬B22 ∨ P23 ∨ P32) ; ¬P23 • (¬B22 ∨ P32) ; ¬P32 • (¬B22) ; B22 • [empty clause]
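  Putting the resolution rule and the search together gives the standard refutation procedure. A sketch of PL-Resolution (the frozenset clause representation with "~" for negation is an assumption of the sketch):

  ```python
  from itertools import combinations

  def resolve(c1, c2):
      out = []
      for lit in c1:
          comp = lit[1:] if lit.startswith("~") else "~" + lit
          if comp in c2:
              out.append((c1 - {lit}) | (c2 - {comp}))
      return out

  def pl_resolution(clauses):
      """True iff the clause set is unsatisfiable (derives the empty clause)."""
      clauses = set(clauses)
      while True:
          new = set()
          for c1, c2 in combinations(clauses, 2):
              for r in resolve(c1, c2):
                  if not r:               # empty clause: contradiction found
                      return True
                  new.add(frozenset(r))
          if new <= clauses:              # nothing new: saturation reached
              return False
          clauses |= new

  # KB clauses plus the negated goal ~P12: refutation succeeds, so KB |= P12
  kb = [frozenset({"~B22", "P21", "P23", "P12", "P32"}),
        frozenset({"B22"}), frozenset({"~P21"}),
        frozenset({"~P23"}), frozenset({"~P32"})]
  print(pl_resolution(kb + [frozenset({"~P12"})]))  # True
  ```

  Without the negated goal, the same KB saturates without producing the empty clause, confirming it is satisfiable.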
  • 46. Evaluation of Resolution • Resolution is sound – Because the resolution rule is true in all cases • Resolution is complete – Provided a complete search method is used: if a proof exists, it will be found – Note: you must know what you’re trying to prove in order to prove it! • Resolution is exponential – The number of clauses that we must search grows exponentially…
  • 47. Horn Clauses • A Horn clause is a CNF clause with at most one positive literal (a definite clause has exactly one) – The positive literal is called the head – The negative literals are called the body – Prolog: head :- body1, body2, body3 … – English: “To prove the head, prove body1, …” – Implication: If (body1, body2 …) then head • Horn clauses form the basis of forward and backward chaining • The Prolog language is based on Horn clauses • Deciding entailment with Horn clauses is linear in the size of the knowledge base
  • 48. Reasoning with Horn Clauses • Forward Chaining – For each new piece of data, generate all new facts, until the desired fact is generated – Data-directed reasoning • Backward Chaining – To prove the goal, find a clause that contains the goal as its head, and prove the body recursively – (Backtrack when you choose the wrong clause) – Goal-directed reasoning
  • 49. Forward Chaining • Fire any rule whose premises are satisfied in the KB • Add its conclusion to the KB, until the query is found
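  This loop is the standard PL-FC-Entails algorithm: keep a count of unsatisfied premises per rule and fire a rule when its count reaches zero. A sketch; the demo rule set mirrors the textbook's forward-chaining example, which the slides show only as figures (an assumption on my part):

  ```python
  from collections import deque

  def fc_entails(rules, facts, query):
      """Forward chaining over Horn rules given as (premise-set, conclusion)."""
      count = {i: len(prem) for i, (prem, _) in enumerate(rules)}
      inferred = set()
      agenda = deque(facts)                 # symbols known true, not yet processed
      while agenda:
          p = agenda.popleft()
          if p == query:
              return True
          if p in inferred:
              continue
          inferred.add(p)
          for i, (prem, concl) in enumerate(rules):
              if p in prem:
                  count[i] -= 1             # one fewer unsatisfied premise
                  if count[i] == 0:
                      agenda.append(concl)  # rule fires: conclusion now known
      return False

  # P=>Q, (L&M)=>P, (B&L)=>M, (A&P)=>L, (A&B)=>L, with facts A and B
  rules = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
  print(fc_entails(rules, ["A", "B"], "Q"))  # True
  ```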
  • 50. Forward Chaining • AND-OR Graph – multiple links joined by an arc indicate conjunction – every link must be proved – multiple links without an arc indicate disjunction – any link can be proved
  • 51–58. Forward Chaining (worked example, figures only)
  • 59. Backward Chaining • Idea: work backwards from the query q: – To prove q by BC, • Check if q is known already, or • Prove by BC all premises of some rule concluding q • Avoid loops – Check if new subgoal is already on the goal stack • Avoid repeated work: check if new subgoal – Has already been proved true, or – Has already failed
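  These steps translate almost directly into a recursive procedure; the goal stack that blocks loops is passed down as a set. A sketch (memoizing already-proved and already-failed subgoals, as the slide suggests, is omitted for brevity):

  ```python
  def bc_entails(rules, facts, query, stack=frozenset()):
      """Prove query by proving all premises of some rule that concludes it."""
      if query in facts:                    # already known
          return True
      if query in stack:                    # loop: this goal is already pending
          return False
      for prem, concl in rules:
          if concl == query and all(
                  bc_entails(rules, facts, p, stack | {query}) for p in prem):
              return True                   # every premise proved recursively
      return False                          # backtracked through every rule

  # Same Horn rules as in the forward-chaining example
  rules = [({"P"}, "Q"), ({"L", "M"}, "P"), ({"B", "L"}, "M"),
           ({"A", "P"}, "L"), ({"A", "B"}, "L")]
  print(bc_entails(rules, {"A", "B"}, "Q"))  # True
  ```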
  • 60–70. Backward Chaining (worked example, figures only)
  • 71. Forward Chaining vs. Backward Chaining • Forward Chaining is data driven – Automatic, unconscious processing – E.g. object recognition, routine decisions – May do lots of work that is irrelevant to the goal • Backward Chaining is goal driven – Appropriate for problem solving – E.g. “Where are my keys?”, “How do I start the car?” • The complexity of BC can be much less than linear in the size of the KB

Editor's notes

  1. Skipping 7.6+