Unit 4
NATURAL LANGUAGE PROCESSING AND PLANNING
Syllabus
• Overview of NLP tasks,
• Parsing,
• Machine translation,
• Components of Planning System,
• Planning agent,
• State-Goal & Action Representation,
• Forward planning,
• backward chaining,
• Planning example
– partial-order planner,
– Block world.
Natural language
• Natural languages are the languages that humans and other living creatures use for communication.
• A machine is considered to be really intelligent only when it can understand, interpret, and speak a natural language.
• The ability to understand, interpret and communicate through natural language is a very important criterion of intelligent behavior.
Why Natural language processing?
• Huge amount of data
– Internet: at least 2.5 billion pages
• Applications for processing large amounts of texts.
– Classify text into categories
– Index and search large texts
– Automatic translation
– Speech understanding: Understanding phone conversation
– Information extraction: Extract useful information from
resumes
– Automatic summarization
– Question answering
– Knowledge acquisition: extracting knowledge from experts
– Text generation / dialogs
• All of these require natural language expertise.
Methods of natural language processing
i. Keyword analysis or pattern-matching technique
ii. Syntactic analysis
Pattern matching technique
• Early NLP programs adopted this method. In this technique, the system scans the input sentence for selected keywords and, once one is encountered, responds with a built-in reply.
Flowchart of Pattern matching technique
Syntactic analysis
• This method examines the structure of a sentence and performs a detailed analysis of the sentence and the semantics of the statement.
• In order to perform this, the system is expected to have thorough knowledge of the grammar of the language.
• The basic unit of any language is the sentence, made up of a group of words, each having its own meaning and linked together to present an idea or thought.
• Apart from having meanings, words fall under categories called parts of speech.
• In English, there are eight different parts of speech: nouns, pronouns, adjectives, verbs, adverbs, prepositions, conjunctions and interjections.
Syntactic analysis
• In English, a sentence S is made up of a noun phrase (NP) and a verb phrase (VP), i.e.
S = NP + VP
A noun phrase (NP) normally has an article or determiner (D), optionally an adjective (ADJ), and a noun (N), i.e.
NP = D + ADJ + N
A noun phrase may also contain a prepositional phrase (PP), which has a preposition (P), a determiner (D) and a noun (N), i.e.
PP = P + D + N
The verb phrase (VP) has a verb (V) and the object of the verb. The object of the verb may be a noun (N) with its determiner (D), i.e.
VP = V + D + N
These are some of the rules of English grammar that help one construct a small parser for NLP.
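These rules are enough to sketch a tiny parser. Below is a minimal sketch using NLTK's chart parser; the grammar follows the S/NP/VP/PP rules above, while the vocabulary is illustrative and not taken from the slides.

```python
import nltk

# Toy grammar following the rules above (S = NP + VP, NP = D (+ ADJ) + N,
# PP = P + D + N, VP = V + D + N); the word lists are made-up examples.
grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> D N | D ADJ N
VP -> V NP | V NP PP
PP -> P NP
D -> 'the' | 'a' | 'an'
ADJ -> 'green'
N -> 'boy' | 'apple' | 'cow' | 'grass' | 'bench'
V -> 'ate' | 'munched' | 'loves'
P -> 'on'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the boy ate an apple".split()):
    tree.pretty_print()  # draws the parse tree for the sentence
```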
Areas of computer science research
Stages/Phases/Tasks of NLP
• Phonological
• Morphological and lexical
• Syntactic
• Semantic
• Discourse integration
• Pragmatic
[Figure: the NLP pipeline. The surface form from the user passes through Morphological Analysis (producing stems, using the lexicon), Syntactic Analysis (producing a parse tree), Semantic Analysis (producing an internal representation), Discourse Analysis (resolving references), and Pragmatic Analysis (performing the action).]
Phases of NLP
• Phonological: This is knowledge which relates sounds to the words we recognize. A phoneme is the smallest unit of sound. Phones are aggregated into word sounds.
• Morphological: This is lexical knowledge which relates to word constructions from basic units called morphemes. A morpheme is the smallest unit of meaning.
– for example, the construction of friendly from the root friend and the suffix ly.
• Syntactic: This knowledge relates to how words are put together, or structured, to form grammatically correct sentences in the language. (It is done using parsing.)
Level of Knowledge used in Language
Understanding
• Semantic: This knowledge is concerned with the meanings of words and phrases and how they combine to form sentence meanings.
• Discourse integration: The meaning of an individual sentence may depend on the sentences that precede it and may influence the meaning of the sentences that follow it.
• Pragmatic: This is high-level knowledge which relates to the use of sentences in different contexts and how the context affects the meaning of the sentences.
Example: "I want to print Ali's .init file"
Surface form → stems (after morphological analysis and lexicon lookup):
I (pronoun), want (verb), to (prep / infinitive), print (verb), Ali (noun), 's (possessive), .init (adj), file (noun / verb)
[Figure: syntactic analysis builds a parse tree over the stems: S → PRO(I) VP(V(want) S(PRO(I) VP(V(print) NP(ADJ(Ali's) ADJ(.init) N(file)))))]
Issues in syntax
• By parsing we understand the role of each of these words.
• It helps in figuring out (automatically) questions like who did what and when.
• Anaphora resolution:
"The dog entered my room. It scared me."
The system must understand that "it" refers to "the dog".
• Preposition attachment:
"I saw the man in the park with a telescope." (ambiguity)
Parse tree examples
• Mohan slept on the bench.
• The boy ate an apple.
• The green cow munched the grass.
• Bill loves the dog.
[Figures: parse trees for each sentence]
[Figure: from the parse tree of "I want to print Ali's .init file", semantic analysis builds a semantic net: want(who: I, what: print(who: I, what: file(whose: Ali's, type: .init)))]
Issues in semantics
• Understand language! How?
"plant" = industrial plant
"plant" = living organism
Words are ambiguous.
• Importance of semantics?
– Machine translation: wrong translations
– Information retrieval: wrong information
– Anaphora resolution: wrong referents
Issues in semantics
• How to learn the meaning of words?
• From dictionaries:
– plant, works, industrial plant (building for carrying on industrial labor: "they built a large plant to manufacture automobiles")
– plant, flora, plant life (a living organism lacking the power of locomotion)
• Examples:
– They are producing about 1000 automobiles in the new plant.
– The sea flora consists of 1000 different plant species.
– The plant was close to the farm of animals.
Issues in semantics
• Learn from annotated examples:
• Assume 100 examples containing "plant", previously tagged by a human.
• Train a learning algorithm on them.
• Precision is typically in the range 60%–70%.
• How to choose the learning algorithm?
• How to obtain the 100 tagged examples?
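As a sketch of the approach just described, the snippet below trains a simple bag-of-words classifier on a handful of hand-tagged contexts of "plant". The tiny dataset and the choice of scikit-learn's naive Bayes are illustrative assumptions, not part of the slides.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical hand-tagged corpus: each context of "plant" is labeled with
# the intended sense, as a human annotator would do.
contexts = [
    "they built a large plant to manufacture automobiles",
    "the workers went on strike at the chemical plant",
    "the sea flora consists of 1000 different plant species",
    "water the plant twice a week so it does not wilt",
]
senses = ["factory", "factory", "flora", "flora"]

# Bag-of-words features over the surrounding context words.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(contexts)

# Train a simple classifier on the tagged examples.
clf = MultinomialNB().fit(X, senses)

# Disambiguate a new (genuinely ambiguous) occurrence of "plant".
test = ["the plant was close to the farm"]
print(clf.predict(vectorizer.transform(test)))
```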
[Figure: the same semantic net for "I want to print Ali's .init file", now annotated with discourse and pragmatic questions]
• Discourse: to whom does the pronoun "I" refer? To whom does the proper noun "Ali" refer? What are the files to be printed?
• Pragmatic: execute the command
lpr /ali/stuff.init
Prepositional phrase attachment (ambiguity):
"The boy saw the man on the mountain with a telescope."
Parsing
• Parsing is the process of analyzing a sentence by taking it apart word by word and determining its structure from its constituent parts and subparts.
• Parsing is the act of analyzing the grammaticality of an utterance according to some specific grammar:
– we read the words in some order (from left to right, from right to left, or in random order) and analyze them one by one.
• Each parse is a different method of analyzing some target sentence according to some specified grammar.
Parsing …
• Example: we have two sentences:
– “Have students in section 2 of Computer Science 203
take the exam.”
– “Have students in section 2 of Computer Science 203
taken the exam?”
• The first ten words, "Have students in section 2 of Computer Science 203", are exactly the same, although the meanings of the two sentences are completely different.
• If an incorrect guess is made, we can still reuse the analysis of the first ten words when we backtrack:
– this requires a lot less work.
Parsing Process
[Figure: input string → parser (consulting a lexicon) → output representation structure]
Lexicon
• The lexicon is a dictionary of words, where each word contains some syntactic, semantic and possibly some pragmatic information.
• The information in the lexicon is needed to help determine the function and meanings of the words in a sentence.
• Each entry in the lexicon will contain a root word, called the head.
Typical entries in a Lexicon
Word      Type               Features
a         Determiner         {3s}
be        Verb               Trans: intransitive
boy       Noun               {3s}
can       Verb               {1s, 2s, 3s, 1p, 2p, 3p}
carried   Verb               Form: past, past participle
orange    Adjective / Noun   {3s}
we        Pronoun            {1p}
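One minimal way to realize such a lexicon in code is a mapping from each head word to a list of entries, so that an ambiguous word like "orange" carries several entries. The representation below is a hypothetical sketch mirroring the table, not a standard format.

```python
# Each head word maps to a list of lexical entries (type plus features).
lexicon = {
    "a":       [{"type": "Determiner", "features": {"3s"}}],
    "be":      [{"type": "Verb", "trans": "intransitive"}],
    "boy":     [{"type": "Noun", "features": {"3s"}}],
    "can":     [{"type": "Verb", "features": {"1s", "2s", "3s", "1p", "2p", "3p"}}],
    "carried": [{"type": "Verb", "form": {"past", "past participle"}}],
    "orange":  [{"type": "Adjective"}, {"type": "Noun", "features": {"3s"}}],
    "we":      [{"type": "Pronoun", "features": {"1p"}}],
}

def lookup(word):
    """Return all lexical entries for a word, or [] if it is unknown."""
    return lexicon.get(word.lower(), [])

print(lookup("orange"))  # an ambiguous word has two entries
```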
Scaling Up the Lexicon
• In real text-understanding systems, the input is a sequence
of characters from which the words must be extracted
• A four-step process for doing this consists of:
– tokenization
– morphological analysis
– dictionary lookup
– error recovery
• Since many natural languages are fundamentally different,
these steps would be much harder to apply to some
languages than others
Scaling Up the Lexicon (Cont)
• a) Tokenization
– the process of dividing the input into distinct tokens -- words and punctuation marks.
– this is not easy in some languages, like Japanese, where there are no spaces between words.
– the process is much easier in English, although it is not trivial by any means.
– examples of complications include: a hyphen at the end of a line may be an interword or an intraword dash.
– tokenization routines are designed to be fast, with the idea that as long as they are consistent in breaking up the input text into tokens, any problems can always be handled at some later stage of processing.
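A minimal regular-expression tokenizer in the spirit of this step might look as follows; note how it already makes a debatable decision about ".init", illustrating why tokenization is not trivial. The pattern is an assumption for illustration, not a production rule.

```python
import re

def tokenize(text):
    # Words (allowing an apostrophe clitic like "Ali's") or single
    # punctuation marks each become a separate token.
    return re.findall(r"\w+(?:'\w+)?|[^\w\s]", text)

print(tokenize("I want to print Ali's .init file."))
# ['I', 'want', 'to', 'print', "Ali's", '.', 'init', 'file', '.']
```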
Scaling Up the Lexicon (Cont)
• b) Morphological Analysis
– the process of describing a word in terms of the prefixes, suffixes and root forms that comprise it.
– there are three ways that words can be composed:
• Inflectional Morphology
– reflects the changes to a word that are needed in a particular grammatical context (Ex: most nouns take the suffix "s" when they are plural)
• Derivational Morphology
– derives a new word from another word that is usually of a different category (Ex: the noun "softness" is derived from the adjective "soft")
• Compounding
– takes two words and puts them together (Ex: "bookkeeper" is a compound of "book" and "keeper")
– compounding is used a lot in morphologically complex languages such as German, Finnish, Turkish, Inuit, and Yupik.
Scaling Up the Lexicon (Cont)
• c) Dictionary Lookup
– is performed on every token (except for special ones such as punctuation).
– the task is to find the word in the dictionary and return its definition.
– two ways to do dictionary lookup:
• store morphologically complex words first:
– complex words are written to the dictionary and then looked up when needed
• do morphological analysis first:
– process the word before looking anything up
– Ex: "walked" -- strip off "ed" and look up "walk"
» if the verb is not marked as irregular, then "walked" would be the past tense of "walk"
– any implementation of the table abstract data type can serve as a dictionary: hash tables, binary trees, B-trees, and tries.
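The "morphological analysis first" strategy can be sketched as a suffix-stripping loop over a toy dictionary; both the dictionary and the suffix rules below are illustrative assumptions.

```python
# Toy stem dictionary and suffix rules (hypothetical, for illustration).
dictionary = {"walk": "verb", "friend": "noun", "soft": "adjective"}
suffix_rules = [("ed", "past tense of"), ("s", "plural/3sg of"),
                ("ness", "noun derived from"), ("ly", "adverb/adjective from")]

def analyze(word):
    if word in dictionary:                      # direct hit: no stripping needed
        return f"{word}: {dictionary[word]}"
    for suffix, gloss in suffix_rules:          # strip a suffix, look up the stem
        stem = word[: -len(suffix)]
        if word.endswith(suffix) and stem in dictionary:
            return f"{word}: {gloss} '{stem}' ({dictionary[stem]})"
    return f"{word}: not found (error recovery needed)"

print(analyze("walked"))    # past tense of 'walk' (verb)
print(analyze("softness"))  # noun derived from 'soft' (adjective)
print(analyze("smarply"))   # not found -> error recovery
```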
Scaling Up the Lexicon (Cont)
• d) Error Recovery
– is undertaken when a word is not found in the dictionary
– there are four types of error recovery:
• morphological rules can guess at the word’s syntactic class
– Ex: “smarply” is not in the dictionary but it is probably an
adverb
• capitalization is a clue that a word is a proper name
• other specialized formats denote dates, times, social security numbers, etc.
• spelling correction routines can be used to find a word in the dictionary that is close to the input word
– there are two popular models for defining "closeness" in words:
» the Letter-Based Model
» the Sound-Based Model
Parsing Approach
• Top Down Parsing
– Begin with the start symbol and apply the grammar
rules forward until the symbols at the terminals of the
tree correspond to the components of the sentence
being parsed.
• Bottom Up Parsing
– Begin with the sentence to be parsed and apply the
grammar rules backward until a single tree whose
terminals are the words of the sentence and whose top
node is the start symbol has been produced.
Parser classification
• Parsers can be classified into two categories depending on the parsing strategy employed:
– deterministic
– nondeterministic
Deterministic parser
• A deterministic parser permits only one choice (arc) for each word category.
• Each arc has a different test condition.
• If an incorrect choice is accepted in some state, the parse will fail, since the parser cannot backtrack to an alternative choice.
A Deterministic network
[Figure: a deterministic transition network with states N1..N7; each arc out of a state is labeled with a distinct test (article, noun, verb, aux verb).]
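A deterministic network can be sketched as a transition table in which a (state, word category) pair selects at most one next state. The arcs below are illustrative, since the original figure's exact arcs are not fully recoverable from the text.

```python
# Deterministic transition network sketch: from each state, the category of
# the next word selects at most one arc, so no backtracking is possible.
arcs = {
    ("N1", "article"): "N2",
    ("N2", "noun"): "N3",
    ("N3", "verb"): "N4",
    ("N4", "article"): "N5",
    ("N5", "noun"): "N6",
}

def accept(categories, start="N1", finals=("N4", "N6")):
    state = start
    for cat in categories:
        state = arcs.get((state, cat))
        if state is None:
            return False  # a wrong arc means the parse fails outright
    return state in finals

print(accept(["article", "noun", "verb", "article", "noun"]))  # True
```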
A Nondeterministic Network
• Nondeterministic parsers permit different arcs to be labeled
with the same test.
• Consequently, the next test from any given state may not be
uniquely determined by the state and the current input word.
A nondeterministic network
[Figure: a nondeterministic transition network with states N1..N6; two arcs leaving the same state can carry the same label (noun, verb), so the next move is not uniquely determined.]
Machine translation
Machine Translation (MT)
• It refers to the process of automated translation of text from one language to another.
• Achieving human-level translation quality is a holy grail in NLP, primarily because generating good translations requires a very good understanding of the source document.
• Existing MT systems can generate rough translations that frequently at least convey the gist (idea) of a document.
• High-quality translations are possible when the system is specialized to a narrow domain, e.g. weather forecasts.
• Some MT systems are used in computer-aided translation, in which a bilingual human post-edits the output to produce more readable, accurate translations.
Machine Translation
• Automatically translate one natural language into another. For example:
Mary didn’t slap the green witch.
Maria no dió una bofetada a la bruja verde.
Ambiguity Resolution is Required for Translation
• Syntactic and semantic ambiguities must be properly resolved for correct translation.
Word Alignment
• Shows the mapping between words in one language and the other:
Mary didn’t slap the green witch.
Maria no dió una bofetada a la bruja verde.
Approaches of MT
• There are several challenges in the way of developing successful MT systems.
• Two important directions in the development of MT systems are:
– Rule based MT (RBMT)
– Corpus based MT (CBMT)
Approaches of MT
• RBMT is generated on the basis of morphological, syntactic, and semantic analysis of both the source and the target languages.
• Corpus-based machine translation (CBMT) is generated from the analysis of bilingual text corpora.
• The former belongs to the domain of rationalism and the latter to empiricism.
• Given large-scale and fine-grained linguistic rules, RBMT systems are capable of producing translations of reasonable quality, but constructing such a system is very time-consuming and labor-intensive because the linguistic resources need to be hand-crafted; this is frequently referred to as the knowledge acquisition problem.
Approaches of MT
• Moreover, it is very difficult to correct the input or add new rules to the system to generate a translation.
• By contrast, adding more examples to a CBMT system can improve the system, since it is based on data, though the accumulation and management of the huge bilingual data corpus can also be costly.
Rule based MT
Types of Rule based MT:
– Direct MT (DMT)
– Transfer MT (TMT)
– Interlingua MT (IMT)
Direct MT (DMT)
• DMT is a word-by-word translation approach with some simple grammatical adjustments.
• A DMT system is designed for a specific source and target language pair, and its translation unit is usually a word.
Transfer RBMT Systems
• A transfer-based machine translation system involves three stages.
• The first stage analyzes the source text and converts it into abstract representations; the second stage converts those into equivalent target-language-oriented representations; and the third generates the final target text.
• The representation is specific to each language pair.
Interlingua MT (IMT)
• IMT operates in two phases: analyzing the SL (source language) text into an abstract, universal, language-independent representation of meaning, i.e. the interlingua (the phase of analysis); and generating this meaning using the lexical units and the syntactic constructions of the TL (target language) (the phase of synthesis).
• Though no transfer component has to be created for each language pair when adopting the IMT approach, defining an interlingua is very difficult and maybe even impossible for a wider domain.
An example can illustrate the general frame of RBMT
• "A girl eats an apple." Source language = English; demanded target language = German.
• Minimally, to get a German translation of this English sentence one needs:
– A dictionary that will map each English word to an appropriate German word.
– Rules representing regular English sentence structure.
– Rules representing regular German sentence structure.
– Finally, rules according to which one can relate these two structures together.
Stages of translation
• 1st: getting basic part-of-speech information for each source word:
a = indef. article; girl = noun; eats = verb; an = indef. article; apple = noun
• 2nd: getting syntactic information about the verb "to eat":
NP-eat-NP; here: eat -- Present Simple, 3rd Person Singular, Active Voice
• 3rd: parsing the source sentence:
(NP an apple) = the object of eat
Often only partial parsing is sufficient to get to the syntactic structure of the source sentence and to map it onto the structure of the target sentence.
Stages of translation
• 4th: translate English words into German
• a (category = indef.article) => ein (category = indef.article)
• girl (category = noun) => Mädchen (category = noun)
• eat (category = verb) => essen (category = verb)
• an (category = indef. article) => ein (category = indef.article)
• apple (category = noun) => Apfel (category = noun)
• 5th: Mapping dictionary entries into appropriate
inflected forms (final generation):
• A girl eats an apple. => Ein Mädchen isst einen Apfel.
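The five stages can be caricatured in a few lines of code for this one sentence; the dictionary and the hard-coded inflection rules below are illustrative assumptions standing in for a real analyser and generator.

```python
# Toy direct/rule-based translation of "A girl eats an apple" into German.
bilingual_dict = {"a": "ein", "an": "ein", "girl": "Mädchen",
                  "eats": "essen", "apple": "Apfel"}

def translate(sentence):
    tokens = sentence.lower().rstrip(".").split()
    # Stage 4: word-for-word dictionary mapping.
    german = [bilingual_dict[t] for t in tokens]
    # Stage 5 (final generation), hard-coded for this one sentence:
    german[2] = "isst"    # essen -> 3rd person singular present
    german[3] = "einen"   # ein -> masculine accusative before "Apfel"
    german[0] = german[0].capitalize()
    return " ".join(german) + "."

print(translate("A girl eats an apple."))  # Ein Mädchen isst einen Apfel.
```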
The RBMT system contains
i. a SL morphological analyser - analyses a source language word and provides the morphological information;
ii. a SL parser - a syntax analyser which analyses source language sentences;
iii. a translator - used to translate a source language word into the target language;
iv. a TL morphological generator - generates appropriate target language words for the given grammatical information;
v. a TL parser - a composer of suitable target language sentences;
vi. several dictionaries - more specifically, a minimum of three dictionaries:
a) a SL dictionary - needed by the source language morphological analyser for morphological analysis,
b) a bilingual dictionary - used by the translator to translate source language words into target language words,
c) a TL dictionary - needed by the target language morphological generator to generate target language words.
The RBMT system makes use of the
following:
i. a Source Grammar for the input language which builds syntactic
constructions from input sentences;
ii. a Source Lexicon which captures all of the allowable vocabulary in the
domain;
iii. Source Mapping Rules which indicate how syntactic heads and
grammatical functions in the source language are mapped onto domain
concepts and semantic roles in the interlingua;
iv. a Domain Model/Ontology which defines the classes of domain
concepts and restricts the fillers of semantic roles for each class;
v. Target Mapping Rules which indicate how domain concepts and
semantic roles in the interlingua are mapped onto syntactic heads and
grammatical functions in the target language;
vi. a Target Lexicon which contains appropriate target lexemes for each
domain concept;
vii. a Target Grammar for the target language which realizes target syntactic
constructions as linearized output sentences.[4]
Advantages
i. No bilingual texts are required: This makes it possible to create
translation systems for languages that have no texts in common, or even
no digitized data whatsoever.
ii. Domain independent: Rules are usually written in a domain
independent manner, so the vast majority of rules will "just work" in
every domain, and only a few specific cases per domain may need rules
written for them.
iii. No quality ceiling: Every error can be corrected with a targeted rule,
even if the trigger case is extremely rare. This is in contrast to statistical
systems where infrequent forms will be washed away by default.
iv. Total control: Because all rules are hand-written, you can easily debug a
rule based system to see exactly where a given error enters the system,
and why.
v. Reusability: Because RBMT systems are generally built from a strong
source language analysis that is fed to a transfer step and target
language generator, the source language analysis and target language
generation parts can be shared between multiple translation systems,
requiring only the transfer step to be specialized. Additionally, source
language analysis for one language can be reused to bootstrap a closely
related language analysis.
Shortcomings
i. Insufficient amount of really good dictionaries.
Building new dictionaries is expensive.
ii. Some linguistic information still needs to be set
manually.
iii. It is hard to deal with rule interactions in big
systems, ambiguity, and idiomatic expressions.
iv. Failure to adapt to new domains. Although
RBMT systems usually provide a mechanism to
create new rules and extend and adapt the
lexicon, changes are usually very costly and the
results, frequently, do not pay off.
Planning
Syllabus
• Components of Planning System,
• Planning agent,
• State-Goal & Action Representation,
• Forward planning,
• backward chaining,
• Planning example
– partial-order planner,
– Block world.
Planning
• Methods that focus on decomposing the original problem into appropriate subparts, and on ways of recording and handling interactions among the subparts as they are detected during the problem-solving process, are often called planning.
• Planning refers to the process of computing several steps of a problem-solving procedure before executing any of them.
Planning
• Definition: Planning is arranging a sequence of actions to achieve a goal.
• It uses core areas of AI like searching and reasoning, and
• it is at the core of areas like NLP, computer vision and robotics.
• Examples: navigation, manoeuvring, language processing (generation).
• Analogy: Kinematics (ME) : Planning (CSE)
Components of a planning system
1. Choose the best rule to apply next based on the best
available heuristic information.
2. Apply the chosen rule to compute the new problem state
that arises from its application.
3. Detect when a solution has been found.
4. Detect dead ends so that they can be abandoned and the
system’s effort directed in more fruitful directions.
5. Detect when an almost correct solution has been found
and employ special techniques to make it totally correct.
1. Choose the rules to apply
• First, isolate a set of differences between the desired goal state and the current state, and then identify those rules that are relevant to reducing those differences.
• If several rules apply, a variety of other heuristic information can be exploited to choose among them.
2. Applying Rules
• In simple systems, applying rules is easy. Each rule simply specifies the problem state that would result from its application.
• In complex systems, we must be able to deal with rules that specify only a small part of the complete problem state.
• One way is to describe, for each action, each of the changes it makes to the state description.
3. Detecting a solution
• A planning system has found a solution to a problem when it has found a sequence of operators that transforms the initial problem state into the goal state.
• One representative formalism for planning systems is predicate logic. Suppose we have the predicate P(x); we can prove P(x) given the assertions that describe the state and the axioms that define the world model.
4. Detecting Dead Ends
• Detecting paths of exploration that can never lead to a solution.
• There is no indication of a goal node.
• If the search process is reasoning forward from the initial state, it can prune any path that leads to a state from which the goal state cannot be reached.
• If the search process is reasoning backward, it can likewise terminate a path when it is sure that the initial state cannot be reached from it.
5. Repairing an Almost Correct
Solution
• Assume that the problems are completely decomposable, proceed to solve the sub-problems separately, and then check that, when the sub-solutions are combined, they do in fact yield a solution to the original problem.
A Planning agent
• The purpose of planning is to find a sequence of actions that achieves a given goal when performed starting in a given state. In other words, given a set of operator instances, an initial state description, and a goal state description or predicate, the planning agent computes a plan.
• Problem-solving agents are able to plan ahead: to consider the consequences of sequences of actions before acting.
• Knowledge-based agents can select actions based on explicit, logical representations of the current state and the effects of actions. This allows the agent to succeed in complex, inaccessible environments that are too difficult for a problem-solving agent.
• Problem-solving agents + knowledge-based agents = planning agents
A Planning agent
• Algorithm for a simple planning agent
i. Generate a goal to achieve
ii. Construct a plan to achieve goal from current state
iii. Execute plan until finished
iv. Begin again with new goal
The planning agent first generates a goal to achieve,
and then constructs a plan to achieve it from the
current state . Once it has a plan, it keeps executing
it until the plan is finished, then begins again with a
new goal.
Planning Languages
• Languages must represent:
– States
– Goals
– Actions
State Representation
• Planners decompose the world into logical conditions and represent a state as a conjunction of positive literals.
• Using:
– Logical propositions: Poor ∧ Unknown
– FOL literals: At(Plane1, OMA) ∧ At(Plane2, JFK)
• FOL literals must be ground and function-free:
– Not allowed: At(x, y) or At(Father(Fred), Sydney)
• Closed World Assumption:
– Whatever is not stated is assumed false.
Goal Representation
• A goal is a partially specified state, represented as a conjunction of positive ground literals.
– Example: Rich ∧ Famous ∧ Miserable satisfies the goal Rich ∧ Famous.
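Because goals are conjunctions of positive ground literals, the goal test reduces to a subset check over the state's literal set, for example:

```python
# Under the closed-world assumption, a state (a set of positive literals)
# satisfies a goal iff every goal literal is present in the state.
state = {"Rich", "Famous", "Miserable"}
goal = {"Rich", "Famous"}
print(goal <= state)  # True: the state satisfies the goal
```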
Action Representation
• An action is specified in terms of the preconditions that must hold before it can be executed and the effects that ensue when it is executed.
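A minimal STRIPS-style sketch of this representation in Python (the class below is an illustrative assumption reused in later examples, not a fixed API): an action is applicable when its preconditions hold, and executing it removes the delete list and adds the add list.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A STRIPS-style action: preconditions plus add and delete lists."""
    name: str
    preconditions: frozenset
    add_list: frozenset
    delete_list: frozenset

    def applicable(self, state):
        # Every precondition literal must hold in the current state.
        return self.preconditions <= state

    def apply(self, state):
        # Successor state: drop deleted literals, then add new ones.
        return (state - self.delete_list) | self.add_list
```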
Planning algorithms
• Planning algorithms are search procedures.
• The most straightforward approach:
• Which space to search?
– State-space search
• Each node is a state of the KB
• Plan = path through the states
Planning algorithms
• The most straightforward approach for a planning algorithm is to use state-space search. Because the description of actions in a planning problem specifies both preconditions and effects, it is possible to search in either
– the forward direction from the initial state, or
– backward from the goal.
Forward state-space search
• Planning with forward state-space search is sometimes called progression planning, because it moves in the forward direction.
• Forward state-space search refers to search algorithms that start with the given state as the start state, generate the set of successor states, and search through them, generating more successors, until they find a state that satisfies the goal conditions.
• The initial state of the search is the initial state of the planning problem. In general, each state is a set of positive literals.
• An action is applicable in a state if all its preconditions are satisfied. The successor state is built by updating the KB with the add and delete lists.
• The goal test checks whether the state satisfies the goal of the problem.
Forward search in the Blocks world
[Figure: part of the forward state-space search tree for a blocks-world problem]
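Progression planning can be sketched as breadth-first search over states, reusing the Action sketch from earlier (state sets must be hashable, hence frozensets):

```python
from collections import deque

def forward_search(initial_state, goal, actions):
    """Breadth-first forward (progression) planning over literal sets."""
    frontier = deque([(initial_state, [])])
    visited = {initial_state}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                  # goal test: all goal literals hold
            return plan
        for a in actions:                  # expand with each applicable action
            if a.applicable(state):
                nxt = a.apply(state)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [a.name]))
    return None                            # no plan exists
```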
Forward state-space search
• Advantages
– No functions in the declarations of goals ⇒ the search space is finite
– Sound
– Complete (if the algorithm used to do the search is complete)
• Limitations
– Considers irrelevant actions ⇒ not efficient
– Needs a heuristic or pruning procedure
Backward state-space search
• It is also called regression.
• Initial state: the goal state of the problem.
• Actions:
– Choose an action that
• is relevant: has one of the goal literals in its effect set
• is consistent: does not negate another goal literal
– Construct the new search state:
• remove all positive effects of A that appear in the goal
• add all preconditions of A, unless they already appear
• Goal test: the state is the initial world state.
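A single regression step following these rules can be sketched as follows, again using the earlier Action sketch:

```python
def regress(goal, action):
    """Return the predecessor subgoal of `goal` through `action`, or None."""
    if not (goal & action.add_list):   # relevant: achieves some goal literal
        return None
    if goal & action.delete_list:      # consistent: must not negate a goal
        return None
    # Remove the effects the action supplies, then add its preconditions.
    return frozenset((goal - action.add_list) | action.preconditions)
```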
Backward state-space search
• Possible because of the STRIPS-like language:
– Goals are listed
– Predecessors are listed for each action/state
• Advantages
– Considers only relevant actions ⇒ much smaller branching factor
– There are ways to reduce the branching factor even more
• Limitations
– Still needs a heuristic to be more efficient
Heuristics for state-space search
• Valid both for forward and backward searches
• Valid for many planning problems
• Possible approaches
– Divide and conquer
– Derive a relaxed problem
– Combine both
Heuristics for state-space search
• Divide and conquer
– Subgoal independence assumption
• What if there are negative interactions between the
subgoals of the problems?
• What if there are redundant actions in the subgoals?
• Derive a relaxed problem
– Remove all preconditions from the actions
– Remove all negative effects from the actions
(empty delete list)
Block World Problem
• There is a flat surface on which blocks can be placed.
• There are a number of square blocks, all the same size.
• They can be stacked one upon the other.
• There is a robot arm that can manipulate the blocks.
Actions of the robot arm
• UNSTACK(A,B): pick up block A from its current position on block B. The arm must be empty and block A must have no blocks on top of it.
• STACK(A,B): place block A on block B. The arm must already be holding A, and the surface of B must be clear.
• PICKUP(A): pick up block A from the table and hold it. The arm must be empty and there must be nothing on top of block A.
• PUTDOWN(A): put block A down on the table. The arm must have been holding block A.
• Notice that the robot arm can hold only one block at a time.
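Written with the Action sketch from the planning section, the four operators become ground STRIPS rules; the precondition, add and delete lists below follow the textual descriptions above, with the usual CLEAR/ARMEMPTY bookkeeping (a sketch, not a definitive encoding).

```python
def unstack(x, y):
    return Action(f"UNSTACK({x},{y})",
        preconditions=frozenset({f"ON({x},{y})", f"CLEAR({x})", "ARMEMPTY"}),
        add_list=frozenset({f"HOLDING({x})", f"CLEAR({y})"}),
        delete_list=frozenset({f"ON({x},{y})", "ARMEMPTY"}))

def stack(x, y):
    return Action(f"STACK({x},{y})",
        preconditions=frozenset({f"CLEAR({y})", f"HOLDING({x})"}),
        add_list=frozenset({f"ON({x},{y})", "ARMEMPTY"}),
        delete_list=frozenset({f"CLEAR({y})", f"HOLDING({x})"}))

def pickup(x):
    return Action(f"PICKUP({x})",
        preconditions=frozenset({f"CLEAR({x})", f"ONTABLE({x})", "ARMEMPTY"}),
        add_list=frozenset({f"HOLDING({x})"}),
        delete_list=frozenset({f"ONTABLE({x})", "ARMEMPTY"}))

def putdown(x):
    return Action(f"PUTDOWN({x})",
        preconditions=frozenset({f"HOLDING({x})"}),
        add_list=frozenset({f"ONTABLE({x})", "ARMEMPTY"}),
        delete_list=frozenset({f"HOLDING({x})"}))
```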
Predicates
• In order to specify both the conditions under which an
operation may be performed and the results of performing
it, we need the following predicates:
• ON(A,B): block A is on block B.
• ONTABLE(A): block A is on the table.
• CLEAR(A): there is nothing on top of block A.
• HOLDING(A): the arm is holding block A
• ARMEMPTY: the arm is holding nothing
[Figure: block A on block B]
A simple Search Tree
[Figure: starting with A on B, applying UNSTACK(A,B) and then PUTDOWN(A) yields the state ONTABLE(B) ^ CLEAR(A) ^ CLEAR(B) ^ ONTABLE(A)]
Simple position
• ON(A,B) ^ ONTABLE(B) ^ CLEAR(A)
If we execute UNSTACK(A,B) in this state, then:
HOLDING(A) ^ CLEAR(B)
[Figure: block A on block B]
Goal Stack Planning
• It is the earliest technique to be developed for solving compound goals that may interact.
• This was the approach used by STRIPS.
• In this method, the problem solver makes use of a single stack that contains both goals and operators that have been proposed to satisfy those goals.
• The problem solver also relies on a database that describes the current situation, and on a set of operators described by PRECONDITION, ADD, and DELETE lists.
• The GSP method attacks problems involving conjoined goals by solving the goals one at a time, in order.
• A plan generated by this method contains a sequence of operators for attaining the first goal, followed by a complete sequence for the second goal, etc.
Goal Stack Planning continued….
• At each succeeding step of the problem-solving process, the top goal on the stack is pursued.
• A sequence of operators is applied to the state description, yielding a new description.
• Next, the goal that is then at the top of the stack is explored.
• This process continues until the goal stack is empty.
• Then, as one last check, the original goal is compared to the final state derived from the application of the chosen operators.
Goal Stack Planning: Example
Start: ON(B,A) ^ ONTABLE(A) ^ ONTABLE(C) ^ ONTABLE(D) ^ ARMEMPTY
Goal: ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
Solution
To start with, the goal stack is simply:
• ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
• But we want to separate this problem into four sub-problems, one for each component of the original goal.
• Two of the sub-problems, ONTABLE(A) and ONTABLE(D), are already true in the initial state. So we will work on only the remaining two.
• Depending on the order in which we want to tackle the sub-problems, there are two goal stacks that could be created as our first step, where each line represents one goal on the stack.
Exploring Operators
• Pursuing alternative 1, we check for operators that could cause ON(C,A). Of the four operators, there is only one, STACK. So it yields:
• STACK(C,A)
• ON(B,D)
• ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
• Since the preconditions of STACK(C,A) must be satisfied, we establish them as subgoals:
• CLEAR(A)
• HOLDING(C)
• CLEAR(A) ^ HOLDING(C)
• STACK(C,A)
• ON(B,D)
• ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
• Here we exploit the heuristic that if HOLDING is one of several goals to be achieved at once, it should be tackled last.
Goal stack Planning contd…
• Next we see if CLEAR(A) is true. It is not. The only operator that could make it true is UNSTACK(B,A). This produces the goal stack:
• ON(B,A)
• CLEAR(B)
• ARMEMPTY
• ON(B,A) ^ CLEAR(B) ^ ARMEMPTY
• UNSTACK(B,A)
• HOLDING(C)
• CLEAR(A) ^ HOLDING(C)
• STACK(C,A)
• ON(B,D)
• ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
• When we compare the top element of the goal stack, ON(B,A), to the world model, we see that it is satisfied. So we pop it off and consider the next goal, CLEAR(B).
• It, too, is already true in the world model, although it was not stated explicitly as one of the initial predicates. So we pop it from the stack.
• The third precondition for UNSTACK(B,A) remains. It is ARMEMPTY, and it is also true in the current world model, so it can be popped off the stack.
• The next element on the stack is the combined goal representing all of the preconditions for UNSTACK(B,A). It is satisfied, so the combined goal can be popped off.
• Now the top element of the stack is the operator UNSTACK(B,A). We are now guaranteed that its preconditions are satisfied, so it can be applied to produce a new world model from which the rest of the problem solving can proceed. At this point, the database corresponding to the world model is
• ONTABLE(A) ^ ONTABLE(C) ^ ONTABLE(D) ^ HOLDING(B) ^ CLEAR(A)
Goal stack Planning contd… The goal stack now is:
• HOLDING(C)
• CLEAR(A) ^ HOLDING(C)
• STACK(C,A)
• ON(B,D)
• ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
We now attempt to satisfy the goal HOLDING(C). The preconditions of PICKUP(C), the operator that achieves it, are pushed on first:
• ONTABLE(C)
• CLEAR(C)
• ARMEMPTY
• ONTABLE(C) ^ CLEAR(C) ^ ARMEMPTY
• PICKUP(C)
• CLEAR(A) ^ HOLDING(C)
• STACK(C,A)
• ON(B,D)
• ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
Goal stack Planning contd…
• The top element of the goal stack is ONTABLE(C), which is true, so we pop it. The next element is CLEAR(C), which is also true, so we pop it.
• The remaining precondition of PICKUP(C) is ARMEMPTY, which is not true, since HOLDING(B) is true.
• There are two operators that could be applied to make ARMEMPTY true: STACK(B,x) and PUTDOWN(B).
Choosing Alternative
• We will use alternative 1, so we apply the operator STACK(B,D). This makes the goal stack:
• CLEAR(D)
• HOLDING(B)
• CLEAR(D) ^ HOLDING(B)
• STACK(B,D)
• ONTABLE(C) ^ CLEAR(C) ^ ARMEMPTY
• PICKUP(C)
• CLEAR(A) ^ HOLDING(C)
• STACK(C,A)
• ON(B,D)
• ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
Goal stack Planning contd…
• CLEAR(D) and HOLDING(B) are both true. Now the operation STACK(B,D) can be performed, producing the world model
• ONTABLE(A) ^ ONTABLE(C) ^ ONTABLE(D) ^ ON(B,D) ^ ARMEMPTY
• The preconditions of PICKUP(C) are now satisfied; after it is performed, all of the preconditions of STACK(C,A) are true, so it can be performed too.
• Now we begin work on the second part of the original goal, ON(B,D). But it has already been satisfied.
Complete plan
1. UNSTACK(B,A)
2. STACK(B,D)
3. PICKUP(C)
4. STACK(C,A)
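As a check, the four-step plan can be simulated with the operator sketches defined earlier; CLEAR(B), CLEAR(C) and CLEAR(D) are included in the start state because, as noted above, they are true in the world model even though not stated explicitly.

```python
state = frozenset({"ON(B,A)", "ONTABLE(A)", "ONTABLE(C)", "ONTABLE(D)",
                   "CLEAR(B)", "CLEAR(C)", "CLEAR(D)", "ARMEMPTY"})
plan = [unstack("B", "A"), stack("B", "D"), pickup("C"), stack("C", "A")]

for a in plan:                # execute each step, checking its preconditions
    assert a.applicable(state), f"precondition failure at {a.name}"
    state = a.apply(state)

goal = {"ON(C,A)", "ON(B,D)", "ONTABLE(A)", "ONTABLE(D)"}
print("Goal achieved:", goal <= state)   # Goal achieved: True
```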
Non-linear Planning using
constraint posting
Nonlinear Planning
• A plan generated by goal stack planning contains a sequence of operators for attaining the first goal, followed by a sequence for the second goal.
• Difficult problems cause goal interactions.
• The operators used to solve one subproblem may interfere with the solution to a previous subproblem.
• Most problems require an intertwined plan in which multiple subproblems are worked on simultaneously.
• Such a plan is called a nonlinear plan because it is not composed of a linear sequence of complete subplans.
Example: Sussman anomaly problem
I. It is an anomaly because a linear planner cannot solve the problem.
II. The problem can be solved, but it cannot be attacked by first applying all the operators to achieve one goal and then applying operators to achieve another goal.
III. Begin with a null plan (no operators).
IV. Look at the goal state and find the operators that can achieve it.
V. There are two operators (steps), STACK(A,B) and STACK(B,C), which have postconditions ON(A,B) and ON(B,C).
Non-linear planner
• NOAH
• NONLIN
• MOLGEN
• TWEAK (uses constraint posting as a central technique)
Nonlinear Planning - Constraint
Posting
• The idea of constraint posting is to build up a plan by incrementally
– hypothesizing operators,
– partial orderings between operators, and
– bindings of variables within operators.
• At any given time in the planning process, the solution is a partially ordered, partially instantiated set of operators.
• To generate an actual plan, convert the partial order into a total order.
Partial Ordering
• Any planning algorithm
– that can place two actions into a plan without
specifying which comes first is called a partial-
order planner.
– actions dependent on each other are ordered in
relation to themselves but not necessarily in
relation to other independent actions.
• The solution is represented as a graph of actions, not
a sequence of actions.
Heuristics for Planning using
Constraint Posting ( TWEAK)
1. Step addition – creating new steps for a plan.
2. Promotion – constraining one step to come before another in the final plan.
3. Declobbering – placing one (possibly new) step S2 between two old steps S1 and S3 such that S2 reasserts some precondition of S3 that was negated (or "clobbered") by S1.
4. Simple establishment – assigning a value to a variable, in order to ensure the preconditions of some step.
5. Separation – preventing the assignment of certain values to a variable.
Algorithm
1. Initialize S to be the set of propositions in the goal state.
2. Remove some unachieved proposition P from S.
3. Achieve P by using step addition, promotion, declobbering, simple establishment or separation.
4. Review all the steps in the plan, including any new steps introduced by step addition, to see if any of their preconditions are unachieved.
5. Add to S the new set of unachieved preconditions.
6. If S = ∅, complete the plan by converting the partial order of steps into a total order, instantiate any variables as necessary, and exit.
7. Otherwise go to step 2.
Non-linear plan to solve Sussman
anomaly problem
Pre condition     CLEAR(B)        CLEAR(C)
                  *HOLDING(A)     *HOLDING(B)
_________________________________________________
Operator          STACK(A, B)     STACK(B, C)
_________________________________________________
Post condition    ON(A, B)        ON(B, C)
                  ARMEMPTY        ARMEMPTY
                  ~CLEAR(B)       ~CLEAR(C)
                  ~HOLDING(A)     ~HOLDING(B)
_________________________________________________
• Here unachieved preconditions are marked with *.
• HOLDING is not true in either case, as ARMEMPTY is true initially.
• Deleted postconditions are marked with a ~ symbol.
• Introduce new operators (steps) to achieve these goals.
• This is called operator (step) addition.
• Add a PICKUP operator for both goals.
Non-linear plan to solve Sussman
anomaly problem
Pre condition     *CLEAR(A)       *CLEAR(B)
                  ONTABLE(A)      ONTABLE(B)
                  *ARMEMPTY       *ARMEMPTY
Operator          PICKUP(A)       PICKUP(B)
Post condition    HOLDING(A)      HOLDING(B)
                  ~ONTABLE(A)     ~ONTABLE(B)
                  ~ARMEMPTY       ~ARMEMPTY
                  ~CLEAR(A)       ~CLEAR(B)
_________________________________________________
Pre condition     CLEAR(B)        CLEAR(C)
                  *HOLDING(A)     *HOLDING(B)
Operator          STACK(A, B)     STACK(B, C)
Post condition    ON(A, B)        ON(B, C)
                  ARMEMPTY        ARMEMPTY
                  ~CLEAR(B)       ~CLEAR(C)
                  ~HOLDING(A)     ~HOLDING(B)
Non-linear plan to solve Sussman
anomaly problem
• It is clear that in a final plan, each PICKUP must precede the corresponding STACK operator.
• Introduce the ordering as follows:
– Whenever we introduce an ordering between operators, we post ordering constraints; this is called promotion.
___________________Plan 1____________________
PICKUP(A) → STACK(A, B)
PICKUP(B) → STACK(B, C)
_____________________________________________
• Here we have partially ordered operators and four unachieved preconditions: CLEAR(A), CLEAR(B), and ARMEMPTY on both paths.
– CLEAR(A) is unachieved because C is on A in the initial state.
– CLEAR(B) is also unachieved: even though the top of B is clear in the initial state, there exists an operator STACK(A,B) with postcondition ~CLEAR(B).
Initial State: ON(C,A) ^ ONTABLE(A) ^ ONTABLE(B) ^ ARMEMPTY ^ CLEAR(C) ^ CLEAR(B)
Non-linear plan to solve Sussman
anomaly problem
• If we make sure that PICKUP(B) precedes STACK(A, B), then CLEAR(B) is achieved. So we post the following constraints:
___________________Plan 1____________________
PICKUP(A) → STACK(A, B)
PICKUP(B) → STACK(B, C)
____________________Plan 2___________________
PICKUP(B) → STACK(A, B)
_____________________________________________
• Note that the precondition CLEAR(A) of PICKUP(A) is still unachieved.
– Let us achieve the ARMEMPTY precondition of each PICKUP operator before CLEAR(A).
– The initial state has ARMEMPTY. So one PICKUP can achieve its precondition, but the other PICKUP operator could be prevented from being executed.
– Assume ARMEMPTY is achieved as a precondition of PICKUP(B), as its other preconditions have already been achieved. So we post a constraint.
Non-linear plan to solve Sussman
anomaly problem
• Similarly, the following plans are generated:
___________________Plan 3___________________
PICKUP(B) → PICKUP(A) (the preconditions of PICKUP(A) are still not achieved)
____________________________________________
• Since PICKUP(B) makes ~ARMEMPTY, and STACK(B,C) will make ARMEMPTY, which is a precondition of PICKUP(A), we can post the following constraint:
_________________Plan 4_____________________
PICKUP(B) → STACK(B, C) → PICKUP(A)
____________________________________________
– Here PICKUP(B) is said to clobber the precondition of PICKUP(A), and STACK(B, C) is said to declobber it (removing the deadlock).
Non-linear plan to solve Sussman
anomaly problem
• The only unachieved precondition left is *CLEAR(A), from the PICKUP(A) step. We can use step addition to achieve it. The new step is UNSTACK(x, A):
Pre condition     *ON(x, A)
                  *CLEAR(x)
                  *ARMEMPTY
Operator          UNSTACK(x, A)
Post condition    ~ARMEMPTY
                  CLEAR(A)
                  HOLDING(x)
                  ~ON(x, A)
Non-linear plan to solve Sussman
anomaly problem
• Unfortunately, we now have three new unachieved preconditions.
• We can achieve ON(x,A) easily by constraining the value of x to block C.
• This works because block C is on block A. This heuristic is called simple establishment.
• x = C in step UNSTACK(x, A)
• There are still steps that deny the preconditions CLEAR(C) and ARMEMPTY, but we can use promotion to take care of them.
Non-linear plan to solve Sussman
anomaly problem
_________________Plan 5_________________
UNSTACK(C, A) → STACK(B, C)
UNSTACK(C, A) → PICKUP(A)
UNSTACK(C, A) → PICKUP(B)
________________________________________
• The step PICKUP(B) requires ARMEMPTY, but this is denied by the new UNSTACK(C, A) step. One way to solve this problem is to add a new declobbering step to the plan:
_______________Plan 6___________________
UNSTACK(C, A) → PUTDOWN(C) → PICKUP(B)
________________________________________
Non-linear plan to solve Sussman
anomaly problem
• Combine the following partial plans to generate the final plan:
____________________________________________
PICKUP(A) → STACK(A, B)
PICKUP(B) → STACK(B, C)
____________________________________________
PICKUP(B) → STACK(A, B)
____________________________________________
PICKUP(B) → PICKUP(A)
(the preconditions of PICKUP(A) are still not achieved)
____________________________________________
PICKUP(B) → STACK(B, C) → PICKUP(A)
____________________________________________
UNSTACK(C, A) → STACK(B, C)
UNSTACK(C, A) → PICKUP(A)
UNSTACK(C, A) → PICKUP(B)
____________________________________________
UNSTACK(C, A) → PUTDOWN(C) → PICKUP(B)
____________________________________________
Final plan: UNSTACK(C,A) → PUTDOWN(C) → PICKUP(B) → STACK(B,C) → PICKUP(A) → STACK(A,B)
Problem
• Show how STRIPS would solve this problem.
• Show how TWEAK would solve this problem.
Start: ON(C,D) ^ ON(A,B) ^ ONTABLE(D) ^ ONTABLE(B) ^ ARMEMPTY
Goal: ON(C,B) ^ ON(D,A) ^ ONTABLE(A) ^ ONTABLE(B)
[Figure: the start and goal block configurations]
Some examples
• Suppose that we want to move all the furniture out of a
room. This problem can be decomposed into a set of
smaller problems, each involving moving one piece of
furniture out of the room. But if there is a bookcase
behind the couch, then we must move the couch before
the bookcase.
• Suppose we have a fixed supply of paint; some white,
some pink and some red. We want to paint a room so that
it has light red walls and a white ceiling. We could
produce light red paint by adding some white paint to red.
But then we could not paint the ceiling white. So this
approach should be abandoned in favor of mixing the pink
and red paints together.
Example problem of cleaning a kitchen
• Cleaning the stove or refrigerator will get the floor dirty.
• To clean the oven, it is necessary to apply oven cleaner
and then to remove the cleaner.
• Before the floor can be washed, it must be swept.
• Before the floor can be swept, the garbage must be taken
out.
• Cleaning the refrigerator generates garbage and messes up
the counters.
• Washing the counters or the floor gets the sink dirty.
• Show how the technique of planning using goal stack
could be used to solve this problem.
Arihant handbook biology for class 11 .pdfchloefrazer622
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Krashi Coaching
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityGeoBlogs
 
Russian Call Girls in Andheri Airport Mumbai WhatsApp 9167673311 💞 Full Nigh...
Russian Call Girls in Andheri Airport Mumbai WhatsApp  9167673311 💞 Full Nigh...Russian Call Girls in Andheri Airport Mumbai WhatsApp  9167673311 💞 Full Nigh...
Russian Call Girls in Andheri Airport Mumbai WhatsApp 9167673311 💞 Full Nigh...Pooja Nehwal
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 

Kürzlich hochgeladen (20)

microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104Nutritional Needs Presentation - HLTH 104
Nutritional Needs Presentation - HLTH 104
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
Kisan Call Centre - To harness potential of ICT in Agriculture by answer farm...
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
Russian Call Girls in Andheri Airport Mumbai WhatsApp 9167673311 💞 Full Nigh...
Russian Call Girls in Andheri Airport Mumbai WhatsApp  9167673311 💞 Full Nigh...Russian Call Girls in Andheri Airport Mumbai WhatsApp  9167673311 💞 Full Nigh...
Russian Call Girls in Andheri Airport Mumbai WhatsApp 9167673311 💞 Full Nigh...
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 

Artificial Intelligence Notes Unit 4

Phases of NLP
• Phonological: This is knowledge which relates sounds to the words we
recognize. A phoneme is the smallest unit of sound. Phones are
aggregated into word sounds.
• Morphological: This is lexical knowledge which relates to word
construction from basic units called morphemes. A morpheme is the
smallest unit of meaning.
– for example, the construction of friendly from the root friend and
the suffix ly.
• Syntactic: This knowledge relates to how words are put together or
structured to form grammatically correct sentences in the language.
(It is done using parsing.)
14
Level of Knowledge used in Language Understanding
• Semantic: This knowledge is concerned with the meanings of words and
phrases and how they combine to form sentence meanings.
• Discourse integration: The meaning of an individual sentence may
depend on the sentences that precede it and may influence the meaning
of the sentences that follow it.
• Pragmatic: This is high-level knowledge which relates to the use of
sentences in different contexts and how the context affects the
meaning of the sentences.
15
Surface form and stems
• Surface form: "I want to print Ali's .init file"
• Stems: I (pronoun), want (verb), to (prep / infinitive), print
(verb), Ali (noun), 's (possessive), .init (adj), file (noun or verb)
16
Parse tree (figure): the stems above are combined into a parse tree
for "I want to print Ali's .init file", with the top-level S covering
PRO ("I"), V ("want") and an embedded S whose VP covers "print Ali's
.init file".
17
Issues in syntax
• By parsing we understand the role of each of these words.
• It helps figure out (automatically) questions like who did what and
when.
• Anaphora resolution: "The dog entered my room. It scared me." The
system must understand that "it" refers to "dog".
• Preposition attachment: "I saw the man in the park with a telescope"
(ambiguity).
18
Parse tree
• Mohan slept on the bench.
21
Parse tree
• The boy ate an apple.
22
Parse tree
• The green cow munched the grass.
23
Parse tree
• Bill loves the dog.
24
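The parse trees on slides 21–24 appear only as drawings in the
original deck. As a rough substitute, here is a minimal sketch (not
part of the slides; it assumes the NLTK library is available) that
builds the tree for "The boy ate an apple." from a toy grammar
following the S = NP + VP rules given earlier:

import nltk

# An illustrative grammar fragment: S -> NP VP, NP -> D N, VP -> V NP.
grammar = nltk.CFG.fromstring("""
    S  -> NP VP
    NP -> D N
    VP -> V NP
    D  -> 'the' | 'an'
    N  -> 'boy' | 'apple'
    V  -> 'ate'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the boy ate an apple".split()):
    tree.pretty_print()   # draws the tree: S over NP (D N) and VP (V NP)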
From parse tree to semantic net (figure): the parse tree for "I want
to print Ali's .init file" is mapped to a semantic net linking I,
want, print, Ali and the .init file through roles such as who, what
and who's.
25
Issues in semantics
• Understand language! How?
– "Plant" = industrial plant
– "Plant" = living organism
• Words are ambiguous.
• Importance of semantics?
– Machine translation: wrong translations
– Information retrieval: wrong information
– Anaphora resolution: wrong referents
26
Issues in semantics
• How to learn the meaning of words?
• From dictionaries:
– Plant, works, industrial plant (building for carrying on industrial
labor: "they built a large plant to manufacture automobiles")
– Plant, flora, plant life (a living organism lacking the power of
locomotion)
• Example:
– They are producing about 1000 automobiles in the new plant.
– The sea flora consists of 1000 different plant species.
– The plant was close to the farm of animals.
27
Issues in semantics
• Learn from annotated examples:
– Assume 100 examples containing "plant", previously tagged by a
human.
– Train a learning algorithm.
– Precision is typically in the range of 60%–70%.
• How to choose the learning algorithm?
• How to obtain the 100 tagged examples?
28
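To make the "learn from annotated examples" idea concrete, here is a
minimal sketch (the data is hypothetical, not from the slides) that
counts which context words co-occur with each sense of "plant" in
hand-tagged sentences and then picks the sense whose context best
matches a new sentence:

from collections import Counter, defaultdict

tagged = [  # (sentence, sense) pairs produced by a human annotator
    ("they are producing automobiles in the new plant", "factory"),
    ("the plant was close to the farm", "factory"),
    ("the sea flora consists of many plant species", "organism"),
    ("this plant needs water and sunlight", "organism"),
]

counts = defaultdict(Counter)
for sentence, sense in tagged:
    counts[sense].update(w for w in sentence.split() if w != "plant")

def disambiguate(sentence):
    words = sentence.split()
    return max(counts, key=lambda s: sum(counts[s][w] for w in words))

print(disambiguate("the plant manufactures new engines"))  # -> factory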
Resolving the semantic net (figure): determine to whom the pronoun "I"
refers, to whom the proper noun "Ali" refers, and which files are to
be printed; finally, execute the command lpr /ali/stuff.init.
29
Prepositional phrase attachment
• "The boy saw the man on the mountain with a telescope"
30
Parsing
• Parsing is the process of analyzing a sentence by taking it apart
word by word and determining its structure from its constituent parts
and subparts.
• Parsing is the act of analyzing the grammaticality of an utterance
according to some specific grammar.
– We read the words in some order (from left to right, from right to
left, or in random order) and analyze them one by one.
• Each parse is a different method of analyzing some target sentence
according to some specified grammar.
31
Parsing …
• Example: we have two sentences:
– "Have students in section 2 of Computer Science 203 take the exam."
– "Have students in section 2 of Computer Science 203 taken the exam?"
• The first ten words, "Have students in section 2 of Computer Science
203", are exactly the same, although the meanings of the two sentences
are completely different.
• If an incorrect guess is made, we can still reuse the analysis of
the first ten words when we backtrack; this requires a lot less work.
32
Parsing process (figure): Input string → Parser → Output
representation structure, with the parser consulting a Lexicon.
33
Lexicon
• The lexicon is a dictionary of words, where each word carries some
syntactic, semantic and possibly some pragmatic information.
• The information in the lexicon is needed to help determine the
function and meanings of the words in a sentence.
• Each entry in a lexicon will contain a root word called the head.
34
Typical entries in a Lexicon
Word      Type              Features
a         Determiner        {3s}
be        Verb              Trans: intransitive
boy       Noun              {3s}
can       Verb              {1s, 2s, 3s, 1p, 2p, 3p}
carried   Verb              Form: past, past participle
orange    Adjective, Noun   {3s}
we        Pronoun           {1p}
35
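Such entries map naturally onto a dictionary keyed on the head word.
A minimal sketch (the field names are illustrative, not from the
slides):

lexicon = {
    "a":       {"type": "determiner",     "features": {"3s"}},
    "boy":     {"type": "noun",           "features": {"3s"}},
    "can":     {"type": "verb",           "features": {"1s", "2s", "3s", "1p", "2p", "3p"}},
    "carried": {"type": "verb",           "form": {"past", "past participle"}},
    "orange":  {"type": "adjective/noun", "features": {"3s"}},
    "we":      {"type": "pronoun",        "features": {"1p"}},
}

print(lexicon["boy"]["type"])   # -> noun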
Scaling Up the Lexicon
• In real text-understanding systems, the input is a sequence of
characters from which the words must be extracted.
• A four-step process for doing this consists of:
– tokenization
– morphological analysis
– dictionary lookup
– error recovery
• Since many natural languages are fundamentally different, these
steps are much harder to apply to some languages than to others.
36
Scaling Up the Lexicon (Cont)
• a) Tokenization
– the process of dividing the input into distinct tokens: words and
punctuation marks.
– this is not easy in some languages, like Japanese, where there are
no spaces between words.
– this process is much easier in English, although it is not trivial
by any means.
– examples of complications: a hyphen at the end of a line may be an
interword or an intraword dash.
– tokenization routines are designed to be fast, with the idea that as
long as they are consistent in breaking up the input text into tokens,
any problems can always be handled at some later stage of processing.
37
Scaling Up the Lexicon (Cont)
• b) Morphological Analysis
– the process of describing a word in terms of the prefixes, suffixes
and root forms that comprise it.
– there are three ways that words can be composed:
• Inflectional morphology: reflects the changes to a word that are
needed in a particular grammatical context (Ex: most nouns take the
suffix "s" when they are plural).
• Derivational morphology: derives a new word from another word that
is usually of a different category (Ex: the noun "softness" is derived
from the adjective "soft").
• Compounding: takes two words and puts them together (Ex:
"bookkeeper" is a compound of "book" and "keeper"); used a lot in
morphologically complex languages such as German, Finnish, Turkish,
Inuit, and Yupik.
38
Scaling Up the Lexicon (Cont)
• c) Dictionary Lookup
– is performed on every token (except for special ones such as
punctuation).
– the task is to find the word in the dictionary and return its
definition.
– two ways to do dictionary lookup:
• store morphologically complex words first: complex words are written
to the dictionary and then looked up when needed.
• do morphological analysis first: process the word before looking
anything up. Ex: for "walked", strip off "ed" and look up "walk"; if
the verb is not marked as irregular, then "walked" is the past tense
of "walk".
– any implementation of the table abstract data type can serve as a
dictionary: hash tables, binary trees, B-trees, and tries.
39
Scaling Up the Lexicon (Cont)
• d) Error Recovery
– is undertaken when a word is not found in the dictionary.
– there are four types of error recovery:
• morphological rules can guess at the word's syntactic class (Ex:
"smarply" is not in the dictionary, but it is probably an adverb).
• capitalization is a clue that a word is a proper name.
• other specialized formats denote dates, times, social security
numbers, etc.
• spelling correction routines can be used to find a word in the
dictionary that is close to the input word; there are two popular
models for defining "closeness" of words: the letter-based model and
the sound-based model.
40
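The four steps fit together as a small pipeline. A minimal sketch (the
tiny dictionary and the suffix list are hypothetical stand-ins for the
real resources a text-understanding system would use):

import re

DICTIONARY = {"walk": "verb", "boy": "noun", "the": "determiner"}

def tokenize(text):
    # split into word tokens and punctuation tokens
    return re.findall(r"[a-zA-Z']+|[.,!?;-]", text)

def morphological_analysis(token):
    # strip a common inflectional suffix if the stem is a known word
    for suffix in ("ed", "ing", "s"):
        stem = token[:-len(suffix)]
        if token.endswith(suffix) and stem in DICTIONARY:
            return stem
    return token

def lookup(token):
    stem = morphological_analysis(token.lower())
    if stem in DICTIONARY:
        return DICTIONARY[stem]
    if token[0].isupper():            # error recovery: capitalization
        return "proper name (guessed)"
    if token.endswith("ly"):          # error recovery: morphological guess
        return "adverb (guessed)"
    return "unknown"

for tok in tokenize("The boy walked smarply."):
    if tok not in ".,!?;-":
        print(tok, "->", lookup(tok))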
Parsing Approach
• Top-Down Parsing
– Begin with the start symbol and apply the grammar rules forward
until the symbols at the terminals of the tree correspond to the
components of the sentence being parsed.
• Bottom-Up Parsing
– Begin with the sentence to be parsed and apply the grammar rules
backward until a single tree has been produced whose terminals are the
words of the sentence and whose top node is the start symbol.
41
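Top-down parsing is easy to demonstrate directly. Below is a minimal
sketch (the grammar and sentence are illustrative) of a
recursive-descent recognizer: it starts from the start symbol S,
expands rules forward, and backtracks by trying every production:

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["D", "N"]],
    "VP": [["V", "NP"], ["V"]],
    "D":  [["the"], ["an"]],
    "N":  [["boy"], ["apple"]],
    "V":  [["ate"], ["slept"]],
}

def parse(symbol, words, pos):
    """Return the set of input positions reachable after matching
    `symbol` starting at position `pos`."""
    if symbol not in GRAMMAR:   # terminal: must match the next word
        return {pos + 1} if pos < len(words) and words[pos] == symbol else set()
    results = set()
    for production in GRAMMAR[symbol]:      # try each rule (backtracking)
        positions = {pos}
        for part in production:
            positions = {q for p in positions for q in parse(part, words, p)}
        results |= positions
    return results

sentence = "the boy ate an apple".split()
print(len(sentence) in parse("S", sentence, 0))   # -> True (grammatical)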
Parser classification
• Parsers can be classified into two categories depending on the
parsing strategy employed:
– deterministic
– nondeterministic
42
Deterministic parser
• A deterministic parser permits only one choice (arc) for each word
category.
• Each arc has a different test condition.
• If an incorrect choice is accepted at some state, the parse will
fail, since the parser cannot backtrack to an alternative choice.
43
A deterministic network (figure): states N1 through N7 connected by a
chain of arcs labeled article, noun, verb, article, noun, with further
arcs labeled aux verb and verb.
44
A Nondeterministic Network
• Nondeterministic parsers permit different arcs to be labeled with
the same test.
• Consequently, the next test from any given state may not be uniquely
determined by the state and the current input word.
45
A nondeterministic network (figure): states N1 through N6 with arcs
labeled article, noun, verb, noun, verb.
46
Machine Translation (MT)
• MT refers to the process of automated translation of text from one
language to another.
• Achieving human-level translation quality is a holy grail in NLP,
primarily because generating good translations requires a very good
understanding of the source document.
• Existing MT systems can generate rough translations that frequently
at least convey the gist (idea) of a document.
• High-quality translations are possible when systems are specialized
to narrow domains, e.g. weather forecasts.
• Some MT systems are used in computer-aided translation, in which a
bilingual human post-edits the output to produce more readable,
accurate translations.
48
Machine Translation
• Automatically translate one natural language into another.
– Mary didn't slap the green witch.
– Maria no dió una bofetada a la bruja verde.
49
Ambiguity Resolution is Required for Translation
• Syntactic and semantic ambiguities must be properly resolved for
correct translation.
50
Word Alignment
• Shows the mapping between words in one language and the other
(figure): each word of "Mary didn't slap the green witch." is linked
to the corresponding words of "Maria no dió una bofetada a la bruja
verde."
51
Approaches of MT
• There are several challenges in the way of developing successful MT
systems.
• Two important directions in the development of MT systems are:
– Rule-based MT (RBMT)
– Corpus-based MT (CBMT)
52
Approaches of MT
• RBMT is built on the basis of morphological, syntactic, and semantic
analysis of both the source and the target languages.
• Corpus-based machine translation (CBMT) is built on the analysis of
bilingual text corpora.
• The former belongs to the domain of rationalism and the latter to
empiricism.
• Given large-scale and fine-grained linguistic rules, RBMT systems
are capable of producing translations of reasonable quality, but
constructing the system is very time-consuming and labor-intensive,
because such linguistic resources need to be hand-crafted; this is
frequently referred to as the knowledge acquisition problem.
53
Approaches of MT
• Moreover, it is very difficult to correct the input or add new rules
to the system to generate a translation.
• By contrast, adding more examples to a CBMT system can improve the
system, since it is based on data, though the accumulation and
management of the huge bilingual data corpus can also be costly.
54
Types of Rule-based MT
– Direct MT (DMT)
– Transfer MT (TMT)
– Interlingua MT (IMT)
56
Direct MT (DMT)
• DMT is a word-by-word translation approach with some simple
grammatical adjustments.
• A DMT system is designed for a specific source and target language
pair, and its translation unit is usually a word.
57
Transfer RBMT Systems
• A transfer-based machine translation system involves three stages.
• The first stage analyses the source text and converts it into
abstract representations; the second stage converts those into
equivalent target-language-oriented representations; and the third
generates the final target text.
• The representation is specific to each language pair.
58
Interlingua MT (IMT)
• IMT operates in two phases: analyzing the SL (source language) text
into an abstract, universal, language-independent representation of
meaning, i.e. the interlingua (the phase of analysis); and generating
this meaning using the lexical units and the syntactic constructions
of the TL (target language) (the phase of synthesis).
• Though no transfer component has to be created for each language
pair when adopting the IMT approach, defining an interlingua is very
difficult, and maybe even impossible for a wider domain.
59
Example illustrating the general frame of RBMT
• A girl eats an apple. Source language = English; demanded target
language = German.
• Minimally, to get a German translation of this English sentence one
needs:
– A dictionary that will map each English word to an appropriate
German word.
– Rules representing regular English sentence structure.
– Rules representing regular German sentence structure.
• And finally, we need rules according to which one can relate these
two structures together.
60
Stages of translation
• 1st: getting basic part-of-speech information for each source word:
a = indef. article; girl = noun; eats = verb; an = indef. article;
apple = noun
• 2nd: getting syntactic information about the verb "to eat":
NP-eat-NP; here: eat – Present Simple, 3rd Person Singular, Active
Voice
• 3rd: parsing the source sentence: (NP an apple) = the object of eat
• Often only partial parsing is sufficient to get to the syntactic
structure of the source sentence and to map it onto the structure of
the target sentence.
61
Stages of translation
• 4th: translating English words into German
– a (category = indef. article) => ein (category = indef. article)
– girl (category = noun) => Mädchen (category = noun)
– eat (category = verb) => essen (category = verb)
– an (category = indef. article) => ein (category = indef. article)
– apple (category = noun) => Apfel (category = noun)
• 5th: mapping dictionary entries into appropriate inflected forms
(final generation):
– A girl eats an apple. => Ein Mädchen isst einen Apfel.
62
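The word-for-word core of this example is easy to sketch. The toy
dictionary below hard-codes the inflected German forms from the 5th
stage; a real RBMT system would instead apply generation rules to
choose ein/einen and isst (this sketch is illustrative, not from the
slides):

EN_DE = {"a": "Ein", "girl": "Mädchen", "eats": "isst",
         "an": "einen", "apple": "Apfel"}

def direct_translate(sentence):
    # word-by-word substitution keeping word order: the hallmark of DMT
    words = sentence.rstrip(".").lower().split()
    return " ".join(EN_DE.get(w, w) for w in words) + "."

print(direct_translate("A girl eats an apple."))
# -> Ein Mädchen isst einen Apfel.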
The RBMT system contains:
i. a SL morphological analyser: analyses a source language word and
provides the morphological information;
ii. a SL parser: a syntax analyser which analyses source language
sentences;
iii. a translator: used to translate a source language word into the
target language;
iv. a TL morphological generator: works as a generator of appropriate
target language words for the given grammatical information;
v. a TL parser: works as a composer of suitable target language
sentences;
vi. several dictionaries, more specifically a minimum of three:
a) a SL dictionary: needed by the source language morphological
analyser for morphological analysis;
b) a bilingual dictionary: used by the translator to translate source
language words into target language words;
c) a TL dictionary: needed by the target language morphological
generator to generate target language words.
63
The RBMT system makes use of the following:
i. a Source Grammar for the input language, which builds syntactic
constructions from input sentences;
ii. a Source Lexicon, which captures all of the allowable vocabulary
in the domain;
iii. Source Mapping Rules, which indicate how syntactic heads and
grammatical functions in the source language are mapped onto domain
concepts and semantic roles in the interlingua;
iv. a Domain Model/Ontology, which defines the classes of domain
concepts and restricts the fillers of semantic roles for each class;
v. Target Mapping Rules, which indicate how domain concepts and
semantic roles in the interlingua are mapped onto syntactic heads and
grammatical functions in the target language;
vi. a Target Lexicon, which contains appropriate target lexemes for
each domain concept;
vii. a Target Grammar for the target language, which realizes target
syntactic constructions as linearized output sentences.[4]
64
Advantages
i. No bilingual texts are required: this makes it possible to create
translation systems for languages that have no texts in common, or
even no digitized data whatsoever.
ii. Domain independent: rules are usually written in a
domain-independent manner, so the vast majority of rules will "just
work" in every domain, and only a few specific cases per domain may
need rules written for them.
iii. No quality ceiling: every error can be corrected with a targeted
rule, even if the trigger case is extremely rare. This is in contrast
to statistical systems, where infrequent forms will be washed away by
default.
iv. Total control: because all rules are hand-written, you can easily
debug a rule-based system to see exactly where a given error enters
the system, and why.
v. Reusability: because RBMT systems are generally built from a strong
source language analysis that is fed to a transfer step and a target
language generator, the source language analysis and target language
generation parts can be shared between multiple translation systems,
requiring only the transfer step to be specialized. Additionally,
source language analysis for one language can be reused to bootstrap
the analysis of a closely related language.
65
Shortcomings
i. Insufficient amount of really good dictionaries; building new
dictionaries is expensive.
ii. Some linguistic information still needs to be set manually.
iii. It is hard to deal with rule interactions in big systems,
ambiguity, and idiomatic expressions.
iv. Failure to adapt to new domains: although RBMT systems usually
provide a mechanism to create new rules and extend and adapt the
lexicon, changes are usually very costly and the results frequently do
not pay off.
66
Syllabus
• Components of Planning System
• Planning agent
• State, Goal & Action Representation
• Forward planning
• Backward chaining
• Planning examples
– partial-order planner
– Block world
68
Planning
• Decomposing the original problem into appropriate subparts, and
finding ways of recording and handling interactions among the subparts
as they are detected during the problem-solving process, is often
called planning.
• Planning refers to the process of computing several steps of a
problem-solving procedure before executing any of them.
69
Planning
• Definition: Planning is arranging a sequence of actions to achieve a
goal.
• It uses core areas of AI like searching and reasoning, and
• it is at the core of areas like NLP, computer vision and robotics.
• Examples: navigation, manoeuvring, language processing (generation).
• Analogy: kinematics is to ME (mechanical engineering) what planning
is to CSE.
70
Components of a planning system
1. Choose the best rule to apply next based on the best available
heuristic information.
2. Apply the chosen rule to compute the new problem state that arises
from its application.
3. Detect when a solution has been found.
4. Detect dead ends so that they can be abandoned and the system's
effort directed in more fruitful directions.
5. Detect when an almost correct solution has been found and employ
special techniques to make it totally correct.
71
1. Choosing the rules to apply
• First isolate a set of differences between the desired goal state
and the current state, and then identify those rules that are relevant
to reducing those differences.
• If several rules apply, a variety of other heuristic information can
be exploited to choose among them.
72
2. Applying rules
• In simple systems, applying rules is easy: each rule simply
specifies the problem state that would result from its application.
• In complex systems, we must be able to deal with rules that specify
only a small part of the complete problem state.
• One way is to describe, for each action, each of the changes it
makes to the state description.
73
3. Detecting a solution
• A planning system has found a solution to a problem when it has
found a sequence of operators that transforms the initial problem
state into the goal state.
• One representative formalism for planning systems is predicate
logic. Suppose we have the predicate P(x); we can prove P(x) given the
assertions that describe the state and the axioms that define the
world model.
74
4. Detecting dead ends
• A dead end is a path being explored that can never lead to a
solution.
• There is no indication of a goal node along it.
• If the search process is reasoning forward from the initial state,
it can prune any path that leads to a state from which the goal state
cannot be reached.
• If the search process is reasoning backward, it can also terminate a
path, for instance because it is sure that the initial state cannot be
reached from it.
75
5. Repairing an almost correct solution
• Assume that the problems are completely decomposable, proceed to
solve the subproblems separately, and then check that, when the
subsolutions are combined, they do in fact yield a solution to the
original problem.
76
A Planning agent
• The purpose of planning is to find a sequence of actions that
achieves a given goal when performed starting in a given state. In
other words, given a set of operator instances, an initial state
description, and a goal state description or predicate, the planning
agent computes a plan.
• Problem-solving agents are able to plan ahead: to consider the
consequences of sequences of actions before acting.
• Knowledge-based agents can select actions based on explicit, logical
representations of the current state and the effects of actions. This
allows the agent to succeed in complex, inaccessible environments that
are too difficult for a problem-solving agent.
• Problem-solving agents + knowledge-based agents = planning agents
77
A Planning agent
• Algorithm for a simple planning agent:
i. Generate a goal to achieve.
ii. Construct a plan to achieve the goal from the current state.
iii. Execute the plan until finished.
iv. Begin again with a new goal.
• The planning agent first generates a goal to achieve, and then
constructs a plan to achieve it from the current state. Once it has a
plan, it keeps executing it until the plan is finished, then begins
again with a new goal.
78
Planning Languages
• Languages must represent:
– States
– Goals
– Actions
79
State Representation
• The planner decomposes the world into logical conditions and
represents a state as a conjunction of positive literals.
• Using:
– Logical propositions: Poor ^ Unknown
– FOL literals: At(Plane1, OMA) ^ At(Plane2, JFK)
• FOL literals must be ground and function-free.
– Not allowed: At(x, y) or At(Father(Fred), Sydney)
• Closed World Assumption
– Whatever is not stated is assumed false.
80
Goal Representation
• A goal is a partially specified state, represented as a conjunction
of positive ground literals.
– Example: the state Rich ^ Famous ^ Miserable satisfies the goal
Rich ^ Famous.
81
Action Representation
• An action is specified in terms of the preconditions that must hold
before it can be executed and the effects that ensue when it is
executed.
82
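In STRIPS-style planners the effects are split into an add list and a
delete list. A minimal sketch of this representation (illustrative,
not from the slides):

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset   # literals that must hold before execution
    add_list: frozenset        # literals the action makes true
    delete_list: frozenset     # literals the action makes false

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        return (state - self.delete_list) | self.add_list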
Planning algorithms
• Planning algorithms are search procedures.
• The most straightforward approach: state-space search.
– Each node is a state of the KB.
– A plan is a path through the states.
83
Planning algorithms
• The most straightforward approach for a planning algorithm is to use
state-space search. Because the description of actions in a planning
problem specifies both preconditions and effects, it is possible to
search in either:
– the forward direction from the initial state, or
– backward from the goal.
84
Forward state-space search
• Planning with forward state-space search is sometimes called
progression planning, because it moves in the forward direction.
• Forward state-space search refers to the search algorithms that
start with the given state as the start state, generate the set of
successor states, and search through them, generating more successors,
until they find a state that satisfies the goal conditions.
• The initial state of the search is the initial state from the
planning problem. In general, each state will be a set of positive
literals.
• An action is applicable to a state if all its preconditions are
satisfied. The successor state is built by updating the KB with the
add and delete lists.
• The goal test checks whether the state satisfies the goal of the
problem.
85
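A minimal sketch of progression planning as breadth-first search over
states, using the Action representation above (illustrative):

from collections import deque

def forward_search(initial, goal, actions):
    start = frozenset(initial)
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if frozenset(goal) <= state:            # goal test
            return plan
        for a in actions:
            if a.applicable(state):
                succ = frozenset(a.apply(state))
                if succ not in visited:
                    visited.add(succ)
                    frontier.append((succ, plan + [a.name]))
    return None                                  # no plan found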
Forward search in the Blocks world (figure): the tree of block
configurations reachable by successive arm moves.
86
Forward state-space search
• Advantages:
– No functions in the goal declarations, so the search space is
finite.
– Sound.
– Complete (if the algorithm used to do the search is complete).
• Limitations:
– Considers irrelevant actions, so it is not efficient.
– Needs a good heuristic or pruning procedure.
87
Backward state-space search
• Also called regression planning.
• Initial state of the search: the goal state of the problem.
• Actions: choose an action that
– is relevant: has one of the goal literals in its effect set, and
– is consistent: does not negate another goal literal.
• Construct the new search state:
– remove all positive effects of the action that appear in the goal;
– add all its preconditions, unless they already appear.
• Goal test: the state is the initial world state.
88
Backward state-space search
• This is possible because of the STRIPS-like language:
– goals are listed, and
– predecessors are listed for each action/state.
• Advantages:
– Considers only relevant actions, so the branching factor is much
smaller.
– There are ways to reduce the branching factor even more.
• Limitations:
– Still needs a heuristic to be more efficient.
89
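A minimal sketch of a single regression step (illustrative), matching
the relevance and consistency conditions above:

def regress(goal, action):
    relevant = bool(goal & action.add_list)        # achieves a goal literal
    consistent = not (goal & action.delete_list)   # negates none of them
    if relevant and consistent:
        return (goal - action.add_list) | action.preconditions
    return None   # action cannot be the last step toward this goal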
Heuristics for state-space search
• Valid both for forward and backward searches.
• Valid for many planning problems.
• Possible approaches:
– divide and conquer;
– derive a relaxed problem;
– combine both.
90
Heuristics for state-space search
• Divide and conquer: the subgoal independence assumption.
– What if there are negative interactions between the subgoals of the
problem?
– What if there are redundant actions in the subgoals?
• Derive a relaxed problem:
– remove all preconditions from the actions;
– remove all negative effects from the actions (empty delete list).
91
Block World Problem
• There is a flat surface on which blocks can be placed.
• There are a number of square blocks, all the same size.
• They can be stacked one upon the other.
• There is a robot arm that can manipulate the blocks.
92
Actions of the robot arm
• UNSTACK(A,B): pick up block A from its current position on block B.
The arm must be empty and block A must have no blocks on top of it.
• STACK(A,B): place block A on block B. The arm must already be
holding A, and the surface of B must be clear.
• PICKUP(A): pick up block A from the table and hold it. The arm must
be empty and there must be nothing on top of block A.
• PUTDOWN(A): put block A down on the table. The arm must have been
holding block A.
• Notice that the robot arm can hold only one block at a time.
93
Predicates
• In order to specify both the conditions under which an operation may
be performed and the results of performing it, we need the following
predicates:
• ON(A,B): block A is on block B.
• ONTABLE(A): block A is on the table.
• CLEAR(A): there is nothing on top of block A.
• HOLDING(A): the arm is holding block A.
• ARMEMPTY: the arm is holding nothing.
94
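With the predicates fixed, the four arm operators can be written down
as STRIPS rules. A minimal sketch using the Action class from earlier
(the precondition/add/delete sets follow the slides; treating CLEAR of
a held block as false is one common convention, assumed here):

def pickup(a):
    return Action(f"PICKUP({a})",
        frozenset({f"ONTABLE({a})", f"CLEAR({a})", "ARMEMPTY"}),
        frozenset({f"HOLDING({a})"}),
        frozenset({f"ONTABLE({a})", f"CLEAR({a})", "ARMEMPTY"}))

def putdown(a):
    return Action(f"PUTDOWN({a})",
        frozenset({f"HOLDING({a})"}),
        frozenset({f"ONTABLE({a})", f"CLEAR({a})", "ARMEMPTY"}),
        frozenset({f"HOLDING({a})"}))

def stack(a, b):
    return Action(f"STACK({a},{b})",
        frozenset({f"HOLDING({a})", f"CLEAR({b})"}),
        frozenset({f"ON({a},{b})", f"CLEAR({a})", "ARMEMPTY"}),
        frozenset({f"HOLDING({a})", f"CLEAR({b})"}))

def unstack(a, b):
    return Action(f"UNSTACK({a},{b})",
        frozenset({f"ON({a},{b})", f"CLEAR({a})", "ARMEMPTY"}),
        frozenset({f"HOLDING({a})", f"CLEAR({b})"}),
        frozenset({f"ON({a},{b})", f"CLEAR({a})", "ARMEMPTY"}))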
A simple search tree (figure): starting with A on B, the operator
sequence 1. UNSTACK(A,B), 2. PUTDOWN(A) leads to the state
3. ONTABLE(B) ^ CLEAR(A) ^ CLEAR(B) ^ ONTABLE(A).
95
Simple position
• ON(A,B) ^ ONTABLE(B) ^ CLEAR(A)
• If we execute UNSTACK(A,B) in this state, then HOLDING(A) ^ CLEAR(B)
holds.
96
Goal Stack Planning
• It is the earliest technique to be developed for solving compound
goals that may interact.
• This was the approach used by STRIPS.
• In this method, the problem solver makes use of a single stack that
contains both goals and operators that have been proposed to satisfy
those goals.
• The problem solver also relies on a database that describes the
current situation, and a set of operators described by PRECONDITION,
ADD, and DELETE lists.
• The GSP method attacks problems involving conjoined goals by solving
the goals one at a time, in order.
• A plan generated by this method contains a sequence of operators for
attaining the first goal, followed by a complete sequence for the
second goal, etc.
97
Goal Stack Planning continued…
• At each succeeding step of the problem-solving process, the top goal
on the stack is pursued.
• When a sequence of operators that satisfies it is found, it is
applied to the state description, yielding a new description.
• Next, the goal that is then at the top of the stack is explored.
• This process continues until the goal stack is empty.
• Then, as one last check, the original goal is compared to the final
state derived from the application of the chosen operators.
98
Goal Stack Planning: Example
Start: ON(B,A) ^ ONTABLE(A) ^ ONTABLE(C) ^ ONTABLE(D) ^ ARMEMPTY
Goal: ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
99
Solution
• To start with, the goal stack is simply:
ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
• But we want to separate this problem into four subproblems, one for
each component of the original goal.
• Two of the subproblems, ONTABLE(A) and ONTABLE(D), are already true
in the initial state. So we will work on only the remaining two.
• Depending on the order in which we want to tackle the subproblems,
there are two goal stacks that could be created as our first step,
where each line represents one goal on the stack.
100
Exploring Operators
• Pursuing alternative 1, we check for operators that could cause
ON(C,A). Of the four operators, there is only one: STACK. So it yields
the goal stack:
  STACK(C,A)
  ON(B,D)
  ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
• The preconditions of STACK(C,A) must be satisfied, so we establish
them as subgoals:
  CLEAR(A)
  HOLDING(C)
  CLEAR(A) ^ HOLDING(C)
  STACK(C,A)
  ON(B,D)
  ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
• Here we exploit the heuristic that if HOLDING is one of several
goals to be achieved at once, it should be tackled last.
101
Goal stack planning contd…
• Next we see if CLEAR(A) is true. It is not. The only operator that
could make it true is UNSTACK(B,A). This produces the goal stack:
  ON(B,A)
  CLEAR(B)
  ARMEMPTY
  ON(B,A) ^ CLEAR(B) ^ ARMEMPTY
  UNSTACK(B,A)
  HOLDING(C)
  CLEAR(A) ^ HOLDING(C)
  STACK(C,A)
  ON(B,D)
  ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
102
Goal stack planning contd…
• When we compare the top element of the goal stack, ON(B,A), to the
world model, we see that it is satisfied. So we pop it off and
consider the next goal, CLEAR(B).
• It, too, is already true in the world model, although it was not
stated explicitly as one of the initial predicates. So we pop it from
the stack.
• The third precondition for UNSTACK(B,A) remains: ARMEMPTY. It is
also true in the current world model, so it can be popped off the
stack.
• The next element on the stack is the combined goal representing all
of the preconditions for UNSTACK(B,A). It is satisfied, so the
combined goal can be popped off.
• Now the top element of the stack is the operator UNSTACK(B,A). We
are now guaranteed that its preconditions are satisfied, so it can be
applied to produce a new world model from which the rest of the
problem solving can proceed. At this point, the database corresponding
to the world model is:
  ONTABLE(A) ^ ONTABLE(C) ^ ONTABLE(D) ^ HOLDING(B) ^ CLEAR(A)
103
Goal stack planning contd…
• The goal stack now is:
  HOLDING(C)
  CLEAR(A) ^ HOLDING(C)
  STACK(C,A)
  ON(B,D)
  ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
• We now attempt to satisfy the goal HOLDING(C). Pushing PICKUP(C) and
its preconditions gives:
  ONTABLE(C)
  CLEAR(C)
  ARMEMPTY
  ONTABLE(C) ^ CLEAR(C) ^ ARMEMPTY
  PICKUP(C)
  CLEAR(A) ^ HOLDING(C)
  STACK(C,A)
  ON(B,D)
  ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
104
Goal stack planning contd…
• The top element of the goal stack is ONTABLE(C), which is true, so
we pop it. The next element is CLEAR(C), which is also true, so we pop
it.
• The remaining precondition of PICKUP(C) is ARMEMPTY, which is not
true, since HOLDING(B) is true.
• There are two operators that could be applied to make ARMEMPTY true:
STACK(B,X) and PUTDOWN(B).
105
Choosing an alternative
• We will use alternative 1, so we apply the operator STACK(B,D). This
makes the goal stack:
  CLEAR(D)
  HOLDING(B)
  CLEAR(D) ^ HOLDING(B)
  STACK(B,D)
  ONTABLE(C) ^ CLEAR(C) ^ ARMEMPTY
  PICKUP(C)
  CLEAR(A) ^ HOLDING(C)
  STACK(C,A)
  ON(B,D)
  ON(C,A) ^ ON(B,D) ^ ONTABLE(A) ^ ONTABLE(D)
106
Goal stack planning contd…
• CLEAR(D) and HOLDING(B) are both true. Now the operation STACK(B,D)
can be performed, producing the world model:
  ONTABLE(A) ^ ONTABLE(C) ^ ONTABLE(D) ^ ON(B,D) ^ ARMEMPTY
• The preconditions of PICKUP(C) are now satisfied; after applying it,
all of the preconditions of STACK(C,A) are true.
• Now we begin work on the second part of the original goal, ON(B,D).
But it has already been satisfied.
107
Complete plan
1. UNSTACK(B,A)
2. STACK(B,D)
3. PICKUP(C)
4. STACK(C,A)
108
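With the STRIPS operators and the forward_search sketch from earlier,
this plan can be reproduced mechanically (a demo under the stated
assumptions; the four-step plan is the shortest one here, so
breadth-first search finds exactly it):

blocks = ["A", "B", "C", "D"]
actions = ([pickup(x) for x in blocks] +
           [putdown(x) for x in blocks] +
           [stack(x, y) for x in blocks for y in blocks if x != y] +
           [unstack(x, y) for x in blocks for y in blocks if x != y])

initial = {"ON(B,A)", "ONTABLE(A)", "ONTABLE(C)", "ONTABLE(D)",
           "CLEAR(B)", "CLEAR(C)", "CLEAR(D)", "ARMEMPTY"}
goal = {"ON(C,A)", "ON(B,D)", "ONTABLE(A)", "ONTABLE(D)"}

print(forward_search(initial, goal, actions))
# -> ['UNSTACK(B,A)', 'STACK(B,D)', 'PICKUP(C)', 'STACK(C,A)']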
Nonlinear Planning
• A plan generated by goal stack planning contains a sequence of
operators for attaining the first goal, followed by a sequence for the
second goal.
• Difficult problems cause goal interactions.
• The operators used to solve one subproblem may interfere with the
solution to a previous subproblem.
• Most problems require an intertwined plan in which multiple
subproblems are worked on simultaneously.
• Such a plan is called a nonlinear plan because it is not composed of
a linear sequence of complete subplans.
110
Example: the Sussman anomaly problem
i. It is called an anomaly because a linear planner cannot solve the
problem.
ii. The problem can be solved, but it cannot be attacked by first
applying all the operators to achieve one goal, and then applying
operators to achieve another goal.
iii. Begin with the null plan (no operators).
iv. Look at the goal state and find the operators that can achieve it.
v. There are two operators (steps), STACK(A,B) and STACK(B,C), which
have postconditions ON(A,B) and ON(B,C).
111
Non-linear planners
• NOAH
• NONLIN
• MOLGEN
• TWEAK (uses constraint posting as a central technique)
112
Nonlinear Planning – Constraint Posting
• The idea of constraint posting is to build up a plan incrementally
by
– hypothesizing operators,
– posting partial orderings between operators, and
– binding variables within operators.
• At any given time in the planning process, the solution is a
partially ordered set of steps.
• To generate an actual plan, convert the partial order into a total
order.
113
Partial Ordering
• Any planning algorithm that can place two actions into a plan
without specifying which comes first is called a partial-order
planner.
– Actions dependent on each other are ordered in relation to
themselves, but not necessarily in relation to other independent
actions.
• The solution is represented as a graph of actions, not a sequence of
actions.
114
Heuristics for Planning using Constraint Posting (TWEAK)
1. Step addition – creating new steps for a plan.
2. Promotion – constraining one step to come before another in the
final plan.
3. Declobbering – placing one (possibly new) step S2 between two old
steps S1 and S3, such that S2 reasserts some precondition of S3 that
was negated (or "clobbered") by S1.
4. Simple establishment – assigning a value to a variable, in order to
ensure the preconditions of some step.
5. Separation – preventing the assignment of certain values to a
variable.
115
Algorithm
1. Initialize S to be the set of propositions in the goal state.
2. Remove some unachieved proposition P from S.
3. Achieve P by using step addition, promotion, declobbering, simple
establishment or separation.
4. Review all the steps in the plan, including any new steps
introduced by step addition, to see if any of their preconditions are
unachieved.
5. Add to S the new set of unachieved preconditions.
6. If S is empty, complete the plan by converting the partial order of
steps into a total order, instantiating any variables as necessary,
and exit.
7. Otherwise go to step 2.
116
Non-linear plan to solve the Sussman anomaly problem
• The two steps proposed for the goals, with unachieved preconditions
marked with * and deleted postconditions marked with ~:

STACK(A,B)                          STACK(B,C)
Pre:  CLEAR(B), *HOLDING(A)         Pre:  CLEAR(C), *HOLDING(B)
Post: ON(A,B), ARMEMPTY,            Post: ON(B,C), ARMEMPTY,
      ~CLEAR(B), ~HOLDING(A)              ~CLEAR(C), ~HOLDING(B)

• HOLDING is not true in either case, as ARMEMPTY is true initially.
• Introduce new operators (steps) to achieve these goals.
• This is called operator (step) addition.
• Add a PICKUP operator for both goals.
117
Non-linear plan to solve the Sussman anomaly problem

PICKUP(A)                           PICKUP(B)
Pre:  *CLEAR(A), ONTABLE(A),        Pre:  *CLEAR(B), ONTABLE(B),
      *ARMEMPTY                           *ARMEMPTY
Post: HOLDING(A), ~ONTABLE(A),      Post: HOLDING(B), ~ONTABLE(B),
      ~ARMEMPTY, ~CLEAR(A)                ~ARMEMPTY, ~CLEAR(B)

STACK(A,B)                          STACK(B,C)
Pre:  CLEAR(B), *HOLDING(A)         Pre:  CLEAR(C), *HOLDING(B)
Post: ON(A,B), ARMEMPTY,            Post: ON(B,C), ARMEMPTY,
      ~CLEAR(B), ~HOLDING(A)              ~CLEAR(C), ~HOLDING(B)
118
Non-linear plan to solve the Sussman anomaly problem
• It is clear that in the final plan, each PICKUP must precede the
corresponding STACK operator.
• Introduce the ordering as follows: whenever we employ an operator,
we may need to introduce ordering constraints; this is called
promotion.
___________________Plan 1____________________
PICKUP(A) → STACK(A,B)
PICKUP(B) → STACK(B,C)
_____________________________________________
• Here we have partially ordered operators and four unachieved
preconditions: CLEAR(A), CLEAR(B), and ARMEMPTY on both paths.
– CLEAR(A) is unachieved because C is on A in the initial state.
– CLEAR(B) is also unachieved: even though the top of B is clear in
the initial state, there exists an operator STACK(A,B) with
postcondition ~CLEAR(B).
• Initial state: ON(C,A) ^ ONTABLE(A) ^ ONTABLE(B) ^ ARMEMPTY ^
CLEAR(C) ^ CLEAR(B)
119
Non-linear plan to solve the Sussman anomaly problem
• If we make sure that PICKUP(B) precedes STACK(A,B), then CLEAR(B) is
achieved. So we post the following constraint:
___________________Plan 2____________________
PICKUP(B) → STACK(A,B)
_____________________________________________
• Note that the precondition CLEAR(A) of PICKUP(A) is still
unachieved.
– Let us achieve the ARMEMPTY precondition of each PICKUP operator
before CLEAR(A).
– The initial state has ARMEMPTY. So one PICKUP can achieve its
precondition, but the other PICKUP operator would be prevented from
being executed.
– Assume ARMEMPTY is achieved as the precondition of PICKUP(B), since
its other preconditions have been achieved. So we post a constraint.
120
Non-linear plan to solve the Sussman anomaly problem
• Accordingly, the following plans are generated:
___________________Plan 3____________________
PICKUP(B) → PICKUP(A)
(the preconditions of PICKUP(A) are still not achieved)
_____________________________________________
• Since PICKUP(B) makes ~ARMEMPTY, and STACK(B,C) will make ARMEMPTY,
which is a precondition of PICKUP(A), we can post the following
constraint:
___________________Plan 4____________________
PICKUP(B) → STACK(B,C) → PICKUP(A)
_____________________________________________
– Here PICKUP(B) is said to clobber the precondition of PICKUP(A), and
STACK(B,C) is said to declobber it (removing the deadlock).
121
Non-linear plan to solve the Sussman anomaly problem
• The only unachieved precondition left is *CLEAR(A), from the
PICKUP(A) step. We can use step addition to achieve it:

UNSTACK(x,A)
Pre:  *ON(x,A), *CLEAR(x), *ARMEMPTY
Post: CLEAR(A), HOLDING(x), ~ARMEMPTY, ~ON(x,A)
122
Non-linear plan to solve the Sussman anomaly problem
• Unfortunately, we now have three new unachieved preconditions.
• We can achieve ON(x,A) easily by constraining the value of x to
block C.
• This works because block C is on block A. This heuristic is called
simple establishment:
x = C in step UNSTACK(x,A)
• There are still steps that deny the preconditions CLEAR(C) and
ARMEMPTY, but we can use promotion to take care of them.
123
Non-linear plan to solve the Sussman anomaly problem
___________________Plan 5____________________
UNSTACK(C,A) → STACK(B,C)
UNSTACK(C,A) → PICKUP(A)
UNSTACK(C,A) → PICKUP(B)
_____________________________________________
• The step PICKUP(B) requires ARMEMPTY, but this is denied by the new
UNSTACK(x,A) step. One way to solve this problem is to add a new
declobbering step to the plan:
___________________Plan 6____________________
UNSTACK(C,A) → PUTDOWN(C) → PICKUP(B)
_____________________________________________
124
Non-linear plan to solve the Sussman anomaly problem
• Combine the partial plans above to generate the final plan:
_____________________________________________
PICKUP(A) → STACK(A,B)
PICKUP(B) → STACK(B,C)
_____________________________________________
PICKUP(B) → STACK(A,B)
_____________________________________________
PICKUP(B) → PICKUP(A)
_____________________________________________
PICKUP(B) → STACK(B,C) → PICKUP(A)
_____________________________________________
UNSTACK(C,A) → STACK(B,C)
UNSTACK(C,A) → PICKUP(A)
UNSTACK(C,A) → PICKUP(B)
_____________________________________________
UNSTACK(C,A) → PUTDOWN(C) → PICKUP(B)
_____________________________________________
Final plan:
UNSTACK(C,A) → PUTDOWN(C) → PICKUP(B) → STACK(B,C) → PICKUP(A) →
STACK(A,B)
125
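Linearising the partial order is a topological sort over the posted
ordering constraints. A minimal sketch (using Python's standard
graphlib module, available from Python 3.9; the constraint sets are
those accumulated in Plans 1–6 above):

from graphlib import TopologicalSorter

# maps each step to the steps that must come before it
before = {
    "STACK(A,B)":  {"PICKUP(A)", "PICKUP(B)"},
    "STACK(B,C)":  {"PICKUP(B)", "UNSTACK(C,A)"},
    "PICKUP(A)":   {"STACK(B,C)", "UNSTACK(C,A)"},
    "PICKUP(B)":   {"PUTDOWN(C)", "UNSTACK(C,A)"},
    "PUTDOWN(C)":  {"UNSTACK(C,A)"},
}

print(list(TopologicalSorter(before).static_order()))
# one valid total order, matching the final plan:
# UNSTACK(C,A), PUTDOWN(C), PICKUP(B), STACK(B,C), PICKUP(A), STACK(A,B)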
Problem
• Show how STRIPS would solve this problem.
• Show how TWEAK would solve this problem.
Start: ON(C,D) ^ ON(A,B) ^ ONTABLE(D) ^ ONTABLE(B) ^ ARMEMPTY
Goal: ON(C,B) ^ ON(D,A) ^ ONTABLE(A) ^ ONTABLE(B)
126
Some examples
• Suppose that we want to move all the furniture out of a room. This
problem can be decomposed into a set of smaller problems, each
involving moving one piece of furniture out of the room. But if there
is a bookcase behind the couch, then we must move the couch before the
bookcase.
• Suppose we have a fixed supply of paint: some white, some pink and
some red. We want to paint a room so that it has light red walls and a
white ceiling. We could produce light red paint by adding some white
paint to the red. But then we could not paint the ceiling white. So
this approach should be abandoned in favor of mixing the pink and red
paints together.
127
Example problem of cleaning a kitchen
• Cleaning the stove or the refrigerator will get the floor dirty.
• To clean the oven, it is necessary to apply oven cleaner and then to
remove the cleaner.
• Before the floor can be washed, it must be swept.
• Before the floor can be swept, the garbage must be taken out.
• Cleaning the refrigerator generates garbage and messes up the
counters.
• Washing the counters or the floor gets the sink dirty.
• Show how the technique of planning using goal stacks could be used
to solve this problem.
128