3. WHAT IS INTELLIGENCE?
Intelligence:
“the capacity to learn and solve problems” (Webster's
Dictionary)
in particular,
the ability to solve novel problems
the ability to act rationally
the ability to act like humans
Artificial Intelligence
build and understand intelligent entities or agents
2 main approaches: “engineering” versus “cognitive
modeling”
4. WHAT’S INVOLVED IN INTELLIGENCE?
Ability to interact with the real world
to perceive, understand, and act
e.g., speech recognition, understanding, and synthesis
e.g., image understanding
e.g., ability to take actions, have an effect
Reasoning and Planning
modeling the external world, given input
solving new problems, planning, and making decisions
ability to deal with unexpected problems, uncertainties
Learning and Adaptation
we are continuously learning and adapting
our internal models are always being “updated”
e.g., a baby learning to categorize and recognize
animals
6. ACTING HUMANLY: TURING TEST
Turing (1950) "Computing machinery and
intelligence":
"Can machines think?" "Can machines
behave intelligently?"
Operational test for intelligent behavior: the
Imitation Game
7. ACTING HUMANLY: TURING TEST
Predicted that by 2000, a machine might have a
30% chance of fooling a lay person for 5
minutes
Anticipated all major arguments against AI in
following 50 years
Suggested major components of AI: knowledge,
reasoning, language understanding, learning
8. THINKING HUMANLY: COGNITIVE
MODELING
1960s "cognitive revolution": information-
processing psychology
Requires scientific theories of internal activities of
the brain
How to validate? Requires either:
Predicting and testing behavior of human subjects (top-down) or
Direct identification from neurological data (bottom-up)
Both approaches (roughly, Cognitive Science and
Cognitive Neuroscience)
are now distinct from AI
9. THINKING RATIONALLY: "LAWS OF
THOUGHT"
Aristotle: what are correct arguments/thought processes?
Several Greek schools developed various forms of logic:
notation and rules of derivation for thoughts; may or may
not have proceeded to the idea of mechanization
Direct line through mathematics and philosophy to modern
AI
Problems:
Not all intelligent behavior is mediated by logical deliberation
What is the purpose of thinking? What thoughts should I have?
10. ACTING RATIONALLY: RATIONAL AGENT
Rational behavior: doing the right thing
The right thing: that which is expected to
maximize goal achievement, given the available
information
Doesn't necessarily involve thinking – e.g.,
blinking reflex – but thinking should be in the
service of rational action
11. RATIONAL AGENTS
An agent is an entity that perceives and acts
This course is about designing rational agents
Abstractly, an agent is a function from percept
histories to actions.
For any given class of environments and tasks,
we seek the agent (or class of agents) with the
best performance
Caveat: computational limitations make perfect
rationality unachievable
design the best program for the given machine resources
12. FOUNDATION OF AI
Philosophy
made AI conceivable by considering the ideas that the mind is
in some ways like a machine, that it operates on knowledge
encoded in some internal language, and that thought can be
used to choose what actions to take
Mathematics
provided the tools to manipulate statements of logical
certainty as well as uncertain, probabilistic statements. They
also set the groundwork for understanding computation and
reasoning about algorithms.
13. FOUNDATION OF AI
Economics
formalized the problem of making decisions that maximize
the expected outcome to the decision maker
Neuroscience
how the brain works and the ways in which it is similar to and
different from computers
Psychology
idea that humans and animals can be considered information
processing machines
14. FOUNDATION OF AI
Computer engineering
provided the ever-more-powerful machines that make AI
applications possible
Control theory
designing devices that act optimally on the basis of
feedback from the environment. Initially, the
mathematical tools of control theory were quite different
from AI, but the fields are coming closer together
Linguistics
Used knowledge representation which is the study of
how to put knowledge into a form that a computer can
reason with
15. HISTORY OF AI
1943
McCulloch & Pitts: Boolean circuit model of brain
1950
Turing's “Computing Machinery and Intelligence”
1956
Dartmouth meeting: "Artificial Intelligence" adopted
1950s
Early AI programs, including Samuel's checkers
program, Newell & Simon's Logic Theorist, Gelernter's
Geometry Engine
16. HISTORY OF AI
1965
Robinson's complete algorithm for logical reasoning
1966 – 1973
AI discovers computational complexity; neural network
research almost disappears
1969 – 1979
Early development of knowledge-based systems
1980
AI becomes an industry
1986
Neural networks return to popularity
17. HISTORY OF AI
1987
AI becomes a science
1995
The emergence of intelligent agents
18. AGENTS AND ENVIRONMENTS
An agent is anything that can be viewed as
perceiving its environment through
sensors and acting upon that
environment through actuators
Human agent has eyes, ears, and other organs for
sensors and hands, legs, mouth, and other body parts
for actuator
Robotic agent has cameras and infrared range finders
for sensors and various motors for actuators
Software agent receives keystrokes, file contents, and
network packets as sensory inputs and acts on the
environment by displaying on the screen, writing files,
and sending network packets
20. AGENTS AND ENVIRONMENTS
A percept refers to the agent's perceptual
inputs at any given instant.
Agent's percept sequence is the complete
history of everything the agent has ever
perceived.
In general, an agent's choice of action at
any given instant can depend on the entire
percept sequence observed to date, but not
on anything it hasn't perceived.
21. AGENTS AND ENVIRONMENTS
Mathematically speaking, an agent's behavior
is described by the agent function that maps
any given percept sequence to an action.
[f: P* → A]
Internally, the agent function for an artificial
agent will be implemented by an agent
program.
The agent function is an abstract mathematical
description; the agent program is a concrete
implementation, running within some physical
system.
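The f: P* → A abstraction can be sketched directly in code. The sketch below (function and variable names are illustrative, not from any standard library) implements an agent program as a closure over the percept sequence, backed by a partial lookup table; sequences outside the table fall back to NoOp.

```python
def table_driven_agent_program():
    """Agent program realizing f: P* -> A by table lookup (illustrative sketch)."""
    percepts = []  # the percept sequence observed so far
    table = {      # partial tabulation of the agent function
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("B", "Dirty"),): "Suck",
    }

    def program(percept):
        percepts.append(percept)
        # Choose the action for the entire percept sequence to date;
        # fall back to NoOp for sequences outside the partial table.
        return table.get(tuple(percepts), "NoOp")

    return program
```

The closure is the agent program; the table it consults is the (abstract) agent function.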
22. VACUUM CLEANER WORLD
This particular world has just two locations:
squares A and B.
Percepts: location and contents, e.g., [A,
Dirty]
Actions: Left, Right, Suck, NoOp
23. VACUUM CLEANER WORLD
Partial tabulation of a simple agent function
Percept sequence → Action
[A, Clean] → Right
[A, Dirty] → Suck
[B, Clean] → Left
[B, Dirty] → Suck
[A, Clean], [A, Clean] → Right
[A, Clean], [A, Dirty] → Suck
…
[A, Clean], [A, Clean], [A, Clean] → Right
[A, Clean], [A, Clean], [A, Dirty] → Suck
…
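The tabulated behavior is compact enough to be expressed as a rule rather than a table. A minimal sketch (the function name is an assumption):

```python
def reflex_vacuum_agent(percept):
    """Reproduce the tabulated agent function: suck if the current
    square is dirty, otherwise move to the other square."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    return "Right" if location == "A" else "Left"
```

Because the action depends only on the current percept, this is a simple reflex agent; the infinite table collapses to four cases.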
24. VACUUM CLEANER WORLD
Various vacuum-world agents can be
defined simply by filling in the right-hand
column in various ways.
The obvious question is: what is the right way to
fill in the table?
In other words, what makes an agent good
or bad, intelligent or stupid?
Answer: good behavior, captured by the concept of
rationality
25. RATIONAL AGENTS
An agent should strive to “do the right thing”,
based on what it can perceive and the
actions it can perform. The right action is the
one that will cause the agent to be most
successful.
Performance measure: An objective criterion
for success of an agent's behavior
e.g., performance measure of a vacuum-
cleaner agent could be amount of dirt
cleaned up, amount of time taken, amount
of electricity consumed, amount of noise
generated, etc.
26. RATIONALITY
What is rational at any given time depends on:
Performance measure that defines the criterion of success
Agent’s prior knowledge of the environment
Actions that the agent can perform
Agent’s percept sequence to date
This leads to a definition of a rational agent
For each possible percept sequence, a rational agent should
select an action that is expected to maximize its performance
measure, given the evidence provided by the percept
sequence and whatever built-in knowledge the agent has.
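This definition can be read as a maximization: among the available actions, pick the one with the highest expected performance given the evidence so far. A one-line sketch, assuming the designer supplies a scoring function:

```python
def rational_choice(actions, expected_performance):
    """Select the action expected to maximize the performance measure.
    `expected_performance` maps an action to its expected score given
    the percept sequence and built-in knowledge (an assumed input)."""
    return max(actions, key=expected_performance)
```

For example, `rational_choice(["Suck", "Right"], {"Suck": 10, "Right": 1}.get)` selects "Suck". The hard part of rationality is, of course, computing the expected scores, not taking the maximum.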
27. NATURE OF ENVIRONMENT
Task environments are essentially the
“problems” to which rational agents are the
“solutions”
To understand task environments, we
should:
Know how to specify task environments
Know the different properties of task environments
28. SPECIFYING TASK ENVIRONMENTS
Specifying a task environment includes describing
the PEAS:
Performance
Environment
Actuators
Sensors
In designing an agent, the first step must
always be to specify the task environment.
30. PEAS
Agent: Interactive English tutor
Performance measure:
Maximize student's score on test
Environment:
Set of students
Actuators:
Screen display (exercises, suggestions, corrections)
Sensors:
Keyboard
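A PEAS description is just structured data, so it can be written down as such. A sketch using a dataclass (the class name and fields are illustrative, not a standard API):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

# The interactive English tutor from the slide, as a PEAS record.
tutor = PEAS(
    performance=["maximize student's score on test"],
    environment=["set of students"],
    actuators=["screen display (exercises, suggestions, corrections)"],
    sensors=["keyboard"],
)
```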
31. PROPERTIES OF ENVIRONMENTS
Fully Observable/Partially Observable
If an agent’s sensors give it access to the complete state of the
environment needed to choose an action, the environment is
fully observable.
Such environments are convenient, since the agent is freed from
the task of keeping track of the changes in the environment.
Deterministic/Stochastic
An environment is deterministic if the next state of the
environment is completely determined by the current state of the
environment and the action of the agent.
In a fully observable and deterministic environment, the agent
need not deal with uncertainty.
32. PROPERTIES OF ENVIRONMENTS
Static/Dynamic.
A static environment does not change while the agent
is thinking.
The passage of time as an agent deliberates is
irrelevant.
The agent doesn’t need to observe the world during
deliberation.
Discrete/Continuous.
If the number of distinct percepts and actions is
limited, the environment is discrete, otherwise it is
continuous.
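These dimensions amount to a checklist per task environment. For the two-square vacuum world, an assumed classification (note that the agent senses only its own square, so the environment is not fully observable for it):

```python
vacuum_world_properties = {
    "fully_observable": False,  # the agent cannot sense dirt in the other square
    "deterministic": True,      # Suck/Left/Right have fixed, known effects
    "static": True,             # no dirt appears while the agent deliberates
    "discrete": True,           # finitely many percepts and actions
}
```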
33. STRUCTURE OF AGENTS
The job of AI is to design an agent program
that implements the agent function – the
mapping from percepts to actions
This program will run on some sort of
computing device with physical sensors and
actuators – this is called the architecture
Agent = architecture + program
34. TYPES OF AGENT
There are four basic types of agents, in order of
increasing generality:
Simple reflex agents
select actions on the basis of the current percept,
ignoring the rest of the percept history
Model-based reflex agents
Maintain internal state to track aspects of the world
that are not evident in the current percept
Goal-based agents
Act to achieve their goals
Utility-based agents
Try to maximize their own expected “happiness”
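The difference between the first two types can be made concrete in the vacuum world: a model-based reflex agent remembers which squares it has seen clean, so it can stop once every known square is clean. An illustrative sketch (names are assumptions):

```python
def model_based_vacuum_agent():
    """Model-based reflex agent: tracks the last known status of each square."""
    model = {"A": None, "B": None}  # None = status not yet observed

    def program(percept):
        location, status = percept
        model[location] = status        # update internal state from the percept
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in model.values()):
            return "NoOp"               # every square known clean: stop moving
        return "Right" if location == "A" else "Left"

    return program
```

Unlike the simple reflex agent, this one can choose NoOp, an option unavailable when the current percept is all it has.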
39. LEARNING AGENT
A learning agent can be divided into:
Learning element, which is responsible for making
improvements
Performance element, which is responsible for selecting
external actions; it takes in percepts and decides on
actions
Critic, which provides the learning element with feedback
on how the agent is doing, so that it can determine how
the performance element should be modified to do better
in the future
Problem generator, which is responsible for suggesting
actions that will lead to new and informative experiences
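The four components can be wired together schematically. This is only a sketch of the control flow (the class and parameter names are assumptions, not a standard API):

```python
class LearningAgent:
    """Schematic wiring of the four components of a learning agent."""

    def __init__(self, performance_element, learning_element,
                 critic, problem_generator):
        self.performance_element = performance_element  # selects external actions
        self.learning_element = learning_element        # makes improvements
        self.critic = critic                            # feedback on performance
        self.problem_generator = problem_generator      # suggests informative actions

    def step(self, percept):
        # The critic scores current behavior; the learning element uses
        # that feedback to modify the performance element.
        feedback = self.critic(percept)
        self.learning_element(self.performance_element, feedback)
        # Prefer an exploratory action if the problem generator proposes one.
        exploratory = self.problem_generator(percept)
        return exploratory or self.performance_element(percept)
```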