A fascinating View of the Artificial Intelligence Journey.
Ramón López de Mántaras, Ph.D.
Technical and Business Perspectives on the Current and Future Impact of Machine Learning - MLVLC
October 20, 2015
Published in the book "Machine Intelligence 5" (1969), edited by Bernard Meltzer and Donald Michie.
Lashley’s 1948 talk on the limitations of behaviourism laid the foundations for what would become cognitive science. Dartmouth: McCarthy on an artificial language to program computers to solve problems requiring self-reference and conjectures; Minsky on the first ideas about a machine acquiring an abstract model of the environment in which it is placed (ideas that later influenced his seminal paper “Steps Toward Artificial Intelligence”); Newell, Simon, and Shaw on the famous Logic Theorist; Selfridge on his ideas for a pattern-recognition architecture called Pandemonium; Solomonoff on automated induction; Rochester on neural networks; Shannon on the potential of information theory to model the brain; Samuel on his checkers-learning system; Bernstein on chess playing.
These AI founders and pioneers all had in mind that the goal of AI was “strong AI”. Vernor Vinge in 1981 even predicted that the singularity would happen by 2030, but after the so-called “AI winter” of the early 80’s the field moved towards “weak AI”. Exaggerated claims about what AI would achieve had provoked the AI winter; afterwards, researchers started tackling specific problems, trying to assist humans with AI instead of replacing them.
Associativity, commutativity, and the Robbins axiom: NOT(NOT(A OR B) OR NOT(A OR NOT B)) = A. William McCune proved the conjecture in 1996 using the automated theorem prover EQP.
EQP, an abbreviation for equational prover, is an automated theorem-proving program for equational logic.
First-order equational logic consists of quantifier-free terms of ordinary first-order logic, with equality as the only predicate symbol.
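As a quick illustration, the easy direction of the Robbins story can be brute-force checked: the Robbins identity does hold in the two-element Boolean algebra. (A sketch only; the hard direction that McCune’s EQP settled is the converse, that associativity, commutativity, and this one identity force an algebra to be Boolean.)

```python
from itertools import product

def robbins(a: bool, b: bool) -> bool:
    # Robbins identity: NOT(NOT(a OR b) OR NOT(a OR NOT b)) == a
    return (not ((not (a or b)) or (not (a or (not b))))) == a

# Exhaustively verify the identity over {False, True}^2.
assert all(robbins(a, b) for a, b in product([False, True], repeat=2))
print("Robbins identity holds in the two-element Boolean algebra")
```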
In neuroscience, it is going to take decades to understand the deep principles of how the brain works. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition (how we perceive, how we remember, how we act) we have no idea how neurons store information, how they compute, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can use an understanding of the brain to guide us in the construction of intelligent systems.
This is because the mental development that any complex intelligence requires depends on interactions with the environment, and these interactions depend in turn on the body, in particular on the perceptual and motor systems. This, together with the fact that machines will very probably follow socialization and enculturation processes different from ours, further reinforces the point that, however sophisticated they become, their intelligences will be different from ours. The fact that they will be intelligences alien to human intelligence, and therefore alien to human values and needs, should make us reflect on possible ethical limits to the development of Artificial Intelligence.
From Turing To Humanoid Robots - Ramón López de Mántaras
From Turing to Humanoid Robots:
A fascinating view of the AI journey
Ramon Lopez de Mantaras
Artificial Intelligence Research Institute (IIIA)
- Turing on AI
- From Turing to Dartmouth
- Two views on AI: Weak AI vs. Strong AI
- The road traveled: Achievements of (Weak) AI
- The (long) road ahead: From Integrated Systems to Strong AI
Turing on AI
In 1948 Turing predicted that by the end of the 20th century there would be intelligent computers capable of performing logical deductions, acquiring new knowledge inductively, by experience and by evolution, and capable of communicating by means of humanized interfaces. He also speculated about a connection between randomness and creative intelligence, suggesting adding radium to the ACE in the hope that the random decay of radiation would introduce an element of chance into its behaviour.
In his famous 1950 paper he also speculated about emulating the mind of a child and giving it an appropriate education to obtain an adult mind (mental development).
From Turing to Dartmouth
1948: Hixon Symposium on Cerebral Mechanisms in Behavior at Caltech (McCulloch on neural networks, von Neumann, Lashley on the limitations of behaviourism)
1955: Session on Learning Machines at the Western Computer Conference in L.A. (Clark & Farley on Hebbian learning in NNs; Selfridge on image classification; Newell on chess; Pitts on NNs)
1956: Summer Research Project on Artificial Intelligence at Dartmouth College (McCarthy, Minsky, Newell, Simon, Shaw, Selfridge, Solomonoff, Rochester, Shannon, Samuel, Bernstein)
Two views on AI
-The view of the founding fathers: the science and engineering of replicating, even surpassing (singularity?), human-level intelligence in machines (“strong AI”)
-The view in the early 80’s (after the “AI winter”): the science and engineering of designing machines with the capability to perform tasks that, when done by humans, we agree require intelligence (“weak AI”)
Strong versus Weak AI
The Strong AI case
Strong AI refers to AI that matches (or even exceeds)
general human-level intelligence (intelligent machines will
have mental states, consciousness, etc.)
Example: The robots from the movies (HAL
9000, Matrix, Terminator, I Robot, etc.)
The goal of human-level intelligence remains elusive but has inspired, and still inspires, our work on AI, even though most efforts go into building weak AI (or “idiots savants”)
Strong versus Weak AI
The Weak AI case (or the “idiots savants”)
Machines already exhibit specialized intelligences without
worrying about having mental states, consciousness, etc.
All current forms of AI are “weak AI”
We have achieved impressive results along the traveled
“weak AI” road
The road traveled
AI is everywhere (though most of the time it is not visible!):
-Fuel injection systems in our cars are designed using AI algorithms.
-Jet turbines are designed using genetic algorithms.
-10,000 engineers carry out 2,600 maintenance jobs nightly on Hong Kong’s subway, scheduled by an AI system.
-There are millions of AI-powered specialized robots in people’s homes, and robots running on the surface of Mars.
-Computer games (NPCs) use many AI techniques (including ML).
-Web search engines use AI techniques.
-Automatic detection of fraudulent credit card transactions uses ML algorithms.
-Routing of cell phone calls is based on AI.
-Detection of consumer habits is based on AI (ML).
-Complex mathematical theorems have been proven by automatic theorem provers (e.g. the Robbins conjecture).
-There are robots that play soccer.
-An ML system reveals passing patterns in soccer teams.
-There are AI systems composing beautiful music and systems performing music expressively (among other artistic applications).
The road traveled
We have achieved many of the things that the field’s founders used as motivators, but not always in the way the “founding fathers” imagined:
-there is an impressive variety of application achievements, most of them based on the availability of very large data sets processed by very high-performance computers, and not on emulating human mental processes:
-the world’s best chess players are computers
-self-driving cars have successfully driven millions of miles (a car gathers about 1 GB of data per second to make predictions about its surroundings)
-there are high-performance speech recognition systems
-Watson outperformed the best “Jeopardy!” players (and is now turning to medicine)
-an ML system, trained on data from 133,000 patients, can predict heart attacks 4 hours before they happen
The road traveled
In spite of all these great successes along specialized lines in each of the areas of AI, we do not seem to be getting any closer to “general AI”:
1-We have given up the explainability of AI systems (as well as the cognitive plausibility of AI models): the “reasoning” done by today’s massive data-driven AI is a massively complex statistical analysis of an immense number of data points. We have traded the “why” for simply the “what”.
2-We have focused on the isolated components of AI but not on the whole of AI itself: we have wonderful bricks but, to build the house, we need an architecture and the cement to tie the bricks together (sensing, knowledge acquisition & representation, reasoning, communication, action, planning, etc.)
The road ahead: Integrated systems
Intelligence seems to emerge from a complex combination of many specialized abilities, such as sensing, reasoning, learning, planning, socializing, and communicating.
But it is not a mere juxtaposition of these abilities!
Rather, there is some set of deep interdependencies that tie these elements together. For example:
-learning must result in knowledge that is represented so that reasoners, planners, etc. can use it
-perception requires reasoning and learning
Most important challenge:
We need to think about how all the components of an artificial
intelligence should work together and how they need to be
connected (the architecture!). We need to focus on
comprehensive, totally integrated systems.
Integrated systems might be a necessary step towards strong
(human-level) AI (assuming this is a realistic goal!).
The road ahead
Example of Integrated System
Building a multipurpose, social robot that can accumulate diverse knowledge over long periods of time (never-ending learning) and can use it effectively to decide what to do and how to do it:
-A robot’s knowledge must be grounded in the physical world, and the robot must be capable of learning by interacting with the world (“embodied cognition”)
-Because learning is prone to error, and the world is not deterministic, reasoning with such learned knowledge must deal with uncertainty
-The representation languages must be expressive enough to represent the complex connections between objects, places, actions, people, time, and causation (understanding these requires “common sense” knowledge)
-It also requires natural language understanding (Watson does not understand anything!) and scene understanding (both require “common sense” knowledge)
-We should be able to evaluate progress (beyond the Turing test)
Big failures in scene understanding! (image auto-captioned “A red and white bus in front of a building”)
The road ahead
The field is ripe to develop such systems because:
-we have a variety of scalable learning methods that are both relational and statistical, for instance SRL (statistical relational learning)
-the development and rapid deployment of ubiquitous sensing and actuator devices makes it possible to create AI systems robustly grounded in direct experience with the world, learning from interaction with it (e.g. work on Developmental Robotics)
-there is a growing number of successful applications of behavior understanding based on computer vision and ML
-we have made substantial progress in automatically extracting named entities, and facts relating those entities, from the web using “Learning by Reading” (work of T. Mitchell et al.)
-we have made substantial progress in other ML techniques, particularly learning by experience, learning by imitation, transfer learning, deep learning, and “never-ending learning” (for instance CMU’s NELL and NEIL systems)
-we have made substantial progress in multi-agent systems (MAS) to model social cognition
-we have an ever-increasing amount of computational power.
Developmental Robotics: learning a musical instrument and playing by imitation
(in collaboration with Imperial College)
The road ahead: Very ambitious predictions
-Robotic scientists will serve as companions in discovery by formulating hypotheses and pursuing their confirmation (initial work on the ADAM and EVE systems by R. King et al., “The Automation of Science”, Science 324 (5923): 85–89)
-AI will play a central role in solving challenges in energy, the
environment, and in healthcare.
-A team of robots will beat the world’s human soccer
champion team. (H. Kitano)
-AI and other sciences (biology, material sciences,
nanotechnology, economics,…) will come together and will have
wide-ranging influences on our ideas about AI and on the
machines we will build.
Hydrogen muscle for silent robots
Copper and nickel-based metal hydride powder is compressed into peanut-sized pellets and
secured in a vessel. Hydrogen is pumped in to “charge” the pellets with the gas. A heater coil
surrounds the vessel. Heat breaks the weak chemical bonds and releases the stored hydrogen.
(Kim & Vanderhoff, Smart Mat. and Struct., 18, 2009 DOI: 10.1088/0964-1726/18/12/125014)
(diagram label: inflatable rubber tube surrounded by Kevlar)
Chen, Briscoe, Armes, Klein, “Lubrication at Physiological Pressures by Polyzwitterionic Brushes”, Science 323, 2009
(figure notes: performs well at pressures up to 5 megapascals; 60 nm backbone)
Touch sensitive artificial skin
Capacitive copper contacts
A layer of silicone rubber acts as a spacer
between those contacts and an outer layer
of Lycra that carries a metal contact above
each copper contact. The whole constitutes
a pressure-sensing capacitor that can detect
a touch as light as 1 gram.
(Schmitz et al., IEEE Transactions on Robotics, 27(3))
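For intuition, each taxel described above behaves roughly like a parallel-plate capacitor: pressure compresses the silicone spacer, shrinking the gap between the contacts and raising the capacitance, which the readout electronics detect. A minimal sketch of that relation (the pad area, spacer thickness, and permittivity below are illustrative assumptions, not values from the cited paper):

```python
# Parallel-plate model of one capacitive taxel: C = eps0 * eps_r * A / d.
# All numeric values here are assumed for illustration only.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.0        # assumed relative permittivity of the silicone spacer

def taxel_capacitance(area_m2: float, gap_m: float) -> float:
    """Capacitance of a parallel-plate taxel with plate area A and gap d."""
    return EPS0 * EPS_R * area_m2 / gap_m

c_rest = taxel_capacitance(4e-6, 500e-6)     # 2x2 mm pad, 0.5 mm spacer at rest
c_pressed = taxel_capacitance(4e-6, 450e-6)  # spacer compressed ~10% by a touch
print(f"rest: {c_rest * 1e15:.1f} fF, pressed: {c_pressed * 1e15:.1f} fF")
```

A light touch only needs to produce a measurable relative change in C, which is why such a soft, thin spacer can resolve pressures down to about a gram.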
A carbon- or metal-charged polymer coats the fingers and palm. The transverse electrical resistance varies as a function of the pressure. It detected touches greater than 20 grams and was applied to tactile object recognition.
(López de Mántaras, PhD Thesis, Univ. Paul Sabatier)
-AI is a well-established research discipline with demonstrated successes and clear trajectories for its immediate future (but no “singularity”: the brain is much too complex!).
-AI techniques are everywhere (although often invisible): AI algorithms increasingly run our lives: they find books, movies, jobs, and dates for us, manage our investments, and discover new drugs.
-The most exciting opportunities for research lie at the interdisciplinary boundaries of AI with biology, linguistics, economics, material sciences, etc., which will provide insights and technologies towards building large-scale integrated systems.
-AI is mature enough to undertake research on integrated systems (perhaps leading towards the goals of “strong AI”) and not only to keep working on massive data-driven AI.
-Fragmentation of the field, limited funding, and inadequate education curricula are also strong limiting factors:
…progress will be slow because there is no direct, significant funding to pursue the “strong AI” goal of human-level intelligence (although there is, and will be, funding for integrated projects, particularly in robotics, because it requires significant integration),
…because the field is dominated by massive data-driven AI,
…and because AI suffers from fragmentation (separate conferences and over-specialized college curricula).
No matter how sophisticated future artificial intelligences become, they will necessarily be different from human intelligence:
THE BODY SHAPES THE WAY WE THINK
These artificial intelligences will be alien to human needs, and therefore we should put limits on the development of AI.
”KEEP CALM AND FORGET ABOUT THE SINGULARITY”