This document summarizes Aaron Sloman's presentation at AISB'08 on what designers of artificial companions need to understand about biological ones. Sloman discusses the difficulty of the task and argues that current AI is not capable of replicating the capabilities of young children. He outlines "Type 1" goals for artificial companions, focused on engagement, and "Type 2" goals, focused on enabling functions such as helping users, which are much harder to achieve. Sloman asserts that progress requires understanding how human capabilities, such as understanding environments and minds and developing new motives, emerge from biological and developmental factors. The key is replicating some of the generic learning abilities of young humans in order to build more advanced functions layer by layer.
Virtuality, causation and the mind-body relationship -- Aaron Sloman
Extends my previous introductions to virtual machines and their role both in artefacts and products of biological evolution. This attempts to correct various erroneous assumptions about computation, functionalism, supervenience, life, information, and causation. See also http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
How to Build a Research Roadmap (avoiding tempting dead-ends) -- Aaron Sloman
What's a Research Roadmap For?
Why do we need one?
How can we avoid the usual trap of making bold promises to do X, Y and Z,
then hope that our previous promises will not be remembered the next time we apply for funds to do X, Y and Z?
How can we produce a sensible, well informed roadmap?
Originally presented at the euCognition Research Roadmap discussion in Munich on 12 Jan 2007
This suggests a way to avoid tempting dead ends (repeating old promises that proved unrealistic): examine many long-term goals, including descriptions of existing human and animal competences not yet achieved by robots, then work backwards systematically by investigating the requirements for those competences, the requirements for meeting those requirements, and so on. Instead of generating a single linear roadmap, this should produce a partially ordered network of intermediate targets, leading back to short-term goals that may be achievable starting from where we are.
Such a roadmap will inevitably have mistakes: over-optimistic goals, missing preconditions, unrecognised opportunities. But if the work is done in many teams in a fully open manner with as much collaboration as possible, it should be possible to make faster, deeper, progress than can be achieved by brain-storming discussions of where we can get in a few years.
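The "partially ordered network of intermediate targets" produced by working backwards from goals can be sketched as a dependency graph. The competence names below are invented placeholders, not from the talk; the point is only that a topological sort of the requirements network yields an ordering in which every prerequisite precedes the target that needs it.

```python
from graphlib import TopologicalSorter

# Hypothetical long-term goal expanded backwards into prerequisites.
# Each entry maps a target competence to the competences it requires.
requirements = {
    "human-like 3-D manipulation": {"grasp affordance perception", "fine motor control"},
    "grasp affordance perception": {"3-D scene interpretation"},
    "fine motor control": {"proprioceptive feedback loops"},
    "3-D scene interpretation": set(),
    "proprioceptive feedback loops": set(),
}

# static_order() yields the targets in an order consistent with the
# partial order: prerequisites always come before what depends on them.
order = list(TopologicalSorter(requirements).static_order())
print(order)
```

Note that several valid orderings exist, which is exactly the "partially ordered network rather than a single linear roadmap" point: independent short-term targets can be pursued in parallel.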
Grady Booch, IBM Fellow and IBM’s Chief Scientist for Watson, presented “Embodied Cognition with Project Intu” as part of the Cognitive Systems Institute Speaker Series on December 8, 2016
Do Intelligent Machines, Natural or Artificial, Really Need Emotions? -- Aaron Sloman
(Updated on 14 Jan 2014 -- with substantial revisions.)
Many people believe that emotions are required for intelligence. I argue that this is mostly based on (a) wishful thinking and (b) a failure adequately to analyse the variety of types of affective states and processes that can arise in different sorts of architectures produced by biological evolution or required for artificial systems. This work is a development of ideas presented by Herbert Simon in the 1960s in his 'Motivational and emotional controls of cognition'.
Distinguishes Humean (statistics-based) notions of causation and Kantian (deterministic, structure-based) notions of causation, arguing that intelligent robots and animals need both, but each requires a combination of competences, and various kinds of partial competence of both kinds are possible.
Possibilities between form and function (or between shape and affordances) -- Aaron Sloman
I discuss the need for an intelligent system, whether it is a robot, or some sort of digital companion equipped with a vision system, to include in its ontology a range of concepts that appear not to have been noticed by most researchers in robotics, vision, and human psychology. These are concepts that lie between (a) concepts of "form", concerned with spatially located objects, object parts, features, and relationships and (b) concepts of affordances and functions, concerned with how things in the environment make possible or constrain actions that are possible for a perceiver and which can support or hinder the goals of the perceiver.
Those intermediate concepts are concerned with processes that *are* occurring and processes that *can* occur, and the causal relationships between physical structures/forms/configurations and the possibilities for and constraints on such processes, independently of whether they are processes involving anyone's actions or goals.
These intermediate concepts relate motions and constraints on motion to both geometric and topological structures in the environment and the kinds of 'stuff' of which things are composed, since, for example, rigid, flexible, and fluid stuffs support and constrain different sorts of motions.
They underlie affordance concepts. Attempts to study affordances without taking account of the intermediate concepts are bound to prove shallow and inadequate.
Notes for invited talk at Dagstuhl Seminar: ``From Form to Function'' Oct 18-23, 2009 http://www.dagstuhl.de/en/program/calendar/semhp/?semnr=09431
Slides prepared for a broadcast presentation to members of Computing at School http://www.computingatschool.org.uk/, about why computing education should be about more than the science and technology required for useful or entertaining applications. Instead, learning about forms of information processing systems can give us new, deeper ways of thinking about many old phenomena, e.g. the nature of mind and the evolution of minds of various kinds. This supports the claim that the study of computation is as much a science as physics or psychology, rather than just a branch of engineering -- as famously suggested by Fred Brooks.
An Introduction to emotions from a neuropsychological perspective.
Presentation for talk at CBCS, Allahabad
(C) Sumitava Mukherjee
Email: smukh@cognobytes.com / smukh@cbcs.ac.in
URL: http://people.cognobytes.com/smukh
5/23/2009 - First draft posted.
5/24/2009 - Replaced with draft 2, adding some recommendations for OLPC country deployment. Downloads as a 27 MB PDF file.
http://www.olpclearningclub.org
One of three presentations we did for the Canadian Network for Innovation in Education (CNIE) online 2021 conference.
A workshop on how to encourage creativity in the context of educational applications based on Artificial Intelligence (AI), which personalise learning sequences adapted to each student’s competency level, learning style, and rhythm, and can adjust the physical environment to provide the greatest comfort for learning. Smart learning spaces use accumulated data from each student, as well as “big data” from all users, to improve the accuracy of their choices. This can introduce a “digital bubble” that limits, shapes, and defines the space where the learner can grow and explore, produced when AI takes control of the student’s immediate learning zone. To benefit from AI-based personalisation, we need strategies for avoiding the risks of isolation and cognitive bias; we need to create a hybrid learning environment that federates teachers, learners, and AI agents.
In this environment, creativity is not just a global competence. It is the core skill, needed in all types of lifelong learning scenarios, to meet the challenges of the SDGs, including inclusion and equity. As educators we need to help learners to live in a world where intelligent non-human agents are commonplace. This means learning new ways of collaborating with each other and with machines. Faced with so much disruption from environmental, social, and technological challenges, we need to integrate notions of mediation, co-working and negotiation, and foster flexibility of response in a smart pedagogy that encourages creativity along with communication, digital culture, and collaborative problem-solving – a pedagogy that highlights the importance of surprise, inquiring minds, ethics, aesthetics, self-realization, motivation, joy, and other essentially human learning characteristics.
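The personalisation loop the workshop describes (adapting the next learning activity to the student's estimated competency, then updating that estimate from results) can be sketched minimally. The activity names, difficulty values, and update rule below are invented for illustration; real systems use far richer student models.

```python
# Toy adaptive-learning loop: pick the activity whose difficulty best
# matches the learner's estimated level, then nudge the estimate
# toward (success) or away from (failure) that difficulty.
activities = {"intro-quiz": 0.2, "guided-exercise": 0.5, "open-project": 0.8}

def next_activity(competency):
    # Choose the activity closest to the learner's current level.
    return min(activities, key=lambda a: abs(activities[a] - competency))

def update(competency, difficulty, succeeded, rate=0.3):
    # Move the estimate toward a target just above or below the
    # attempted difficulty, depending on the outcome.
    target = difficulty + (0.1 if succeeded else -0.1)
    return competency + rate * (target - competency)

level = 0.45
for outcome in [True, True, False]:
    act = next_activity(level)
    level = update(level, activities[act], outcome)
print(round(level, 2))
```

Note how such a loop, left unchecked, keeps serving activities near the current estimate: a concrete illustration of the "digital bubble" risk the abstract warns about.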
Kantian Philosophy of Mathematics and Young Robots: Could a baby robot grow u... -- Aaron Sloman
There is a sequel to this, with more emphasis on 'toddler theorems' and kinds of child science here:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#toddler
It is not yet stable enough to be uploaded to slideshare.
The webinar gave participants an exploration into how to use and incorporate coding activities in everyday learning as well as identifying web 2.0 tools and apps to support engaging students in coding activities across the school. The session also provided practical examples of how to implement coding activities and highlighted the value of coding in relation to curriculum needs.
Ontologies for baby animals and robots: From "baby stuff" to the world of adul... -- Aaron Sloman
In contrast with ontology developers concerned with a symbolic or digital environment (e.g. the internet), I draw attention to some features of our 3-D spatio-temporal environment that challenge young humans and other intelligent animals and will also challenge future robots. Evolution provides most animals with an ontology that suffices for life, whereas some animals, including humans, also have mechanisms for substantive ontology extension based on results of interacting with the environment. Future human-like robots will also need this. Since pre-verbal human children and many intelligent non-human animals, including hunting mammals, nest-building birds and primates can interact, often creatively, with complex structures and processes in a 3-D environment, that suggests (a) that they use ontologies that include kinds of material (stuff), kinds of structure, kinds of relationship, kinds of process (some of which are process-fragments composed of bits of stuff changing their properties, structures or relationships), and kinds of causal interaction and (b) since they don't use a human communicative language they must use information encoded in some form that existed prior to human communicative languages both in our evolutionary history and in individual development. Since evolution could not have anticipated the ontologies required for all human cultures, including advanced scientific cultures, individuals must have ways of achieving substantive ontology extension. The research reported here aims mainly to develop requirements for explanatory designs. The attempt to develop forms of representation, mechanisms and architectures that meet those requirements will be a long term research project.
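The idea of "substantive ontology extension" (adding a genuinely new kind when interaction with the environment produces something the innate ontology cannot classify) can be illustrated with a toy model. All kind names and features here are invented; this is a sketch of the requirement, not of any proposed mechanism.

```python
# Toy model: an agent classifies observations against its current
# ontology of "kinds of stuff"; when an observation fits no known
# kind, it coins a new kind rather than forcing a fit.

class Ontology:
    def __init__(self):
        # Innate "baby stuff" ontology: kinds defined by feature sets.
        self.kinds = {
            "rigid-stuff": {"holds-shape", "resists-pressure"},
            "fluid-stuff": {"flows", "takes-container-shape"},
        }

    def classify(self, features):
        # An observation belongs to a kind if it shows all of that
        # kind's defining features.
        for name, defining in self.kinds.items():
            if defining <= features:
                return name
        return None

    def extend(self, name, defining_features):
        # Substantive extension: a new kind not definable in terms
        # of the existing kinds.
        self.kinds[name] = set(defining_features)

onto = Ontology()
obs = {"holds-shape", "bends-without-breaking"}
if onto.classify(obs) is None:
    onto.extend("flexible-stuff", obs)
print(onto.classify(obs))
```

The crucial (and here unmodelled) hard problem is deciding *when* an extension is warranted and *what* the new kind's defining features should be: exactly the explanatory requirement the abstract identifies.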
Why symbol-grounding is both impossible and unnecessary, and why theory-tethe... -- Aaron Sloman
Introduction to key ideas of semantic models, implicit definitions, and symbol tethering, using ideas from philosophy of science and model-theoretic semantics to explain why symbol grounding theory is misguided: there is no need for all symbols used by an intelligent agent to be 'grounded' in terms of experience or sensory-motor patterns. Rather, most of the meaning of a symbol may come from its role in a powerful explanatory theory, though the theory should have some connection with experiments and observations in order to be applicable to the world. That is not the same as requiring every symbol to be linked to experiences, experiments or measurements.
Symbol grounding theory is a modern version of the philosophical theory of 'concept empiricism', which was refuted by the philosopher Immanuel Kant in the 18th century.
Vision is our most developed sense and one upon which we rely to make many decisions, conscious or otherwise. Many of our everyday interactions, such as driving a car, greeting familiar faces on the street, or deciding which dish to order at a restaurant, are guided by our visual sense. For the most part, this works well. But sometimes we are reminded of our visual system’s limitations and surprising behavior through optical illusions that exploit misjudgments in size, distance, depth, color and brightness, among many others. This lecture presents and explains a diverse collection of visual perception phenomena that challenge our common knowledge of how well we detect, recognize, compare, measure, interpret, and make decisions upon the information that arrives at our brain through our eyes. It also explains the relationships between the latest developments in human vision research and emerging technologies, such as: self-driving cars, face recognition and other forms of biometrics, and virtual reality. After seeing a large number of examples of optical illusions and other visual phenomena, this talk will make you wonder: can you really trust what you see?
What is computational thinking? Who needs it? Why? How can it be learnt? ... -- Aaron Sloman
What is computational thinking?
Who needs it? Why? How can it be learnt?
Can it be taught? How?
Slides for invited presentation at Conference of ALT (Association for Learning Technology) 11th Sept 2012, University of Manchester.
PDF available (easier for printing, selecting text, etc.):
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#talk105
A video of the actual presentation (using no slides because of a projector problem) is now available here
http://www.youtube.com/watch?v=QXAFz3L2Qpo
It has also been made available as "slide 47" after the PDF presentation on this page.
I attempt to generalise Jeannette Wing's notion of "Computational thinking" (ACM 2006) to include attempting to understand much biological information processing, and try to show the necessity for educators to do deep computational thinking if they wish to facilitate processes of learning.
Construction kits for evolving life -- Including evolving minds and mathemati... -- Aaron Sloman
Darwin's theory of evolution by natural selection does not adequately explain the generative power of biological evolution. For that we need to understand the mechanisms involved in producing new options for natural selection, without which there would always be the same set of possibilities available. This applies also to the construction kits: evolution can produce new construction kits, "Derived" construction kits, based on the Fundamental construction kit provided by the physical universe and its originally lifeless physical and chemical mechanisms. It turns out that life needs both concrete and abstract construction kits, of ever increasing complexity. This paper introduces some basic ideas, though far more empirical and theoretical research is required, combining multiple disciplines. Slideshare no longer allows presentations to be updated, so I no longer use it. For a later version search for: Sloman "Construction kits for evolving life" cogaff. Most of my slideshare presentations have newer versions in the CogAff web site at the University of Birmingham, UK. (Not Alabama)
The Turing Inspired Meta-Morphogenesis Project -- The self-informing universe... -- Aaron Sloman
This replaces an earlier version. The latest version, with clickable links, is available at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Reorganised several times since first uploaded: most recently 25 Jan 2016
-------------------------------------------------------------------------------------------------------
Slides include link to video of lecture (158MB) http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#ailect2-2015
-------------------------------------------------------------------------------------------------------------
Two questions are shown to have deep connections: What are the functions of vision in animals? and How did human languages evolve? The answer given here is that the functions of vision need to be supported by richly structured internal languages (forms of representation used for acquiring, storing, manipulating, deriving and using information), from which it follows that internal languages must have evolved before languages for communication.
---------------------------------------------------------------------------------------------------------------
The account of the functions of vision mentions early AI vision, the impact of Marr, and the even greater impact of Gibson, but argues that they did not recognize all the functions of vision, e.g. the uses of vision in making mathematical discoveries leading to Euclid's Elements.
---------------------------------------------------------------------------------------------------------------
Many questions are left unanswered by this research, which is part of the Meta-Morphogenesis project, introduced here:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
---------------------------------------------------------------------------------------------------------------
A slideshare presentation on "origins of language" by Jasmine Wong adds some useful additional evidence, but presents a simpler theory:
http://www.slideshare.net/JasmineWong6/origins-of-language
---------------------------------------------------------------------------------------------------------------
Minor corrections and additions: 30 Mar 2015, 1 Apr 2015, 15 Apr 2015, 12 Nov 2015.
If learning maths requires a teacher, where did the first teachers come from?
or
Why (and how) did biological evolution produce mathematicians?
Presentation at Symposium on Mathematical Cognition AISB2010
Part of the Meta-Morphogenesis Project. See also this discussion of toddler theorems:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
Evolution of human mathematics from earlier abilities to perceive, use and reason about affordances, spatial possibilities and constraints.
The necessity of mathematical truth does not imply infallibility of mathematical reasoning. (Lakatos).
Toddlers discover theorems without knowing it. Later they may learn to reflect on and talk about what they have learnt. Compare Annette Karmiloff-Smith on "Representational re-description".
Why is it still so hard to give robots and AI systems the ability to reason spatially as mathematicians do (except in simple special cases, e.g. where space is discretised)?
A multi-picture challenge for theories of vision -- Aaron Sloman
(Modified 7th June 2013 to include some droodles.)
Some informal experiments are presented whose results help to challenge most theories of vision and proposed mechanisms of vision.
A possible explanatory information-processing architecture is proposed, based on multiple dynamical systems, grown during an individual's life time, most of which are dormant most of the time, but which can be very rapidly activated and instantiated so as to build a multi-ontology interpretation of the currently, and recently, available visual information -- e.g. turning a corner into a busy street in an unfamiliar city. As far as I know, there is no working implementation of such a system, though a very early prototype called Popeye (implemented in Pop2) around 1976 is summarised. Many hard unsolved problems remain, though most of them are ignored by research on vision that makes narrow assumptions about the functions of biological vision.
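The proposed architecture of many mostly-dormant subsystems, rapidly activated to build a multi-ontology interpretation of the current scene, can be caricatured in a few lines. The trigger words, subsystem names, and string-matching activation below are invented placeholders standing in for whatever rich mechanisms real perception would need.

```python
# Toy sketch: special-purpose interpreters are registered but dormant;
# cues in the current input activate only the relevant ones, and their
# partial interpretations are combined into one multi-ontology result.

class Subsystem:
    def __init__(self, name, trigger, interpret):
        self.name, self.trigger, self.interpret = name, trigger, interpret
        self.active = False

registry = [
    Subsystem("traffic-scene", "street", lambda s: "vehicles, lanes, crossings"),
    Subsystem("social-scene", "crowd", lambda s: "faces, gestures, groups"),
    Subsystem("indoor-scene", "room", lambda s: "furniture, walls, doors"),
]

def perceive(scene_description):
    # Activate only subsystems whose trigger occurs in the input.
    interpretation = {}
    for sub in registry:
        sub.active = sub.trigger in scene_description
        if sub.active:
            interpretation[sub.name] = sub.interpret(scene_description)
    return interpretation

print(perceive("turning a corner into a busy street with a crowd"))
```

The example deliberately mirrors the "turning a corner into a busy street" scenario from the abstract: two previously dormant interpreters fire at once, and their outputs jointly describe the scene.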
Meta-Morphogenesis, Evolution, Cognitive Robotics and Developmental Cognitive... -- Aaron Sloman
How could a planet, condensed from a cloud of dust, produce minds -- and products of minds, along with microbes, mice, monkeys, mathematics, music, marmite, murder, megalomania, and all other forms and products of life on earth (and possibly elsewhere)?
This presentation introduces the ambitious, multi-disciplinary Meta-Morphogenesis project, partly inspired by Turing's 1952 paper on morphogenesis. It may lead to an answer, by identifying the many transitions between different types and mechanisms of biological information processing, including transitions that changed the mechanisms of change, altering forms of evolution, development, learning, culture and ecosystem dynamics. One of the questions raised is whether chemical information-processing is capable of supporting processes that would be infeasible or impossible on a Turing machine or conventional computer.
A 2 hour 30 minute recording of this tutorial was made by Adam Ford, available here: http://www.youtube.com/watch?v=BNul52kFI74 (a new version was installed on 14 Jun 2013 with titles added and an audio problem fixed). Also available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/movies/#m-m-tut
"Information" here is used in Jane Austen's sense, not Claude Shannon's sense. See http://www.cs.bham.ac.uk/research/projects/cogaff/misc/austen-info.html
More information about the project is available here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
Adam Ford interviewed the author about some of these topics at the AGI conference in December 2012 in this video: http://www.youtube.com/watch?v=iuH8dC7Snno
Related PDF presentations can be found here http://www.cs.bham.ac.uk/research/projects/cogaff/talks
What's vision for, and how does it work? From Marr (and earlier) to Gibson and... Aaron Sloman
ABSTRACT
Very many researchers assume that it is obvious what vision (e.g. in humans) is for, i.e. what functions it has, leaving only the problem of explaining how those functions are fulfilled. So they postulate mechanisms and try to show how those mechanisms can produce the required effects, and also, in some cases, try to show that those postulated mechanisms exist in humans and other animals and perform the postulated functions. The main point of this presentation is that it is far from obvious what vision is for - and J.J. Gibson's main achievement is drawing attention to some of the functions that other researchers had ignored. I'll present some of the other work, show how Gibson extends and improves it, and then point out how much more there is to the functions of vision and other forms of perception than even Gibson had noticed.
In particular, much vision research, unlike Gibson, ignores vision's function in on-line control and perception of continuous processes; and nearly all, including Gibson's work, ignores meta-cognitive perception, and perception of possibilities and constraints on possibilities and the associated role of vision in reasoning. If we don't understand that, we cannot understand how biological mechanisms arising from requirements for being embodied in a rich, complex and changing 3-D environment underpin human mathematical capabilities, including the ability to reason about topology and Euclidean geometry.
Last updated: 1st March 2014, 10 June 2015 (additional links)
Helping Darwin: How to think about evolution of consciousness (Biosciences ta...Aaron Sloman
ABSTRACT
Many of Darwin's opponents, and some of those who accepted the theory of evolution as regards physical forms, objected to the claim that human mental functions, and
consciousness in particular, could be products of evolution. There were several reasons for this opposition, including unanswered questions as to how physical mechanisms could produce mental states and processes -- an old, and still surviving, philosophical problem.
A new answer is now available. Evolution could have produced the "mysterious" aspects of consciousness if, like engineers developing computing systems in the last six or seven decades, evolution encountered and "solved" increasingly complex problems of representation and control (including self-monitoring and self-control) by using systems with increasingly abstract mechanisms based on virtual machines, including most
recently self-monitoring virtual machines.
These capabilities are, like many capabilities of computer-based systems, implemented in non-physical virtual machinery which, in turn, are implemented in lower level physical mechanisms.
This would require far more complex virtual machines than human engineers have so far created. No one knows whether the biological virtual machines could have been
implemented in the discrete-switch technology used in current computers.
These ideas were not available to Darwin and his contemporaries: most of the concepts, and the technology, involved in creation and use of sophisticated virtual machines were developed only in the last half century, as a by-product of a large number of design decisions by hardware and software engineers solving different problems.
Virtual Machines and the Metaphysics of Science Aaron Sloman
Philosophers regularly use complex (running) virtual machines (not virtual realities) composed of enduring interacting non-physical subsystems (e.g. operating systems, word-processors, email systems, web browsers, and many more). These VMs can be subdivided into different kinds with different types of functions, e.g. "specific-function VMs" and "platform VMs" (including language VMs, and operating system VMs) that provide support for a variety of different (possibly concurrent) "higher level" VMs, with different functions.
Yet, almost all ignore (or misdescribe) these VMs when discussing functionalism, supervenience, multiple realisation, reductionism, emergence, and causation.
Such VMs depend on many hardware and software designs that interact in very complex ways to maintain a network of causal relationships between physical and virtual entities and processes.
I'll try to explain this, and show how VMs are important for philosophy, in part because evolution long ago developed far more sophisticated systems of virtual machinery (e.g. running on brains and their surroundings) than human engineers have so far. Most are still not understood.
This partly accounts for the apparent intractability of several philosophical problems.
E.g. running VM subsystems can be disconnected from input-output interactions for extended periods, and some can have more complexity than the available input/output bandwidth can reveal.
Moreover, despite the advantages of VMs for self-monitoring and self control, they can also lead to self-deception.
For a lot of related material see Steve Burbeck's web site http://evolutionofcomputing.org/Multicellular/Emergence.html
(A related presentation debunking the "hard problem" of consciousness is also in this collection.)
Why the "hard" problem of consciousness is easy and the "easy" problem hard....Aaron Sloman
The "hard" problem of consciousness can be shown to be a non-problem because it is formulated using a seriously defective concept (the concept of "phenomenal consciousness" defined so as to rule out cognitive functionality and causal powers).
So the hard problem is an example of a well known type of philosophical problem that needs to be dissolved (fairly easily) rather than solved. For other examples, and a brief introduction to conceptual analysis, see http://www.cs.bham.ac.uk/research/projects/cogaff/misc/varieties-of-atheism.html
In contrast, the so-called "easy" problem requires detailed analysis of very complex and subtle features of perceptual processes, introspective processes and other mental processes, sometimes labelled "access consciousness": these have cognitive functions, but their complexity (especially the way details change as the environment changes or the perceiver moves) is considerable and very hard to characterise.
"Access consciousness" is complex also because it takes many different forms, since what individuals are conscious of, and what uses being conscious of things can be put to, can vary hugely, from simple life forms, through many other animals and human infants, to sophisticated adult humans.
Finding ways of modelling these aspects of consciousness, and explaining how they arise out of physical mechanisms, requires major advances in the science of information processing systems -- including computer science and neuroscience.
There are empirical facts about introspection that have generated theories of consciousness but some of the empirical facts go unnoticed by philosophers.
The notion of a virtual machine is introduced briefly and illustrated using Conway's "Game of life" and other examples of virtual machinery that explain how contents of consciousness can have causal powers and can have intentionality (be able to refer to other things).
The beginnings of a research program are presented, showing how more examples can be collected and how notions of virtual machinery may need to be developed to cope with all the phenomena.
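The Game of Life example mentioned above can be sketched in a few lines of Python (my sketch, not code from the presentation): a "glider" is a persisting, moving entity that exists only in the virtual machine defined by the rules, not at the level of the underlying grid cells.

```python
# Conway's Game of Life as a minimal virtual machine: the "glider" is a
# higher-level entity with its own behaviour (it travels diagonally),
# implemented in, but not identical to, the grid of cells beneath it.
from collections import Counter

def step(live):
    """One generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider pattern (x right, y down).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g4 = glider
for _ in range(4):
    g4 = step(g4)
# After 4 generations the same shape reappears, shifted by (1, 1):
assert g4 == {(x + 1, y + 1) for (x, y) in glider}
```

No individual cell "moves", yet the glider does: a simple instance of how virtual-machine entities can have properties and causal roles not visible at the implementation level.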
What is science? (Can There Be a Science of Mind?) (Updated August 2010)Aaron Sloman
This presentation gives an introduction to philosophy of science, though a rather idiosyncratic one, stressing science as the search for powerful new ontologies rather than merely laws. You can't express a law unless you have
an ontology including the items referred to in the law (e.g. pressure, volume, temperature). The talk raises a
number of questions about the aims and methods of science, about the differences between the physical sciences and
the science of information-processing systems (e.g. organisms, minds, computers), whether there is a unique truth
or final answers to be found by science, whether scientists ever prove anything (no -- at most they show that some
theory is better than any currently available rival theory), and why science does not require faith (though
obstinacy can be useful). The slides end with a section on whether a science of mind is possible, answering yes, and explaining how.
Evolution of minds and languages: What evolved first and develops first in ch...Aaron Sloman
SLIDESHARE NOW STUPIDLY DOES NOT ALLOW SLIDES TO BE UPDATED. To find the latest version of these slides go to http://www.cs.bham.ac.uk/research/projects/cogaff//talks/#talk111
The version posted here was last updated on 16 March 2015. There have been several changes since then on the alternative site. Why did Slideshare take such a stupid decision (after being bought by LinkedIn)?
A theory is presented according to which "languages" with structural variability and compositional semantics evolved in several species for *internal* use (e.g. in perception, planning, learning, forming goals, deciding, etc.) before *external* languages evolved for communication. The theory implies that such internal languages develop in young humans before a language for communication.
It is also noted that the standard notion of 'compositional semantics' has to allow for the propagation of semantic content from parts to wholes to be potentially context sensitive at every stage: i.e. current context, speaker intentions, user knowledge, shared goals, can all affect how semantics of larger parts are derived from semantics of smaller parts+syntactic structure. This applies as much to non-verbal languages as to verbal ones.
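This point can be illustrated with a toy sketch (my own, not from the abstract): the function that derives the meaning of a whole from the meanings of its parts takes the current context as an extra argument at every composition step, so the same parts can yield different wholes in different contexts.

```python
# Toy illustration of context-sensitive compositional semantics.
# All names (lexicon entries, context keys) are invented for the example.

def meaning(phrase, context):
    """Derive the meaning of a phrase, consulting context at every step."""
    if isinstance(phrase, str):
        # Indexicals get their meaning from context, not from a fixed lexicon.
        indexicals = {"here": context["place"], "that": context["salient"]}
        return indexicals.get(phrase, phrase)
    head, arg = phrase
    # Composition itself is context-sensitive: each sub-part is resolved
    # against the current context before being combined.
    return f"{meaning(head, context)}({meaning(arg, context)})"

ctx1 = {"place": "kitchen", "salient": "kettle"}
ctx2 = {"place": "garden", "salient": "spade"}
assert meaning(("fetch", "that"), ctx1) == "fetch(kettle)"
assert meaning(("fetch", "that"), ctx2) == "fetch(spade)"
```

The same syntactic structure ("fetch that") acquires different semantic content in different contexts, which is the property the abstract says standard compositional semantics must accommodate.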
This theory of how human languages evolved from earlier 'internal languages' (GLs) is inconsistent with the best known published theories of evolution or development of language.
But that does not make it wrong. Moreover, this theory is supported by empirical evidence including the example of deaf children in Nicaragua: http://en.wikipedia.org/wiki/Nicaraguan_Sign_Language
This is a mixture of philosophy, artificial intelligence, Cognitive science, Software engineering and theoretical biology, presented at the Workshop on Philosophy and Engineering, London, 10-12 November 2008.
What designers of artificial companions need to understand about biological ones
1. Invited Presentation at Public Session of AISB’08, 3rd April 2008, Aberdeen
http://www.aisb.org.uk/convention/aisb08/
What designers of
artificial companions
need to understand
about biological ones.
Aaron Sloman
http://www.cs.bham.ac.uk/~axs/
These slides (expanded since the event) are in my ‘talks’ directory:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#aisb08
AISB 2008 Slide 1 Last revised: March 9, 2009
2. Apologies
I apologise
• For slides that are too cluttered: I write my slides so that they can be read by people
who did not attend the presentation.
• So please ignore what’s on the screen unless I draw attention to something.
No apologies
For using linux and latex
This file is PDF produced by pdflatex.
NOTE:
To remind me to skip some slides during the presentation, they are ‘greyed’.
This presentation, in Aberdeen in April 2008, overlaps with a presentation at a meeting on Artificial/Digital
companions at the Oxford Internet Institute in October 2007, and several other presentations in this
directory:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/
3. Plan – may not have enough time
This is what I aim to do
• Present some examples (using videos) of biological competences in children and
other animals.
• Discuss some of the requirements for artificial companions (ACs).
• Distinguish relatively easy goals and much harder, but more important, goals for
designers of ACs (Type 1 and Type 2 goals, below).
• Assert that current techniques will not lead to achieving the hard goals.
• Suggest ways of moving towards achieving the hard goals, by looking more closely at
features of how the competences develop in humans.
• In particular there is a lot of knowledge about the space and structures and processes
in space that a young child develops (some of it shared with some other animals) that
forms part of the basis of a lot of later learning.
• No AI systems are anywhere near the competences of young children.
• Trying to go directly to systems with adult competences will produce systems that are
either very restricted, or very brittle and unreliable (or both).
4. Show some videos of biological competences
Parrot scratching its neck with a feather.
Felix Warneken’s videos
http://email.eva.mpg.de/~warneken/
Chimp (Alexandra) and fallen lid
Child and cabinet
Warneken was trying to show that both chimpanzees and pre-verbal human toddlers could be altruistic –
i.e. motivated to help others.
My interest is in the forms of representation and architectures that allow such individuals to represent
complex structures and processes in the environment and to detect that someone else has a specific
goal that is not being achieved.
Forming the motive to help with that goal, and acting on it, must build on sophisticated cognitive
competences.
Child playing with trains
A 30-month-old child sitting on the floor playing with trains has a rich store of usable information about
what is going on around him, including some of the processes and relationships that he is no longer
looking at. Video included here:
http://www.cs.bham.ac.uk/research/projects/cosy/conferences/mofm-paris-07/sloman/vid
Child jumping?
A 27-month-old child stepping into his father’s shoes, then jumping/walking around in them (with music in
the background). What forms of representation, concepts and knowledge must he be using?
Betty (New Caledonian crow) makes hooks out of wire.
http://users.ox.ac.uk/~kgroup/tools/crow_photos.shtml
5. Example of your own spatial competence
Have you ever seen someone shut a door by grasping the edge of the
door and pushing or pulling it shut?
Would you recommend that strategy?
What will happen to someone who tries?
How do you know?
Humans, and apparently some other animals, have the ability to represent and reason
about a wide class of spatial structures and processes – including some that have never
been perceived.
This ability develops in different ways in different environments and in different individuals
– not all cultures have hinged doors.
What processes in human brains, or in computers, could enable such visual reasoning to
occur?
I’ll suggest that such capabilities are likely to be required in certain kinds of Artificial
Companions looking after people living in their own homes.
6. What are designers of artificial companions trying to do?
Common assumptions made by researchers:
• ACs will be able to get to know their owners and help them achieve their goals or
satisfy their needs.
• Their owners could be any combination of: elderly, physically disabled, cognitively
impaired, lonely, keen on independence, ...
• ACs could provide assistance using pre-built knowledge plus knowledge gained from
external sources, including central specialist databases that are regularly updated,
and the internet.
• They may help with practical problems around the home, as well as providing help,
warnings and information requested.
• They could also provide company and companionship.
My claim:
The detailed requirements for ACs to meet such specifications are not at all obvious,
and will be found to have implications that make the design task very difficult, in ways
that have not been noticed. Achieving the goals may not be impossible, provided that we
analyse the problems properly.
Some people argue that new kinds of information-processing machines will be required in order to match
what biological systems can do.
As far as I am concerned that is an open question: dogmatic and wishful answers should be avoided.
7. Why will it be so hard?
Because:
(a) Motivations for doing these things can vary.
(b) The more ambitious motivations have hidden presuppositions.
(c) Those presuppositions concern the still barely understood capabilities of
biological companions.
(d) Getting those capabilities into machines is still a long way off:
Mainly:
Artificial Companions will have very open-ended, changing sets of requirements,
compared with domain-specific narrowly targeted AI systems
Examples of domain-specific, task-bounded systems:
a flight booking system
route advising system for a fixed bus or train network
a theorem prover or proof checker to help someone working in group theory
a de-bugger for student programs for producing a certain class of graphical displays
In humans, new skills or knowledge can usually be combined with old skills and
knowledge in creative ways: a Type 2 AC (explained below) will need to do that.
Most AI systems are not able to support such extendability and flexibility.
What sorts of designs could support that?
8. We are not even close.
We are nowhere near getting machines to have the capabilities of young
children or nest-building birds or young hunting mammals or ...
There are some impressive (often insect-like) AI systems that have been highly trained or
specially designed for specific tasks, but many insects are far more sophisticated.
Such AI systems do not know what they are doing or why, or what difference it would
make if they did something different.
However, AI has achieved some very impressive results, e.g. chess programs that beat
grand-masters, mathematical theorem provers, planning systems, and search engines
like Google, which in their narrow fields completely outstrip human intelligence.
Even when they are spectacularly good in their own domains, such systems are not
designed to be capable of being combined with other competences to match human
creativity and flexibility: they may scale up but they don’t scale out.
A world-beating chess program cannot decide how to play with a young enthusiastic but inexpert player
so as to encourage and inspire that child to go on learning.
A language-understanding program typically cannot use visual input or a gesture to help it disambiguate
an utterance.
A vision program typically cannot use linguistic input to guide its understanding of a complex scene or
picture.
Scaling up requires architectures, forms of representation and mechanisms that allow
different kinds of knowledge or skill to be combined opportunistically.
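The kind of opportunistic combination described above can be caricatured in a toy sketch (invented names, not a real system): two competences post to a shared workspace, so visual context can disambiguate an utterance that a language module alone cannot, matching the examples on this slide.

```python
# Toy sketch of "scaling out": separate competences sharing a workspace.
# The lexicon, scene labels, and workspace keys are all invented.

def vision(scene, workspace):
    """Record what is currently visible in the shared workspace."""
    workspace["visible"] = set(scene)

def language(utterance, workspace):
    """Pick a word sense, opportunistically preferring one vision supports."""
    word = utterance.split()[-1]
    senses = {"bank": ["river_bank", "money_bank"]}  # tiny invented lexicon
    candidates = senses.get(word, [word])
    for sense in candidates:
        if sense in workspace.get("visible", set()):
            return sense
    return candidates[0]  # fall back to the first-listed sense

ws = {}
vision(["money_bank", "teller"], ws)
assert language("meet me at the bank", ws) == "money_bank"
```

In isolation the language module would have picked "river_bank"; the point is that the combination requires an architecture in which neither module was designed around the other.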
9. I am not saying it is impossible
I am not saying it is impossible for AI to achieve these things:
but first the researchers have to want to achieve them,
and then they have to study the natural systems to find out a lot more about the variety of
tasks and ways of getting things right or getting them wrong,
and ways of debugging the faulty solutions
I.e. they need to study different combinations of requirements
instead of just focusing on mechanisms and trying to make them work a bit better.
It is especially important not to use fixed narrowly-focused benchmark tests
to drive research: all that will produce is (at best) systems that are good on
those benchmarks.
Benchmarks can be very useful for engineering projects.
Scientific AI research has been seriously harmed by benchmarks.
10. Beware the stunted roadmap trap
The most obvious and tempting research roadmaps may lead to
dead-ends.
People who don’t do the right sort of requirements-analysis see only the
routes to “tempting dead ends” – the stunted roadmap: but they don’t
recognise them as dead ends because those routes lead to ways of
improving their programs – to make them a little better.
11. Type 1 goals for ACs
People working on artificial companions may be trying to produce
something that is intended to provide one or more of the following kinds of
function:
Type 1 goals:
Engaging functions
• TOYS:
a toy or entertainment for occasional use (compare computer games, music CDs)
• ENGAGERS:
something engaging that is used regularly to provide interest, rich and deep enjoyment or a feeling of
companionship (compare pets, musical instruments)
• DUMMY-HUMANS (digital pacifiers?):
something engaging that is regarded by the user as being like another caring, feeling individual with
which a long term relationship can develop – even if it is based on an illusion because the machine is
capable only of shallow manifestations of humanity: e.g. learnt behaviour rules, such as nodding,
smiling, head turning, gazing at faces, making cute noises, or in some cases apparently asking to
be comforted, etc.
These goals don’t interest me much, though I have nothing against people working on
them, provided that they are not simply pursued cynically for financial gain.
12. Type 2 goals for ACs
Some people working on artificial companions hope eventually to produce
something that is intended to provide one or more of the following kinds of
function:
Type 2 goals:
Enabling functions
• HELPERS:
ACs that can reliably provide help and advice that meets a fixed set of practical needs that users are
likely to have, e.g.:
Detecting that the user has fallen and is not moving, then calling an ambulance service.
• DEVELOPING HELPERS:
a helper that is capable over time of developing a human-like understanding of the user’s
environment, needs, preferences, values, knowledge, capabilities, e.g.:
Noticing that the user is becoming more absent-minded and therefore learning to detect when
tasks are unfinished, when the user needs a reminder or warning, etc.
More advanced competences of this sort will require the next category of AC.
• CARING DEVELOPING HELPERS:
a developing helper that grows to care about the user and really wants to help when things go wrong
or risk going wrong, and wants to find out how best to achieve these goals.
Caring will be required to ensure that the AC is motivated to find out what the user wants and needs
and deals with conflicting criteria in ways that serve the user’s interests.
I am more interested in these, but they are very hard.
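The first HELPER example above (detecting a fall and calling an ambulance) can be caricatured as a fixed hand-coded rule; the sensor fields and threshold below are invented for illustration. The sketch makes the contrast concrete: such a rule meets a fixed need but understands nothing about the user.

```python
# Hedged sketch of a fixed-rule HELPER. Sensor names and the threshold
# are invented; a real system would need far richer perception.
from dataclasses import dataclass

@dataclass
class Observation:
    posture: str           # e.g. "upright", "on_floor"
    seconds_still: float   # time with no detected movement

def helper_action(obs, alarm_after=60.0):
    """A fixed trigger: no model of the user, the room, or the cause."""
    if obs.posture == "on_floor" and obs.seconds_still >= alarm_after:
        return "call_ambulance"
    return "monitor"

assert helper_action(Observation("on_floor", 120.0)) == "call_ambulance"
assert helper_action(Observation("upright", 120.0)) == "monitor"
```

A DEVELOPING or CARING helper could not be written this way: it would have to acquire and revise the conditions themselves as the user changes, which is why those goals are so much harder.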
13. Why out of reach?
Current AI is nowhere near meeting the following requirements:
• An AC may need to understand the physical environment
- what can’t you easily do with a damaged left/right thumb or a stiff back, etc..
- noticing a dangerous situation, e.g. apron string looped round pan handle,
- coffee spilt on an electric kettle base blows a household fuse:
when will it be safe to turn on the kettle again?
- helping a user solve the problem of losing balance when soaping a foot in a shower.
• An AC will need to understand human minds – at least as well as humans do
E.g. guessing how a patient is likely to react to a variety of new situations.
Knowing what a patient is likely to find hard to understand.
Knowing how detailed explanations should be.
Diagnosing sources of puzzlement, when the patient is unable to understand something.
Designing machines with these competences will require us to have a much deeper understanding
of the information processing architectures and mechanisms used by humans who do have those
competences.
Trying to use statistical learning to short-cut that research will produce shallow, limited, and fragile
systems.
I would not trust them with an elderly relative of mine.
• We need progress on architectures that allow a machine to develop new motives,
preferences, ideals, attitudes, values ....
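One deliberately minimal, invented sketch of what such an architecture might contain: a motive-generator component that spawns goals the designer never enumerated explicitly, by watching which cared-about states are violated. All names are hypothetical.

```python
# Invented sketch of a motive generator: motives are created at run time
# from beliefs, rather than being listed in advance by the designer.

def motive_generator(beliefs, motives):
    """Spawn a 'restore' motive for each cared-about state found violated.

    `beliefs` maps state names to whether the state currently holds;
    `motives` is the agent's existing list of (verb, state) goals.
    """
    new = []
    for state, holds in beliefs.items():
        if not holds and ("restore", state) not in motives:
            new.append(("restore", state))
    return motives + new

motives = motive_generator(
    {"user_comfortable": False, "kettle_safe": True}, [])
assert ("restore", "user_comfortable") in motives
assert ("restore", "kettle_safe") not in motives
```

Even this trivial version shows the architectural point: the set of motives is open-ended because it depends on what the machine comes to believe, not on an enumerated task list.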
14. Developing new motives on the basis of deep
knowledge and real caring
The AC will need to derive new caring motives on the basis of those
competences:
Starting from real desire to help, it will need to be able to work out new ways to help, or to avoid being
unhelpful, in different situations.
It may change some of its motives or its preferences if it discovers that acting on them leads to
consequences it already prefers to avoid, e.g. the user being unhappy or angry.
The AC should want to act on the conclusions based on new motives and
preferences.
This has ethical implications, mentioned later.
The requirement for future robots to be able to develop their own motives was discussed in
The Computer Revolution in Philosophy (1978), e.g. in Chapter 10 and the Epilogue
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
15. How can we make progress?
Since humans provide an existence proof, one way to meet these objectives in a reliable
fashion will be to understand and replicate some of the generic capabilities of a typical
young human child, including the ability to want to help.
Then other capabilities can be built on top of those
(layer after layer, using the ability of the architecture to extend itself).
A human child does not start with a “tabula rasa” (blank slate): the millions of years of
evolution that produced human brains prepared us to learn and develop in specific ways
in specific sorts of environments.
Adult human capabilities (including helping, caring, learning capabilities) develop on the
basis of a very rich biological genetic heritage, plus interaction with the environment
over many years,
though not necessarily in a normal human body:
Compare humans born limbless, or blind, or deaf, or with cerebral palsy,
or Alison Lapper. (See http://www.alisonlapper.com/ )
The brains of such people matter more for their development than their bodies.
But their brains are products of evolution – evolution in the context of normal bodies.
Note: Humans are also an existence proof that systems that develop in these ways can
go badly wrong and do serious harm, so many precautions will be needed.
16. Limitations of current methods
Explicitly programmed systems are very limited.
That is not necessarily so in principle, but in practice the difficulties of hand-coding are enormous.
Compare the number of distinct design decisions that go into producing a Boeing 747 or an Airbus A380.
Could a similar amount of human effort produce a worthwhile AC?
Only if the design were based on a deep understanding of requirements.
Designers try to get round limitations by producing systems that learn.
But the learning methods available so far are far too impoverished:
many of them merely collect statistics from training examples (e.g. learning what to do when), and
then modify their behaviour so as to conform to those statistics.
Statistical learning methods given little prior knowledge about at least the type of
environment in which learning will occur will always be limited, fragile and unreliable
(except for very restricted applications).
See John McCarthy’s “The Well-designed Child”, on his web page:
“Evolution solved a different problem than that of starting a baby with no a priori assumptions.”
http://www-formal.stanford.edu/jmc/child.html (Also AI Journal December 2008)
17. A more structured view of nature-nurture trade-offs
Work done mainly with Jackie Chappell, published in IJUC 2007.
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609
(Chris Miall helped with the diagram.)
Routes on left from genome to behaviour are more direct.
(The vast majority of organisms have no other option.)
Routes towards the right may involve many different layers of competence, developed in
stages, with different ontologies, based partly on learning new ways to learn.
See slide below on an architecture that builds itself.
18. Example: A kitchen mishap
Many of the things that crop up will concern physical objects and physical
problems.
Someone knocks over a nearly full coffee filter close to the base of a cordless
kettle.
This causes the residual current device in the central fuse box to trip, removing
power from many devices in the house.
You may know what to do: unplug the base, reset the RCD, and thereby restore power.
But you may not be sure whether it is safe to use the kettle after draining the base; and if,
when you try it later, the RCD trips again, you are left wondering whether it would ever
be safe to try again, or whether you should buy a new kettle.
Would the AC be able to say: “Try opening the base, dry it thoroughly, then replace it, and
the kettle can be used again”?
• Should an AC be able to give helpful advice in such a situation?
• Would linguistic interaction suffice? How?
• Will cameras and visual capabilities be provided?
What visual capabilities? Current AI vision systems are close to useless!
19. Canned responses – and intelligent responses
• The spilt coffee was just one example among a vast array of possibilities that will vary
from culture to culture, from household to household within a culture and from time to
time in any household, as the people and things in the house change.
• Of course, if the designer anticipates such accidents, the AC will be able to ask a few
questions and spew out relevant canned advice, and even diagrams showing how to
open and dry out the flooded base.
• But suppose designers had not had that foresight: What would enable the AC to give
sensible advice?
20. How can we hope to produce an AC with something
like human varieties of competence and creativity?
• If the AC knew about electricity and was able to visualise the consequences of liquid
pouring over the kettle base, it might be able to use a mixture of geometric and logical
reasoning creatively to reach the right conclusions.
• It would need to know about and be able to reason about spatial structures and the
behaviour of liquids.
This will require us to go far beyond the ‘Naive Physics’ project
– Although Pat Hayes described the ‘Naive physics’ project decades ago, it has proved extremely
difficult to give machines the kind of intuitive understanding required for creative problem-solving in
novel physical situations.
– In part that is because we do not yet understand the forms of representation humans (and other
animals) use for that sort of reasoning
– The Naive Physics project used logic to formalise everything, and the only use of the knowledge
was to perform deductions, whereas humans and other animals use a formalism that allows the
knowledge to be used in perception, action and intuitive causal reasoning.
See these papers and presentations on understanding causation with Jackie Chappell:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks#wonac
21. Understanding shower-room affordances
Understanding a human need and seeing what is and is not relevant to
meeting that need may require creative recombination of prior knowledge
and competences.
Suppose an elderly user finds it difficult to keep his balance in the shower when
soaping his feet, and asks the AC for advice.
He prefers taking showers to taking baths, partly because showers are cheaper.
How should the AC react on hearing the problem?
Should it argue for the benefits of baths?
Should it send out a query to its central knowledge base asking how people should
keep their balance when washing their feet?
(It might get a pointer to a school for trapeze artists.)
The AC could start an investigation into local suppliers of shower seats: but that
requires working out that a seat in the shower would solve the problem, despite the fact
that showers typically do not and should not have seats.
What if the AC designer had not anticipated the problem?
What are the requirements for the AC to be able to invent the idea of a folding seat attached to the wall
of the shower, that can be lowered temporarily to enable feet to be washed safely in a sitting position?
Alternatively what are the requirements for it to be able to pose a suitable query to a search engine?
How will it know that safety harnesses and handrails are not good solutions?
22. The ability to deal with open-ended sets of
problems
I have presented a few examples of problems an AC may have to deal with
around the house, all involving perceiving or reasoning about a novel
problem situation.
The set of possible problems that can arise is enormously varied,
partly because of the open-ended variability of ways in which physical objects (including humans, pet
animals and inanimate objects) can interact
partly because of the ways in which the contents of a house can change over time
partly because of the ways in which a human’s needs, preferences, desires, knowledge and capabilities
can change over time
A developing caring helper will have to be able to deal with all that novelty.
Humans cope with novelty on the basis of very general capabilities for learning about new
environments, which build on their rich evolutionary heritage in ways that are not yet
understood.
Likewise ACs will need to be able to ‘scale out’, i.e. to combine competences in new ways
to deal with novel situations.
23. Is the solution statistical?
Can the required competences, including the ability to combine them
creatively, be acquired by a machine trained on a very large corpus of
examples in which it searches for statistically re-usable patterns?
The current dominant approach to developing language understanders, vision programs,
robots, advice givers, and even some engaging interfaces, involves mining large
collections of examples, using sophisticated statistical pattern extraction and matching, to
find re-usable patterns.
This is sometimes easier than trying to develop a structure-based understander and
reasoner with the same level of competence, and can give superficially successful results,
depending on the size and variety of the corpus and the variety of tests.
But the method is inherently broken because, as sentences get longer, or semantic
structures get more complex, or physical situations get more complex, the probability of
encountering recorded examples close to them falls very rapidly to near zero – especially
when problems include several modalities, e.g. perception, planning and language
understanding.
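The collapse described above can be made vivid with a toy count (my illustration, not from the slides; the numbers are invented): with V interchangeable components and structures of length n there are V**n distinct patterns, so a fixed-size corpus covers a fraction of them that falls towards zero as n grows.

```python
# Toy sketch: why the chance of finding a stored example close to a
# novel structure collapses as the structure grows.
V = 50          # hypothetical number of distinct components
corpus = 10**6  # hypothetical number of stored examples

for n in (2, 4, 8):
    possible = V ** n               # distinct structures of length n
    coverage = min(1.0, corpus / possible)
    print(f"length {n}: {possible:.1e} patterns, coverage <= {coverage:.2e}")
```

Even this crude model understates the problem, since real problems combine several such dimensions (perception, planning, language) multiplicatively.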
A helper must use deep general knowledge to solve a novel problem creatively.
E.g. understanding a new utterance often requires using non-linguistic context to interpret some of the
linguistic constructs.
How big is a pile of stones? It depends on the context. See:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0605
24. One of the problems of statistical/corpus-based
approaches
An obvious problem, often forgotten in the flush of progress from
86% correct on some benchmark test to 86.35%, is this:
however well a system trained on text does on text-based benchmark tests, it will not
necessarily have any of the abilities that a human understander has to link the syntactic
and semantic linguistic structures to the physical world in which we perceive, move,
manipulate and interact with things, including other language users.
Put another way: before a young child learns to understand and produce linguistic
utterances she typically has rich physical and social interactions with things and people in
the environment – and learning to talk is learning to use linguistic structures and
processes to enrich those interactions.
Subjecting a corpus-based NLP system to the same tests of understanding that a young
human passes would require linking the system to some sort of robot.
Some people are trying: that challenge is part of the specification of the EU Cognitive
Systems initiatives in FP6 and FP7.
My impression is that progress is still very slow.
I think that is partly because the problems defining the requirements have not been
studied in sufficient depth.
25. Size alone will not solve the problem.
There is no reason to believe that the fundamental problems will be overcome just by
enlarging the corpus:
even a system trained not only on the whole internet but also on all human literature will
not understand that the text refers to things that can be seen, eaten, assembled,
designed, danced with, etc.
— unless it starts off with knowledge about the world somehow pre-compiled, and uses
the textual analysis to abduce mappings between the textual structures and the
non-textual world.
26. The myth of multi-modality
It is often assumed that the problem can be overcome by applying the
statistical approach to a combination of modalities:
give a system lots of images, or even videos, as well as audio-tracks, and maybe other
kinds of sensory inputs, and then let it learn correlations between the things people see,
smell, taste, feel, hear, etc. and the things they say or write.
(That’s the hope.)
But multimodality leads to a huge combinatorial problem because of the vast diversity of
patterns of sensory and motor signals.
I conjecture that only a machine that is capable of developing an a-modal ontology
referring to things in the environment will be able to understand what people are talking
about.
Consider the variety of sensor and motor signal patterns associated with the notion of
something being grasped.
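The combinatorial point can be sketched with a toy count (my illustration; the modality values are invented): each independent way the sensory projection of a grasp can vary multiplies the number of distinct sensorimotor patterns, while the amodal description stays fixed.

```python
from itertools import product

# Toy sketch: the single amodal fact "two surfaces close on an object
# held between them" projects onto a combinatorial number of distinct
# sensorimotor patterns.
effectors  = ["finger and thumb", "two palms", "pliers", "plastic clip"]
viewpoints = ["front", "side", "above", "partly occluded"]
objects    = ["mug", "pencil", "cloth"]

patterns = list(product(effectors, viewpoints, objects))
print(len(patterns))  # 4 * 4 * 3 = 48 surface patterns, one amodal process
```

Adding further independent dimensions (lighting, hand size, texture, motor trajectory) multiplies the count again, while the amodal description remains a single structure.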
27. Grasping: Multiple views of a simple a-modal process of
two 3-D surfaces coming together
with another 3-D object held between them.
Four examples of grasping: two done by fingers, and two by a plastic clip. In all cases,
two 3-D surfaces move together, causing something to be held between them, though
retinal image projections and sensorimotor patterns in each case are very different.
You can probably see what is going on in these pictures even though they are probably very different in their
2-D structure from anything else you have ever seen.
But if you map them onto 3-D structures you may find something common, which cannot be defined in
terms of 2-D image properties.
28. Why do statistics-based approaches work at all?
The behaviour of any intelligent system, or collection of individuals, will leave traces that
may have re-usable features, and the larger the set the more re-usable items it is likely to
contain – up to a point.
For instance it may not provide items relevant to new technological or cultural
developments, or to highly improbable but perfectly possible physical configurations and
processes.
So any such collection of traces will have limited uses, and going beyond those uses will
require something like the power of the system that generated the original
behaviours.
Some human expertise is like this.
29. How humans use statistical traces
In humans (and some other animals), there are skills that make use of deep generative
competences whose application requires relatively slow, creative problem solving, e.g.
planning routes.
Frequent use of such competences trains powerful learning mechanisms that compile
and store many partial solutions matched to specific contexts (environment and goals).
As that store of partial solutions (traces of past structure-creation) grows, it covers more everyday
applications of the competence, and allows fast and fluent responses in more contexts and tasks.
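One way to read this (my analogy, not a mechanism Sloman specifies) is as memoisation: a slow generative competence leaves behind cached partial solutions that give fast, fluent answers in familiar contexts, while the generator remains available for novel ones. The road map and place names below are invented for illustration.

```python
from functools import lru_cache

# Hypothetical mini road map for the slow, generative route planner.
graph = {
    "home": ["shops", "park"],
    "shops": ["station"],
    "park": ["station"],
    "station": [],
}

@lru_cache(maxsize=None)  # the growing store of partial solutions
def routes(src, dst):
    """Slow, creative competence: construct every route from src to dst."""
    if src == dst:
        return ((dst,),)
    found = []
    for step in graph[src]:
        for tail in routes(step, dst):
            found.append((src,) + tail)
    return tuple(found)

print(routes("home", "station"))
# After this call the sub-route from "shops" (and "park") to "station"
# is answered instantly from the cache, without re-deriving it.
```

The contrast with a purely statistical system is that here the cache is a by-product of a generator that can still be run on inputs the cache has never seen.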
A statistical AI system that lacks the generative competence can still infer those partial solutions from
large amounts of data.
But because the result is just a collection of partial solutions it will always have severely bounded
applicability compared with humans, and will not be extendable in the way human competences are.
If trained only on text it will have no comprehension of non-linguistic context.
30. Coping with novelty
The history of human science, technology and art shows that people are
constantly creating new things – pushing the limits of their current
achievements.
Dealing with novel problems and situations requires different mechanisms
that support creative development of novel solutions.
(Many jokes depend on that.)
If the deeper, more general, slower, competence is not available when stored patterns are inadequate,
wrong extrapolations can be made, inappropriate matches will not be recognised, new situations cannot
be dealt with properly and further learning will be very limited, or at least very slow.
In humans, and probably some other animals, the two systems work together to provide a combination of
fluency and generality. (Not just in linguistic competence, but in many other domains.)
31. Where does the human power come from?
Before human toddlers learn to talk they have already acquired deep, reusable structural
information about their environment and about how people work.
They cannot talk but they can see, plan, be puzzled, want things, and act purposefully:
They have something to communicate about.
E.g. see Warneken’s videos.
That pre-linguistic competence grows faster with the aid of language, but must be based
on a prior, internal, formal ‘linguistic’ competence
using forms of representation with structural variability and (context-sensitive)
compositional semantics.
This enables them to learn any human language and to develop in many cultures.
Jackie Chappell and I have been calling these internal forms of representation
Generalised Languages (GLs).
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#glang
What evolved first: Languages for communicating, or languages for thinking
(Generalised Languages: GLs)
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0703
Computational Cognitive Epigenetics
More papers/presentations are on my web site.
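As a minimal toy illustration (mine, not from these papers) of what "structural variability with context-sensitive compositional semantics" could mean: the meaning of "big X" is composed from the meanings of its parts, yet what "big" contributes depends on the noun it combines with (compare a big ant with a big elephant, or the pile of stones earlier). The size norms below are rough invented values.

```python
# Toy sketch: context-sensitive compositional semantics.  "big" does not
# denote a fixed size; it denotes "large relative to the norm for the
# noun it modifies".
typical_size_m = {"ant": 0.005, "elephant": 3.0, "kettle": 0.25}

def big(noun):
    """Meaning of 'big' composed with a noun: a predicate on sizes."""
    return lambda size_m: size_m > 1.5 * typical_size_m[noun]

print(big("ant")(0.01))       # a 1 cm ant counts as big
print(big("elephant")(0.01))  # 1 cm is nowhere near big for an elephant
```

The same compositional machinery yields different truth conditions in different contexts, which is the property a purely pattern-matching learner has no obvious way to acquire.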
32. Can we give ACs suitable GLs ?
ACs without a similar pre-communicative form of representation allowing
them to represent things about their world, and also possible goals to
achieve, will not have anything to communicate when they start learning a
language.
Their communicative competences are likely to remain shallow, brittle and dependent on
pre-learnt patterns or rules for every task because they don’t share our knowledge of the
world we are talking about, thinking about, and acting in.
Perhaps, like humans (and some other altricial species), ACs can escape these limitations if they start
with a partly ‘genetically’ determined collection of meta-competences that continually drive the acquisition
of new competences building on previous knowledge and previous competences: a process that
continues throughout life.
The biologically general mechanisms that enable humans to grow up in a very wide variety of
environments are part of what enables us to learn about, think about, and deal with novel situations
throughout life.
I conjecture that this requires an architecture that grows itself over many years partly as a
result of finding out both very general and very specific features of the environment
through creative play, exploration and opportunistic learning.
33. An architecture that builds itself
The hypothesised architecture is made of multiple dynamical systems with many
multi-stable components, most of which are inactive at any time.
Some change continuously, others discretely, and any changes can propagate effects in
parallel with many other changes in many directions.
Some components are closely coupled
with environment through sensors and
effectors, others much less coupled –
even free-wheeling, and unconstrained
by embodiment (and some of them can
use logic, algebra, etc., when planning,
imagining, designing, reminiscing,
reasoning, inventing stories, etc.).
The red arrows represent theory-based
semantic links from decoupled parts of
the architecture to portions of the
environment that are not necessarily
accessible to the senses. See
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models
Most animals have a genetically pre-configured architecture: the human one grows itself,
partly in response to what it finds in the environment.
For more on this see
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0801
Architectural and representational requirements for seeing processes and affordances
34. We can discern some major sub-divisions within a
complex architecture
The CogAff Schema – for designs or requirements for architectures.
Different layers correspond to different evolutionary phases.
(Diagram layers are labelled from NEWER at the top to OLDER at the bottom; think of the
previous “dynamical systems” diagram as rotated 90 degrees counter-clockwise.)
35. H-Cogaff: the human variety
Here’s another view of the architecture that builds itself (after rotating 90 degrees), again
with NEWER layers above OLDER ones.
This shows a possible way of filling in the CogAff schema on previous slide – to produce a human-like
architecture (H-CogAff). (Arrows represent flow of information and control)
For more details see papers and presentations in the Cogaff web site:
http://www.cs.bham.ac.uk/research/projects/cogaff/
http://www.cs.bham.ac.uk/research/projects/cogaff/talks
http://www.cs.bham.ac.uk/research/projects/cogaff/03.html#200307 (progress report)
36. Machines that have their own motives
Human learning is based in part on what humans care about
and vice versa: what humans care about is based in part on what they
learn.
In particular, learning to help others depends on caring about what they want, what will
and will not harm them, etc.
And learning about the needs and desires of others can change what we care about.
The factors on which the caring and learning are based can change over time – e.g. as
individuals develop or lose capabilities, acquire new preferences and dislikes, etc.
If we make machines that can come to care, then that means they have their own desires,
preferences, hopes, fears, etc.
In that case, we shall have a moral obligation to take account of their desires and
preferences: anything with desires should have rights: treating them as slaves would be
highly immoral.
As noted in the Epilogue to The Computer Revolution in Philosophy (1978), now online here:
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/
And in this paper on why Asimov’s laws of robotics are immoral:
http://www.cs.bham.ac.uk/research/projects/cogaff/misc/asimov-three-laws.html
37. Can it be done?
Perhaps we should work on this roadmap as a way of understanding
requirements?
We can take care to avoid the stunted roadmap problem.
Don’t always go for small improvements – that’s a route to dead ends.
Try to envision far more ambitious goals, and work backwards.
Research planning should use more backward chaining.
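The backward-chaining idea can be sketched as a tiny dependency walk (the goal names below are invented for illustration): starting from an ambitious target and repeatedly asking what it requires yields a partially ordered network of intermediate targets, not a single linear roadmap.

```python
# Hypothetical requirements network: goal -> prerequisites.
requires = {
    "helpful domestic AC": ["model of user's mind", "intuitive physics"],
    "model of user's mind": ["self-extending architecture"],
    "intuitive physics": ["amodal spatial representation",
                          "self-extending architecture"],
    "amodal spatial representation": [],
    "self-extending architecture": [],
}

def backward_chain(goal, network=None):
    """Collect every target reachable by asking 'what does this require?'."""
    if network is None:
        network = {}
    if goal not in network:
        network[goal] = requires.get(goal, [])
        for sub in network[goal]:
            backward_chain(sub, network)
    return network

net = backward_chain("helpful domestic AC")
print(len(net))  # 5 targets, partially ordered by the `requires` edges
```

Note that "self-extending architecture" is required along two distinct paths: a linear roadmap would hide exactly this kind of shared intermediate target.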
38. Rights of intelligent machines
If providing effective companionship
requires intelligent machines to be able to develop their own
goals, values, preferences, attachments etc.,
including really wanting to help and please their owners,
then if some of them develop in ways we don’t intend,
will they not have the right to have their desires considered,
in the same way as our children do
if they develop in ways their parents don’t intend?
See also
http://www.cs.bham.ac.uk/research/projects/cogaff/crp/epilogue.html
39. Recruiting needed?
When the time comes,
who will join the society
for the prevention of cruelty to robots?
When a young robot decides that it wants to study philosophy in order to be able to think
clearly about how to look after people who need care, will Scottish universities let it in,
without demanding a top-up fee?
I hope so!
For more information see:
http://www.cs.bham.ac.uk/research/projects/cogaff/
The cognition and affect project web site
http://www.cs.bham.ac.uk/research/projects/cogaff/talks
Online presentations (mostly PDF)
http://www.cs.bham.ac.uk/research/projects/cosy/papers/
Papers produced in the CoSy robot project