Dariusz Plewczynski, PhD
                            ICM, University of Warsaw

                              D.Plewczynski@icm.edu.pl

Thursday, 25 March 2010
Cognitive Computing
                 From Brain Modelling to Large
                    Scale Machine Learning
                          Dariusz Plewczynski, PhD
                            ICM, University of Warsaw

                              D.Plewczynski@icm.edu.pl

What is Cognition?




What is Cognition?

   1. Cognoscere (Latin): "to know" or "to recognize"
   2. Cognition is a general term for all forms of
      knowing (e.g. attending, remembering,
      reasoning and understanding concepts, facts,
      propositions, and rules).
   3. Cognitive processes refer to the processing of
      information, applying knowledge, and changing
      preferences.
   4. Cognitive psychology is the study of cognition.
   5. Cognitive science is an interdisciplinary field
      that extends the principles of cognitive
      psychology to other systems that manipulate
      information.
   6. Cognitive Informatics studies the natural
      intelligence and internal information processing
      mechanisms of the brain, as well as the
      processes involved in perception and cognition.

                             http://en.wikiversity.org/wiki/Cognition




Cognitive Psychology




Cognitive Psychology

   • focuses on the study of higher mental
     functions with particular emphasis on
     the ways in which people acquire
     knowledge and use it to shape and
     understand their experience in the
     world.
   • examines internal mental processes
     such as problem solving, memory, and
     language.
   • studies how people understand,
     diagnose, and solve problems, and the
     mental processes that mediate between
     stimulus and response in the form of
     algorithmic or heuristic rules.

                  http://en.wikiversity.org/wiki/Cognitive_psychology




Cognitive Science




Cognitive Science
  The interdisciplinary study of mind or the
  study of thought. It embraces multiple
  research disciplines, including psychology,
  artificial intelligence, philosophy,
  neuroscience, linguistics, anthropology,
  sociology and biology.

  It relies on a variety of scientific methodologies
  (e.g. behavioral experimentation,
  computational simulations, neuro-imaging,
  statistical analyses), and spans many
  levels of analysis of the mind (from
  low-level learning and decision mechanisms to
  high-level logic and planning, from neural
  circuitry to modular brain organization,
  etc.).


                  http://en.wikiversity.org/wiki/Cognitive_science




Cognitive Computing?




Cognitive Computing?


        we now know what cognition is


                          at least more or less ...




Cognitive Computing?


        we now know what cognition is


                          at least more or less ...


                                      ... but where is the computing?




Computing




  Computing as a discipline, Denning 1989, Computer



Computing

  The discipline of computing is the systematic study of
  algorithmic processes that describe and transform information:
  theory, analysis, design, efficiency, implementation, application.

        Instances of theory may appear at every stage of abstraction
        and design, instances of modeling at every stage of theory
        and design, and instances of design at
        every stage of theory and abstraction.




  Computing as a discipline, Denning 1989, Computer



Cognitive Informatics




Cognitive Informatics
  An emerging discipline that studies the
  natural intelligence and internal
  information processing mechanisms of the
  brain, as well as the processes involved in
  perception and cognition.

  It provides a coherent set of fundamental
  theories, and contemporary mathematics,
  which form the foundation for most
  information- and knowledge-based
  science and engineering disciplines such
  as computer science, cognitive science,
  neuropsychology, systems science,
  cybernetics, software engineering, and
  knowledge engineering.


                  http://en.wikiversity.org/wiki/Cognitive_informatics




How does the brain work?

  David Marr's (1945-1980) three levels of analysis:

     the problem (computational level)
     the strategy (algorithmic level)
     how it’s actually done by networks of neurons
  (implementational level)




                                                     P. Latham
                                                      P. Dayan

Simulating the Brain

  the term "neuron" introduced by Heinrich Wilhelm von Waldeyer-Hartz in 1891




                          http://en.wikipedia.org/wiki/Neuron



Simulating the Brain

  the term "synapse" introduced by Charles Sherrington in 1897




                          http://en.wikipedia.org/wiki/Synapse



How does the brain work?

        [diagram] neocortex (cognition): 6 layers, ~30 cm, ~0.5 cm

        [diagram] subcortical structures (emotions, reward, homeostasis,
                  much much more)

                                                             P. Latham
                                                              P. Dayan

How does the brain work?

  Cortex vs CPU numbers:

        1 mm^3 of cortex:              1 mm^2 of a CPU:

        50,000 neurons                 1 million transistors
        10,000 connections/neuron      2 connections/transistor
        (=> 500 million connections)   (=> 2 million connections)
        4 km of axons                  0.002 km of wire

        whole brain (2 kg):            whole CPU:

        10^11 neurons                  10^9 transistors
        10^15 connections              2*10^9 connections
        8 million km of axons          2 km of wire
                                                                P. Latham
                                                                 P. Dayan
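
  A quick consistency check on the per-volume figures above, using only the numbers on the slide:

      \[
        5\times 10^{4}\ \tfrac{\text{neurons}}{\text{mm}^3}
        \times 10^{4}\ \tfrac{\text{connections}}{\text{neuron}}
        = 5\times 10^{8}\ \tfrac{\text{connections}}{\text{mm}^3},
        \qquad
        10^{6}\ \text{transistors}\times 2 = 2\times 10^{6}\ \text{connections},
      \]

  matching the "500 million" and "2 million" connection counts quoted for cortex and CPU.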

How does the brain really learn?

 Time & Learning:

        You have about 10^15 synapses.

     If it takes 1 bit of information to set a synapse, you need 10^15
 bits to set all of them.

        30 years ≈ 10^9 seconds.

   To set 1/10 of your synapses in 30 years, you must absorb
 100,000 bits/second.

 Learning in the brain is almost completely unsupervised!
                                                                P. Latham
                                                                 P. Dayan
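
  Spelled out, the arithmetic behind the 100,000 bits/second figure (slide numbers only):

      \[
        \frac{\tfrac{1}{10}\times 10^{15}\ \text{synapses}\times 1\ \tfrac{\text{bit}}{\text{synapse}}}
             {10^{9}\ \text{s}}
        = \frac{10^{14}\ \text{bits}}{10^{9}\ \text{s}}
        = 10^{5}\ \tfrac{\text{bits}}{\text{s}}
        = 100{,}000\ \text{bits/second}.
      \]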

Neuronal Simulators

  Software Packages:


       Neuron http://www.neuron.yale.edu/neuron/

       NEST http://www.nest-initiative.org/

       Brian http://www.briansimulator.org/

       Genesis http://genesis-sim.org/

       ...
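
  To make concrete the kind of model these packages integrate, here is a minimal leaky
  integrate-and-fire neuron in plain Python. This is an illustrative sketch only: it does not
  use any of the simulators listed above, and all parameter values are arbitrary choices,
  not defaults of any package.

      # Leaky integrate-and-fire neuron, forward-Euler integration (illustrative only).
      dt      = 0.1e-3    # time step: 0.1 ms
      T       = 0.2       # simulate 200 ms
      tau_m   = 20e-3     # membrane time constant (s)
      v_rest  = -70e-3    # resting potential (V)
      v_reset = -65e-3    # reset potential after a spike (V)
      v_th    = -50e-3    # spike threshold (V)
      R_m     = 1e7       # membrane resistance (ohm)
      I_ext   = 2.1e-9    # constant input current (A)

      v = v_rest
      spike_times = []
      for step in range(int(T / dt)):
          # dv/dt = (-(v - v_rest) + R_m * I_ext) / tau_m
          v += dt * (-(v - v_rest) + R_m * I_ext) / tau_m
          if v >= v_th:                      # threshold crossing: emit a spike
              spike_times.append(step * dt)
              v = v_reset                    # and reset the membrane potential

      print(f"{len(spike_times)} spikes in {T * 1000:.0f} ms")

  Packages such as NEST or Brian solve the same kind of equations, but for networks of
  millions of neurons and synapses, with event-driven spike delivery and parallel execution.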



    Whole Brain Emulation

  ... function could be replaced by a simplified qualitative model of its effects on signals
  and synaptic strengths. Another possible scale separation level might occur between
  individual molecules and molecular concentration scales; molecular dynamics could be
  replaced with mass action interactions of concentrations. A perhaps less likely
  separation could also occur on higher levels, if what matters is the activity of cortical
  minicolumns rather than individual neurons. A final, likely but computationally
  demanding, scale or separation would be the atomic scale, treating the brain emulation
  as an N-body system of atoms.

  Conversely, if it could be demonstrated that there is no such scale, it would demonstrate
  the infeasibility of whole brain emulation. Due to causally important influence from
  smaller scales in this case, a simulation at a particular scale cannot become an
  emulation. The causal dynamics of the simulation is not internally constrained, so it is
  not a 1-to-1 model of the relevant dynamics. Biologically interesting simulations might
  still be possible, but they would be local to particular scales and phenomena, and they
  would not fully produce the internal causal structure of the whole brain.

  [Figure: size scales of the nervous system]

                                                                          A. Sandberg, N. Bostrom
Brain Resolution

  WBE levels of interest:

     An informal poll of attendees at a 2008 WBE workshop
     produced a range of estimates of where the required resolution
     for Whole Brain Emulation (WBE) lies. The consensus
     appeared to be level 4-6. Two participants were more
     optimistic about high-level models, while two suggested that
     elements on level 8-9 may be necessary, at least initially (but
     that the bulk of mature emulation, once the basics were
     understood, could occur on level 4-5). To achieve emulation
     on this level, the consensus was that 5×5×50 nm scanning
     resolution would be needed. This roadmap will hence focus
     on level 4-6 models, while remaining open to the possibility
     that deeper levels may turn out to be needed.        A. Sandberg, N. Bostrom
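
  For a rough sense of what that resolution implies (this is not on the slide and assumes a
  human brain volume of roughly 1.4 litres), scanning the whole brain at 5×5×50 nm gives

      \[
        \frac{1.4\times 10^{24}\ \text{nm}^3}{5\times 5\times 50\ \text{nm}^3/\text{voxel}}
        \approx 1.1\times 10^{21}\ \text{voxels},
      \]

  on the order of a zettabyte of raw data at one byte per voxel, which is why the required
  resolution level dominates the feasibility of the scanning step.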


Something Smaller

  Mammalian thalamo-cortical system by E. Izhikevich:
   The simulation of a model that has the size of the human
   brain: a detailed large-scale thalamocortical model based on
   experimental measures in several mammalian species.

     The model exhibits behavioral regimes of normal brain activity that
     were not explicitly built-in but emerged spontaneously as the result of
     interactions among anatomical and dynamic processes. It describes
     spontaneous activity, sensitivity to changes in individual neurons,
     emergence of waves and rhythms, and functional connectivity on
     different scales.


                                                                      E. Izhikevich


and Less Complicated


                          IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 15, NO. 5, SEPTEMBER 2004




                                                                                          E. Izhikevich
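
  The IEEE Transactions on Neural Networks reference above points to Izhikevich's work on
  computationally cheap spiking-neuron models. His two-variable "simple model" fits in a few
  lines; the sketch below (plain Python, Euler integration, standard regular-spiking
  parameters from his papers) is for illustration only and is not the code of the
  large-scale simulation described above.

      # Izhikevich simple spiking-neuron model (regular-spiking parameters):
      #   dv/dt = 0.04*v^2 + 5*v + 140 - u + I
      #   du/dt = a*(b*v - u)
      #   if v >= 30 mV: v <- c, u <- u + d
      a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical neuron
      v, u = -65.0, 0.2 * -65.0            # initial membrane potential and recovery variable
      dt, I = 0.5, 10.0                    # step (ms) and constant input current

      spikes = []
      for step in range(int(1000 / dt)):   # simulate 1000 ms
          v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
          u += dt * a * (b * v - u)
          if v >= 30.0:                    # spike: reset v, bump the recovery variable
              spikes.append(step * dt)
              v, u = c, u + d

      print(f"{len(spikes)} spikes in 1 s of simulated time")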


Our Goal: Cognitive Networks
  Cognitive networks (CN) are inspired by
  brain structure and the cognitive functions it performs.

  A CN puts together single machine-learning units connected via
  interconnections. The goal is to understand learning as the
  spatiotemporal information processing and storage capability of
  such networks (meta-learning!).

  1. Space: for every LU (learning unit):
     a. For every time step:
        i. Update the state of the LU using the changed training data.
        ii. If the LU's learning step was successful, generate an event
            for each coupling that the LU is post-event coupled to or
            pre-event coupled to.
  2. Time: for every TC (time coupling):
     When it receives a pre- or post-event, update its state and, if
     necessary, the state of the post-event LUs (see the sketch below).
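
  One possible reading of this scheme as runnable code is sketched below. It is an
  interpretation only: the class names, the toy running-average learners, and the
  "success" criterion are illustrative assumptions, not part of the original formulation.

      import random

      class LearningUnit:
          """A single machine-learning unit (LU); here just a toy running-average learner."""
          def __init__(self, name):
              self.name = name
              self.estimate = 0.0
              self.n_seen = 0

          def update(self, sample):
              # Update the LU's state from new training data; report "success" when the
              # estimate still moves noticeably (an arbitrary, illustrative criterion).
              self.n_seen += 1
              old = self.estimate
              self.estimate += (sample - self.estimate) / self.n_seen
              return abs(self.estimate - old) > 1e-3

      class TimeCoupling:
          """A time coupling (TC) between a pre-event LU and a post-event LU."""
          def __init__(self, pre, post, weight=0.5):
              self.pre, self.post, self.weight = pre, post, weight

          def on_event(self):
              # Receiving an event: pull the post-event LU toward the pre-event LU.
              self.post.estimate += self.weight * (self.pre.estimate - self.post.estimate)

      # A tiny network: three LUs coupled in a chain.
      units = [LearningUnit(f"LU{i}") for i in range(3)]
      couplings = [TimeCoupling(units[0], units[1]), TimeCoupling(units[1], units[2])]

      random.seed(0)
      for t in range(100):                        # the "Space"/"Time" loop above
          for lu in units:
              sample = random.gauss(1.0, 0.2)     # changed training data at this time step
              if lu.update(sample):               # success: generate events on its couplings
                  for tc in couplings:
                      if tc.pre is lu or tc.post is lu:
                          tc.on_event()           # the TC updates the post-event LU

      print({lu.name: round(lu.estimate, 3) for lu in units})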
      Dariusz Plewczyński (darman@icm.edu.pl)
      Uwe Koch & Stephane Spieser
      Pawel G Sadowski, Tom
      Kathryn S Lilley
      Marcin von Grotthuss
      Krzysztof Ginalski
      Leszek Rychlewski
      Adrian Tkacz
      Jan Komorowski & Marcin Kierczak
      Lucjan Wyrwicz

Brainstorming
(static consensus learning)

      Dariusz Plewczyński (darman@icm.edu.pl)
      Uwe Koch & Stephane Spieser
      Pawel G Sadowski, Tom
      Kathryn S Lilley
      Marcin von Grotthuss
      Krzysztof Ginalski
      Leszek Rychlewski
      Adrian Tkacz
      Jan Komorowski & Marcin Kierczak
      Lucjan Wyrwicz



Editor's notes

  1. Cognitive psychology focuses on the study of higher mental functions, with particular emphasis on the ways in which people acquire knowledge and use it to shape and understand their experience in the world. This figure indicates key foci of cognitive psychology. Cognitive psychology is the school of psychology that examines internal mental processes such as problem solving, memory, and language. It had its foundations in the Gestalt psychology of Max Wertheimer, Wolfgang Köhler, and Kurt Koffka, and in the work of Jean Piaget, who studied intellectual development in children. Cognitive psychologists are interested in how people understand, diagnose, and solve problems, concerning themselves with the mental processes which mediate between stimulus and response. Cognitive theory contends that solutions to problems take the form of algorithms (rules that are not necessarily understood but promise a solution) or heuristics (rules that are understood but that do not always guarantee solutions). In other instances, solutions may be found through insight, a sudden awareness of relationships.
  3. Despite their inseparability, the three paradigms are distinct from one another because they represent separate areas of competence. Theory is concerned with the ability to describe and prove relationships among objects. Abstraction is concerned with the ability to use those relationships to make predictions that can be compared with the world. Design is concerned with the ability to implement specific instances of those relationships and use them to perform useful actions. Applied mathematicians, computational scientists, and design engineers generally do not have interchangeable skills. Moreover, in computing we tend to study computational aids that support people engaged in information-transforming processes. On the design side, for example, sophisticated VLSI design and simulation systems enable the efficient and correct design of microcircuitry, and programming environments enable the efficient design of software. On the modeling side, supercomputers evaluate mathematical models and make predictions about the world, and networks help disseminate findings from scientific experiments. On the theory side, computers help prove theorems, check the consistency of specifications, check for counterexamples, and demonstrate test cases. Computing sits at the crossroads among the central processes of applied mathematics, science, and engineering. The three processes are of equal, and fundamental, importance in the discipline, which is a unique blend of interaction among theory, abstraction, and design. The binding forces are a common interest in experimentation and design as information transformers, a common interest in computational support of the stages of those processes, and a common interest in efficiency. The fundamental question underlying all of computing is, “What can be (efficiently) automated?”
  6. Thus, neurons are simulated in a “clock-driven” fashion whereas synapses are simulated in an “event-driven” fashion.

     As a first step toward cognitive computation, an interesting question is whether one can simulate a mammalian-scale cortical model in near real-time on an existing computer system. What are the memory, computation, and communication costs for achieving such a simulation?

     Memory: To achieve near real-time simulation times, the state of all neurons and synapses must fit in the random access memory of the system. Since synapses far outnumber the neurons, the total available memory divided by the number of bytes per synapse limits the number of synapses that can be modeled. We need to store state for 448 billion synapses and 55 million neurons, with the latter being negligible in comparison to the former.

     Communication: Let us assume that, on average, each neuron fires once a second. Each neuron connects to 8,000 other neurons, and, hence, each neuron would generate 8,000 spikes (“messages”) per second. This amounts to a total of 448 billion messages per second.

     Computation: Let us assume that, on average, each neuron fires once a second. In this case, on average, each synapse would be activated twice: once when its pre-synaptic neuron fires and once when its post-synaptic neuron fires. This amounts to 896 billion synaptic updates per second. Let us assume that the state of each neuron is updated every millisecond. This amounts to 55 billion neuronal updates per second. Once again, synapses seem to dominate the computational cost. The key observation is that synapses dominate all three costs!

     Let us now take a state-of-the-art supercomputer, BlueGene/L, with 32,768 processors, 256 megabytes of memory per processor (a total of 8 terabytes), and 1.05 gigabytes per second of in/out communication bandwidth per node. To meet the above three constraints, if one can design data structures and algorithms that require no more than 16 bytes of storage per synapse, 175 Flops per synapse per second, and 66 bytes per spike message, then one can hope for a rat-scale, near real-time simulation (the per-synapse memory arithmetic is spelled out after these notes). Can such a software infrastructure be put together? This is exactly the challenge that our paper addresses.

     Specifically, we have designed and implemented a massively parallel cortical simulator, C2, designed to run on distributed-memory multiprocessors, that incorporates several algorithmic enhancements: (a) a computationally efficient way to simulate neurons in a clock-driven (“synchronous”) and synapses in an event-driven (“asynchronous”) fashion; (b) a memory-efficient representation to compactly represent the state of the simulation; (c) a communication-efficient way to minimize the number of messages sent, by aggregating them in several ways and by mapping message exchanges between processors onto judiciously chosen MPI primitives for synchronization. Furthermore, the simulator incorporated (a) carefully selected, computationally efficient models of phenomenological spiking neurons from the literature; (b) carefully selected models of spike-timing dependent synaptic plasticity for synaptic updates; (c) axonal delays; (d) 80% excitatory neurons and 20% inhibitory neurons; and (e) a certain random graph of neuronal interconnectivity.
  7. The term “neuron” was coined by Heinrich Wilhelm Gottfried von Waldeyer-Hartz in 1891 to capture the discrete information processing units of the brain.    The junctions between two neurons were termed “synapses” by Sir Charles Sherrington in 1897.    Information flows only along one direction through a synapse, thus we talk about a “pre-synaptic” and a “post-synaptic” neuron. Neurons, when activated by sufficient input received via synapses, emit “spikes” that are delivered to those synapses that the neuron is pre-synaptic to.    Neurons can be either “excitatory” or “inhibitory.”
12. On a historical note, in 1956 a team of IBM researchers simulated 512 neurons (N. Rochester, J. H. Holland, L. H. Haibt, and W. L. Duda, Tests on a Cell Assembly Theory of the Action of the Brain Using a Large Digital Computer, IRE Transactions on Information Theory, IT-2, pp. 80-93, September 1956).

Our results represent a judicious intersection between computer science, which defines the region of feasibility in terms of the computing resources available today, and neuroscience, which defines the region of desirability in terms of the biological details one would like to add. At any given point in time, to achieve a particular scale of simulation at a particular simulation speed, one must balance feasibility against desirability. Thus, our results demonstrate that a non-empty intersection between these two regions exists today at rat scale, in near real time, and at a certain complexity of simulation. This intersection will continue to expand over time. As more biological richness is added, correspondingly more resources will be required to accommodate the model in memory and to maintain reasonable simulation times.

The value of the current simulator lies in the fact that it permits almost interactive, large-scale simulation and hence allows us to explore a wide space of parameters in trying to uncover ("guess") the function of the cerebral cortex. Furthermore, understanding and harnessing the dynamics of such large-scale networks is a tremendously exciting frontier. We hope that C2 will become the linear accelerator of cognitive computing.