
Introduction to Spiking Neural Networks: From a Computational Neuroscience perspective

Slides for the Neural Networks study group at Mozilla Community Space Taipei on Oct. 19, 2019



  1. Introduction to Spiking Neural Networks. Jason Tsai (蔡志順), Oct. 19, 2019, @Mozilla Community Space Taipei *Picture adapted from https://bit.ly/2ts8xCk
  2. *Copyright Notice: All figures in this presentation are taken from the sources cited on the respective slides, and their copyright belongs to those owners. The presentation itself is released under a Creative Commons license.
  3. Neural Networks 3D Simulation (video demo) *Video from https://youtu.be/3JQ3hYko51Y
  4. Questions • What are the advantages of spiking neural networks and neuromorphic computing? • What are the current challenges of spiking neural networks (SNNs)?
  5. Characteristics of SNNs • Spatio-temporal • Asynchronous • Sparsity • Additive weight operation* • Energy-efficient • Stochastic • Robust to noise
  6. Outline • Basic neuroscience • Learning algorithms • Neuron models • Neural encoding schemes • Neuromorphic platforms
  7. Prerequisite Neuroscience
  8. Nerve Cell (Neuron) *Figure adapted from Eric R. Kandel et al., Principles of Neural Science, Fifth Edition. McGraw-Hill Education. 2013. Page 22.
  9. Synapse *Figure adapted from https://bit.ly/2ycOmcq (ROC means receptor-operated channels)
  10. A Neuron's Spike: the Action Potential *Figure adapted from https://en.wikipedia.org/wiki/Action_potential & the front cover of "Spikes: Exploring the Neural Code" (1999)
  11. EPSP / IPSP (Excitatory / Inhibitory Postsynaptic Potentials) *Figure adapted from https://bit.ly/2OgAx7z
  12. The Effect of Presynaptic Spikes on a Postsynaptic Neuron *Figure adapted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. 2002. Page 5.
  13. Hebb's Learning Postulate • "When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."* *Refer to Donald O. Hebb, The Organization of Behavior: A Neuropsychological Theory. 1949 & 2002. Page 62. • Causality • Repetition
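Hebb's postulate is commonly formalized as a rate-based update in which the weight change is proportional to the product of pre- and postsynaptic activity. A minimal sketch in Python/NumPy; the learning rate and activity values are illustrative assumptions, not from the slides:

```python
import numpy as np

# Hebb's postulate as a rate-based learning rule ("cells that fire
# together wire together"): the weight change is proportional to the
# product of pre- and postsynaptic activity.

eta = 0.01                          # learning rate (assumed)
pre = np.array([0.9, 0.1, 0.8])     # presynaptic firing rates (arbitrary)
w = np.array([0.5, 0.5, 0.5])       # synaptic weights

post = np.dot(w, pre)               # postsynaptic activity (linear neuron)
w += eta * pre * post               # Hebbian update: dw_i = eta * x_i * y
print(w)
```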
  14. Long-Term Potentiation (LTP) / Long-Term Depression (LTD) • LTP is a long-lasting, activity-dependent increase in synaptic strength that is a leading candidate as a cellular mechanism contributing to memory formation in mammals, in a very broadly applicable sense.* *Refer to J. David Sweatt. Mechanisms of Memory, Second Edition. Academic Press. 2010. Page 112.
  15. Synaptic Plasticity *Figure adapted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. 2002. Page 353.
  16. Back-propagating Action Potential (bAP) Induction of tLTP requires activation of the presynaptic input milliseconds before the bAP arrives in the postsynaptic dendrite. *Further reading: https://en.wikipedia.org/wiki/Neural_backpropagation
  17. Spike-Timing-Dependent Plasticity (STDP) *Figure adapted from https://doi.org/10.3389/fnsyn.2011.00004
  18. Experimental Evidence of STDP • From Wikipedia: "Henry Markram, when he was in Bert Sakmann's lab and published their work in 1997, used dual patch clamping techniques to repetitively activate pre-synaptic neurons 10 milliseconds before activating the post-synaptic target neurons, and found the strength of the synapse increased. When the activation order was reversed so that the pre-synaptic neuron was activated 10 milliseconds after its post-synaptic target neuron, the strength of the pre-to-post synaptic connection decreased. Further work, by Guoqiang Bi, Li Zhang, and Huizhong Tao in Mu-Ming Poo's lab in 1998, continued the mapping of the entire time course relating pre- and post-synaptic activity and synaptic change, to show that in their preparation synapses that are activated within 5-20 ms before a postsynaptic spike are strengthened, and those that are activated within a similar time window after the spike are weakened." *Further reading: https://en.wikipedia.org/wiki/Spike-timing-dependent_plasticity
  19. Cortical Column *Figure adapted from https://bit.ly/2OZQpKA
  20. Lateral Inhibition Lateral inhibition is a central nervous system process whereby a stimulus applied to the center of a neuron's receptive field excites the neuron, while a stimulus applied near the edge of the field inhibits it. *Figure adapted from https://bit.ly/2yaat37
  21. Lateral Inhibition (Cont'd) *Figures adapted from http://wei-space.blogspot.tw/2007/11/lateral-inhibition.html & https://en.wikipedia.org/wiki/Lateral_inhibition
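The contrast-enhancing effect of lateral inhibition can be illustrated with a one-dimensional toy model in which each unit is excited by its own input and inhibited by its neighbors. The kernel weights below are illustrative assumptions:

```python
import numpy as np

# Lateral inhibition on a 1-D luminance step (the classic Mach-band
# demonstration): center excites, surround inhibits, so contrast at
# the edge is exaggerated.

signal = np.array([1.0] * 8 + [3.0] * 8)   # step in input intensity
kernel = np.array([-0.3, 1.6, -0.3])       # excitatory center, inhibitory surround

response = np.convolve(signal, kernel, mode="same")
print(response)   # overshoot/undershoot appears at the edge: contrast enhancement
```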
  22. Hierarchical Sparse Distributed Representations in Visual Cortex *Figures adapted from https://bit.ly/2Ov5qV2 & https://bit.ly/2xTS1fw
  23. Dopamine: Essential for Reward Processing in the Mammalian Brain Dopamine neurons form an enormous number of synaptic contacts with their targets! *Figure adapted from http://www.jneurosci.org/content/29/2/444
  24. Learning Rules
  25. Two Hot Approaches • Supervised: stochastic gradient descent-based backpropagation learning rule (treating the membrane potentials of spiking neurons as differentiable signals, with the discontinuities at spike times regarded as noise*) • Unsupervised: STDP (Spike-Timing-Dependent Plasticity)-based learning rule *Refer to Jun Haeng Lee et al., Training Deep Spiking Neural Networks Using Backpropagation. Frontiers in Neuroscience, 08 November 2016. https://doi.org/10.3389/fnins.2016.00508
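A common way to realize this idea in code is a "surrogate gradient": a hard threshold in the forward pass and a smooth stand-in derivative in the backward pass. The sketch below is a generic PyTorch illustration of that pattern, not the exact method of the cited paper:

```python
import torch

# Generic surrogate-gradient sketch: the forward pass emits a hard
# spike when the membrane potential crosses threshold; the backward
# pass substitutes a smooth surrogate derivative so backprop/SGD can
# flow through the otherwise non-differentiable spike.

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0.0).float()           # spike if potential exceeds threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # fast-sigmoid surrogate: d(spike)/dv ~ 1 / (1 + |v|)^2
        return grad_output / (1.0 + v.abs()) ** 2

spike_fn = SurrogateSpike.apply
v = torch.randn(5, requires_grad=True)     # membrane potentials minus threshold
loss = spike_fn(v).sum()
loss.backward()
print(v.grad)                               # nonzero gradients despite the step function
```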
  26. STDP Learning Rule *Refer to Yu, Q., Tang, H., Hu, J., Tan, K.C., Neuromorphic Cognitive Systems: A Learning and Memory Centered Approach. Springer International Publishing. 2017. Page 9.
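For concreteness, here is the standard pair-based form of the STDP window (a generic textbook formulation, not necessarily the exact rule in the cited book); amplitudes and time constants are illustrative assumptions:

```python
import numpy as np

# Pair-based STDP window:
#   dt = t_post - t_pre
#   dw = +A_plus  * exp(-dt / tau_plus)   if dt > 0  (pre before post: LTP)
#   dw = -A_minus * exp(+dt / tau_minus)  if dt < 0  (post before pre: LTD)

A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0        # time constants (ms)

def stdp_dw(dt_ms):
    if dt_ms > 0:
        return A_plus * np.exp(-dt_ms / tau_plus)
    elif dt_ms < 0:
        return -A_minus * np.exp(dt_ms / tau_minus)
    return 0.0

for dt in (-40, -10, 10, 40):
    print(dt, stdp_dw(dt))
```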
  27. STDP Learning Rule (1-to-1) *Figure adapted from http://dx.doi.org/10.7551/978-0-262-33027-5-ch037
  28. STDP Learning Rule (2-to-1) N0 is stimulated until N1 fires, then e0 is stopped for 30 ms. N2 is stimulated by e2 during those 30 ms. *Figure adapted from http://dx.doi.org/10.7551/978-0-262-33027-5-ch037
  29. STDP Finds Spike Patterns *Figure adapted from https://doi.org/10.1371/journal.pone.0001377
  30. Triplet STDP *Figure adapted from https://doi.org/10.1523/JNEUROSCI.1425-06.2006
  31. Triplet STDP with Traces *Figure adapted from https://doi.org/10.1007/s00422-008-0233-1
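In trace-based implementations, each neuron carries exponentially decaying spike traces, and weight updates read the traces out at spike times. The sketch below follows the spirit of the Pfister-Gerstner triplet rule; all amplitudes, time constants, and spike times are illustrative assumptions:

```python
# Trace-based triplet STDP sketch: fast and slow pre traces (r1, r2)
# and post traces (o1, o2) decay exponentially; spikes read out the
# traces and then increment them.

tau_r1, tau_r2 = 16.8, 101.0     # presynaptic trace time constants (ms)
tau_o1, tau_o2 = 33.7, 125.0     # postsynaptic trace time constants (ms)
A2p, A3p = 5e-3, 6e-3            # pair / triplet LTP amplitudes
A2m, A3m = 7e-3, 2e-4            # pair / triplet LTD amplitudes

dt = 1.0                          # simulation step (ms)
r1 = r2 = o1 = o2 = 0.0
w = 0.5

pre_spikes = {10, 30, 50}        # toy spike times (ms)
post_spikes = {15, 35, 55}

for t in range(100):
    r1 -= r1 / tau_r1 * dt; r2 -= r2 / tau_r2 * dt   # decay all traces
    o1 -= o1 / tau_o1 * dt; o2 -= o2 / tau_o2 * dt
    if t in pre_spikes:          # LTD on pre spike, gated by post trace o1
        w -= o1 * (A2m + A3m * r2)
        r1 += 1.0; r2 += 1.0
    if t in post_spikes:         # LTP on post spike, gated by pre trace r1
        w += r1 * (A2p + A3p * o2)
        o1 += 1.0; o2 += 1.0

print(w)
```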
  32. Reward-Modulated STDP *Figure adapted from https://doi.org/10.1371/journal.pcbi.1000180
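Reward-modulated STDP is a three-factor rule: the STDP update is first stored in a slowly decaying eligibility trace, and the weight only changes when a global reward (dopamine-like) signal arrives. A minimal sketch, with all parameters illustrative rather than taken from the cited paper:

```python
# Reward-modulated STDP (generic three-factor form): STDP updates
# accumulate in an eligibility trace e; the weight change is the
# trace gated by a global reward signal.

tau_e = 200.0      # eligibility trace time constant (ms)
eta = 0.1          # learning rate
dt = 1.0

e = 0.0            # eligibility trace
w = 0.5

for t in range(500):
    e -= e / tau_e * dt
    stdp = 0.02 if t == 100 else 0.0    # a pre-before-post pairing at t = 100 ms
    e += stdp
    reward = 1.0 if t == 300 else 0.0   # delayed dopamine pulse at t = 300 ms
    w += eta * reward * e               # weight changes only when reward arrives

print(w)
```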
  33. Neural Modeling
  34. 1st Generation of Neuron Models (McCulloch–Pitts Neuron Model) *Figure adapted from http://wwwold.ece.utep.edu/research/webfuzzy/docs/kk-thesis/kk-thesis-html/node12.html
  35. 2nd Generation of Neuron Models *Figure adapted from http://cs231n.github.io/neural-networks-1/
  36. 3rd Generation of Neuron Models (Spiking Neuron Models) *Figure adapted from http://kzyjc.cnjournals.com/html/2018/5/20180512.htm
  37. Spiking Neuron Models Miscellaneous models (integrators / resonators): • Hodgkin-Huxley model • Izhikevich model • Leaky Integrate-and-Fire (LIF) model • Resonate-and-Fire model • Spike Response Model (SRM) ... *Further reading: https://en.wikipedia.org/wiki/Biological_neuron_model & http://www.scholarpedia.org/article/Spike-response_model
  38. Hodgkin-Huxley Model *Figure adapted from Wulfram Gerstner & Werner M. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press. 2002. Page 34.
  39. Hodgkin-Huxley Model (Cont'd) *Taken from: https://www.bonaccorso.eu/2017/08/19/hodgkin-huxley-spiking-neuron-model-python/amp/
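For reference, the standard form of the Hodgkin-Huxley model: a membrane equation with sodium, potassium, and leak currents, plus first-order kinetics for each gating variable.

```latex
C \frac{dV}{dt} = I_{\mathrm{ext}} - g_{\mathrm{Na}}\, m^{3} h\,(V - E_{\mathrm{Na}}) - g_{\mathrm{K}}\, n^{4}(V - E_{\mathrm{K}}) - g_{L}(V - E_{L}),
\qquad
\frac{dx}{dt} = \alpha_{x}(V)\,(1 - x) - \beta_{x}(V)\,x, \quad x \in \{m, h, n\}
```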
  40. Izhikevich Model *Taken from: http://www.physics.usyd.edu.au/teach_res/mp/ns/doc/nsIzhikevich3.htm
  41. Izhikevich Model (Cont'd) *Refer to Simple Model of Spiking Neurons (2003) https://www.izhikevich.org/publications/spikes.htm
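The model from the cited 2003 paper is just two coupled equations with a reset, which makes it easy to simulate directly. The sketch below uses the paper's regular-spiking parameter set; the input current and time step are illustrative choices:

```python
# Izhikevich model (2003):
#   v' = 0.04 v^2 + 5 v + 140 - u + I
#   u' = a (b v - u)
#   if v >= 30 mV: v <- c, u <- u + d

a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical neuron
v, u = -65.0, b * -65.0              # membrane potential (mV) and recovery variable
dt = 0.5                              # time step (ms), illustrative
I = 10.0                              # constant input current, illustrative

spikes = []
for step in range(2000):              # 1000 ms of simulation
    v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                     # spike peak reached: reset
        spikes.append(step * dt)
        v, u = c, u + d

print(len(spikes), "spikes, first few at", spikes[:5], "ms")
```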
  42. Leaky Integrate-and-Fire Model *Figure adapted from Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam Paninski, "Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition". Cambridge University Press. 2014. Page 11.
  43. The Firing of a Leaky Integrate-and-Fire Model Neuron *Figure adapted from https://doi.org/10.1371/journal.pone.0001377
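A minimal LIF simulation, assuming the standard membrane equation tau_m dV/dt = -(V - V_rest) + R I with threshold-and-reset; all parameter values are illustrative:

```python
# Leaky integrate-and-fire neuron: the membrane potential decays
# toward rest, integrates input current, and fires/resets at threshold.

tau_m, R = 10.0, 1.0                           # time constant (ms), resistance
V_rest, V_th, V_reset = -65.0, -50.0, -65.0    # potentials (mV)
dt = 0.1                                        # time step (ms)
I = 20.0                                        # constant input current

V = V_rest
spike_times = []
for step in range(int(100 / dt)):               # simulate 100 ms
    V += dt / tau_m * (-(V - V_rest) + R * I)
    if V >= V_th:
        spike_times.append(round(step * dt, 1))
        V = V_reset                             # fire and reset

print(spike_times)
```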
  44. Resonate-and-Fire Model *Refer to Resonate-and-Fire Neurons (2001) https://www.izhikevich.org/publications/resfire.htm
  45. Neural Coding
  46. Hypothesized Neural Coding Schemes • Rate Coding • Temporal Coding • Population Coding • Sparse Coding *Further reading: https://en.wikipedia.org/wiki/Neural_coding
  47. Rate Coding • Rate as a Spike Density • Rate as a Population Activity *Further reading: http://lcn.epfl.ch/~gerstner/SPNM/node7.html
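A rate code can be illustrated by encoding a stimulus intensity as a Poisson spike train and decoding it back from the spike count; the rate and duration below are illustrative assumptions:

```python
import numpy as np

# Rate coding sketch: the stimulus is encoded as a Poisson spike train
# whose expected rate is proportional to the stimulus intensity, then
# decoded by counting spikes over the window.

rng = np.random.default_rng(0)
dt = 1.0                    # ms per time step
T = 1000                    # total duration (ms)
rate_hz = 40.0              # target firing rate encoding the stimulus

p_spike = rate_hz * dt / 1000.0              # spike probability per step
spikes = rng.random(int(T / dt)) < p_spike

decoded_rate = spikes.sum() / (T / 1000.0)   # spike count / duration (Hz)
print(f"decoded rate ~ {decoded_rate:.1f} Hz")
```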
  48. Temporal Coding • Time-to-First-Spike (Latency Code) • Firing at Phases with respect to an Oscillation • Interspike Synchrony *Further reading: http://lcn.epfl.ch/~gerstner/SPNM/node8.html
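The time-to-first-spike scheme can be sketched with a simple (assumed) linear mapping from stimulus intensity to spike latency: stronger inputs fire earlier.

```python
import numpy as np

# Time-to-first-spike (latency) coding sketch: latency decreases with
# intensity inside a fixed encoding window. The mapping is an
# illustrative assumption, not a specific model from the slides.

t_max = 50.0                               # encoding window (ms)
intensities = np.array([0.9, 0.5, 0.1])    # normalized stimulus intensities

latencies = t_max * (1.0 - intensities)    # strong input -> short latency
for i, t in zip(intensities, latencies):
    print(f"intensity {i:.1f} -> first spike at {t:.1f} ms")
```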
  49. Population Coding *Figure adapted from https://doi.org/10.1038/35039062
  50. Sparse Coding *Figure adapted from http://brainworkshow.sparsey.com/measuring-similarity-in-localist-vs-distributed-representations/
  51. Sparse Coding with Inhibitory Neurons • Population sparseness: few neurons are active at any given time • Lifetime sparseness: individual neurons respond to only a few specific stimuli *Figure adapted from https://doi.org/10.1523/JNEUROSCI.4188-12.2013
  52. Neuromorphic Computing
  53. Categories of AI Chips • AI Accelerator: GPU, FPGA, ASIC • Neuromorphic chip: Network-on-Chip, Memory-based, Memristor-based, Many-core CPU, DSP, Spintronics-based, Photonics-based
  54. Why Neuromorphic? *Figure adapted from https://bit.ly/31v1NAS
  55. IBM's TrueNorth Chip *Figure adapted from https://doi.org/10.1126/science.1254642 *Video demo: https://youtu.be/7ELRZrjCFd0
  56. Intel's Loihi Chip *Figure adapted from https://doi.org/10.1109/MM.2018.112130359 *Video demo: https://youtu.be/cDKnt9ldXv0
  57. BrainChip's Akida NSoC *Figure adapted from https://www.brainchipinc.com/products/akida-neuromorphic-system-on-chip *Video demo: https://bit.ly/35rea45
  58. Tsinghua University's "Tianjic" Chip (北京清華大學「天機芯」) *Figure adapted from https://doi.org/10.1038/s41586-019-1424-8 *Video demo: https://youtu.be/Nf0qVjT9WV0
  59. ANN-to-SNN Conversion • Train ANNs using standard supervised techniques such as backpropagation, to leverage the superior performance of trained ANNs, and then convert them to event-driven SNNs for inference on neuromorphic platforms. • The rate-encoded spike rates are approximately proportional to the magnitudes of the original ANN inputs.
  60. ANN-to-SNN Conversion (Cont'd) A Poisson event-generation process is used to produce the input spike train to the network. *Figure adapted from https://arxiv.org/abs/1802.02627
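A minimal sketch of the rate-based conversion idea (a generic scheme, not the exact pipeline of the cited paper): reuse "trained" ANN weights, feed Poisson-encoded inputs, and replace ReLU units with non-leaky integrate-and-fire neurons, so output spike rates approximate the ReLU activations. Weights and inputs below are random placeholders:

```python
import numpy as np

# ANN-to-SNN conversion sketch: for a neuron with non-negative net
# input r, an integrate-and-fire unit driven by Poisson inputs fires
# at a rate ~ r / threshold; negative net input yields no spikes,
# matching ReLU. Approximation holds for rates below one per step.

rng = np.random.default_rng(1)
W = rng.normal(0, 0.5, size=(4, 8))      # "trained" weights (placeholder)
x = rng.random(8)                         # ANN input in [0, 1]

ann_out = np.maximum(W @ x, 0.0)          # ReLU activations to approximate

T, threshold = 2000, 1.0                  # time steps, firing threshold
v = np.zeros(4)
counts = np.zeros(4)
for _ in range(T):
    in_spikes = (rng.random(8) < x).astype(float)   # Poisson-like input spikes
    v += W @ in_spikes
    fired = v >= threshold
    counts += fired
    v[fired] -= threshold                 # reset by subtraction

print("ANN :", np.round(ann_out, 2))
print("SNN :", np.round(counts / T, 2))  # spike rate per time step
```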
  61. Software Simulation • MATLAB • PyNN http://neuralensemble.org/PyNN/ • BindsNET (with PyTorch) https://github.com/Hananel-Hazan/bindsnet • Brian http://briansimulator.org/ • Nengo https://www.nengo.ai/ • NEST http://www.nest-simulator.org/
  62. Further Reading • Wulfram Gerstner & Werner M. Kistler, "Spiking Neuron Models: Single Neurons, Populations, Plasticity". Cambridge University Press (2002) • Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam Paninski, "Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition". Cambridge University Press (2014) • Eugene M. Izhikevich, "Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting". The MIT Press (2007) • Nikola K. Kasabov, "Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence". Springer International Publishing (2018) • 蔺想红、王向文, "脉冲神经网络原理及应用" (Principles and Applications of Spiking Neural Networks). 科学出版社 (Science Press) (2018)
