* Presentation slides from PyCon Korea 2020.
Neuroscience: the root of modern artificial neural networks!
This talk shares a neuroscientific approach to artificial neural networks,
along with examples of Python-based neuromorphic neural network models that mimic the firing of brain cells.
A neuromorphic neural network is not simply conventional deep learning with a different cell structure.
Brain simulation can overcome biological constraints that make real-world experiments difficult,
and it can play a key role in uncovering the brain's information-processing mechanisms and in finding drug targets for various brain diseases.
I hope this talk inspires new ideas for the many researchers working on machine learning.
2. My dream is to develop AGI (artificial general intelligence)
by combining the principles of human learning
with the latest deep learning techniques.
✉ bananaband657@gmail.com
🏠 https://banana-media-lab.tistory.com
https://github.com/MrBananaHuman
3. 01 Introduction
- Introduction to neuroscience
02 Spiking Neural Network (SNN)
- SNN as a neuromorphic neural network model
03 Modeling of SNN
- Python Nengo library for SNN modeling
04 Applications of SNN
- Deep SNN models
05 Future of SNN
- Neuromorphic chip
5. Introduction
The brain is the most complex 1.5 kg organ that controls all functions of the body, interprets information from the outside
world, and embodies the essence of the mind and soul.
(Figure: Thoughts, Perceptions, Language, Sensations, Memories, Actions, Emotions, Learning)
7. History of Neuroscience - Neuron
Neuroscience is the study of how the nervous system develops, its structure, and what it does.
The first drawing of a neuron as a nerve cell (1865) [1]; the first illustrations of a synapse (1893, 1897) [2-3]
[1] Otto Friedrich Karl Deiters, 1865
[2] Sherrington CS, 1897, A textbook of physiology, London:Macmillian, p.1024-70
[3] Cajal R, 1893, Arch Anat Physiol Anat Abth., V & VI:310-428
8. History of Neuroscience - Neuron
A typical neuron consists of a cell body (soma), dendrites, and a single axon.
[1] https://ib.bioninja.com.au/standard-level/topic-6-human-physiology/65-neurons-and-synapses/neurons.html
(Figure [1]: dendrites, nucleus, soma (cell body), axon, myelin sheath, axon terminals, and synapses)
9. History of Neuroscience – Action Potential
An action potential is a rapid rise and subsequent fall in voltage or membrane potential across a cellular membrane with a
characteristic pattern.
(Figures [1-3]: the squid giant axon, an early recorded action potential, and the characteristic action potential phases)
[1] How big is the GIANT Squid Giant Axon?, @TheCellularScale
[2] Hodgkin AL & Huxley AF, 1945, J Physiol
[3] https://www.moleculardevices.com/applications/patch-clamp-electrophysiology/what-action-potential#gref
10. History of Neuroscience - Synapse
Synapses are biological junctions through which neurons' signals can be sent to each other.
[1] https://synapseweb.clm.utexas.edu/type-1-synapse
[2] Besson, P., 2017, Doctoral dissertation
(Figure [1]: presynaptic neuron, synapse, postsynaptic neuron; Figure [2]: excitatory postsynaptic potential (EPSP) and inhibitory postsynaptic potential (IPSP))
11. History of Neuroscience - Synaptic Plasticity in Synapse
Synaptic plasticity refers to the phenomenon whereby the strength of synaptic connections between neurons changes over time.
[1] M G LARRABEE, D W BRONK, 1947, J Neurophysiol.
(Figure [1]: action potentials recorded from the postganglionic nerve before and after stimulating the presynaptic neuron, 1947)
13. Artificial Neural Network (ANN) Revolution
[1]
An ANN is an abstract model that mimics the complex structure and functioning of the brain, and it has been developing explosively in
recent years.
[1] A brief history of neural nets and deep learning by A. Kurenkov
14. Limitation of ANN
Despite the success of the ANN algorithm, it has clear limitations.
Computational limitations
[1] Whittington and Bogacz, 2019, Trends in Cognitive Sciences
[2] Grossberg, 1987, Cognitive Science
[3] Lillicrap et al., 2020, Nature Review Neuroscience
• Lack of local error representation → Vanishing gradients [1]
• Symmetry of forward and backward weights → Weight transport problem [2]
• Feedback in brains alters neural activity [3]
• Unrealistic models of neurons → Large computational cost [1]
• Error signals are signed and potentially extreme-valued → Overfitting [3]
15. How Does The Brain Learn?
[1]
[1] Brainbow Hippocampus, Greg Dunn and Brian Edwards, 2014
[2] https://blogs.cardiff.ac.uk/acerringtonlab/ca1-pyramidal-neuron-red-hot/
[2]
17. Overview
SNNs operate using spikes, which are discrete events that take place at points in time, rather than continuous values.
[1] Anwani and Rajendran, 2015, IJCNN
[1]
Components
• Spiking neuron model
• Synapse
• Synaptic plasticity
18. Spiking Neuron Model - Leaky Integrate-and-Fire (LIF) Model
A spiking neuron model is a mathematical description of the properties of certain cells in the nervous system that generate
sharp electrical potentials across their cell membrane, roughly one millisecond in duration.
[1] Teka, W. et al., 2014, PLoS Comput Biol.
[Appendix 1] https://www.youtube.com/watch?v=2_MIjvwWsrg
[Appendix 2] https://www.youtube.com/watch?v=KXnHxZdn8NU
Characteristics
• Subthreshold leaky-integrator dynamics
• A firing threshold
• A reset mechanism
Resistor-Capacitor (RC) circuit [1]
19. Spiking Neuron Model - Leaky Integrate-and-Fire (LIF) Model
A spiking neuron model is a mathematical description of the properties of certain cells in the nervous system that generate
sharp electrical potentials across their cell membrane, roughly one millisecond in duration.
Characteristics
• Subthreshold leaky-integrator dynamics
• A firing threshold
• A reset mechanism
[1] Louis Lapicque, 1907, Journal de Physiologie et de Pathologie Générale.
[Appendix 1] https://www.youtube.com/watch?v=2_MIjvwWsrg
[Appendix 2] https://www.youtube.com/watch?v=KXnHxZdn8NU
Leaky Integrate-and-Fire model [1]
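Putting the three characteristics together, here is a minimal Euler-integration sketch of an LIF neuron; all parameter values below are illustrative assumptions, not taken from the slides.
import numpy as np

# LIF neuron: leaky integration, firing threshold, reset (illustrative constants)
tau_m, r_m = 0.02, 10.0                          # membrane time constant (s), resistance
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # potentials (mV)
dt, i_in = 1e-4, 2.0                             # time step (s), constant input current

v, spike_times, trace = v_rest, [], []
for step in range(int(0.5 / dt)):
    v += dt * (-(v - v_rest) + r_m * i_in) / tau_m  # subthreshold leaky integration
    if v >= v_thresh:                               # firing threshold crossed
        spike_times.append(step * dt)               # record a spike
        v = v_reset                                 # reset mechanism
    trace.append(v)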
20. Synapse Model
The synapse model converts presynaptic spikes into the input current that stimulates the spiking neuron model.
[1] Dutta, S. et al., 2017, Scientific reports
[1]
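One common choice for such a synapse model (an assumption here, not necessarily the model in [1]) is an exponentially decaying current: each presynaptic spike adds a weighted jump that then decays with a synaptic time constant.
import numpy as np

# Exponential synapse sketch (illustrative constants, not from the slides)
tau_syn, dt, w = 5e-3, 1e-4, 0.3   # decay constant (s), time step (s), synaptic weight
spike_steps = {100, 120, 500}      # presynaptic spike times, in simulation steps

i_syn = np.zeros(1000)
for k in range(1, len(i_syn)):
    i_syn[k] = i_syn[k - 1] * (1 - dt / tau_syn)  # exponential decay
    if k in spike_steps:
        i_syn[k] += w                             # jump on each presynaptic spike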
21. Synaptic Plasticity - Learning in the Brain
The brain learns to reduce punishment and to improve knowledge (reward(?)); an ANN instead minimizes a loss function between output and target, propagating an error signal scaled by a learning rate. [1]
SNN learning rules fall into two families:
• Unsupervised learning - "fire together, wire together": STDP learning, BCM learning
• Supervised learning - local error propagation: TP learning, PES learning
[1] Timothy P. Lillicrap et al., 2020, Nat Rev Neurosci.
22. Unsupervised Learning - Spike Timing Dependent Plasticity (STDP)
The Spike Timing Dependent Plasticity (STDP) algorithm, which has been observed in the mammalian brain, modulates the
weight of a synapse based on the relative timing of presynaptic and postsynaptic spikes. [1-3]
[1] Wang, R. et al., 2016, ISCAS
[2] Gerstner et al., 1996, Nature
[3] Bi and Poo, 1998, Journal of Neuroscience
(Figures [1-3]: synaptic weight change as a function of the timing difference Δt between pre- and postsynaptic spikes)
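A minimal sketch of the classic exponential STDP window follows; the amplitudes and time constants are illustrative assumptions, only roughly in the range reported in [3].
import numpy as np

# STDP window: weight change as a function of dt = t_post - t_pre (ms)
a_plus, a_minus = 0.01, 0.012     # LTP / LTD amplitudes (illustrative)
tau_plus, tau_minus = 20.0, 20.0  # time constants in ms (illustrative)

def stdp_dw(dt_ms):
    if dt_ms > 0:  # pre fires before post -> potentiation (LTP)
        return a_plus * np.exp(-dt_ms / tau_plus)
    return -a_minus * np.exp(dt_ms / tau_minus)  # post before pre -> depression (LTD)

print(stdp_dw(10.0), stdp_dw(-10.0))  # positive, then negative weight change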
23. Unsupervised Learning - Bienenstock, Cooper & Munro (BCM)
The BCM model proposes a sliding threshold for long-term potentiation (LTP) or long-term depression (LTD) induction, and
states that synaptic plasticity is stabilized by a dynamic adaptation of the time-averaged postsynaptic activity.
[1] Bienenstock, Cooper & Munro 1982 J Neurosci
Bienenstock, Cooper & Munro (BCM) learning [1]
(Figure panels: learning in visual cortex; the BCM model)
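A minimal sketch of the BCM rule for a single linear synapse (my own simplification, with illustrative constants): the weight change is proportional to x · y · (y − θ), and the threshold θ slides to track the time-averaged squared postsynaptic activity.
import numpy as np

# BCM rule sketch (illustrative constants, single linear neuron)
eta, tau_theta = 1e-3, 100.0
w, theta = 0.5, 1.0
rng = np.random.default_rng(0)

for _ in range(1000):
    x = rng.random()                       # presynaptic activity
    y = w * x                              # postsynaptic activity
    w += eta * x * y * (y - theta)         # LTP if y > theta, LTD if y < theta
    theta += (y ** 2 - theta) / tau_theta  # sliding threshold tracks <y^2>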
24. Supervised Learning - Target Propagation (TP)
Hypothesis
• The essential idea behind using a stack of auto-encoders for deep learning
• This backward-propagated target induces hidden-activity targets that should have been realized by the network
• Learning proceeds by updating the forward weights to minimize these local layer-wise activity differences
(Figure [1]: target propagation (TP) learning, with output, target, and local layer-wise errors)
[1] Timothy P. Lillicrap et al., 2020, Nat Rev Neurosci.
25. Supervised Learning - Prescribed Error Sensitivity (PES)
A connection from x to y learns to output y* by minimizing |y* − y|.
(Figure [1]: Prescribed Error Sensitivity (PES) learning [2])
[1] Timothy P. Lillicrap et al., 2020, Nat Rev Neurosci.
[2] Voelker, A. R., 2015, Centre for Theoretical Neuroscience
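In NEF terms (following [2]), a common form of the PES update moves the decoders of the learned connection along the negative error:
Δd_i = −κ · E · a_i
where d_i are the decoders, κ is a scalar learning rate, E = y − y* is the error signal, and a_i is the (filtered) activity of neuron i. The Nengo code on the later slides drives exactly such an error signal into the learning rule.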
26. Modeling of Spiking Neural Network (SNN)
Python Nengo library for SNN modeling
03
27. Nengo Library
The Nengo Brain Maker is a Python package for building, testing, and deploying neural networks based on the Neural Engineering
Framework (NEF).
[1] https://www.nengo.ai/
[1]
29. Nengo Tutorial
Installation
!pip install nengo
Usage
import nengo
import numpy as np
Build a network
net = nengo.Network()
with net:
    sin_input = nengo.Node(output=np.sin)  # node that outputs a sine wave over time
    input_neuron = nengo.Ensemble(n_neurons=4, dimensions=1)  # 4 spiking neurons encoding a 1D value
    nengo.Connection(sin_input, input_neuron)  # feed the sine input into the ensemble
(Diagram: Node (Sine) → Ensemble)
30. Spiking Neuron Model
Characteristics
import matplotlib.pyplot as plt
%matplotlib inline
from nengo.dists import Choice
from nengo.utils.ensemble import tuning_curves
from nengo.utils.matplotlib import rasterplot
with nengo.Simulator(net) as sim:
    plt.figure()
    plt.plot(*tuning_curves(input_neuron, sim))  # firing rate of each neuron vs. represented value
    plt.xlabel("input value")
    plt.ylabel("firing rate")
    plt.xlim(-1, 1)
    plt.title(str(nengo.LIF()))
    sim.run(5.0)  # simulate 5 seconds of spiking activity
46. Supervised Learning
With PES learning
from nengo.processes import WhiteSignal
with net:
    noise_input = nengo.Node(WhiteSignal(60, high=5), size_out=1)  # band-limited noise input
    input_layer = nengo.Ensemble(60, dimensions=1)
    output_layer = nengo.Ensemble(60, dimensions=1)
    nengo.Connection(noise_input, input_layer)
    conn = nengo.Connection(input_layer, output_layer)  # the connection to be learned
    error_neuron = nengo.Ensemble(60, dimensions=1)
    nengo.Connection(output_layer, error_neuron)  # error = output - input
    nengo.Connection(input_layer, error_neuron, transform=-1)
    conn.learning_rule_type = nengo.PES()  # attach the PES learning rule
    nengo.Connection(error_neuron, conn.learning_rule)  # drive learning with the error
with nengo.Simulator(net) as sim:
    sim.run(10.0)
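To verify that the connection is actually learning, probes can be added before running; a minimal sketch (the probe names are my own):
with net:
    input_probe = nengo.Probe(input_layer, synapse=0.01)    # filtered decoded input
    output_probe = nengo.Probe(output_layer, synapse=0.01)  # filtered decoded output
with nengo.Simulator(net) as sim:
    sim.run(10.0)
plt.plot(sim.trange(), sim.data[input_probe], label="input")
plt.plot(sim.trange(), sim.data[output_probe], label="learned output")
plt.legend()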
48. Keras Model Converting
[1]
[1] https://towardsdatascience.com/mnist-handwritten-digits-classification-using-a-convolutional-neural-network-cnn-af5fafbc35e9
49. Keras Model Converting
MNIST model converting
import tensorflow as tf
import nengo_dl

# model, inp, and dense1 refer to the Keras MNIST model from the previous slide
converter = nengo_dl.Converter(
    model, swap_activations={tf.nn.relu: nengo.RectifiedLinear()}
)
epochs = 2
with nengo_dl.Simulator(converter.net, seed=0, minibatch_size=200) as sim:
    sim.compile(
        optimizer=tf.optimizers.RMSprop(0.001),
        loss={
            converter.outputs[dense1]: tf.losses.SparseCategoricalCrossentropy(
                from_logits=True
            )
        },
        metrics={converter.outputs[dense1]: tf.metrics.sparse_categorical_accuracy},
    )
    sim.fit(
        {converter.inputs[inp]: train_images},
        {converter.outputs[dense1]: train_labels},
        epochs=epochs,
    )
    sim.save_params("./mnist_model")
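After training, the saved parameters can be loaded back for evaluation; a minimal sketch (test_images and test_labels are assumed to come from the same MNIST setup, not shown on the slide):
with nengo_dl.Simulator(converter.net, minibatch_size=200) as sim:
    sim.load_params("./mnist_model")  # restore the trained weights
    sim.compile(
        loss={converter.outputs[dense1]: tf.losses.SparseCategoricalCrossentropy(from_logits=True)},
        metrics={converter.outputs[dense1]: tf.metrics.sparse_categorical_accuracy},
    )
    print(sim.evaluate(
        {converter.inputs[inp]: test_images},
        {converter.outputs[dense1]: test_labels},
    ))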
52. Solving XOR Problem
It is known that the XOR problem cannot be solved with the traditional perceptron model, but a Nengo-based SNN can solve the problem with only a single layer. [1]
(Figures [2-3])
[1] Gidon et al., 2020, Science
[2] https://github.com/sunggukcha/xor
[3] https://www.nengo.ai/examples/
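A minimal sketch of the single-layer idea (my own construction, not the cited implementation [2-3]): one ensemble represents the 2D input, and a single decoded connection approximates the nonlinear XOR function of its spiking activity.
import nengo
import numpy as np

def xor(x):
    # target function decoded from the ensemble's activity
    return float((x[0] > 0.5) != (x[1] > 0.5))

with nengo.Network() as xor_net:
    stim = nengo.Node([1, 0])  # try (0,0), (0,1), (1,0), (1,1)
    ens = nengo.Ensemble(n_neurons=200, dimensions=2, radius=1.5)
    out = nengo.Node(size_in=1)
    nengo.Connection(stim, ens)
    nengo.Connection(ens, out, function=xor)  # a single layer decodes XOR
    probe = nengo.Probe(out, synapse=0.02)

with nengo.Simulator(xor_net) as sim:
    sim.run(0.5)
print(sim.data[probe][-100:].mean())  # close to 1.0 for input (1, 0)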
53. Permuted Sequential MNIST
On the Permuted Sequential MNIST data, which contains the order information for writing the digits [1], the Nengo SNN-based
LMU (Legendre Memory Units) showed SOTA performance. [2]
[1] https://github.com/edwin-de-jong/mnist-digits-stroke-sequence-data/wiki/MNIST-digits-stroke-sequence-data
[2] Voelker, A. et al., 2019, NeurIPS
[3] https://www.nengo.ai/examples/
54. Large Scale Virtual Brain Simulation
Methods
• Semantic Pointer Architecture Unified Network (SPAUN)
• Using Nengo
• 2.5 million LIF neurons
• Success on 8 diverse tasks
• Copy drawing style
• Image recognition
• Reinforcement learning
• Serial working memory
• Counting
• Question Answering
• Rapid variable creation
• Fluid reasoning
[1] Eliasmith et al., 2012, Science
[1]
57. Neuromorphic Advantages
Advantages
• Sparsification over time → Less communication
• Less communication → Fewer memory lookups
• Cheaper computation → Sum instead of multiply
[1] Jeehyun Kwak and Hyun Jae Jang, Neural Computation Lab (NCL), Korea Univ.
[1]