Looking at the relationship between Machine Learning, Deep Learning and the Brain
Slides from a talk given at the Machine Learning and AI meetup in Melbourne. http://www.meetup.com/Machine-Learning-AI-Meetup/events/227156709/
excited then disappointed by AI
study Neuroscience (by accident)
Germany (university is free)
excited by ML
turn up in both ML and Neuroscience
Fundamental
so how similar are they really?
Walk through the steps
Simple
Linear weights
Tractable
Deterministic
Stateless
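A minimal sketch of that simple model, a weighted sum of inputs followed by a step non-linearity (the weights, inputs and threshold below are purely illustrative):

```python
import numpy as np

def artificial_neuron(inputs, weights, bias=0.0):
    """Textbook artificial neuron: linear weights, deterministic, stateless."""
    activation = np.dot(weights, inputs) + bias  # weighted sum of the inputs
    return 1.0 if activation > 0 else 0.0        # step non-linearity

# Illustrative values only
x = np.array([0.5, 1.0, -0.3])
w = np.array([0.4, -0.2, 0.7])
print(artificial_neuron(x, w))
```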
explain structure of a neuron, analogy to artificial neuron
dendrites => input
electrochemical
Complex
Nonlinear ‘weights’
Stochastic?
intractable differential equations (simplifications)
dendrites are input
neurons connect to many other neurons
dendrite computation super complex
non-linear
resistance
dendritic spikes
This is like the `sum` and step function on an artificial neuron
Neurons have state
are time dependent
fire all or nothing
have to recover
electrochemical
nasty differentials
hard to model
different types
(shows recordings of neurons), current injection
different firing patterns
Fast spiking
Stutter
Regular/Adaptive
We don’t really know exactly how they work - unlike artificial neurons, which we understand really well
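One way to get a feel for this is a simplified phenomenological model. The sketch below uses the Izhikevich (2003) model, two coupled equations plus an all-or-nothing reset, whose parameters switch it between firing patterns like those above; the parameter sets are the commonly quoted ones, used here purely for illustration:

```python
import numpy as np

def izhikevich(a, b, c, d, I=10.0, steps=1000, dt=0.5):
    """Izhikevich (2003) neuron: membrane potential v and recovery variable u (state)."""
    v, u = -65.0, b * -65.0
    spike_times = []
    for t in range(steps):
        v += dt * (0.04 * v ** 2 + 5 * v + 140 - u + I)  # simplified membrane dynamics
        u += dt * a * (b * v - u)                        # slow recovery (refractoriness)
        if v >= 30.0:                                    # fire all-or-nothing, then reset
            spike_times.append(t * dt)
            v, u = c, u + d
    return spike_times

# Commonly quoted parameter sets (illustrative):
regular = izhikevich(a=0.02, b=0.2, c=-65, d=8)  # regular / adaptive firing
fast    = izhikevich(a=0.1,  b=0.2, c=-65, d=2)  # fast spiking
print(len(regular), len(fast))  # fast spiking typically fires more often
```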
One problem - Simple vs Complex stimuli
The real world is somewhat different
Another problem
Hard to know what’s going on when a neuron is in a network
look at where they are similar and where not
when I think of a neural net…
80 billion neurons
10 trillion synapses
run on sandwiches and glasses of water
beautiful mess
ANN - Generally we think of this
feed forward
multi-layer
trained via backprop
around since the '80s
There are of course fancier versions (RNN etc)
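A minimal numpy sketch of that standard picture, a feed-forward, multi-layer network trained with backprop on XOR (layer sizes, learning rate and iteration count are just illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: explicit error signal and derivatives, sent back through the
    # same weights (note the W2.T - the part that is hard to map onto real neurons)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(out.round(2))  # should approach [0, 1, 1, 0]; some seeds may need more iterations
```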
these days it’s called deep learning
multi-layer used to be hard, now:
more data
faster computers
tricks (dropout)
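Dropout, for example, is just randomly zeroing activations while training; a sketch of the common "inverted dropout" form (the drop probability is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, p_drop=0.5, training=True):
    """Inverted dropout: zero a random subset of units during training and rescale
    the survivors so the expected activation stays the same; do nothing at test time."""
    if not training:
        return activations
    mask = rng.random(activations.shape) >= p_drop
    return activations * mask / (1.0 - p_drop)

h = np.ones((2, 6))
print(dropout(h))                   # roughly half the units zeroed, survivors scaled by 2
print(dropout(h, training=False))   # unchanged at test time
```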
Deep Learning used to reference:
Restricted Boltzmann Machines
Autoencoders
unsupervised feature learning
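As a reminder of what that looked like, here is a stripped-down RBM trained with one step of contrastive divergence (CD-1) on toy binary data; the sizes, learning rate and data are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0, 0.1, (n_visible, n_hidden))
a, b = np.zeros(n_visible), np.zeros(n_hidden)   # visible / hidden biases
sigmoid = lambda z: 1 / (1 + np.exp(-z))

data = rng.integers(0, 2, (20, n_visible)).astype(float)  # toy binary data

for _ in range(200):
    for v0 in data:
        # positive phase: sample hidden units given the data
        ph0 = sigmoid(v0 @ W + b)
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        # negative phase (CD-1): reconstruct the visibles, then recompute the hiddens
        pv1 = sigmoid(h0 @ W.T + a)
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W + b)
        # contrastive divergence update (unsupervised: no labels anywhere)
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        a += lr * (v0 - v1)
        b += lr * (ph0 - ph1)

recon = sigmoid(sigmoid(data @ W + b) @ W.T + a)
print(np.mean((data - recon) ** 2))  # mean reconstruction error, just to show it runs
```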
Claims that deep learning is like the brain
Problems with feed forward
geometry => faster/slower processing times
asynchrony
Problems with Backprop
no supervised signal
no error function
no derivatives
how would you communicate it with spikes?
no bi-directional weights
forward backward pass
fundamental differences in learning paradigms: the brain vs the ANN approach to learning
learning paradigm problem
very different to how we learn
unsupervised (passive), e.g. RBMs
reinforcement (active)
one-shot (active)
Similarities in the actual architecture
Neural learning of high-level concepts
grandmother cell in neuroscience
Google’s “cat neuron” learned from YouTube videos
Learning concepts
Embeddings
switching between modalities
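To make "embeddings" concrete: related concepts end up as nearby vectors, whatever the modality. The vectors below are hand-made purely for illustration; in a real system they would be learned (word2vec-style or from a deep net):

```python
import numpy as np

# Hypothetical, hand-made vectors for illustration only
embeddings = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "tiger": np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.1, 0.9, 0.3]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["cat"], embeddings["tiger"]))  # high: related concepts
print(cosine(embeddings["cat"], embeddings["car"]))    # lower: unrelated concepts
```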
where has knowledge been shared?
neuroscience uses a lot of machine learning
it’s also given ML neural networks
Hopfield Network
Hebbian learning
model for associative memory in the brain
explain images
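A tiny sketch of that idea: a Hopfield network stores patterns with a Hebbian outer-product rule ("cells that fire together wire together") and recalls one from a corrupted cue. The patterns and sizes are illustrative:

```python
import numpy as np

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]], dtype=float)

# Hebbian learning: strengthen connections between co-active units
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(state, sweeps=10):
    """Asynchronous updates settle into the nearest stored pattern (associative memory)."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            field = W[i] @ state
            if field != 0:
                state[i] = 1.0 if field > 0 else -1.0
    return state

cue = patterns[0].copy()
cue[0] = -cue[0]        # corrupt one element of the stored pattern
print(recall(cue))      # recovers the first stored pattern
```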
RBMs
Visual cortex => evidence we learn from our surroundings
General learning - somatosensory
Necker cube perception