3. Natural Neural Networks
⢠Signals âmoveâ via electrochemical signals
⢠The synapses release a chemical transmitter â
the sum of which can cause a threshold to be
reached â causing the neuron to âfireâ
⢠Synapses can be inhibitory or excitatory
4. Natural Neural Networks
⢠We are born with about 100 billion neurons
⢠A neuron may connect to as many as 100,000
other neurons
5. Natural Neural Networks
⢠Many of their ideas still used today e.g.
â many simple units, âneuronsâ combine to give
increased computational power
â the idea of a threshold
6. Modelling a Neuron
⢠aj :Activation value of unit j
⢠wj,i :Weight on link from unit j to unit i
⢠ini :Weighted sum of inputs to unit i
⢠ai :Activation value of unit i
⢠g :Activation function
j
jiji aWin ,
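The weighted-sum rule above can be sketched in a few lines. This is a minimal illustration; the function name `weighted_sum` and the example values are ours, not from the slides.

```python
# in_i = sum_j w_j,i * a_j : the weighted sum of the inputs a_j
# arriving at unit i (illustrative sketch of the slide's formula).

def weighted_sum(weights, activations):
    """Weighted sum of input activations, i.e. in_i."""
    return sum(w * a for w, a in zip(weights, activations))

# Example: three input units with activations a_j and weights w_j,i into unit i
a = [1, 0, 1]
w = [0.5, -0.2, 0.8]
in_i = weighted_sum(w, a)   # 0.5*1 + (-0.2)*0 + 0.8*1 = 1.3
```

The activation of unit i would then be a_i = g(in_i) for some activation function g (see the next slide).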
7. Activation Functions
⢠Stept(x) = 1 if x ⼠t, else 0 threshold=t
⢠Sign(x) = +1 if x ⼠0, else â1
⢠Sigmoid(x) = 1/(1+e-x)
8. Building a Neural Network
1. "Select structure": design the way that the
neurons are interconnected
2. "Select weights": decide the strengths with
which the neurons are interconnected
– weights are selected to get a "good match" to
a "training set"
– "training set": a set of inputs and desired
outputs
– often use a "learning algorithm"
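A "training set" in the sense above is just pairs of inputs and desired outputs, and a "good match" can be measured by how many examples the network gets wrong. A small sketch, using the four AND examples as an illustrative training set (names `training_set` and `errors` are ours):

```python
# Training set: (inputs, desired output) pairs. Here: the AND function.
training_set = [
    ([1, 1], 1),
    ([1, 0], 0),
    ([0, 1], 0),
    ([0, 0], 0),
]

def errors(weights, theta, training_set):
    """Count how many examples a single threshold unit gets wrong."""
    wrong = 0
    for inputs, desired in training_set:
        out = 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0
        wrong += (out != desired)
    return wrong
```

A learning algorithm would adjust the weights to drive this error count down; weights (1, 1) with threshold 2 already match this training set perfectly.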
9. Basic Neural Networks
⢠Will first look at simplest networks
⢠âFeed-forwardâ
â Signals travel in one direction through net
â Net computes a function of the inputs
10. The First Neural Networks
Neurons in a McCulloch-Pitts network are connected by directed, weighted
paths
[Diagram: units X1, X2, X3 feed into Y along weighted paths; X1 and X2 have weight 2, X3 has weight −1]
11. The First Neural Networks
If the weight on a path is positive the path is
excitatory, otherwise it is inhibitory
12. The First Neural Networks
The activation of a neuron is binary. That is, the neuron
either fires (activation of one) or does not fire (activation of
zero).
13. The First Neural Networks
For the network shown here, the activation function for unit Y is
f(y_in) = 1 if y_in ≥ θ, else 0
where y_in is the total input signal received and
θ is the threshold for Y
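The activation rule for Y can be written as a function of the total input. In `y_in` below we assume X1 and X2 carry weight 2 and X3 carries weight −1 (our reading of the diagram); the slides do not fix a value of θ here, so it stays a parameter:

```python
def f(y_in, theta):
    """Activation of Y: 1 if y_in >= theta, else 0."""
    return 1 if y_in >= theta else 0

def y_in(x1, x2, x3):
    # Assumed weights from the diagram: X1, X2 excitatory (2), X3 inhibitory (-1)
    return 2 * x1 + 2 * x2 - 1 * x3
```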
14. The First Neural Networks
Originally, all excitatory connections into a particular neuron have the same
weight, although differently weighted connections can be input to different
neurons.
Later, weights were allowed to be arbitrary.
15. The First Neural Networks
Each neuron has a fixed threshold. If the net input into the neuron is
greater than or equal to the threshold, the neuron fires
16. The First Neural Networks
The threshold is set such that any non-zero inhibitory input will prevent the neuron
from firing
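This "absolute inhibition" can be realised purely by the choice of threshold. For the pictured weights (2, 2, −1), the maximum weighted input with the inhibitory unit active is 2 + 2 − 1 = 3, so setting θ = 4 means X3 always blocks firing. The value θ = 4 is our illustrative choice; the slides do not state one.

```python
# Weights from the diagram (X1, X2: 2; X3: -1); theta = 4 is chosen so
# that any non-zero inhibitory input prevents firing (assumption).

def y_fires(x1, x2, x3, theta=4):
    y_in = 2 * x1 + 2 * x2 - 1 * x3
    return 1 if y_in >= theta else 0
```

With both excitatory inputs on, the unit fires; turning on the inhibitory input X3 drops the total input to 3 < 4, so it does not.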
17. Building Logic Gates
⢠Computers are built out of âlogic gatesâ
⢠Use threshold (step) function for activation
function
â all activation values are 0 (false) or 1 (true)
18. The First Neural Networks
AND Function
[Diagram: X1 and X2 feed into Y, each with weight 1]

AND
X1 X2 | Y
 1  1 | 1
 1  0 | 0
 0  1 | 0
 0  0 | 0

Threshold(Y) = 2
19. The First Neural Networks
OR Function

[Diagram: X1 and X2 feed into Y, each with weight 2]

OR
X1 X2 | Y
 1  1 | 1
 1  0 | 1
 0  1 | 1
 0  0 | 0

Threshold(Y) = 2
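Both gates above are single threshold units, differing only in weights: (1, 1) for AND and (2, 2) for OR, each with threshold 2. A sketch (the helper name `threshold_unit` is ours):

```python
def threshold_unit(weights, inputs, theta):
    """Fires (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def AND(x1, x2):
    return threshold_unit([1, 1], [x1, x2], theta=2)   # only 1+1 = 2 reaches theta

def OR(x1, x2):
    return threshold_unit([2, 2], [x1, x2], theta=2)   # any single input reaches theta
```

Iterating over the four input pairs reproduces both truth tables exactly.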
20. Perceptron
⢠Synonym for Single-Layer,
Feed-Forward Network
⢠First Studied in the 50âs
⢠Other networks were known
about but the perceptron
was the only one capable of
learning and thus all research
was concentrated in this area
21. Perceptron
⢠A single weight only affects
one output so we can restrict
our investigations to a model
as shown on the right
⢠Notation can be simpler, i.e.
j
WjIjStepO 0
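The simplified perceptron output is a threshold unit with threshold 0 applied to the weighted input sum (a bias can be folded into the weights via a constant input, a standard trick not shown on the slide). A sketch:

```python
# O = Step_0(sum_j W_j * I_j): perceptron output as a threshold-0 unit.

def perceptron_output(W, I):
    s = sum(w * i for w, i in zip(W, I))
    return 1 if s >= 0 else 0
```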
23. What can perceptrons represent?
[Plots: the four input points (0,0), (0,1), (1,0), (1,1) for AND and for XOR; a single straight line separates the 1-outputs from the 0-outputs for AND, but no such line exists for XOR]
⢠Functions which can be separated in this way are called Linearly Separable
⢠Only linearly separable functions can be represented by a perceptron
⢠XOR cannot be represented by a perceptron
24. XOR
⢠XOR is not âlinearly separableâ
â Cannot be represented by a perceptron
⢠What can we do instead?
1. Convert to logic gates that can be represented by
perceptrons
2. Chain together the gates
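One way to chain representable gates, as the two steps above suggest, is XOR(x1, x2) = AND(OR(x1, x2), NOT(AND(x1, x2))). Each gate is a threshold unit; the decomposition and the NOT weights (weight −1, threshold 0) are one standard choice, not stated on the slide.

```python
def unit(weights, inputs, theta):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

def OR(a, b):  return unit([2, 2], [a, b], 2)
def AND(a, b): return unit([1, 1], [a, b], 2)
def NOT(a):    return unit([-1], [a], 0)

def XOR(a, b):
    # Chained gates: true iff at least one input is on but not both
    return AND(OR(a, b), NOT(AND(a, b)))
```

The intermediate gates form a "hidden layer" between the inputs and the final output unit, which is the subject of the next slide.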
25. Single- vs. Multiple-Layers
⢠Once we chain together the gates then we have âhidden
layersâ
â layers that are âhiddenâ from the output lines
⢠Have just seen that hidden layers allow us to represent XOR
â Perceptron is single-layer
â Multiple layers increase the representational power, so
e.g. can represent XOR
⢠Generally useful nets have multiple-layers
â typically 2-4 layers