6/10/2013
Activation Functions
‱ The activation function, written φ(·), defines the output value of a neuron at a given activation level, based on the output value of the linear combiner u_i.
‱ Some commonly used activation functions:
– Hard limiter
– Threshold
– Sigmoid
– Hyperbolic tangent
Activation Functions
1. Hardlimiter
2. Piecewise Linear
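The two functions listed above can be sketched in Python. The breakpoints of the piecewise-linear function (saturating at ±1/2) are an assumed convention; the slide does not fix them:

```python
def hardlimiter(v):
    # Hard limiter: the neuron fires (outputs 1) when the activation
    # is non-negative, otherwise outputs 0.
    return 1.0 if v >= 0 else 0.0

def piecewise_linear(v):
    # Piecewise linear: linear in the middle, saturating at 0 and 1.
    # Breakpoints at v = -1/2 and v = +1/2 are an assumption.
    if v <= -0.5:
        return 0.0
    if v >= 0.5:
        return 1.0
    return v + 0.5
```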
ANN Architectures
[Figure: common network architectures — a single-layer network; a multiple-layer, fully connected network; a recurrent network without hidden units; and a recurrent network with hidden units (feedback through a unit-delay operator); inputs and outputs labelled.]
Standard Activation Functions
‱ The hard-limiting threshold function
– Corresponds to the biological paradigm: the neuron either fires or it does not
‱ Sigmoid functions ('S'-shaped curves)
– The logistic function
– The hyperbolic tangent (symmetrical)
– Both functions have a simple derivative
– Only the shape is important
f(v) = 1 / (1 + exp(-a·v))
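As a quick check of the formula, a minimal Python sketch of the logistic function and its derivative (the derivative form a·f·(1 − f) is the standard result, not shown on the slide):

```python
import math

def logistic(v, a=1.0):
    # The logistic function from the slide: f(v) = 1 / (1 + exp(-a*v)),
    # with slope parameter a.
    return 1.0 / (1.0 + math.exp(-a * v))

def logistic_derivative(v, a=1.0):
    # The "simple derivative": f'(v) = a * f(v) * (1 - f(v)).
    f = logistic(v, a)
    return a * f * (1.0 - f)
```

The hyperbolic tangent is the symmetrical counterpart: tanh(−v) = −tanh(v), so its outputs range over (−1, 1) instead of (0, 1).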
‱ Representation of the Boolean function AND
‱ A linear threshold is used
Perceptron Training
[Figure: a single threshold unit computing AND — inputs X and Y with weights W2 = 1 and W3 = 1, a bias input fixed at -1 with weight W1 = 1.5, and threshold t = 0.0.]

Output = 1 if ÎŁ w_i·x_i > t, 0 otherwise
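Plugging the slide's weights into the threshold rule gives a working AND unit; a minimal Python sketch:

```python
def perceptron_and(x, y):
    # Weights taken from the slide: bias input fixed at -1 with W1 = 1.5,
    # input X with W2 = 1, input Y with W3 = 1, threshold t = 0.0.
    t = 0.0
    s = (-1) * 1.5 + 1 * x + 1 * y
    return 1 if s > t else 0
```

Only [1,1] gives a weighted sum above the threshold (1 + 1 − 1.5 = 0.5 > 0), so the unit reproduces AND exactly.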
Perceptron Training
‱ Epoch
– Presentation of the entire training set to the neural network.
– In the case of the AND function, an epoch consists of four sets of inputs being presented to the network (i.e. [0,0], [0,1], [1,0], [1,1]).
‱ Error
– A simple definition of error:
– The error value is the amount by which the value output by the network differs from the target value.
– For example, if we required the network to output 0 and it output a 1, then Error = -1.
Sum of squared errors: E = Σ (T - O)²
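The error definitions above combine into the sum-of-squared-errors measure over one epoch; a minimal Python sketch:

```python
def sum_squared_errors(targets, outputs):
    # Sum of squared errors over an epoch: E = sum((T - O)**2)
    # across all training patterns.
    return sum((t - o) ** 2 for t, o in zip(targets, outputs))
```

For the AND epoch, targets are [0, 0, 0, 1]; each misclassified pattern contributes 1 to E, since outputs are binary.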
Perceptron Training
‱ Target Value (T)
– The value the network is required to produce
– If we present the network with [1,1] for the AND function, T will be 1
‱ Output (O)
– The output value from the neuron
‱ Ij – the inputs being presented to the neuron
‱ Wj – the weight from input neuron (Ij) to the output neuron
‱ LR (η) – the learning rate
– This dictates how quickly the network converges
– It is set by a matter of experimentation
Perceptron Training
‱ Algorithm
While epoch produces a non-null error
End While
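The loop above leaves the per-pattern update unspecified; a minimal Python sketch for the AND function, assuming the standard perceptron rule Wj ← Wj + η·(T − O)·Ij (an assumption — the rule is not spelled out on the slide):

```python
def predict(w, wb, inputs, t=0.0):
    # Threshold unit from the earlier slide: 1 if sum(Wi*Xi) > t else 0.
    # The bias is modelled as an extra input fixed at -1 with weight wb.
    s = sum(wi * xi for wi, xi in zip(w, inputs)) - wb
    return 1 if s > t else 0

def train_perceptron_and(eta=0.1, max_epochs=100):
    # Repeat epochs until one epoch produces a null (zero) error,
    # as in the While loop above; eta is the learning rate.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    w, wb = [0.0, 0.0], 0.0
    for _ in range(max_epochs):
        epoch_error = 0
        for inputs, target in data:
            err = target - predict(w, wb, inputs)
            if err:
                epoch_error += 1
                # Standard perceptron update: Wj += eta * (T - O) * Ij.
                for j, xj in enumerate(inputs):
                    w[j] += eta * err * xj
                wb += eta * err * (-1)  # bias input is fixed at -1
        if epoch_error == 0:
            break
    return w, wb
```

With these settings the loop converges in a handful of epochs, ending with weights that classify all four AND patterns correctly.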