1
CS407 Neural Computation
Lecture 7:
Neural Networks Based on
Competition.
Lecturer: A/Prof. M. Bennamoun
2
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps
(SOM)
SOM in Matlab
References and suggested
reading
3
Introduction…Competitive nets
Fausett
The most extreme form of competition among a group of
neurons is called “Winner Take All”.
As the name suggests, only one neuron in the competing
group will have a nonzero output signal when the
competition is completed.
A specific competitive net that performs Winner Take All
(WTA) competition is the Maxnet.
A more general form of competition, the “Mexican Hat” will
also be described in this lecture (instead of a non-zero o/p for the winner
and zeros for all other competing nodes, we have a bubble around the winner ).
All of the other nets we discuss in this lecture use WTA
competition as part of their operation.
With the exception of the fixed-weight competitive nets
(namely Maxnet, Mexican Hat, and Hamming net) all of the
other nets combine competition with some form of learning
to adjust the weights of the net (i.e. the weights that are
not part of any interconnections in the competitive layer).
4
Introduction…Competitive nets
Fausett
The form of learning depends on the purpose for which the
net is being trained:
– LVQ and counterpropagation nets are trained to
perform mappings. The learning in this case is
supervised.
– SOM (used for clustering of input data): a common
use of unsupervised learning.
– ART nets are also clustering nets; learning is also unsupervised.
Several of the nets discussed use the same learning
algorithm known as “Kohonen learning”: where the units
that update their weights do so by forming a new weight
vector that is a linear combination of the old weight vector
and the current input vector.
Typically, the unit whose weight vector was closest to the
input vector is allowed to learn.
5
Introduction…Competitive nets
Fausett
The weight update for output (or cluster) unit j is given as:
w.j(new) = w.j(old) + α [ x − w.j(old) ] = (1 − α) w.j(old) + α x
where x is the input vector, w.j is the weight vector for unit j, and α is the learning rate, which decreases as learning proceeds.
Two methods of determining the closest weight vector to a
pattern vector are commonly used for self-organizing nets.
Both are based on the assumption that the weight vector
for each cluster (output) unit serves as an exemplar for the
input vectors that have been assigned to that unit during
learning.
– The first method of determining the winner uses the
squared Euclidean distance between the I/P vector
6
Introduction…Competitive nets
Fausett
and the weight vector and chooses the unit whose
weight vector has the smallest Euclidean distance
from the I/P vector.
– The second method uses the dot product of the I/P
vector and the weight vector.
The dot product can be interpreted as giving the
correlation between the I/P and weight vector.
In general and for consistency, we will use Euclidean
distance squared.
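As a concrete illustration of this winner selection and of Kohonen learning (a minimal MATLAB sketch, not from Fausett; the matrix sizes and values below are purely illustrative):

% One competitive-learning step: columns of W are the cluster weight vectors.
W = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3];        % n x m weight matrix (illustrative)
x = [1; 1; 0; 0];                                % current input vector
alpha = 0.6;                                     % learning rate (decreases over training)
D = sum((W - repmat(x, 1, size(W, 2))).^2, 1);   % squared Euclidean distance to each unit
[~, J] = min(D);                                 % winner = unit with the closest weight vector
W(:, J) = W(:, J) + alpha * (x - W(:, J));       % Kohonen update: alpha*x + (1-alpha)*W(:,J)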
Many NNs use the idea of competition among neurons to
enhance the contrast in activations of the neurons
In the most extreme situation, the case of the Winner-Take-
All, only the neuron with the largest activation is allowed to
remain “on”.
7
8
Introduction…Competitive nets
1. Euclidean distance between two vectors:
   ||x − x_i|| = sqrt( (x − x_i)^t (x − x_i) )
2. Cosine of the angle between two vectors:
   cos ψ = (x^t x_i) / ( ||x|| ||x_i|| )
   (a smaller angle ψ, i.e. a cosine closer to 1, means the two vectors are more similar)
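A small MATLAB sketch of the two similarity measures (the vectors are illustrative only):

x  = [1; 2; 0];
xi = [0.5; 1.5; 0.5];
d_euclid = sqrt((x - xi)' * (x - xi));          % Euclidean distance ||x - xi||
cos_psi  = (x' * xi) / (norm(x) * norm(xi));    % cosine of the angle between x and xi
% a small distance, or a cosine close to 1, indicates similar vectors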
9
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps
(SOM)
SOM in Matlab
References and suggested
reading
10
MAXNET
Recurrence:  y^(k+1) = Γ[ W_M y^k ]
Weight matrix (p competing units; 1 on the diagonal, −ε off the diagonal):
  W_M = [  1  −ε  …  −ε
          −ε   1  …  −ε
           …   …  …   …
          −ε  −ε  …   1 ]
Activation function:
  f(net) = 0 if net < 0;  net if net ≥ 0
with 0 < ε < 1/p, where p is the number of competing units.
11
Hamming Network and MAXNET
MAXNET
– A recurrent network involving both excitatory
and inhibitory connections
• Positive self-feedbacks and negative
cross-feedbacks
– After a number of recurrences, the only non-
zero node will be the one with the largest
initializing entry from i/p vector
12
• Maxnet: lateral inhibition between competitors
Maxnet Fixed-weight competitive net
Fausett
Activation function for the Maxnet:
  f(x) = x if x > 0;  0 otherwise
Competition:
• iterative process until the net stabilizes (at most one
node with positive activation)
– Step 0:
• Initialize activations and weights (set 0 < ε < 1/m), where m is the # of nodes (competitors)
13
Algorithm
Initial activations and weights (from step 0):
  a_j(0) = input to node A_j
  w_ij = 1 if i = j;  −ε otherwise
Step 1: While stopping condition is false, do steps 2-4.
  Step 2: Update the activation of each node, for j = 1, …, m:
    a_j(new) = f[ a_j(old) − ε Σ_{k≠j} a_k(old) ]
    (the argument of f is the total i/p to node A_j from all nodes, including itself)
  Step 3: Save activations for use in the next iteration:
    a_j(old) = a_j(new),  for j = 1, …, m
  Step 4: Test stopping condition:
    If more than one node has a nonzero activation, continue; otherwise, stop.
14
Example
Note that in step 2, the i/p to the function f is the total
i/p to node Aj from all nodes, including itself.
Some precautions should be incorporated to handle
the situation in which two or more units have the
same, maximal, input.
Example:
ε = .2 and the initial activations (input signals) are:
a1(0) = .2, a2(0) = .4, a3(0) = .6, a4(0) = .8
As the net iterates, the activations are:
a1(1) = f{a1(0) − .2 [a2(0) + a3(0) + a4(0)]} = f(−.16) = 0
a1(1) = .0, a2(1) = .08, a3(1) = .32, a4(1) = .56
a1(2) = .0, a2(2) = .0, a3(2) = .192, a4(2) = .48
a1(3) = .0, a2(3) = .0, a3(3) = .096, a4(3) = .442
a1(4) = .0, a2(4) = .0, a3(4) = .008, a4(4) = .422
a1(5) = .0, a2(5) = .0, a3(5) = .0, a4(5) = .421
a4 is the only node to remain on.
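The iteration above is easy to reproduce; a minimal MATLAB sketch (variable names are illustrative, and ties for the maximum are not handled, as cautioned above):

a   = [0.2 0.4 0.6 0.8];            % initial activations a_j(0) from the example
eps = 0.2;                          % 0 < eps < 1/m
f   = @(x) max(x, 0);               % Maxnet activation: f(x) = x if x > 0, else 0
while sum(a > 0) > 1                % stop when at most one node is still active
    a = f(a - eps * (sum(a) - a));  % a_j(new) = f[a_j(old) - eps*sum_{k~=j} a_k(old)]
end
disp(a)                             % only the node with the largest initial activation stays on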
15
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps
(SOM)
SOM in Matlab
References and suggested
reading
16
Mexican Hat Fixed-weight competitive net
Architecture
Each neuron is connected with excitatory links (positively
weighted) to a number of “cooperative neighbors” neurons
that are in close proximity.
Each neuron is also connected with inhibitory links (with
negative weights) to a number of “competitive neighbors”
neurons that are somewhat further away.
There may also be a number of neurons, further away still, to
which the neuron is not connected.
All of these connections are within a particular layer of a
neural net.
The neurons receive an external signal in addition to these
interconnection signals (just like Maxnet).
This pattern of interconnections is repeated for each neuron
in the layer.
The interconnection pattern for unit Xi is as follows:
17
Mexican Hat Fixed-weight competitive net
The size of the region of cooperation (positive connections) and
of the region of competition (negative connections) may vary, as
may the relative magnitudes of the +ve and −ve weights and the
topology of the regions (linear, rectangular, hexagonal, etc.).
si = external activation
w1 = excitatory connection, w2 = inhibitory connection
18
Mexican Hat Fixed-weight competitive net
The contrast enhancement of the signal si received by unit Xi
is accomplished by iteration for several time steps.
The activation of unit Xi at time t is given by:
  x_i(t) = f[ s_i(t) + Σ_k w_k x_{i+k}(t − 1) ]
where the terms in the summation are the weighted signals from other units (cooperative and competitive neighbors) at the previous time step.
19
Mexican Hat Fixed-weight competitive net
Nomenclature
R2=Radius of region of interconnections; Xi is connected to units Xi+k
and Xi-k for k=1, …R2
R1=Radius of region with +ve reinforcement;
R1< R2
wk = weights on interconnections between Xi and units Xi+k and Xi-k;
wk is positive for 0 ≤ k ≤ R1
wk is negative for R1 < k ≤ R2
x = vector of activations
x_old = vector of activations at previous time step
t_max = total number of iterations of contrast enhancement
s = external signal
20
Mexican Hat Fixed-weight competitive net
Algorithm
As presented, the algorithm corresponds to the external signal
being given only for the first iteration (step 1).
Step 0: Initialize parameters t_max, R1, R2 as desired.
  Initialize weights:
    w_k = C1 (> 0) for k = 0, …, R1
    w_k = C2 (< 0) for k = R1+1, …, R2
  Initialize x_old to 0.
Step 1: Present external signal s:  x = s
  Save activations in array x_old (for i = 1, …, n):  x_old_i = x_i
  Set iteration counter: t = 1.
21
Mexican Hat Fixed-weight competitive net
Algorithm
Step 2 While t is less than t_max, do steps 3-7
Step 3: Compute net input (i = 1, …, n):
  x_i = C1 Σ_{k=−R1}^{R1} x_old_{i+k} + C2 Σ_{k=−R2}^{−R1−1} x_old_{i+k} + C2 Σ_{k=R1+1}^{R2} x_old_{i+k}
Step 4: Apply the activation function f (ramp from 0 to x_max, slope 1):
  f(x) = 0 if x < 0;  x if 0 ≤ x ≤ x_max;  x_max if x > x_max
  i.e.  x_i = min( x_max, max(0, x_i) ),  i = 1, …, n
22
Mexican Hat Fixed-weight competitive net
Step 5: Save current activations in x_old:  x_old_i = x_i,  i = 1, …, n
Step 6: Increment iteration counter:  t = t + 1
Step 7: Test stopping condition:
  If t < t_max, continue; otherwise, stop.
Note:
The +ve reinforcement from nearby units and −ve reinforcement
from units that are further away have the effect of
– increasing the activation of units with larger initial activations,
– reducing the activations of those that had a smaller external signal.
23
Mexican Hat Example
Let’s consider a simple net with 7 units. The activation function for this net is:
  f(x) = 0 if x < 0;  x if 0 ≤ x ≤ 2;  2 if x > 2
Step 0: Initialize parameters R1 = 1, R2 = 2, C1 = 0.6, C2 = −0.4
Step 1: t = 0. The external signal is s = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0), so
  x(0) = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0)
  Save in x_old:  x_old = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0)
  (the unit with signal 1.0 is the node to be enhanced)
Step 2: t = 1; the update formulas used in step 3 are listed below for reference:
24
Mexican Hat Example
x1 = 0.6 x_old1 + 0.6 x_old2 − 0.4 x_old3
x2 = 0.6 x_old1 + 0.6 x_old2 + 0.6 x_old3 − 0.4 x_old4
x3 = −0.4 x_old1 + 0.6 x_old2 + 0.6 x_old3 + 0.6 x_old4 − 0.4 x_old5
x4 = −0.4 x_old2 + 0.6 x_old3 + 0.6 x_old4 + 0.6 x_old5 − 0.4 x_old6
x5 = −0.4 x_old3 + 0.6 x_old4 + 0.6 x_old5 + 0.6 x_old6 − 0.4 x_old7
x6 = −0.4 x_old4 + 0.6 x_old5 + 0.6 x_old6 + 0.6 x_old7
x7 = −0.4 x_old5 + 0.6 x_old6 + 0.6 x_old7
Step 3 (t = 1):
x1 = 0.6(0.0) + 0.6(0.5) − 0.4(0.8) = −0.02
x2 = 0.6(0.0) + 0.6(0.5) + 0.6(0.8) − 0.4(1.0) = 0.38
x3 = −0.4(0.0) + 0.6(0.5) + 0.6(0.8) + 0.6(1.0) − 0.4(0.8) = 1.06
x4 = −0.4(0.5) + 0.6(0.8) + 0.6(1.0) + 0.6(0.8) − 0.4(0.5) = 1.16
x5 = −0.4(0.8) + 0.6(1.0) + 0.6(0.8) + 0.6(0.5) − 0.4(0.0) = 1.06
x6 = −0.4(1.0) + 0.6(0.8) + 0.6(0.5) + 0.6(0.0) = 0.38
x7 = −0.4(0.8) + 0.6(0.5) + 0.6(0.0) = −0.02
25
Mexican Hat Example
Step 4:
  x(1) = (0.0, 0.38, 1.06, 1.16, 1.06, 0.38, 0.0)
Steps 5-7: Bookkeeping for next iteration.
Step 3 (t = 2):
x1 = 0.6(0.0) + 0.6(0.38) − 0.4(1.06) = −0.196
x2 = 0.6(0.0) + 0.6(0.38) + 0.6(1.06) − 0.4(1.16) = 0.39
x3 = −0.4(0.0) + 0.6(0.38) + 0.6(1.06) + 0.6(1.16) − 0.4(1.06) = 1.14
x4 = −0.4(0.38) + 0.6(1.06) + 0.6(1.16) + 0.6(1.06) − 0.4(0.38) = 1.66
x5 = −0.4(1.06) + 0.6(1.16) + 0.6(1.06) + 0.6(0.38) − 0.4(0.0) = 1.14
x6 = −0.4(1.16) + 0.6(1.06) + 0.6(0.38) + 0.6(0.0) = 0.39
x7 = −0.4(1.06) + 0.6(0.38) + 0.6(0.0) = −0.196
Step 4:
  x(2) = (0.0, 0.39, 1.14, 1.66, 1.14, 0.39, 0.0)
  (x4 is the node which has been enhanced)
Steps 5-7: Bookkeeping for next iteration.
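For reference, a minimal MATLAB sketch of the Mexican Hat iteration with this example's parameters (the loop structure and variable names are illustrative, not from the lecture):

n = 7;  R1 = 1;  R2 = 2;  C1 = 0.6;  C2 = -0.4;  t_max = 2;  x_max = 2;
s = [0.0 0.5 0.8 1.0 0.8 0.5 0.0];    % external signal, given for the first iteration only
x_old = s;                            % Step 1: x = s, saved in x_old
for t = 1:t_max
    x = zeros(1, n);
    for i = 1:n
        for k = -R2:R2                % weighted signals from the neighbours of unit i
            j = i + k;
            if j >= 1 && j <= n       % "missing" neighbours at the edges are ignored
                if abs(k) <= R1
                    x(i) = x(i) + C1 * x_old(j);   % cooperative (excitatory) neighbours
                else
                    x(i) = x(i) + C2 * x_old(j);   % competitive (inhibitory) neighbours
                end
            end
        end
    end
    x = min(x_max, max(0, x));        % ramp activation f
    x_old = x;                        % bookkeeping for the next iteration
end
disp(x)   % after 2 iterations the centre unit is enhanced (x4 ≈ 1.66, as above)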
26
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps
(SOM)
SOM in Matlab
References and suggested
reading
27
Hamming Network Fixed-weight competitive net
Hamming distance of two vectors is the number of components in which the vectors differ.
For 2 bipolar vectors x and y of dimension n:
  x · y = a − d
  where: a is the number of bits in agreement in x and y
         d is the number of bits different in x and y
  Hamming distance HD(x, y) = d = n − a,  so  x · y = 2a − n  and  a = 0.5 (x · y + n)
  larger x · y  ⇒  larger a  ⇒  shorter Hamming distance
The Hamming net uses Maxnet as a subnet to find the unit with the largest input (see figure on next slide).
28
Hamming Network Fixed-weight competitive net
Architecture
The sample architecture shown assumes input vectors are 4-
tuples, to be categorized as belonging to one of 2 classes.
29
Hamming Network Fixed-weight competitive net
Algorithm
Given a set of m bipolar exemplar vectors e(1), e(2), …, e(m), the
Hamming net can be used to find the exemplar that is closest to the
bipolar i/p vector x.
The net i/p y_in_j to unit Y_j gives the number of components in which
the i/p vector and the exemplar vector e(j) for unit Y_j agree (n minus
the Hamming distance between these vectors).
Nomenclature:
n number of input nodes, number of components of any I/p
vector
m number of o/p nodes, number of exemplar vectors
e(j)  the jth exemplar vector:  e(j) = ( e_1(j), …, e_i(j), …, e_n(j) )
30
Hamming Network Fixed-weight competitive net
Algorithm
Step 0: To store the m exemplar vectors, initialise the weights and biases:
  w_ij = 0.5 e_i(j),   b_j = 0.5 n
Step 1: For each vector x, do steps 2-4.
  Step 2: Compute the net input to each unit Y_j:
    y_in_j = b_j + Σ_i x_i w_ij,   for j = 1, …, m
  Step 3: Initialise activations for Maxnet:
    y_j(0) = y_in_j,   for j = 1, …, m
  Step 4: Maxnet iterates to find the best-match exemplar.
31
Hamming Network Fixed-weight competitive net
Upper layer: Maxnet
– it takes the y_in as its initial value, then
iterates toward stable state (the best match
exemplar)
– one output node Y_j with the highest y_in will be
the winner, because its weight vector is
closest to the input vector
32
Hamming Network Fixed-weight competitive net
Example (see architecture, n = 4)
Given the exemplar vectors:
  e(1) = (1, −1, −1, −1)
  e(2) = (−1, −1, −1, 1)      (m = 2 exemplar patterns in this case)
The Hamming net can be used to find the exemplar that is closest to each of the
bipolar i/p patterns (1, 1, −1, −1), (1, −1, −1, −1), (−1, −1, −1, 1), and (−1, −1, 1, 1).
Store the m exemplar vectors in the weights (step 0 of the algorithm), where w_ij = e_i(j)/2:
  W = [  .5  −.5
        −.5  −.5
        −.5  −.5
        −.5   .5 ]
Initialize the biases: b_1 = b_2 = n/2 = 2, where n = number of input nodes (4 in this case).
33
Hamming Network Fixed-weight competitive net
Example
For the vector x = (1, 1, −1, −1), let's do the following steps:
  y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 1 = 3,  so  y_1(0) = 3
  y_in_2 = b_2 + Σ_i x_i w_i2 = 2 − 1 = 1,  so  y_2(0) = 1
These values represent the Hamming similarity because (1, 1, −1, −1) agrees with
e(1) = (1, −1, −1, −1) in the 1st, 3rd, and 4th components, and agrees with
e(2) = (−1, −1, −1, 1) in only the 3rd component.
Since y_1(0) > y_2(0), Maxnet will find that unit Y_1 has the
best-match exemplar for input vector x = (1, 1, −1, −1).
34
Hamming Network Fixed-weight competitive net
Example
For the vector x = (1, −1, −1, −1), let's do the following steps:
  y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 2 = 4,  so  y_1(0) = 4
  y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 0 = 2,  so  y_2(0) = 2
Note that the input agrees with e(1) in all 4 components
and agrees with e(2) in the 2nd and 3rd components.
Since y_1(0) > y_2(0), Maxnet will find that unit Y_1 has the
best-match exemplar for input vector x = (1, −1, −1, −1).
35
Hamming Network Fixed-weight competitive net
Example
For the vector x = (−1, −1, −1, 1), let's do the following steps:
  y_in_1 = b_1 + Σ_i x_i w_i1 = 2 + 0 = 2,  so  y_1(0) = 2
  y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 2 = 4,  so  y_2(0) = 4
Note that the input agrees with e(1) in the 2nd and 3rd
components and agrees with e(2) in all four components.
Since y_2(0) > y_1(0), Maxnet will find that unit Y_2 has the
best-match exemplar for input vector x = (−1, −1, −1, 1).
36
Hamming Network Fixed-weight competitive net
Example
For the vector x = (−1, −1, 1, 1), let's do the following steps:
  y_in_1 = b_1 + Σ_i x_i w_i1 = 2 − 1 = 1,  so  y_1(0) = 1
  y_in_2 = b_2 + Σ_i x_i w_i2 = 2 + 1 = 3,  so  y_2(0) = 3
Note that the input agrees with e(1) in the 2nd component
and agrees with e(2) in the 1st, 2nd, and 4th components.
Since y_2(0) > y_1(0), Maxnet will find that unit Y_2 has the
best-match exemplar for input vector x = (−1, −1, 1, 1).
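A minimal MATLAB sketch of the whole Hamming-net computation for this example (the Maxnet stage is abbreviated to picking the larger y_in; variable names are illustrative):

e = [ 1 -1 -1 -1;                 % e(1)
     -1 -1 -1  1];                % e(2): m = 2 exemplars, n = 4 components
[m, n] = size(e);
W = 0.5 * e';                     % w_ij = 0.5*e_i(j), an n x m matrix
b = 0.5 * n * ones(1, m);         % b_j = n/2
X = [ 1  1 -1 -1;                 % the four bipolar test patterns
      1 -1 -1 -1;
     -1 -1 -1  1;
     -1 -1  1  1];
for p = 1:size(X, 1)
    y_in = b + X(p, :) * W;       % y_in_j = b_j + sum_i x_i * w_ij
    [~, J] = max(y_in);           % the unit Maxnet would leave on
    fprintf('y_in = [%g %g], winner = Y%d\n', y_in, J);
end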
37
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps
(SOM)
SOM in Matlab
References and suggested
reading
38
Kohonen Self-Organizing Maps (SOM)
Fausett
Teuvo Kohonen, born: 11-07-
34, Finland
Author of the book: “Self-
organizing maps”, Springer
(1997)
SOM = “Topology preserving
maps”
SOM assume a topological
structure among the cluster
units
This property is observed in the
brain, but not found in other
ANNs
39
Kohonen Self-Organizing Maps (SOM)
Fausett
40
Fausett
Kohonen Self-Organizing Maps (SOM)
Architecture:
41
Fausett
Kohonen Self-Organizing Maps (SOM)
Architecture:
* * {* (* [#] *) *} * *
{ } R=2 ( ) R=1 [ ] R=0
Linear array of cluster units
This figure represents the neighborhood of the unit designated by # in a 1D topology
(with 10 cluster units).
*******
*******
*******
***#***
*******
*******
*******
R=2
R=1
R=0
Neighbourhoods for rectangular grid (Each unit has 8 neighbours)
42
Fausett
Kohonen Self-Organizing Maps (SOM)
Architecture:
******
******
******
**#***
******
******
****** R=2
R=1
R=0
Neighbourhoods for hexagonal grid (Each unit has 6 neighbours).
In each of these illustrations, the “winning unit” is indicated by the symbol # and the
other units are denoted by *
NOTE: What if the winning unit is on the edge of the grid?
• Winning units that are close to the edge of the grid will have some neighbourhoods
that have fewer units than those shown in the respective figures
• Neighbourhoods do not “wrap around” from one side of the grid to the other; “missing”
units are simply ignored.
43
Training SOM Networks
Which nodes are trained?
How are they trained?
– Training functions
– Are all nodes trained the same amount?
Two basic types of training sessions?
– Initial formation
– Final convergence
How do you know when to stop?
– Small number of changes in input mappings
– Max # epochs reached
44
Fausett
Kohonen Self-Organizing Maps (SOM)
Algorithm:
Step 0. Initialise weights w_ij (randomly).
  Set topological neighbourhood parameters.
  Set learning rate parameters.
Step 1. While stopping condition is false, do steps 2-8.
  Step 2. For each i/p vector x, do steps 3-5.
    Step 3. For each j, compute:  D(j) = Σ_i ( w_ij − x_i )²
    Step 4. Find index J such that D(J) is a minimum.
    Step 5. For all units j within a specified neighbourhood of J, and for all i:
      w_ij(new) = w_ij(old) + α [ x_i − w_ij(old) ]
  Step 6. Update learning rate.
  Step 7. Reduce radius of topological neighbourhood at specified times.
  Step 8. Test stopping condition.
45
Fausett
Kohonen Self-Organizing Maps (SOM)
Alternative structure for reducing R and α:
The learning rate α is a slowly decreasing function of time (or training
epochs). Kohonen indicates that a linearly decreasing function is satisfactory
for practical computations; a geometric decrease would produce similar
results.
The radius of the neighbourhood around a cluster unit also decreases as the
clustering process progresses.
The formation of a map occurs in 2 phases:
The initial formation of the correct order and
the final convergence
The second phase takes much longer than the first and requires a small
value for the learning rate.
Many iterations through the training set may be necessary, at least in
some applications.
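To make the whole procedure concrete, here is a minimal MATLAB sketch of SOM training on a linear array of cluster units, with a linearly decreasing learning rate and a shrinking neighbourhood radius; the data, sizes and schedules are illustrative only, not from the lecture:

n = 4;  M = 10;  n_epochs = 100;
X = rand(50, n);                  % 50 training vectors (rows), random for illustration
W = rand(n, M);                   % columns of W are the cluster-unit weight vectors
alpha0 = 0.6;  R = 2;             % initial learning rate and neighbourhood radius
for epoch = 1:n_epochs
    alpha = alpha0 * (1 - epoch / n_epochs);      % Step 6: linearly decreasing alpha
    for p = 1:size(X, 1)
        x = X(p, :)';
        D = sum((W - repmat(x, 1, M)).^2, 1);     % Step 3: D(j) = sum_i (w_ij - x_i)^2
        [~, J] = min(D);                          % Step 4: winning unit
        lo = max(1, J - R);  hi = min(M, J + R);  % neighbourhood of J (no wrap-around)
        for j = lo:hi
            W(:, j) = W(:, j) + alpha * (x - W(:, j));   % Step 5: Kohonen update
        end
    end
    if mod(epoch, 30) == 0                        % Step 7: reduce the radius at
        R = max(0, R - 1);                        % specified times
    end
end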
46
Fausett
Kohonen Self-Organizing Maps (SOM)
Applications and Examples:
The SOM has had many applications:
Computer generated music (Kohonen 1989)
Traveling salesman problem
Examples: A Kohonen SOM to cluster 4 vectors:
– Let the 4 vectors to be clustered be:
s(1) = (1, 1, 0, 0) s(2) = (0, 0, 0, 1)
s(3) = (1, 0, 0, 0) s(4) = (0, 0, 1, 1)
- The maximum number of clusters to be formed is m=2 (4 input nodes,
2 output nodes)
Suppose the learning rate is α(0)=0.6 and α(t+1) = 0.5 α(t)
With only 2 clusters available, the neighborhood of node J (step 4) is set so
that only one cluster updates its weights at each step (i.e. R = 0).
47
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 0: Initial weight matrix:
  W = [ .2  .8
        .6  .4
        .5  .7
        .9  .3 ]
  Initial radius: R = 0
  Initial learning rate: α(0) = 0.6
Step 1: Begin training.
Step 2: For s(1) = (1, 1, 0, 0), do steps 3-5.
Step 3:
  D(1) = (.2 − 1)² + (.6 − 1)² + (.5 − 0)² + (.9 − 0)² = 1.86
  D(2) = (.8 − 1)² + (.4 − 1)² + (.7 − 0)² + (.3 − 0)² = 0.98
48
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 4: The i/p vector is closest to o/p node 2, so J = 2.
Step 5: The weights on the winning unit are updated:
  w_i2(new) = w_i2(old) + 0.6 [ x_i − w_i2(old) ] = 0.4 w_i2(old) + 0.6 x_i
This gives the weight matrix:
  W = [ .2  .92
        .6  .76
        .5  .28
        .9  .12 ]
(The first column of weights has not been modified.)
49
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 2: For s(2) = (0, 0, 0, 1), do steps 3-5.
Step 3:
  D(1) = (.2 − 0)² + (.6 − 0)² + (.5 − 0)² + (.9 − 1)² = 0.66
  D(2) = (.92 − 0)² + (.76 − 0)² + (.28 − 0)² + (.12 − 1)² = 2.2768
Step 4: The i/p vector is closest to o/p node 1, so J = 1.
Step 5: The weights on the winning unit are updated. This gives the weight matrix:
  W = [ .08  .92
        .24  .76
        .20  .28
        .96  .12 ]
(The second column of weights has not been modified.)
50
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 2: For s(3) = (1, 0, 0, 0), do steps 3-5.
Step 3:
  D(1) = (.08 − 1)² + (.24 − 0)² + (.2 − 0)² + (.96 − 0)² = 1.8656
  D(2) = (.92 − 1)² + (.76 − 0)² + (.28 − 0)² + (.12 − 0)² = 0.6768
Step 4: The i/p vector is closest to o/p node 2, so J = 2.
Step 5: The weights on the winning unit are updated. This gives the weight matrix:
  W = [ .08  .968
        .24  .304
        .20  .112
        .96  .048 ]
(The first column of weights has not been modified.)
51
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
Step 2: For s(4) = (0, 0, 1, 1), do steps 3-5.
Step 3:
  D(1) = (.08 − 0)² + (.24 − 0)² + (.2 − 1)² + (.96 − 1)² = 0.7056
  D(2) = (.968 − 0)² + (.304 − 0)² + (.112 − 1)² + (.048 − 1)² = 2.724
Step 4: The i/p vector is closest to o/p node 1, so J = 1.
Step 5: The weights on the winning unit are updated. This gives the weight matrix:
  W = [ .032  .968
        .096  .304
        .680  .112
        .984  .048 ]
(The second column of weights has not been modified.)
Step 6: Reduce the learning rate:  α = 0.5 (0.6) = 0.3
52
Fausett
Kohonen Self-Organizing Maps (SOM)
Examples:
The weight update equations are now:
  w_iJ(new) = w_iJ(old) + 0.3 [ x_i − w_iJ(old) ] = 0.7 w_iJ(old) + 0.3 x_i
The weight matrix after the 2nd epoch of training is:
  W = [ .016  .980
        .047  .360
        .630  .055
        .999  .024 ]
Modifying the learning rate, and after 100 iterations (epochs), the weight matrix
appears to be converging to:
  W = [ 0.0  1.0
        0.0  0.5
        0.5  0.0
        1.0  0.0 ]
The 1st column is the average of the 2 vectors in cluster 1, and
the 2nd column is the average of the 2 vectors in cluster 2.
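A short MATLAB sketch of this clustering example (R = 0, winner-only updates); the long-run learning-rate decay below is illustrative, standing in for the "modified" schedule mentioned above:

S = [1 1 0 0;                     % s(1)
     0 0 0 1;                     % s(2)
     1 0 0 0;                     % s(3)
     0 0 1 1];                    % s(4)
W = [0.2 0.8;                     % initial weight matrix from the example
     0.6 0.4;
     0.5 0.7;
     0.9 0.3];
alpha = 0.6;
for epoch = 1:100
    for p = 1:4
        x = S(p, :)';
        D = sum((W - repmat(x, 1, 2)).^2, 1);       % squared Euclidean distances
        [~, J] = min(D);                            % winning cluster unit
        W(:, J) = W(:, J) + alpha * (x - W(:, J));  % update the winner only (R = 0)
    end
    alpha = 0.9 * alpha;          % slowly decreasing learning rate (illustrative)
end
disp(W)   % the columns approach the two cluster means, roughly (0,0,.5,1) and (1,.5,0,0)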
53
Example of a SOM That
Uses a 2-D Neighborhood
Character Recognition – Fausett, pp. 176-178
54
Traveling Salesman Problem (TSP) by SOM
The aim of the TSP is to find a tour of a given set
of cities that is of minimum length.
A tour consists of visiting each city exactly once
and returning to the starting city.
The net uses the city coordinates as input (n=2);
there are as many cluster units as there are cities
to be visited. The net has a linear topology (with
the first and last unit also connected).
Fig. shows the Initial
position of cluster
units and location of
cities
55
Traveling Salesman Problem (TSP) by SOM
Fig. Shows the result
after 100 epochs of
training with R=1
(learning rate
decreasing from 0.5 to
0.4).
Fig. Shows the final tour after 100
epochs of training with R=0
Two candidate solutions:
ADEFGHIJBC
ADEFGHIJCB
56
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps
(SOM)
SOM in Matlab
References and suggested
reading
57
Matlab SOM Example
58
Architecture
59
Creating a SOM
newsom -- Create a self-organizing map
Syntax
net = newsom
net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND)
Description
net = newsom creates a new network with a dialog box.
net = newsom (PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND)
takes,
PR - R x 2 matrix of min and max values for R input elements.
Di - Size of ith layer dimension, defaults = [5 8].
TFCN - Topology function, default ='hextop'.
DFCN - Distance function, default ='linkdist'.
OLR - Ordering phase learning rate, default = 0.9.
OSTEPS - Ordering phase steps, default = 1000.
TLR - Tuning phase learning rate, default = 0.02;
TND - Tuning phase neighborhood distance, default = 1.
and returns a new self-organizing map.
The topology function TFCN can be hextop, gridtop, or randtop. The
distance function can be linkdist, dist, or mandist.
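A minimal usage sketch, assuming an older release of the Neural Network Toolbox in which newsom, train and sim are available (newer releases provide selforgmap instead); the data and map size are illustrative:

P = rand(2, 100);                 % 100 two-dimensional input vectors
net = newsom(minmax(P), [5 8]);   % 5-by-8 map, default hextop topology and linkdist
net.trainParam.epochs = 200;      % number of training epochs (illustrative)
net = train(net, P);              % unsupervised training on the inputs
y = sim(net, P);                  % competitive (1-of-N) output for each input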
60
Neural Networks Based on Competition.
Introduction
Fixed weight competitive nets
– Maxnet
– Mexican Hat
– Hamming Net
Kohonen Self-Organizing Maps
(SOM)
SOM in Matlab
References and suggested
reading
61
Suggested Reading.
L. Fausett,
“Fundamentals of
Neural Networks”,
Prentice-Hall,
1994, Chapter 4.
J. Zurada,
“Artificial Neural
systems”, West
Publishing, 1992,
Chapter 7.
62
References:
These lecture notes were based on the references of the
previous slide, and the following references
1. Berlin Chen Lecture notes: Normal University, Taipei,
Taiwan, ROC. http://140.122.185.120
2. Dan St. Clair, University of Missouri-Rolla,
http://web.umr.edu/~stclair/class/classfiles/cs404_fs02/Misc/CS
404_fall2001/Lectures/ Lecture 14.
Weitere ähnliche Inhalte

Was ist angesagt?

Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesMohammed Bennamoun
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural networkmustafa aadel
 
Artificial Intelligence: Artificial Neural Networks
Artificial Intelligence: Artificial Neural NetworksArtificial Intelligence: Artificial Neural Networks
Artificial Intelligence: Artificial Neural NetworksThe Integral Worm
 
Principles of soft computing-Associative memory networks
Principles of soft computing-Associative memory networksPrinciples of soft computing-Associative memory networks
Principles of soft computing-Associative memory networksSivagowry Shathesh
 
Activation function
Activation functionActivation function
Activation functionAstha Jain
 
Artificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationArtificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationMohammed Bennamoun
 
Edge Detection and Segmentation
Edge Detection and SegmentationEdge Detection and Segmentation
Edge Detection and SegmentationA B Shinde
 
Artificial neural network for machine learning
Artificial neural network for machine learningArtificial neural network for machine learning
Artificial neural network for machine learninggrinu
 
Kohonen self organizing maps
Kohonen self organizing mapsKohonen self organizing maps
Kohonen self organizing mapsraphaelkiminya
 
Cyrus beck line clipping algorithm
Cyrus beck line clipping algorithmCyrus beck line clipping algorithm
Cyrus beck line clipping algorithmPooja Dixit
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural NetworkAtul Krishna
 
Neural Networks: Radial Bases Functions (RBF)
Neural Networks: Radial Bases Functions (RBF)Neural Networks: Radial Bases Functions (RBF)
Neural Networks: Radial Bases Functions (RBF)Mostafa G. M. Mostafa
 

Was ist angesagt? (20)

Mc culloch pitts neuron
Mc culloch pitts neuronMc culloch pitts neuron
Mc culloch pitts neuron
 
Artificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rulesArtificial Neural Networks Lect3: Neural Network Learning rules
Artificial Neural Networks Lect3: Neural Network Learning rules
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
HOPFIELD NETWORK
HOPFIELD NETWORKHOPFIELD NETWORK
HOPFIELD NETWORK
 
Anfis (1)
Anfis (1)Anfis (1)
Anfis (1)
 
Art network
Art networkArt network
Art network
 
Artificial Intelligence: Artificial Neural Networks
Artificial Intelligence: Artificial Neural NetworksArtificial Intelligence: Artificial Neural Networks
Artificial Intelligence: Artificial Neural Networks
 
Principles of soft computing-Associative memory networks
Principles of soft computing-Associative memory networksPrinciples of soft computing-Associative memory networks
Principles of soft computing-Associative memory networks
 
Activation function
Activation functionActivation function
Activation function
 
Artificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computationArtificial Neural Networks Lect1: Introduction & neural computation
Artificial Neural Networks Lect1: Introduction & neural computation
 
Edge Detection and Segmentation
Edge Detection and SegmentationEdge Detection and Segmentation
Edge Detection and Segmentation
 
Perceptron & Neural Networks
Perceptron & Neural NetworksPerceptron & Neural Networks
Perceptron & Neural Networks
 
Associative memory network
Associative memory networkAssociative memory network
Associative memory network
 
Activation function
Activation functionActivation function
Activation function
 
Artificial neural network for machine learning
Artificial neural network for machine learningArtificial neural network for machine learning
Artificial neural network for machine learning
 
Kohonen self organizing maps
Kohonen self organizing mapsKohonen self organizing maps
Kohonen self organizing maps
 
Artificial neural network
Artificial neural networkArtificial neural network
Artificial neural network
 
Cyrus beck line clipping algorithm
Cyrus beck line clipping algorithmCyrus beck line clipping algorithm
Cyrus beck line clipping algorithm
 
Artificial Neural Network
Artificial Neural NetworkArtificial Neural Network
Artificial Neural Network
 
Neural Networks: Radial Bases Functions (RBF)
Neural Networks: Radial Bases Functions (RBF)Neural Networks: Radial Bases Functions (RBF)
Neural Networks: Radial Bases Functions (RBF)
 

Ähnlich wie neural networksNnf

ANNs have been widely used in various domains for: Pattern recognition Funct...
ANNs have been widely used in various domains for: Pattern recognition  Funct...ANNs have been widely used in various domains for: Pattern recognition  Funct...
ANNs have been widely used in various domains for: Pattern recognition Funct...vijaym148
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
Two algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networksTwo algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networksESCOM
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGRevanth Kumar
 
Classification using perceptron.pptx
Classification using perceptron.pptxClassification using perceptron.pptx
Classification using perceptron.pptxsomeyamohsen3
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...IJECEIAES
 
Echo state networks and locomotion patterns
Echo state networks and locomotion patternsEcho state networks and locomotion patterns
Echo state networks and locomotion patternsVito Strano
 
Counterpropagation NETWORK
Counterpropagation NETWORKCounterpropagation NETWORK
Counterpropagation NETWORKESCOM
 
latest TYPES OF NEURAL NETWORKS (2).pptx
latest TYPES OF NEURAL NETWORKS (2).pptxlatest TYPES OF NEURAL NETWORKS (2).pptx
latest TYPES OF NEURAL NETWORKS (2).pptxMdMahfoozAlam5
 
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxgnans Kgnanshek
 
Back propagation
Back propagationBack propagation
Back propagationNagarajan
 

Ähnlich wie neural networksNnf (20)

ANNs have been widely used in various domains for: Pattern recognition Funct...
ANNs have been widely used in various domains for: Pattern recognition  Funct...ANNs have been widely used in various domains for: Pattern recognition  Funct...
ANNs have been widely used in various domains for: Pattern recognition Funct...
 
Multi Layer Network
Multi Layer NetworkMulti Layer Network
Multi Layer Network
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
ai7.ppt
ai7.pptai7.ppt
ai7.ppt
 
ai7.ppt
ai7.pptai7.ppt
ai7.ppt
 
MNN
MNNMNN
MNN
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
20120140503023
2012014050302320120140503023
20120140503023
 
Two algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networksTwo algorithms to accelerate training of back-propagation neural networks
Two algorithms to accelerate training of back-propagation neural networks
 
APPLIED MACHINE LEARNING
APPLIED MACHINE LEARNINGAPPLIED MACHINE LEARNING
APPLIED MACHINE LEARNING
 
Classification using perceptron.pptx
Classification using perceptron.pptxClassification using perceptron.pptx
Classification using perceptron.pptx
 
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi...
 
Echo state networks and locomotion patterns
Echo state networks and locomotion patternsEcho state networks and locomotion patterns
Echo state networks and locomotion patterns
 
Counterpropagation NETWORK
Counterpropagation NETWORKCounterpropagation NETWORK
Counterpropagation NETWORK
 
Lec 6-bp
Lec 6-bpLec 6-bp
Lec 6-bp
 
latest TYPES OF NEURAL NETWORKS (2).pptx
latest TYPES OF NEURAL NETWORKS (2).pptxlatest TYPES OF NEURAL NETWORKS (2).pptx
latest TYPES OF NEURAL NETWORKS (2).pptx
 
UofT_ML_lecture.pptx
UofT_ML_lecture.pptxUofT_ML_lecture.pptx
UofT_ML_lecture.pptx
 
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
 
Back propagation
Back propagationBack propagation
Back propagation
 
Mechanical Engineering Assignment Help
Mechanical Engineering Assignment HelpMechanical Engineering Assignment Help
Mechanical Engineering Assignment Help
 

Mehr von Sandilya Sridhara

Banking awareness quick reference guide 2014 gr8 ambitionz
Banking awareness quick reference guide 2014   gr8 ambitionzBanking awareness quick reference guide 2014   gr8 ambitionz
Banking awareness quick reference guide 2014 gr8 ambitionzSandilya Sridhara
 
Rbi assistants previous paper gr8 ambitionz
Rbi assistants previous paper   gr8 ambitionzRbi assistants previous paper   gr8 ambitionz
Rbi assistants previous paper gr8 ambitionzSandilya Sridhara
 
Eece 301 note set 14 fourier transform
Eece 301 note set 14 fourier transformEece 301 note set 14 fourier transform
Eece 301 note set 14 fourier transformSandilya Sridhara
 

Mehr von Sandilya Sridhara (7)

Continuous glucose
Continuous glucoseContinuous glucose
Continuous glucose
 
Banking awareness quick reference guide 2014 gr8 ambitionz
Banking awareness quick reference guide 2014   gr8 ambitionzBanking awareness quick reference guide 2014   gr8 ambitionz
Banking awareness quick reference guide 2014 gr8 ambitionz
 
Tempsensors sensor
Tempsensors sensorTempsensors sensor
Tempsensors sensor
 
Rbi assistants previous paper gr8 ambitionz
Rbi assistants previous paper   gr8 ambitionzRbi assistants previous paper   gr8 ambitionz
Rbi assistants previous paper gr8 ambitionz
 
Trig cheat sheet
Trig cheat sheetTrig cheat sheet
Trig cheat sheet
 
Trig cheat sheet
Trig cheat sheetTrig cheat sheet
Trig cheat sheet
 
Eece 301 note set 14 fourier transform
Eece 301 note set 14 fourier transformEece 301 note set 14 fourier transform
Eece 301 note set 14 fourier transform
 

Kürzlich hochgeladen

University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...ranjana rawat
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Dr.Costas Sachpazis
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxupamatechverse
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingrknatarajan
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...roncy bisnoi
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGMANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGSIVASHANKAR N
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfKamal Acharya
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesPrabhanshu Chaturvedi
 

Kürzlich hochgeladen (20)

University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
 
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
Sheet Pile Wall Design and Construction: A Practical Guide for Civil Engineer...
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptx
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writing
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
 
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTINGMANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
MANUFACTURING PROCESS-II UNIT-1 THEORY OF METAL CUTTING
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and Properties
 

neural networksNnf

  • 1. 1 CS407 Neural Computation Lecture 7: Neural Networks Based on Competition. Lecturer: A/Prof. M. Bennamoun
  • 2. 2 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 3. 3 Introduction…Competitive nets Fausett The most extreme form of competition among a group of neurons is called “Winner Take All”. As the name suggests, only one neuron in the competing group will have a nonzero output signal when the competition is completed. A specific competitive net that performs Winner Take All (WTA) competition is the Maxnet. A more general form of competition, the “Mexican Hat” will also be described in this lecture (instead of a non-zero o/p for the winner and zeros for all other competing nodes, we have a bubble around the winner ). All of the other nets we discuss in this lecture use WTA competition as part of their operation. With the exception of the fixed-weight competitive nets (namely Maxnet, Mexican Hat, and Hamming net) all of the other nets combine competition with some form of learning to adjusts the weights of the net (i.e. the weights that are not part of any interconnections in the competitive layer).
  • 4. 4 Introduction…Competitive nets Fausett The form of learning depends on the purpose for which the net is being trained: – LVQ and counterpropagation net are trained to perform mappings. The learning in this case is supervised. – SOM (used for clustering of input data): a common use of unsupervised learning. – ART are also clustering nets: also unsupervised. Several of the nets discussed use the same learning algorithm known as “Kohonen learning”: where the units that update their weights do so by forming a new weight vector that is a linear combination of the old weight vector and the current input vector. Typically, the unit whose weight vector was closest to the input vector is allowed to learn.
  • 5. 5 Introduction…Competitive nets Fausett The weight update for output (or cluster) unit j is given as: where x is the input vector, w.j is the weight vector for unit j, and α the learning rate, decreases as learning proceeds. Two methods of determining the closest weight vector to a pattern vector are commonly used for self-organizing nets. Both are based on the assumption that the weight vector for each cluster (output) unit serves as an exemplar for the input vectors that have been assigned to that unit during learning. – The first method of determining the winner uses the squared Euclidean distance between the I/P vector [ ] old)()1( old)()old()new( . ... j jjj wx wxww αα α −+ −+=
  • 6. 6 Introduction…Competitive nets Fausett and the weight vector and chooses the unit whose weight vector has the smallest Euclidean distance from the I/P vector. – The second method uses the dot product of the I/P vector and the weight vector. The dot product can be interpreted as giving the correlation between the I/P and weight vector. In general and for consistency, we will use Euclidean distance squared. Many NNs use the idea of competition among neurons to enhance the contrast in activations of the neurons In the most extreme situation, the case of the Winner-Take- All, only the neuron with the largest activation is allowed to remain “on”.
  • 7. 7
  • 8. 8 Introduction…Competitive nets 1. Euclidean distance between two vectors 2. Cosine of the angle between two vectors ( ) ( )i t ii xxxxxx −−=− i i t xx xx =ψcos 21 ψψψ << T
  • 9. 9 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 11. 11 Hamming Network and MAXNET MAXNET – A recurrent network involving both excitatory and inhibitory connections • Positive self-feedbacks and negative cross-feedbacks – After a number of recurrences, the only non- zero node will be the one with the largest initializing entry from i/p vector
  • 12. 12 • Maxnet Lateral inhibition between competitors Maxnet Fixed-weight competitive net Fausett    > = otherwise0 0if )( :Maxnetfor thefunctionactivation xx xf Competition: • iterative process until the net stabilizes (at most one node with positive activation) – Step 0: • Initialize activations and weight (set ) where m is the # of nodes (competitors) ,/10 m<< ε
  • 13. 13 Algorithm Step 1: While stopping condition is false do steps 2-4 – Step 2: Update the activation of each node: – Step 3: Save activation for use in next iteration: – Step 4: Test stopping condition: If more than one node has a nonzero activation, continue; otherwise, stop.    − = = = otherwise if1 :weights nodeinput to)0( ε ji w Aa ij jj mj(old)a(old) –af(new)a jk kjj ,...,1for =      = ∑≠ ε mj(new)a(old)a jj ,...,1for == the total i/p to node Aj from all nodes, including itself
  • 14. 14 Example Note that in step 2, the i/p to the function f is the total i/p to node Aj from all nodes, including itself. Some precautions should be incorporated to handle the situation in which two or more units have the same, maximal, input. Example: ε = .2 and the initial activations (input signals) are: a1(0) = .2, a2(0) = .4 a3(0) = .4 a4(0) = .8 As the net iterates, the activations are: a1(1) = f{a1(0) -.2 [a2(0) + a3(0) + a4(0)]} = f(-.12)=0 a1(1) = .0, a2(1) = .08 a3(1) = .32 a4(1) = .56 a1(2) = .0, a2(2) = .0 a3(2) = .192 a4(2) = .48 a1(3) = .0, a2(3) = .0 a3(3) = .096 a4(3) = .442 a1(4) = .0, a2(4) = .0 a3(4) = .008 a4(4) = .422 a1(5) = .0, a2(5) = .0 a3(5) = .0 a4(5) = .421 The only node to remain on
  • 15. 15 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 16. 16 Mexican Hat Fixed-weight competitive net Architecture Each neuron is connected with excitatory links (positively weighted) to a number of “cooperative neighbors” neurons that are in close proximity. Each neuron is also connected with inhibitory links (with negative weights) to a number of “competitive neighbors” neurons that are somewhat further away. There may also be a number of neurons, further away still, to which the neurons is not connected. All of these connections are within a particular layer of a neural net. The neurons receive an external signal in addition to these interconnections signals (just like Maxnet). This pattern of interconnections is repeated for each neuron in the layer. The interconnection pattern for unit Xi is as follows:
  • 17. 17 Mexican Hat Fixed-weight competitive net The size of the region of cooperation (positive connections) and the region of competition (negative connections) may vary, as may vary the relative magnitudes of the +ve and –ve weights and the topology of the regions (linear, rectangular, hexagonal, etc..) si = external activation w1 = excitatory connectionw2 = inhibitory connection
  • 18. 18 Mexican Hat Fixed-weight competitive net The contrast enhancement of the signal si received by unit Xi is accomplished by iteration for several time steps. The activation of unit Xi at time t is given by: Where the terms in the summation are the weighted signal from other units cooperative and competitive neighbors) at the previous time step. 1       −+= ∑ + k kikii )(txw(t)sf(t)x inputs + - - - -
  • 19. 19 Mexican Hat Fixed-weight competitive net Nomenclature R2=Radius of region of interconnections; Xi is connected to units Xi+k and Xi-k for k=1, …R2 R1=Radius of region with +ve reinforcement; R1< R2 wk = weights on interconnections between Xi and units Xi+k and Xi-k; wk is positive for 0 ≤ k ≤ R1 wk is negative for R1 < k ≤ R2 x = vector of activations x_old = vector of activations at previous time step t_max = total number of iterations of contrast enhancement s = external signal
  • 20. 20 Mexican Hat Fixed-weight competitive net Algorithm As presented, the algorithm corresponds to the external signal being given only for the first iteration (step 1). Step 0 Initialize parameters t_max, R1, R2 as desired Initialize weights: Step 1 Present external signal s: Set iteration counter: t = 1 0x_old toInitialize ...,1for0 ...,0for0 212 11    +=< => = RRkC RkC wij ii xoldx = = = _ :n)...,1,i(forarrayinactivationSave x_old sx
  • 21. 21 Mexican Hat Fixed-weight competitive net Algorithm Step 2 While t is less than t_max, do steps 3-7 Step 3 Compute net input (i = 1, …n) Step 4 Apply activation function f (ramp from 0 to x_max, slope 1): ∑∑ ∑ +−= + −= −− −= ++ ++= 2 1 1 1 1 2 1 2 1 21 ___ R Rk ki R Rk R Rk kikii oldxColdxColdxCx      > ≤≤ < = maxmax_ max0 00 )( xifx xifx xif xf nixxx ii ,...,1)),0max(max,_min( ==
  • 22. 22 Mexican Hat Fixed-weight competitive net Step 5 Save current activations in x_old: Step 6 Increment iteration counter: Step 7 Test stopping condition: If t < t_max, continue; otherwise, stop. Note: The +ve reinforcement from nearby units and –ve reinforcement from units that are further away have the effect of – increasing the activation of units with larger initial activations, – reducing the activations of those that had a smaller external signal. nixoldx ii ,...,1_ == 1+= tt
  • 23. 23 Mexican Hat Example Let’s consider a simple net with 7 units. The activation function for this net is: $f(x) = \begin{cases} 0 & x < 0 \\ x & 0 \le x \le 2 \\ 2 & x > 2 \end{cases}$
  Step 0 Initialize parameters: R1 = 1, R2 = 2, C1 = 0.6, C2 = -0.4
  Step 1 (t = 0) The external signal is s = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0), so x(0) = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0). Save in x_old: x_old = (0.0, 0.5, 0.8, 1.0, 0.8, 0.5, 0.0). (The middle node, with the largest external signal, is the node to be enhanced.)
  Step 2 (t = 1): the update formulas used in step 3 are listed as follows for reference.
  • 25. 25 Mexican Hat Example
  Step 4 (t = 1): x = (0.0, 0.38, 1.06, 1.16, 1.06, 0.38, 0.0)
  Steps 5-7 Bookkeeping for the next iteration
  Step 3 (t = 2):
  x1 = 0.6(0.0) + 0.6(0.38) - 0.4(1.06) = -0.196
  x2 = 0.6(0.0) + 0.6(0.38) + 0.6(1.06) - 0.4(1.16) = 0.39
  x3 = -0.4(0.0) + 0.6(0.38) + 0.6(1.06) + 0.6(1.16) - 0.4(1.06) = 1.14
  x4 = -0.4(0.38) + 0.6(1.06) + 0.6(1.16) + 0.6(1.06) - 0.4(0.38) = 1.66
  x5 = -0.4(1.06) + 0.6(1.16) + 0.6(1.06) + 0.6(0.38) - 0.4(0.0) = 1.14
  x6 = -0.4(1.16) + 0.6(1.06) + 0.6(0.38) + 0.6(0.0) = 0.39
  x7 = -0.4(1.06) + 0.6(0.38) + 0.6(0.0) = -0.196
  Step 4 (t = 2): x = (0.0, 0.39, 1.14, 1.66, 1.14, 0.39, 0.0)
  Steps 5-7 Bookkeeping for the next iteration. (The middle node is the one which has been enhanced.)
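  The two iterations above can be reproduced with a few lines of MATLAB. The sketch below is an added illustration (not part of the original lecture): it hard-codes the 7-unit example with R1 = 1, R2 = 2, C1 = 0.6, C2 = -0.4 and x_max = 2, simply ignores neighbours that fall outside the array, and prints the activation vector after each time step; the printed values match the worked example above, apart from small rounding differences in the slides.

    % Mexican Hat contrast enhancement for the 7-unit example (illustrative sketch)
    s = [0 0.5 0.8 1 0.8 0.5 0];          % external signal, applied at t = 0 only
    R1 = 1; R2 = 2; C1 = 0.6; C2 = -0.4;  % cooperation/competition radii and weights
    x_max = 2; t_max = 2; n = length(s);
    x_old = s;                            % Step 1: x = s, saved in x_old
    for t = 1:t_max
        x = zeros(1, n);
        for i = 1:n
            for k = -R2:R2                % walk over the interconnection region
                j = i + k;
                if j >= 1 && j <= n       % "missing" units are ignored
                    if abs(k) <= R1
                        x(i) = x(i) + C1 * x_old(j);   % cooperative neighbours
                    else
                        x(i) = x(i) + C2 * x_old(j);   % competitive neighbours
                    end
                end
            end
        end
        x = min(x_max, max(0, x));        % Step 4: ramp activation f
        x_old = x;                        % Steps 5-7: bookkeeping
        fprintf('t = %d:', t); fprintf(' %5.2f', x); fprintf('\n');
    end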
  • 26. 26 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 27. 27 Hamming Network Fixed-weight competitive net The Hamming distance of two vectors is the number of components in which the vectors differ. For two bipolar vectors x and y of dimension n: x·y = a - d, where a is the number of bits in agreement in x and y, and d is the number of bits in which x and y differ. Since d = n - a, we have x·y = a - (n - a) = 2a - n, i.e. a = 0.5(x·y) + 0.5n, and the Hamming distance is HD(x, y) = d = 0.5(n - x·y). A larger x·y ⇒ larger a ⇒ shorter Hamming distance. (For example, the vectors (0, 1, 0) and (0, 1, 1) differ in one component, so their Hamming distance is 1.) The Hamming net uses Maxnet as a subnet to find the unit with the largest input (see figure on next slide).
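  As a quick numerical check of this relation (an added illustration using a pair of assumed bipolar vectors), MATLAB gives the same Hamming distance whether the differing components are counted directly or the distance is computed from the dot product:

    x = [ 1  1 -1 -1];                 % assumed example vectors, n = 4
    y = [ 1 -1 -1 -1];
    n = length(x);
    HD_direct = sum(x ~= y)            % components that differ: 1
    HD_dot    = 0.5 * (n - x * y')     % 0.5 * (4 - 2) = 1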
  • 28. 28 Hamming Network Fixed-weight competitive net Architecture The sample architecture shown assumes the input vectors are 4-tuples, to be categorized as belonging to one of 2 classes.
  • 29. 29 Hamming Network Fixed-weight competitive net Algorithm Given a set of m bipolar exemplar vectors e(1), e(2), …, e(m), the Hamming net can be used to find the exemplar that is closest to the bipolar i/p vector x. The net i/p y_inj to unit Yj gives the number of components in which the i/p vector and the exemplar vector e(j) for unit Yj agree (n minus the Hamming distance between these vectors). Nomenclature: n = number of input nodes = number of components of any i/p vector; m = number of o/p nodes = number of exemplar vectors; e(j) = the jth exemplar vector, e(j) = (e1(j), …, ei(j), …, en(j))
  • 30. 30 Hamming Network Fixed-weight competitive net Algorithm
  Step 0 To store the m exemplar vectors, initialise the weights and biases: $w_{ij} = 0.5\, e_i(j)$, $b_j = 0.5\, n$
  Step 1 For each vector x, do steps 2-4
  Step 2 Compute the net input to each unit Yj: $y\_in_j = b_j + \sum_i x_i\, w_{ij}$, for $j = 1, \dots, m$
  Step 3 Initialise the activations for Maxnet: $y_j(0) = y\_in_j$, $j = 1, \dots, m$
  Step 4 Maxnet iterates to find the best-match exemplar.
  • 31. 31 Hamming Network Fixed-weight competitive net Upper layer: Maxnet – it takes the y_in values as its initial activations, then iterates toward a stable state (the best-match exemplar) – the output node Yj with the highest y_in will be the winner because its weight vector is closest to the input vector
  • 32. 32 Hamming Network Fixed-weight competitive net Example (see architecture, n = 4) Given the exemplar vectors e(1) = (1, -1, -1, -1) and e(2) = (-1, -1, -1, 1) (m = 2 exemplar patterns in this case). The Hamming net can be used to find the exemplar that is closest to each of the bipolar i/p patterns (1, 1, -1, -1), (1, -1, -1, -1), (-1, -1, -1, 1), and (-1, -1, 1, 1).
  Step 0: Store the m exemplar vectors in the weights, $w_{ij} = \frac{e_i(j)}{2}$: W = [0.5 -0.5; -0.5 -0.5; -0.5 -0.5; -0.5 0.5] Initialize the biases: b1 = b2 = n/2 = 2, where n = number of input nodes = 4 in this case.
  • 33. 33 Hamming Network Fixed-weight competitive net Example For the vector x = (1, 1, -1, -1), do the following steps: $y\_in_1 = b_1 + \sum_i x_i w_{i1} = 2 + 1 = 3$, $y\_in_2 = b_2 + \sum_i x_i w_{i2} = 2 - 1 = 1$, so y1(0) = 3 and y2(0) = 1. These values represent the Hamming similarity because (1, 1, -1, -1) agrees with e(1) = (1, -1, -1, -1) in the 1st, 3rd, and 4th components and agrees with e(2) = (-1, -1, -1, 1) in only the 3rd component. Since y1(0) > y2(0), Maxnet will find that unit Y1 has the best-match exemplar for input vector x = (1, 1, -1, -1).
  • 34. 34 Hamming Network Fixed-weight competitive net Example For the vector x = (1, -1, -1, -1), do the following steps: $y\_in_1 = b_1 + \sum_i x_i w_{i1} = 2 + 2 = 4$, $y\_in_2 = b_2 + \sum_i x_i w_{i2} = 2 + 0 = 2$, so y1(0) = 4 and y2(0) = 2. Note that the input agrees with e(1) in all 4 components and agrees with e(2) in the 2nd and 3rd components. Since y1(0) > y2(0), Maxnet will find that unit Y1 has the best-match exemplar for input vector x = (1, -1, -1, -1).
  • 35. 35 Hamming Network Fixed-weight competitive net Example For the vector x = (-1, -1, -1, 1), do the following steps: $y\_in_1 = b_1 + \sum_i x_i w_{i1} = 2 + 0 = 2$, $y\_in_2 = b_2 + \sum_i x_i w_{i2} = 2 + 2 = 4$, so y1(0) = 2 and y2(0) = 4. Note that the input agrees with e(1) in the 2nd and 3rd components and agrees with e(2) in all four components. Since y2(0) > y1(0), Maxnet will find that unit Y2 has the best-match exemplar for input vector x = (-1, -1, -1, 1).
  • 36. 36 Hamming Network Fixed-weight competitive net Example For the vector x = (-1, -1, 1, 1), do the following steps: $y\_in_1 = b_1 + \sum_i x_i w_{i1} = 2 - 1 = 1$, $y\_in_2 = b_2 + \sum_i x_i w_{i2} = 2 + 1 = 3$, so y1(0) = 1 and y2(0) = 3. Note that the input agrees with e(1) in the 2nd component and agrees with e(2) in the 1st, 2nd, and 4th components. Since y2(0) > y1(0), Maxnet will find that unit Y2 has the best-match exemplar for input vector x = (-1, -1, 1, 1).
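  All four cases above can be checked with a short MATLAB sketch (an added illustration; it replaces the Maxnet iteration by a plain max(), which is the winner Maxnet would converge to anyway, since the unit with the larger net input wins):

    e = [ 1 -1 -1 -1 ;                 % exemplar e(1)
         -1 -1 -1  1 ];                % exemplar e(2)
    n = size(e, 2);
    W = 0.5 * e';                      % w_ij = 0.5 * e_i(j)
    b = 0.5 * n;                       % b_j  = n/2 = 2
    X = [ 1  1 -1 -1 ;                 % the four bipolar test patterns
          1 -1 -1 -1 ;
         -1 -1 -1  1 ;
         -1 -1  1  1 ];
    for p = 1:size(X, 1)
        y_in = X(p, :) * W + b;        % number of components agreeing with each exemplar
        [best, J] = max(y_in);         % winner-take-all in place of the Maxnet iteration
        fprintf('x = (%2d %2d %2d %2d): y_in = (%g %g), winner Y%d\n', ...
                X(p, :), y_in(1), y_in(2), J);
    end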
  • 37. 37 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 38. 38 Kohonen Self-Organizing Maps (SOM) Fausett Teuvo Kohonen, born 11-07-34, Finland. Author of the book “Self-Organizing Maps”, Springer (1997). SOM = “topology preserving maps”. SOMs assume a topological structure among the cluster units. This property is observed in the brain, but not found in other ANNs.
  • 41. 41 Fausett Kohonen Self-Organizing Maps (SOM) Architecture: Linear array of cluster units: * * {* (* [#] *) *} * * where [ ] marks the R=0 neighbourhood, ( ) the R=1 neighbourhood and { } the R=2 neighbourhood. This figure represents the neighborhood of the unit designated by # in a 1-D topology (with 10 cluster units). [Figure: nested R=0, R=1 and R=2 neighbourhoods of the winning unit # on a rectangular grid; each unit has 8 neighbours.]
  • 42. 42 Fausett Kohonen Self-Organizing Maps (SOM) Architecture: [Figure: nested R=0, R=1 and R=2 neighbourhoods of the winning unit # on a hexagonal grid; each unit has 6 neighbours.] In each of these illustrations, the “winning unit” is indicated by the symbol # and the other units are denoted by *. NOTE: What if the winning unit is on the edge of the grid? • Winning units that are close to the edge of the grid will have some neighbourhoods that contain fewer units than those shown in the respective figures. • Neighbourhoods do not “wrap around” from one side of the grid to the other; “missing” units are simply ignored.
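  In an implementation, this edge handling amounts to clipping the neighbourhood indices to the grid. A tiny MATLAB sketch for the 1-D (linear array) case, with assumed values for the number of units, the winning unit and the radius:

    m = 10; J = 4; R = 2;                   % m cluster units, winner J, radius R (assumed values)
    neigh = max(1, J - R) : min(m, J + R)   % indices of the units to be updated; no wrap-around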
  • 43. 43 Training SOM Networks Which nodes are trained? How are they trained? – Training functions – Are all nodes trained the same amount? Two basic types of training sessions? – Initial formation – Final convergence How do you know when to stop? – Small number of changes in input mappings – Max # epochs reached
  • 44. 44 Fausett Kohonen Self-Organizing Maps (SOM) Algorithm:
  Step 0 Initialise weights wij (e.g. randomly). Set topological neighbourhood parameters. Set learning rate parameters.
  Step 1 While stopping condition is false, do steps 2-8.
  Step 2 For each i/p vector x, do steps 3-5
  Step 3 For each j, compute: $D(j) = \sum_i (w_{ij} - x_i)^2$
  Step 4 Find the index J such that D(J) is a minimum
  Step 5 For all units j within a specified neighbourhood of J, and for all i: $w_{ij}(\text{new}) = w_{ij}(\text{old}) + \alpha\,[x_i - w_{ij}(\text{old})]$
  Step 6 Update the learning rate.
  Step 7 Reduce the radius of the topological neighbourhood at specified times.
  Step 8 Test stopping condition.
  • 45. 45 Fausett Kohonen Self-Organizing Maps (SOM) Alternative structure for reducing R and α: The learning rate α is a slowly decreasing function of time (or training epochs). Kohonen indicates that a linearly decreasing function is satisfactory for practical computations; a geometric decrease would produce similar results. The radius of the neighbourhood around a cluster unit also decreases as the clustering process progresses. The formation of a map occurs in 2 phases: The initial formation of the correct order and the final convergence The second phase takes much longer than the first and requires a small value for the learning rate. Many iterations through the training set may be necessary, at least in some applications.
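  As one concrete (assumed) schedule in the spirit of these remarks, the learning rate can be decreased linearly while the radius shrinks in steps; the MATLAB sketch below shows only the outer loop, with the per-pattern updates of the algorithm left as a comment:

    alpha0 = 0.9; alphaT = 0.01;            % assumed initial and final learning rates
    R0 = 3; T = 1000;                       % assumed initial radius and number of epochs
    for t = 1:T
        alpha = alpha0 + (alphaT - alpha0) * (t - 1) / (T - 1);  % linear decrease
        R     = round(R0 * (1 - (t - 1) / (T - 1)));             % radius shrinks to 0
        % ... present every training vector and update the winner's
        %     radius-R neighbourhood here (steps 2-5 of the algorithm) ...
    end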
  • 46. 46 Fausett Kohonen Self-Organizing Maps (SOM) Applications and Examples: The SOM has had many applications: computer-generated music (Kohonen 1989), the traveling salesman problem. Examples: A Kohonen SOM to cluster 4 vectors: – Let the 4 vectors to be clustered be: s(1) = (1, 1, 0, 0), s(2) = (0, 0, 0, 1), s(3) = (1, 0, 0, 0), s(4) = (0, 0, 1, 1) – The maximum number of clusters to be formed is m = 2 (4 input nodes, 2 output nodes). Suppose the learning rate is α(0) = 0.6 and α(t+1) = 0.5 α(t). With only 2 clusters available, the neighborhood of node J (step 4) is set so that only one cluster unit updates its weights at each step (i.e. R = 0).
  • 47. 47 Fausett Kohonen Self-Organizing Maps (SOM) Examples:
  Step 0 Initial weight matrix (one column per cluster unit): W = [0.2 0.8; 0.6 0.4; 0.5 0.7; 0.9 0.3] Initial radius: R = 0 Initial learning rate: α(0) = 0.6
  Step 1 Begin training
  Step 2 For s(1) = (1, 1, 0, 0), do steps 3-5
  Step 3 D(1) = (0.2 - 1)² + (0.6 - 1)² + (0.5 - 0)² + (0.9 - 0)² = 1.86; D(2) = (0.8 - 1)² + (0.4 - 1)² + (0.7 - 0)² + (0.3 - 0)² = 0.98
  • 48. 48 Fausett Kohonen Self-Organizing Maps (SOM) Examples:
  Step 4 The i/p vector is closest to o/p node 2, so J = 2
  Step 5 The weights on the winning unit are updated: $w_{i2}(\text{new}) = w_{i2}(\text{old}) + 0.6\,[x_i - w_{i2}(\text{old})] = 0.4\, w_{i2}(\text{old}) + 0.6\, x_i$ This gives the weight matrix: W = [0.2 0.92; 0.6 0.76; 0.5 0.28; 0.9 0.12] (the first column of weights has not been modified).
  • 49. 49 Fausett Kohonen Self-Organizing Maps (SOM) Examples:
  Step 2 For s(2) = (0, 0, 0, 1), do steps 3-5
  Step 3 D(1) = (0.2 - 0)² + (0.6 - 0)² + (0.5 - 0)² + (0.9 - 1)² = 0.66; D(2) = (0.92 - 0)² + (0.76 - 0)² + (0.28 - 0)² + (0.12 - 1)² = 2.2768
  Step 4 The i/p vector is closest to o/p node 1, so J = 1
  Step 5 The weights on the winning unit are updated. This gives the weight matrix: W = [0.08 0.92; 0.24 0.76; 0.20 0.28; 0.96 0.12] (the second column of weights has not been modified).
  • 50. 50 Fausett Kohonen Self-Organizing Maps (SOM) Examples:
  Step 2 For s(3) = (1, 0, 0, 0), do steps 3-5
  Step 3 D(1) = (0.08 - 1)² + (0.24 - 0)² + (0.20 - 0)² + (0.96 - 0)² = 1.8656; D(2) = (0.92 - 1)² + (0.76 - 0)² + (0.28 - 0)² + (0.12 - 0)² = 0.6768
  Step 4 The i/p vector is closest to o/p node 2, so J = 2
  Step 5 The weights on the winning unit are updated. This gives the weight matrix: W = [0.08 0.968; 0.24 0.304; 0.20 0.112; 0.96 0.048] (the first column of weights has not been modified).
  • 51. 51 Fausett Kohonen Self-Organizing Maps (SOM) Examples:
  Step 2 For s(4) = (0, 0, 1, 1), do steps 3-5
  Step 3 D(1) = (0.08 - 0)² + (0.24 - 0)² + (0.20 - 1)² + (0.96 - 1)² = 0.7056; D(2) = (0.968 - 0)² + (0.304 - 0)² + (0.112 - 1)² + (0.048 - 1)² = 2.724
  Step 4 The i/p vector is closest to o/p node 1, so J = 1
  Step 5 The weights on the winning unit are updated. This gives the weight matrix: W = [0.032 0.968; 0.096 0.304; 0.680 0.112; 0.984 0.048] (the second column of weights has not been modified).
  Step 6 Reduce the learning rate: α = 0.5(0.6) = 0.3
  • 52. 52 Fausett Kohonen Self-Organizing Maps (SOM) Examples: The weight update equations are now $w_{ij}(\text{new}) = w_{ij}(\text{old}) + 0.3\,[x_i - w_{ij}(\text{old})] = 0.7\, w_{ij}(\text{old}) + 0.3\, x_i$ The weight matrix after the 2nd epoch of training is: W = [0.016 0.980; 0.047 0.360; 0.630 0.055; 0.999 0.024] Modifying the learning rate, after 100 iterations (epochs) the weight matrix appears to be converging to: W = [0.0 1.0; 0.0 0.5; 0.5 0.0; 1.0 0.0] The 1st column is the average of the two vectors in cluster 1, and the 2nd column is the average of the two vectors in cluster 2.
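  The whole worked example can be reproduced with a short MATLAB sketch (an added illustration; with R = 0 the SOM reduces to simple Kohonen/competitive learning). Two epochs with α halved after each epoch give, up to small rounding differences, the weight matrices quoted above:

    S = [1 1 0 0;                     % s(1) ... s(4), one training vector per row
         0 0 0 1;
         1 0 0 0;
         0 0 1 1];
    W = [0.2 0.8;                     % initial weights, one column per cluster unit
         0.6 0.4;
         0.5 0.7;
         0.9 0.3];
    alpha = 0.6;
    for epoch = 1:2
        for p = 1:size(S, 1)
            x = S(p, :)';
            D = sum((W - repmat(x, 1, size(W, 2))).^2);   % squared Euclidean distances
            [dmin, J] = min(D);                           % winning cluster unit
            W(:, J) = W(:, J) + alpha * (x - W(:, J));    % Kohonen update (R = 0)
        end
        alpha = 0.5 * alpha;                              % reduce the learning rate
        fprintf('W after epoch %d:\n', epoch); disp(W)
    end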
  • 53. 53 Example of a SOM That Uses a 2-D Neighborhood Character Recognition – Fausett, pp. 176-178
  • 54. 54 Traveling Salesman Problem (TSP) by SOM The aim of the TSP is to find a tour of a given set of cities that is of minimum length. A tour consists of visiting each city exactly once and returning to the starting city. The net uses the city coordinates as input (n = 2); there are as many cluster units as there are cities to be visited. The net has a linear topology (with the first and last unit also connected). Fig. shows the initial positions of the cluster units and the locations of the cities.
  • 55. 55 Traveling Salesman Problem (TSP) by SOM Fig. shows the result after 100 epochs of training with R = 1 (learning rate decreasing from 0.5 to 0.4). Fig. shows the final tour after 100 epochs of training with R = 0. Two candidate solutions: ADEFGHIJBC and ADEFGHIJCB.
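  A rough MATLAB sketch of this approach is given below (an added illustration with assumed city coordinates and an assumed learning-rate schedule, not the experiment whose figures are referred to above). The cluster units form a ring, so here the neighbourhood deliberately wraps around, since the first and last units are connected:

    cities = rand(2, 10);                         % assumed: 10 cities in the unit square
    m = size(cities, 2);                          % one cluster unit per city
    W = rand(2, m);                               % initial positions of the cluster units
    alpha = 0.5; R = 1;
    for epoch = 1:200
        if epoch > 100, R = 0; end                % shrink the neighbourhood radius
        for c = randperm(m)                       % present the cities in random order
            x = cities(:, c);
            D = sum((W - repmat(x, 1, m)).^2);    % squared distances to every unit
            [dmin, J] = min(D);                   % winning unit
            for k = -R:R                          % ring neighbourhood wraps around
                j = mod(J - 1 + k, m) + 1;
                W(:, j) = W(:, j) + alpha * (x - W(:, j));
            end
        end
        alpha = max(0.4, 0.999 * alpha);          % learning rate decreasing from 0.5 towards 0.4
    end
    % Reading the units around the ring (each matched to its nearest city) gives a candidate tour.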
  • 56. 56 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 59. 59 Creating a SOM newsom -- Create a self-organizing map
  Syntax
  net = newsom
  net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND)
  Description
  net = newsom creates a new network with a dialog box.
  net = newsom(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND) takes
  PR - R x 2 matrix of min and max values for R input elements.
  Di - Size of ith layer dimension, defaults = [5 8].
  TFCN - Topology function, default = 'hextop'.
  DFCN - Distance function, default = 'linkdist'.
  OLR - Ordering phase learning rate, default = 0.9.
  OSTEPS - Ordering phase steps, default = 1000.
  TLR - Tuning phase learning rate, default = 0.02.
  TND - Tuning phase neighborhood distance, default = 1.
  and returns a new self-organizing map. The topology function TFCN can be hextop, gridtop, or randtop. The distance function DFCN can be linkdist, dist, or mandist.
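  A minimal usage sketch based on the syntax above (an assumed example, not from the lecture; note that newer MATLAB releases replace newsom with selforgmap, so the call may need adapting):

    P   = rand(2, 400);                  % 400 random 2-D input vectors
    net = newsom(minmax(P), [5 8]);      % 5-by-8 map with the default hextop / linkdist
    net = train(net, P);                 % unsupervised training on P
    plotsom(net.iw{1,1}, net.layers{1}.distances)   % visualise the trained map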
  • 60. 60 Neural Networks Based on Competition. Introduction Fixed weight competitive nets – Maxnet – Mexican Hat – Hamming Net Kohonen Self-Organizing Maps (SOM) SOM in Matlab References and suggested reading
  • 61. 61 Suggested Reading. L. Fausett, “Fundamentals of Neural Networks”, Prentice-Hall, 1994, Chapter 4. J. Zurada, “Artificial Neural Systems”, West Publishing, 1992, Chapter 7.
  • 62. 62 References: These lecture notes were based on the references of the previous slide, and on the following references 1. Berlin Chen, lecture notes: Normal University, Taipei, Taiwan, ROC. http://140.122.185.120 2. Dan St. Clair, University of Missouri-Rolla, http://web.umr.edu/~stclair/class/classfiles/cs404_fs02/Misc/CS 404_fall2001/Lectures/ Lecture 14.