Robust Immunological Algorithms for High-Dimensional Global Optimization
1. Introduction Optimization Immunological Algorithm Metrics and Dynamic Behavior Results and Comparison Conclusions
Robust Immunological Algorithms for
High-Dimensional Global Optimization
V. Cutello † G. Narzisi ‡ G. Nicosia † M. Pavone †
† Department of Mathematics and Computer Science
University of Catania
Viale A. Doria 6, 95125 Catania, Italy
(cutello, nicosia, mpavone)@dmi.unict.it
‡ Computer Science Department
Courant Institute of Mathematical Sciences
New York University
New York, NY 10012, U.S.A.
narzisi@nyu.edu
Outline
Introduction
Global Optimization
Numerical Minimization Problem
Artificial Immune System
Optimization Immunological Algorithm
Cloning and Hypermutation Operators.
Aging and Selection Operators.
Metrics and Dynamic Behavior
Influence of Different Potential Mutations.
Tuning of the ρ parameter.
Convergence and Learning Processes.
Results and Comparison
Conclusions
Global Optimization
◮ Global Optimization (GO): finding the best set of
parameters to optimize a given objective function
◮ GO problems are quite difficult to solve: there exist
solutions that are locally optimal but not globally
◮ GO requires finding a setting x = (x1 , x2 , . . . , xn ) ∈ S,
where S ⊆ Rn is a bounded set, such that a certain
n-dimensional objective function f : S → R is optimized.
◮ GOAL: finding a point xmin ∈ S such that f (xmin ) is a global
minimum on S, i.e. ∀x ∈ S : f (xmin ) ≤ f (x).
◮ It is difficult to decide when a global (or local) optimum has
been reached
◮ There could be very many local optima where the
algorithm can be trapped
the difficulty increases proportionally with the problem dimension
Numerical Minimization Problem
Let x = (x1 , x2 , . . . , xn ) be the variable vector in Rn ;
Bl = (Bl1 , Bl2 , . . . , Bln ) and Bu = (Bu1 , Bu2 , . . . , Bun ) the lower
and the upper bounds of the variables, such that
xi ∈ [Bli , Bui ] (i = 1, . . . , n).
GOAL: minimize the objective function f (x)
min(f (x)), Bl ≤ x ≤ Bu
Benchmarks used to evaluate the performances and
convergence ability:
◮ twenty-three functions taken from [Yao et al., IEEE TEC,
1999];
◮ twelve functions taken from [Timmis et al., GECCO 2003]
All these functions belong to three different categories:
unimodal, multimodal with many local optima, and multimodal
with few local optima
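As a concrete instance of the setup above, a minimal Python sketch of the minimization problem: `sphere` is f1 (the sphere model) from the Yao et al. benchmark, and the bounds [−100, 100] per variable are an assumption of this sketch.

```python
import random

def sphere(x):
    """Sphere model (f1 in the Yao et al. suite): global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_point(lower, upper):
    """Sample a candidate x with xi uniformly in [Bli, Bui]."""
    return [l + random.random() * (u - l) for l, u in zip(lower, upper)]

n = 30
Bl, Bu = [-100.0] * n, [100.0] * n          # assumed benchmark bounds
x = random_point(Bl, Bu)
assert all(l <= xi <= u for xi, l, u in zip(x, Bl, Bu))
print(sphere([0.0] * n))                     # the global minimum value: 0.0
```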
Artificial Immune System
Artificial Immune Systems - AIS
◮ The Immune System (IS) is primarily responsible for
protecting the organism against attacks from external
microorganisms that might cause diseases;
◮ The biological IS has to assure recognition of each
potentially dangerous molecule or substance
◮ Artificial Immune Systems are a new paradigm of the
biologically-inspired computing
◮ Three immunological theories inspire AIS: immune networks,
negative selection, and clonal selection
◮ AIS have been successfully employed in a wide variety of
different applications
[Timmis et al.: J.Ap.Soft.Comp., BioSystems, Curr. Proteomics, 2008]
Clonal Selection Algorithms - CSA
◮ CSA represents an effective mechanism for search and
optimization
◮ [Cutello et al.: IEEE TAC, J. Comb. Optimization, 2007]
◮ Cloning and Hypermutation operators: the driving strength
of CSA
◮ Cloning: triggers the growth of a new population of
high-value B cells centered on a higher affinity value
◮ Hypermutation: can be seen as a local search procedure
that leads to a faster maturation during the learning phase.
Pseudo-code of Immunological Algorithm
Immunological Algorithm(d, dup, ρ, τB , Tmax )
   FFE ← 0;
   Nc ← d · dup;
   t ← 0;
   P^(t) ← Initialize_Population(d);   // xi = Bli + β · (Bui − Bli )
   Compute_Fitness(P^(t));
   FFE ← FFE + d;
   while (FFE < Tmax ) do
      P^(clo) ← Cloning(P^(t), dup);
      P^(hyp) ← Hypermutation(P^(clo), ρ);
      Compute_Fitness(P^(hyp));
      FFE ← FFE + Nc ;
      (Pa^(t), Pa^(hyp)) ← Aging(P^(t), P^(hyp), τB );
      P^(t+1) ← (µ + λ)-Selection(Pa^(t), Pa^(hyp));
      t ← t + 1;
   end_while
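The pseudo-code above can be sketched end-to-end in Python. This is a hedged illustration on the sphere function, not the authors' implementation: the fitness normalization, the mutation count, and the refilling of empty selection slots with fresh random cells (rather than with randomly chosen eliminated cells) are simplifying assumptions of this sketch.

```python
import math
import random

def f(x):
    """Objective to minimize: sphere model, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def opt_immalg(n=10, d=20, dup=2, rho=7.0, tauB=15, Tmax=20000,
               Bl=-100.0, Bu=100.0, seed=1):
    rng = random.Random(seed)
    new_cell = lambda: ([Bl + rng.random() * (Bu - Bl) for _ in range(n)], 0)
    P = [new_cell() for _ in range(d)]            # population of (x, age) pairs
    FFE = d                                       # fitness function evaluations
    best = min(f(x) for x, _ in P)
    while FFE < Tmax:
        # Cloning: dup copies per B cell, each with a random age in [0, tauB].
        clones = [(x[:], rng.randint(0, tauB)) for x, _ in P for _ in range(dup)]
        # Inversely proportional hypermutation: better cells mutate less.
        worst = max(f(x) for x, _ in clones) or 1.0
        hyp = []
        for x, age in clones:
            f_hat = 1.0 - f(x) / worst            # normalized fitness, best -> 1
            M = max(1, int(math.exp(-rho * f_hat) * n))
            y = x[:]
            for _ in range(M):                    # convex-combination mutations
                i, j = rng.randrange(n), rng.randrange(n)
                beta = rng.random()
                y[i] = (1 - beta) * y[i] + beta * y[j]
            hyp.append((y, age))
        FFE += len(hyp)
        # Aging: everyone ages one generation; cells older than tauB are erased.
        survivors = [(x, a + 1) for x, a in P + hyp if a + 1 <= tauB]
        # (mu + lambda)-selection: keep the d best survivors, refill if short.
        survivors.sort(key=lambda cell: f(cell[0]))
        P = survivors[:d] + [new_cell() for _ in range(d - len(survivors[:d]))]
        best = min(best, f(P[0][0]))
    return best

result = opt_immalg()
print(result)
```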
Cloning and Hypermutation Operators.
Cloning Operator
◮ The cloning operator clones each B cell dup times (P^(clo))
◮ Each clone is assigned a random age chosen in the
range [0, τB ]
◮ Using the cloning operator, an immunological algorithm
produces individuals with higher affinities (higher fitness
function values)
◮ Improvement: choosing the age of each clone in the
range [0, (2/3) τB ]
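A minimal sketch of the cloning step; representing each population entry as a (vector, age) pair is an assumption of this sketch, while `dup`, `tauB`, and the improved (2/3)·τB age range follow the slide.

```python
import random

def cloning(P, dup, tauB, improved=False, rng=random):
    """Clone each B cell dup times; each clone gets a random age in
    [0, tauB], or [0, (2/3)*tauB] in the improved variant."""
    upper = (2 * tauB) // 3 if improved else tauB
    return [(x[:], rng.randint(0, upper)) for x, _age in P for _ in range(dup)]

P = [([1.0, 2.0], 0), ([3.0, 4.0], 5)]
clones = cloning(P, dup=2, tauB=15)
assert len(clones) == len(P) * 2
assert all(0 <= age <= 15 for _x, age in clones)
```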
Hypermutation Operator
◮ Tries to mutate each B cell receptor M times;
it is not based on an explicit use of a mutation probability.
◮ There exist several different kinds of hypermutation
operator
[Cutello et al.: LNCS 3239 and IEEE Press vol.1, 2004]
◮ We have used Inversely Proportional Hypermutation:
as the fitness function value of the current B cell increases, the number of
mutations performed decreases
◮ two different potential mutations are used:

α = e^(−f̂(x)) / ρ ,    α = e^(−ρ·f̂(x))

where α represents the mutation rate, and f̂(x) the normalized
fitness function in [0, 1].
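The two laws can be written directly as code. Mapping the rate α to an integer mutation count via M = round(α · ℓ) is an assumption of this sketch; the slide only defines α.

```python
import math

def alpha_first(f_hat, rho):
    """First potential mutation: alpha = e^(-f_hat) / rho."""
    return math.exp(-f_hat) / rho

def alpha_second(f_hat, rho):
    """Second potential mutation: alpha = e^(-rho * f_hat)."""
    return math.exp(-rho * f_hat)

def mutations(f_hat, rho, ell, law=alpha_second):
    """Assumed mapping from rate to mutation count: M = round(alpha * ell)."""
    return max(1, round(law(f_hat, rho) * ell))

# both laws are inversely proportional: higher normalized fitness -> lower rate
assert alpha_second(1.0, 7.0) < alpha_second(0.5, 7.0) < alpha_second(0.0, 7.0)
assert alpha_first(1.0, 100.0) < alpha_first(0.0, 100.0)
print(mutations(0.0, 7.0, 30), mutations(1.0, 7.0, 30))
```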
How the Hypermutation Operator works
◮ Randomly choose a variable xi (i ∈ {1, . . . , ℓ = n})
◮ replace xi with

x_i^(t+1) = (1 − β) · x_i^(t) + β · x_random^(t),

where x_random^(t) ≠ x_i^(t) is a randomly chosen variable
◮ To normalize the fitness function value, the best current
fitness value decreased by a user-defined threshold θ is used,
since no additional information concerning the problem is
known a priori [Cutello et al., SAC 2006]
Aging and Selection Operators.
Aging Operator
◮ Eliminates all old B cells in the populations P^(t) and P^(hyp)
◮ Depends on the parameter τB : the maximum number of
generations a B cell is allowed to live
◮ when a B cell is τB + 1 generations old, it is erased
◮ GOAL: produce high diversity in the current
population to avoid premature convergence
◮ static aging operator: a B cell is erased
independently of its fitness value quality
◮ elitist static aging operator: the selection mechanism does
not allow the elimination of the best B cell
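A minimal sketch of the static aging operator on (cell, age) pairs (the pair representation is an assumption carried over from the earlier sketches):

```python
def aging(P, tauB):
    """Static aging: every B cell ages by one generation, and any cell
    strictly older than tauB is erased regardless of its fitness.
    (The elitist variant would additionally spare the current best cell.)"""
    aged = [(x, age + 1) for x, age in P]
    return [(x, age) for x, age in aged if age <= tauB]

P = [("young", 0), ("old", 15)]
survivors = aging(P, tauB=15)
assert [x for x, _ in survivors] == ["young"]
```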
(µ + λ)-Selection Operator
◮ The best survivors are selected to generate the new
population P^(t+1)
◮ If only d1 < d B cells survive, then d − d1 are randomly
selected among those “died”, i.e. from (P^(t) \ Pa^(t)) ∪ (P^(hyp) \ Pa^(hyp))
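The selection rule above can be sketched as follows; taking cells from a flat `eliminated` pool is this sketch's stand-in for the union of the two "died" sets.

```python
import random

def mu_plus_lambda_selection(survivors, eliminated, d, fitness, rng=random):
    """Keep the d best survivors of parents + offspring; if only d1 < d cells
    survived aging, refill the d - d1 empty slots at random from the
    eliminated cells."""
    chosen = sorted(survivors, key=fitness)[:d]
    missing = d - len(chosen)
    if missing > 0:
        chosen += rng.sample(eliminated, missing)
    return chosen

survivors = [3.0, 1.0]                    # d1 = 2 cells survived aging
eliminated = [9.0, 7.0, 5.0]              # cells erased by the aging operator
P_next = mu_plus_lambda_selection(survivors, eliminated, d=3,
                                  fitness=lambda v: v)
assert len(P_next) == 3 and P_next[:2] == [1.0, 3.0]
```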
Influence of Different Potential Mutations.
◮ Two different potential mutations were used to determine the number of
mutations M;
◮ We present their comparison, and hence their influence on the
performance
◮ The main goal is to determine the best law to use to tackle optimization problems
◮ The next tables show their different impact in terms of performance and quality of the
solutions
◮ We show the mean of the best B cells over all runs, and the standard deviation
◮ We used the experimental protocol proposed in [Yao et al., 1999]
◮ Parameters used:
◮ d = 100, dup = 2, τB = 15
◮ 1st mutation rate: ρ ∈ {50, 75, 100, 125, 150, 175, 200}
◮ 2nd mutation rate: ρ ∈ {4, 5, 6, 7, 8, 9, 10, 11}
Tuning of the ρ parameter.
Potential Mutation behavior using different dimension values
[Figure: Mutation Rate for the Hypermutation Operator — number of mutations versus normalized fitness for dim = 30 (ρ = 3.5), dim = 50 (ρ = 4.0), dim = 100 (ρ = 6.0), and dim = 200 (ρ = 7.0); inset zooms on fitness in [0.4, 1].]
Potential Mutation behavior on large dimension values
[Figure: Mutation Rate for the Hypermutation Operator — number of mutations versus normalized fitness for dim = 1000 (ρ = 9.0) and dim = 5000 (ρ = 11.5); inset zooms on fitness in [0.7, 1].]
Convergence and Learning Processes.
[Figure: mean performance comparison curves (real-coded versus binary-coded variants) for test functions f1 and f6 — function value versus generation.]
[Figure: mean performance comparison curves (real-coded versus binary-coded variants) for test functions f8 and f10 — function value versus generation.]
[Figure: mean performance comparison curves (real-coded versus binary-coded variants) for test functions f18 and f21 — function value versus generation.]
Learning Process
◮ The Information Gain was used to analyze the learning process;
◮ it measures the quantity of information the system discovers during the learning
phase
[Cutello et al.: Journal of Combinatorial Optimization, 2007]
◮ B cells distribution function:

f_m^(t) = B_m^t / (Σ_{m=0}^{h} B_m^t) = B_m^t / d

with B_m^t the number of B cells at time step t with fitness function value m
◮ Information Gain:

K(t, t0) = Σ_m f_m^(t) log( f_m^(t) / f_m^(t0) )
◮ Entropy:

E(t) = Σ_m f_m^(t) log f_m^(t)
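The information-gain metric can be sketched directly from its definition; representing the fitness distribution as a dict from fitness value m to the fraction of B cells with that value is an assumption of this sketch.

```python
import math

def distribution(fitness_counts, d):
    """f_m^(t): fraction of the d B cells having fitness value m at time t."""
    return {m: b / d for m, b in fitness_counts.items()}

def information_gain(f_t, f_t0):
    """K(t, t0) = sum_m f_m^(t) * log(f_m^(t) / f_m^(t0)); terms with
    f_m^(t) = 0 contribute nothing and are skipped."""
    return sum(p * math.log(p / f_t0[m]) for m, p in f_t.items() if p > 0)

f_t0 = distribution({0: 2, 1: 4, 2: 4}, d=10)   # random initial population
f_t  = distribution({0: 8, 1: 2, 2: 0}, d=10)   # later, concentrated on good m
assert information_gain(f_t0, f_t0) == 0.0       # no gain versus itself
assert information_gain(f_t, f_t0) > 0.0         # gain once learning starts
```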
Maximum Information-Gain Principle
◮ The gain is the amount of information the system has already learnt from the
given problem instance with respect to the randomly generated initial population
P (t=0)
◮ Once the learning process begins, the information gain increases monotonically
until it reaches a final steady state
◮ This is consistent with the idea of a maximum information-gain principle [Cutello et al.,
GECCO 2003]:

dK/dt ≥ 0
Information Gain and Standard Deviation on function f5
[Figure: information gain of the clonal selection algorithm opt-IMMALG∗ over generations 16, 32, and 64, with a standard-deviation inset.]
Learning Process on functions f5 , f7 , and f10
[Figure: Information Gain versus generations (1 to 32) for functions f5, f7, and f10.]
Average fitness versus best fitness on function f5
[Figure: average and best fitness of the clonal selection algorithm opt-IMMALG∗ over the first 10 generations, with an inset showing gain and entropy over generations 16 to 64.]
◮ IA was extensively compared against several nature-inspired
methodologies, including differential evolution (DE)
algorithms
DE seems to perform better than many other EAs on the same test bed
◮ IA was compared against 33 optimization algorithms overall,
including DIRECT, a deterministic global search algorithm
IA versus DIRECT. DIRECT: [Jones et al., J. Opt. Theory Appl., 1993] and [Finkel, Techn. Report, 2003].

        α = e^(−ρ·f̂(x))    DIRECT        α = e^(−f̂(x))/ρ
f5      16.29               27.89         22.32
f7      1.995 × 10^−5       8.9 × 10^−3   1.143 × 10^−4
f8      −12535.15           −4093.0       −12559.69
f12     1.770 × 10^−21      0.03          7.094 × 10^−21
f13     1.687 × 10^−21      0.96          1.122 × 10^−19
f14     0.998               1.0           0.999
f15     3.2 × 10^−4         1.2 × 10^−3   3.27 × 10^−4
f16     −1.013              −1.031        −1.017
f17     0.423               0.398         0.425
f18     5.837               3.01          6.106
f19     −3.72               −3.86         −3.72
f20     −3.292              −3.30         −3.293
f21     −10.153             −6.84         −10.153
f22     −10.402             −7.09         −10.402
f23     −10.536             −7.22         −10.536
Conclusion 1/3.
◮ We have presented an extensive comparative study
illustrating the performance of the proposed IA
◮ IA was compared with 33 state-of-the-art optimization
algorithms (deterministic and nature inspired
methodologies):
FEP; IFEP; three versions of CEP; two versions of PSO and ARPSO;
EO; SEA; HGA; immunological inspired algorithms, such as BCA and two
versions of CLONALG; the CHC algorithm; Generalized Generation Gap
(G3-1); hybrid steady-state RCMA (SW-100); Family Competition (FC);
CMA with crossover Hill Climbing (RCMA-XHC); eleven variants of DE
and two of its memetic versions
◮ Two variants of IA were presented
Conclusion 2/3.
◮ Main features of the designed immune algorithm:
1. cloning operator, which explores the neighborhood of a
given solution
2. inversely proportional hypermutation operator, which
perturbs each candidate solution as a function of its fitness
function value
3. aging operator, which eliminates the oldest candidate
solutions from the current population in order to introduce
diversity and thus avoid local minima during the search
process
◮ A large set of experiments was used, divided into two different
categories of functions [Yao et al., IEEE TEC, 1999] and [Timmis et al., GECCO 2003]
Conclusion 3/3.
◮ The dimensionality of the problems was varied from small
to high dimensions (up to 5000 variables).
◮ The results suggest that the proposed immune algorithm is
an effective numerical optimization algorithm (in terms of
solution quality)
◮ All experimental comparisons show that IA and IA∗ are
comparable to, and often outperform, all nature-inspired
methodologies used, as well as a well-known deterministic
optimization algorithm (DIRECT).