1. Master of Science Thesis Defense
Modified Predator-Prey (MPP) Algorithm for
Single- and Multi-Objective Optimization
Problems
Souma Chowdhury
Thesis Advisor: Prof. George S. Dulikravich
Department of Mechanical and Materials Engineering
Florida International University, Miami, Florida 33174
2. Acknowledgements
Prof. George S. Dulikravich
Prof. Yiding Cao
Prof. Igor Tsukanov
Dr. Ramon J. Moral
Mr. Stephen Wood
Ms. Himangi Marathe
3. Thesis Objective
The aim of this work is to develop an algorithm that can
solve multidisciplinary design optimization problems.
The predator-prey (PP) algorithm imitates the
interactions of predators and prey existing in nature.
Substantial modifications of the basic predator-prey
algorithm have been implemented in this study to
formulate a robust and computationally inexpensive
algorithm capable of handling both single- and multi-
objective optimization problems.
4. Multidisciplinary Design Optimization
Typical real-world systems, whether engineering, scientific, social or
financial, comprise a large number of variables and multiple output
parameters.
Skilled designers and systems analysts use their knowledge,
experience and intuition to assign values to these variables in order to
extract the most desirable performance from the process or system.
However, due to the size and complexity of the design task, as well as
the likely involvement of different disciplines, it becomes increasingly
difficult even for the most competent designers to account for all the
variables and constraints involved simultaneously.
This calls for the application of relevant, efficient and economically
viable mathematical models.
Multidisciplinary Design Optimization (MDO) is the application of
numerical algorithms for designing systems with or without inherent
coupling between various disciplines, in order to achieve optimal
performance in terms of expected parameter outcomes, cost and
reliability.
5. Optimization Concepts
Griewank single-objective function
Pareto fronts
Constrained optimization problem
6. Classical Optimization Algorithms
One of the most popular classical methods is the weighted sum method,
where the objectives are linearly combined to form a single composite
objective function, that is,
F(X) = Σ_{i=1}^{Nf} w_i f_i(X)
Shortcomings of Classical Methods:
One combination of weights w_i yields only one solution per search.
Successful convergence to a point on the global Pareto front depends on
the selection of the initial solution.
Diversity among the Pareto solutions is highly sensitive to the user’s choice
of weights.
This approach has an inherent tendency to converge to sub-optimum
solutions (local optima), especially in case of multi modal problems.
They are unable to handle problems with non-convex Pareto fronts.
They are unable to handle problems with discontinuous search space.
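The weighted-sum scalarization above can be sketched in a few lines of C++; the function and the sample values below are illustrative, not taken from the thesis code:

```cpp
// Sketch of the weighted-sum scalarization F(X) = sum_i w_i * f_i(X),
// which collapses Nf objectives into one scalar for a single search.
#include <vector>
#include <cstddef>

double weighted_sum(const std::vector<double>& weights,
                    const std::vector<double>& objective_values) {
    double F = 0.0;
    for (std::size_t i = 0; i < weights.size(); ++i)
        F += weights[i] * objective_values[i];
    return F;
}
```

Note that each fixed weight vector produces one scalar problem and hence one solution, which is exactly the first shortcoming listed above.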
7. Evolutionary Optimization Algorithms
The last few decades have seen the development of stochastic
optimization algorithms inspired by the principles of natural evolution
coined by Darwin, i.e. “Evolution occurs through selection and
adaptation [1].”
These algorithms, often termed Evolutionary Optimization
Algorithms (EOAs), typically utilize a set of multiple candidate solutions
and follow an iterative procedure producing a final set of the best
compromise solutions.
The graphical representation of the set of best trade-off solutions is
termed the Pareto front [2].
In case of single objective problems, the Pareto front reduces to a
single optimal solution known as the global minimum or global
maximum.
Genetic algorithm, differential evolution, particle swarm, ant colony,
and predator-prey algorithms are some of the most prominent EOAs.
8. Predator-Prey (PP) Algorithm
The basic concept of the predator-prey algorithm was suggested by
H.P. Schwefel and reported by Laumanns et al. [2] in 1998.
This algorithm imitates a predator that kills the weakest prey in its
neighborhood; the next generations of prey that evolve are relatively
stronger and more immune to such predator attacks.
In this algorithm, prey, which represent members of the
population/sample space, and predators, which are comparatively fewer
in number than prey, are randomly placed on a two-dimensional lattice
with connected ends (i.e. a toroidal grid).
Each predator is completely biased towards one of the objectives, which
forms the quantitative basis for determining the weakest local prey.
New prey are created through mutation.
While the prey remain stationary, the predators move to a random
neighboring location after every generation.
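The connected-end (toroidal) lattice amounts to wrap-around indexing, which can be sketched in C++ as follows (the function name is an illustrative assumption):

```cpp
// Sketch of toroidal grid indexing: stepping off one edge of the
// lattice re-enters from the opposite edge, so every cell has a full
// set of neighbors. Handles negative steps as well.
int wrap(int index, int size) {
    return ((index % size) + size) % size;
}
```

With this helper, the four (or eight) neighbors of cell (i, j) on a rows x cols grid are simply (wrap(i±1, rows), wrap(j±1, cols)).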
9. Other Versions of PP Algorithm
Several modifications of the initial predator-prey algorithm have
appeared in the literature.
Deb [2] suggested an improved version of the algorithm which
involved the association of each predator with a weighted sum of
objectives instead of one particular objective.
Certain new features, namely the 'elite preservation operator', the
'recombination operator' and the 'diversity preservation operator',
were also included.
Toroidal grid for Deb's [1] predator-prey algorithm
10. Other Versions of PP Algorithm
Li [4] suggested a dynamic spatial structure of the predator-prey
population, which involved the movement of both predators and prey
and a changing prey population size.
Some other versions of the algorithm have been presented by
Grimme et al. [5] and Silva et al. [6]. The former uses a modified
recombination and mutation model. The latter, predominantly a
particle swarm optimization algorithm, introduces the concept of
predator-prey interactions in the swarm in order to improve both the
diversity and the rate of convergence.
Toroidal grid for Li's [3] predator-prey algorithm, allowing both
predator and prey movement
11. Drawbacks of Existing PP algorithms
The existing versions of the PP algorithm find it difficult to produce a well-
distributed set of Pareto optimal solutions, especially when dealing with
problems with more than two objectives or a significantly high number of
design variables.
The number of function evaluations necessary for convergence is
significantly higher than for other standard EMO algorithms, as evident from
test results on the previous versions [2,4]. In the majority of practical
applications of optimization, the calculation time for evaluating model
functions dominates. This demands optimization algorithms capable of
producing practically dependable solutions while investing the minimum
number of function evaluations possible.
The versions of the PP algorithm available in the literature do not have the
ability to handle constraints, which form an integral part of most practical
problems.
12. Modified Predator-Prey (MPP) Algorithm
Any general constrained multiobjective problem involving Nf objectives and
m design variables can be defined as follows.
Minimize f_i = f_i(X), i = 1, 2, ..., Nf
subject to
g_i ≤ 0, i = 1, 2, 3, ..., p
h_i = 0, i = p+1, p+2, ..., p+q
p, q ∈ N
where X is the vector of design variables, that is, X = (x_1, x_2, x_3, ..., x_m), x_i ∈ R.
The constraints are added up to form the (Nf+1)th objective in the following
way,
Minimize f_{Nf+1} = Σ_{i=1}^{p} max(g_i, 0) + Σ_{i=p+1}^{p+q} max(|h_i| − ε, 0)
where ε is the tolerance for equality constraints.
It should be noted that in the case of maximization, the corresponding objective
function is multiplied by −1 to convert it into a general minimization problem.
Similarly, a 'greater than or equal to' inequality constraint is converted into a
'less than or equal to' constraint by multiplying with −1.
13. MPP Algorithm Steps
The Modified Predator-Prey algorithm (MPP) presented here involves the
following general steps, executed by the algorithm in each generation
when solving an Nf-objective optimization:
A population of N solutions/preys is initialized using Sobol's [6]
quasi-random sequence generator.
The preys are placed on a dynamically adjustable 2D lattice with
connected ends.
M predators are placed on the same 2D grid such that they occupy
random cell centers, where M is given by
M = (N/20) × Nf
Each predator is associated with a weighted value of the objectives as
follows:
f = Σ_{i=1}^{Nf} w_i f_i
where Nf = number of objectives, w_i = weight associated with the ith
objective function, and f_i = ith objective function.
(An active neighborhood in the 2D lattice)
14. MPP Algorithm Steps
Within each locality containing a predator (an active
locality), the weakest prey, based on its
corresponding 'f' value, is killed.
A child prey is produced by the crossover of the
two strongest local preys, and subsequent mutation
of the crossover child.
However, this child prey qualifies to be accepted
only if it fulfills the following three criteria:
1. The child is stronger than the worst local prey,
2. The child is non-dominated [2] with respect to
the other three local preys, and
3. The child is not within the objective space
hypercube [2] of the other three local preys.
Ten trials are allowed to produce a child that
simultaneously satisfies the three criteria, failing
which the weakest prey is retained.
Upon completion of predator and prey
interactions in each active locality, the predators
are relocated randomly based on a probabilistic
relocation criterion.
15. MPP Algorithm Steps
After each generation, the non-dominated solutions in the prey
population are copied to a secondary set called the ‘elite set’. The elite
set is updated after each generation based on the principles of weak
domination [2].
Certain randomly selected solutions/preys, if found to be dominated,
are replaced on the 2D lattice by randomly selected elite solutions.
The algorithm is terminated based on a specified criterion such as a
maximum allowed number of function evaluations.
16. New and Modified features in MPP
Evolution
Crossover: The blend crossover (BLX-α), initially proposed by
Eshelman and Schaffer [8] for real-coded genetic algorithms (later
improved by Deb [5]), is used in this algorithm. It is defined as follows:
x_i^(1,t+1) = (1 − γ_i) x_i^(1,t) + γ_i x_i^(2,t)
γ_i = (1 + 2α) u_i − α
where x_i^(1,t) and x_i^(2,t) are the parent solutions, x_i^(1,t+1) is the
child solution and u_i is a random number between 0 and 1. BLX-α
facilitates genetic recombination that is adaptive to the existing diversity
in the parent population; a desirable characteristic for Pareto
convergence.
Mutation: This crossover child prey is then subjected to non-uniform
mutation, originally introduced by Michalewicz [9], later modified and
reformulated in MPP as:
y_i^(1,t+1) = x_i^(1,t+1) + τ (x_i^(U) − x_i^(L)) (1 − r_i^((1 − t/tmax)^b)) × β
β = 10^(−t/tmax)
where x_i^(U) and x_i^(L) are the upper and lower limits of the ith
variable, τ takes a Boolean value −1 or 1, each with a probability of 0.5,
r_i is a random number between 0 and 1, t and tmax are the number of
generations already executed and the maximum allowed number of
generations, respectively, while b is a user-defined parameter.
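The two evolution operators can be sketched in C++ as below; the random numbers u_i and r_i are passed in explicitly so the functions are deterministic, and all names and signatures are illustrative rather than the thesis code:

```cpp
// Sketch of the MPP evolution operators for a single design variable.
#include <cmath>

// BLX-alpha crossover: child lies on (and slightly beyond) the segment
// joining the two parents, the overshoot controlled by alpha.
double blx_alpha_crossover(double x1, double x2, double alpha, double u) {
    double gamma = (1.0 + 2.0 * alpha) * u - alpha;
    return (1.0 - gamma) * x1 + gamma * x2;
}

// Non-uniform mutation: the perturbation shrinks as the generation
// count t approaches tmax, both through the exponent b and through beta.
double nonuniform_mutation(double x, double xL, double xU, int tau,
                           double r, double t, double tmax, double b) {
    double beta = std::pow(10.0, -t / tmax);
    double step = 1.0 - std::pow(r, std::pow(1.0 - t / tmax, b));
    return x + tau * (xU - xL) * step * beta;
}
```

With u = 0.5 the crossover returns the parents' midpoint regardless of α, and with r = 1 the mutation leaves the child unchanged; both follow directly from the formulas above.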
17. New and Modified features in MPP
Diversity Preservation
An efficient multi-objective optimization algorithm is expected to promote generation
of new solutions (evolution) that do not closely resemble their parents or other
nearby solutions (in the objective space).
Here, the concept of objective space hypercube is used as a qualifying criterion for
new preys to assure diversity preservation. Each old local prey is considered to be
at the centre of its hypercube, the size of which is dynamically updated with
generations and could be determined by the following equation [10].
ω = 10^(−(2 + t/tmax))
η_i = ω × min(f_i^(new prey), f_i^(old prey))
Here, ω is the window size of the hypercube and ηi is the half side length of the
hypercube corresponding to the ith objective.
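The hypercube qualification test can be sketched in C++ as follows; names are illustrative, and the sketch assumes positive objective values so that η_i is a positive half side length:

```cpp
// Sketch of the objective-space hypercube test: a new prey that lies
// within eta_i of an old prey in every objective is considered too
// close (inside the hypercube) and is rejected for lack of diversity.
#include <vector>
#include <cmath>
#include <algorithm>
#include <cstddef>

bool inside_hypercube(const std::vector<double>& f_new,
                      const std::vector<double>& f_old,
                      double t, double tmax) {
    double omega = std::pow(10.0, -(2.0 + t / tmax)); // shrinks over time
    for (std::size_t i = 0; i < f_new.size(); ++i) {
        double eta = omega * std::min(f_new[i], f_old[i]);
        if (std::fabs(f_new[i] - f_old[i]) > eta)
            return false;  // clearly separated in objective i
    }
    return true;  // within eta in every objective: reject the new prey
}
```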
18. New and Modified features in MPP
Diversity Preservation
An innovative concept of sectional convergence has been introduced [10] to deal
with this possible lack of diversity in the prey population due to the implementation
of the weighted-sum-of-objectives approach. Instead of running the algorithm
throughout with the same initial specified distribution of weights, the weights are
redistributed within a small biased range (<1.0) after a certain number of function
evaluations. For a 2-objective problem, the redistribution is governed by
w_1^i = ((iterp − 1) M + i) / (iterpmax M + 1)
w_2^i = 1 − w_1^i
For an Nf-objective problem with Nf > 2, one objective j is selected in turn in each
primary iteration and assigned a maximum weight wk_max proportional to
iterp/iterpmax (scaled by a factor of 0.75), with the remaining weight 1 − wk_max
shared among the other objectives.
Here iterpmax is the maximum allowed number of primary iterations, i.e. the maximum
number of times redistribution is allowed, and iterp is the present primary iteration. The
weights (w_k^i) associated with the objective functions other than the jth objective are
distributed using Sobol's sequence [7] within this range.
19. New and Modified features in MPP
Elitism
In order to retain the genetic traits of the best solutions it is necessary to
introduce some form of elite preservation mechanism into the algorithm. This,
when judiciously applied, accelerates the rate of convergence to the Pareto
front.
In MPP, a secondary set (elite set) is constructed with the non-dominated
solutions from each generation and maintained at a fixed strength Ne using the
clustering technique designed by Deb [2].
After each generation, certain randomly selected solutions/preys (from the main
population), if found to be dominated, are replaced on the 2D lattice by
randomly selected elite solutions.
This new additional attribute boosts the speed of convergence of this algorithm.
However, the allowed number of such replacements should be carefully chosen
to avoid introducing excessive elitism. Here the total number of allowed
replacements is always kept below N/2.
20. New and Modified features in MPP
Controlled Predator Relocation
A probability-based predator relocation criterion is
introduced, which tries to ensure that each cell
is visited during the course of generations. The
relocation criterion is defined as follows:
if cellcount(i,j) > cellcountavg + 1, locate = no
else, locate = yes
Here, cellcount(i,j) is the number of times
predators have visited the cell in previous
generations, cellcountavg is the average of all
cellcount(i,j), and (i,j) is the randomly generated
location on the 2D lattice.
This new feature ensures that every member of
the population, irrespective of its location in the
2D lattice, gets a fair opportunity for improvement.
(20 x 5 two-dimensional lattice)
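The relocation rule can be sketched in C++, assuming the visit counts are kept in a 2D array (names are illustrative):

```cpp
// Sketch of the probabilistic predator relocation criterion: a randomly
// drawn cell (i, j) is accepted only if it has not been visited more than
// one count above the average, so seldom-visited cells eventually get a
// predator.
#include <vector>

bool accept_relocation(const std::vector<std::vector<int>>& cellcount,
                       int i, int j) {
    long total = 0, cells = 0;
    for (const auto& row : cellcount)
        for (int c : row) { total += c; ++cells; }
    double avg = static_cast<double>(total) / cells;
    return cellcount[i][j] <= avg + 1.0;   // locate = yes
}
```

On rejection (locate = no), the algorithm would simply draw another random (i, j) and test again.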
21. New and Modified features in MPP
Constraint Handling and Dominance
The concept of weak dominance [2] is applied here, according to which, in the case of an
unconstrained optimization problem, solution A is said to weakly dominate solution B
if solution A is better than solution B in at least one objective and no worse in all other
objectives. However, in the case of constrained optimization, the theory of
dominance is altered to give preference to feasible solutions or relatively less
infeasible solutions. The modified definition of dominance is as in NSGA-II [11]:
Solution A is said to constraint-dominate solution B if:
Solution A is feasible and solution B is not.
Solutions A and B are both infeasible, while solution A has a smaller net constraint
violation than solution B (considering function minimization).
Solutions A and B are both feasible, while solution A weakly dominates solution B.
Due to the absence of any penalty function method, the normal objectives and
the net constraint violation objective get similar quantitative importance. This,
together with the constraint-dominance criterion, favors feasible solutions, but also
helps retain the genetic traits of infeasible solutions with substantially better objective
values.
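The constraint-dominance test above can be sketched in C++ (names are illustrative; weak dominance here means no worse in every objective and strictly better in at least one):

```cpp
// Sketch of the NSGA-II style constraint-dominance test. viol is the net
// constraint violation (0 means feasible); f holds the objective values.
// Returns true if solution A constraint-dominates solution B.
#include <vector>
#include <cstddef>

bool constraint_dominates(const std::vector<double>& fA, double violA,
                          const std::vector<double>& fB, double violB) {
    bool feasA = (violA == 0.0), feasB = (violB == 0.0);
    if (feasA && !feasB) return true;             // feasible beats infeasible
    if (!feasA && feasB) return false;
    if (!feasA && !feasB) return violA < violB;   // less infeasible wins
    // both feasible: fall back to weak dominance on the objectives
    bool better_somewhere = false;
    for (std::size_t i = 0; i < fA.size(); ++i) {
        if (fA[i] > fB[i]) return false;          // worse in some objective
        if (fA[i] < fB[i]) better_somewhere = true;
    }
    return better_somewhere;
}
```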
22. Numerical Experiments
MPP was implemented using the C++ programming language.
The objective functions were evaluated by the corresponding executable files. The
C++ code simulating MPP is known as 'mpp_cnstrnt.cpp'.
It compiles and runs successfully on both Windows and Linux workstations,
using Microsoft Visual C++ .NET on the former and KDevelop 3.1.1 on the latter.
Unconstrained 2-Objective Test Cases:
The first six test cases analyzed are taken from the multi-objective optimization
comparison by Zitzler et al. [12], namely the ZDT test cases.
Two other popular test cases with known analytical solutions for the Pareto
front, which are the Fonseca and Fleming multiobjective problem no. 2 [13] and
the Coello multiobjective problem [1], have also been used.
These eight test cases involve two-objective optimizations where both
objectives are to be minimized.
23. Details of the unconstrained 2-objective optimization test cases
24. Details of the unconstrained 2-objective optimization test cases
Parameter Value
Population size (# preys) 100
# Predators 10
Elite strength 40
Crossover probability 1.0
Mutation probability 0.05
General parameters defining MPP runs
27. Animation of Pareto Progression
Global Convergence for ZDT 3 Sectional Convergence for ZDT 3
28. Performance Measures
Two performance measures for
evaluating multiobjective optimization
algorithms have been developed by
Deb et al. [11].
The first performance metric, the
gamma (γ) parameter or distance
metric is a measure of the extent of
convergence. The minimum of the
Euclidean distances of each
computed non-dominated solution
from H uniformly distributed points
on the ideal Pareto front (H=500) is
calculated, the average of which
gives the value of the gamma
parameter.
29. Performance Measures
The other performance metric, namely
the delta (∆) parameter or diversity
metric, gives a measure of the spread
of solutions along the computed
Pareto front. It is calculated as follows:
∆ = (d_f + d_l + Σ_{i=1}^{N−1} |d_i − d̄|) / (d_f + d_l + (N − 1) d̄)
where d_f and d_l are the respective Euclidean distances between the
two extreme computed solutions and the corresponding extremities of
the analytical Pareto front, d_i is the Euclidean distance between
consecutive solutions, and d̄ is the mean of all d_i (i = 1, 2, 3, ..., N − 1).
A perfectly uniform distribution of solutions along the computed Pareto
front, with existence of exact extreme solutions, will give a delta value
of zero.
However, in spite of accurate convergence, the gamma parameter need
not be zero, due to a possible lack of coincidence between the computed
solutions and the uniformly distributed analytical Pareto points.
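Given the extreme-gap distances d_f, d_l and the consecutive distances d_i, the delta metric can be sketched in C++ as follows (computing these distances from an actual Pareto set is omitted; names are illustrative):

```cpp
// Sketch of the delta (diversity) metric:
// delta = (df + dl + sum_i |d_i - dbar|) / (df + dl + (N-1) * dbar)
// where d holds the N-1 consecutive-solution distances.
#include <vector>
#include <cmath>

double delta_metric(double df, double dl, const std::vector<double>& d) {
    double mean = 0.0;
    for (double di : d) mean += di;
    mean /= d.size();
    double num = df + dl, den = df + dl + d.size() * mean;
    for (double di : d) num += std::fabs(di - mean);
    return num / den;
}
```

With exact extreme solutions (df = dl = 0) and perfectly uniform spacing, every |d_i − d̄| vanishes and the metric is zero, as the slide states.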
30. Comparison of Performance Measures
Table 3 shows the values of γ and ∆ calculated for the eight cases studied here,
and also a comparison of some of them with those calculated by Deb et al. [11]
for NSGA-II. The same conditions have been used, i.e. a population of 100
solutions subjected to 25,000 function evaluations, for the six ZDTs. However, the
Fonseca-Fleming and the Coello test cases involve 2,000 function evaluations
and hence have not been compared with the corresponding data of
Deb et al. [11], all of which are with respect to 25,000 function evaluations.
Algorithm          NSGA-II (real)     NSGA-II (binary)    Predator-Prey (MPP)
                   γ        ∆         γ        ∆          γ        ∆
ZDT 1              0.0335   0.39      0.0009   0.46       0.0447   0.59
ZDT 2              0.0724   0.43      0.0009   0.44       0.1181   0.78
ZDT 3              0.1145   0.73      0.0434   0.58       0.0198   0.73
ZDT 4              0.5130   0.70      3.2276   0.48       0.6537   1.48
ZDT 5              NA       NA        NA       NA         0.4282   1.49
ZDT 6              0.2966   0.67      7.8068   0.64       0.2334   0.71
Fonseca-Fleming    NA       NA        NA       NA         0.0082   0.42
Coello             NA       NA        NA       NA         0.0498   1.17
Table 3: Performance Parameters
31. Numerical Experiments
Unconstrained 3-Objective Test Cases:
The Pareto front is a planar curve in two-objective problems,
which becomes a surface in three-objective problems, and
then a hypersurface of increasing dimensionality with every
additional problem objective.
Predator-Prey is unique in utilizing the principles of both selection
procedures, namely the weighted sum technique and the principle
of dominance.
However, the performance gain of such a characteristic can be
appreciated only when the algorithm is tested on optimization
problems with more than two objectives.
Therefore, MPP was tested on two standard scalable 3-objective
minimization problems developed by Deb et al. [14].
32. Parameter Value
Population size (# preys) 100
# Predators 10
Elite strength 40
Crossover probability 1.0
Mutation probability 0.05
General parameters defining MPP runs
Details of the unconstrained 3-objective optimization test cases
33. Test Case Results
(Computed Pareto fronts for DTLZ1, shown from two views, at iterp = 0 and iterp = 3)
34. Test Case Results
(Computed Pareto fronts for DTLZ2, shown from two views, at iterp = 0 and iterp = 3)
35. Numerical Experiments
Constrained 2-Objective Test Cases:
To examine the constraint handling capability of MPP, it was tested on
three well-known constrained 2-objective test cases studied by Deb et al.
[11]. Two standard test cases with known analytical solutions, namely the Binh
multi-objective optimization problem no. 2 [15] and the Osyczka
multiobjective optimization problem no. 2 [16], have also been tested.
General parameters defining MPP runs for test cases studied by Deb et al.:
Population size (# preys): 100; # Predators: 10; Elite strength: 40;
Crossover probability: 1.0; Mutation probability: 0.05
General parameters defining MPP runs for the Binh and Osyczka test cases:
Population size (# preys): 100; # Predators: 10; Elite strength: 100;
Crossover probability: 1.0; Mutation probability: 0.05
39. Test Case Results
(Plots: global convergence of MPP for the TNK and OSYCZKA test cases,
showing f2 vs. f1 for the initial populations (iteration = 1), intermediate
iterations, and the final populations (iterations 196 and 187), with the
OSYCZKA results shown at iterp = 0 and iterp = 6)
40. Single Objective Modified Predator-Prey
(SOMPP) Algorithm
The SOMPP algorithm has been derived from the parent algorithm MPP developed by
Chowdhury et al. [10]. Any unconstrained single-objective optimization problem is
treated as a two-objective optimization problem, where the second objective is
just a clone of the first one. In the case of constrained problems, all the equality
and inequality constraints are collated together to form a third objective and the
problem is solved as a three-objective optimization problem.
Any general constrained single-objective test problem is reformulated as follows:
Minimize f_1 = f(X)
Minimize f_2 = f_1
Minimize f_3 = Σ_{i=1}^{p} max(g_i, 0) + Σ_{i=p+1}^{p+q} max(|h_i| − ε, 0)
subject to
g_i ≤ 0, i = 1, 2, 3, ..., p
h_i = 0, i = p+1, p+2, ..., p+q
X = (x_1, x_2, x_3, ..., x_m), x_i ∈ R
p, q ∈ N
where ε is the tolerance for equality constraints.
41. Special features in SOMPP
Mutation:
Non-uniform mutation [5], as defined below, was used in this algorithm:
y_i^(1,t+1) = x_i^(1,t+1) + τ (x_i^(U) − x_i^(L)) (1 − r_i^((1 − t/tmax)^b)) × β
β = 10^(−(1 + K t/tmax))
Here, 10^(−(1+K)) is the terminal order of magnitude of the extent of mutation.
Objective Space Hypercube Size:
Each local prey is considered to be at the centre of its hypercube, the size of
which is dynamically updated with generations and is determined by the
following novel equation:
ω = 10^(−(2 + L t/tmax))
η_i = ω × min(f_i^(new prey), f_i^(old prey))
Here, 10^(−(2+L)) is the terminal order of magnitude of the relative window size.
42. Special features in SOMPP
Constraint Handling and Dominance:
The basis for determining relative dominance between two solutions (solutions i
and j) is the same as used in NSGA-II [11], which is as follows. Solution i is said
to dominate solution j if:
Both solutions are infeasible, and solution i has a lower value of constraint
violation than solution j.
Solution i is feasible and solution j is infeasible.
Both solutions are feasible (or the problem is unconstrained) and solution i has a
lower objective value than solution j.
This dominance criterion puts feasibility at a higher priority than the objective
quality of the solution.
Convergence or Termination Criteria:
Maximum allowed number of function evaluations (fcallmax) has been
exhausted.
The best objective value searched by the algorithm has not changed during
the last 100 generations.
43. Different Versions of SOMPP
Version 2 - Rank Based Predator Relocation: Localities with relatively
stronger prey were designed to have a higher affinity of attracting predators.
The probability cellprobij of locating a predator in a particular locality (co-
ordinates i,j generated by a random number generator) is determined as
follows.
cellrank_{i,j} = min(rank_{i,j}, rank_{i+1,j}, rank_{i+1,j+1}, rank_{i,j+1})
cellprob_{i,j} = (N − cellrank_{i,j}) / N
Here, cellrankij is the rank of the cell/locality (i,j) and rankij is the rank of the
prey located at the grid point (i,j), ranking being determined on the basis of
dominance.
This feature speeds up convergence, but limits the domain of search in
certain cases.
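The rank-based relocation probability can be sketched in C++, taking the four corner prey ranks directly as arguments (names are illustrative):

```cpp
// Sketch of Version 2's rank-based relocation: a cell's rank is the best
// (minimum) rank among its four corner preys, and stronger cells (lower
// rank) yield a higher probability of attracting a predator.
#include <algorithm>

double cell_probability(int r00, int r10, int r11, int r01, int N) {
    int cellrank = std::min(std::min(r00, r10), std::min(r11, r01));
    return static_cast<double>(N - cellrank) / N;
}
```

A drawn cell would then be accepted when a uniform random number falls below this probability, biasing predators toward localities with strong prey.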
44. Different Versions of SOMPP
Version 3 - Nine Prey Neighbourhood:
Instead of the predator being located at the
center of a four-vertex quadrilateral cell, the
predator is now located on the same grid
nodes as prey and allowed to have access
to all 8 preys around it as well as the prey
at that very grid location.
This increases the neighbourhood scope of
the predator from four to nine, but instills a
tendency to converge to a local minimum.
Version 4 - Global Elitist Crossover:
Here, the worst prey in each active
neighbourhood is replaced by the
crossover of two prey selected randomly
from the strongest ‘frac’ fraction of the
entire population. Strength of prey in this
case is determined on the basis of
dominance.
45. Different Versions of SOMPP
Version 5 – Epidemical Operator: In this version of SOMPP, the concepts
of nine-prey active neighborhoods and rank based relocation of predators
are combined with the concept of an epidemic genetic operator [17].
However, the rank for each cell is calculated as the average of the ranks of
all the local prey in that cell. Also if the objective value of the strongest prey
does not change over a certain number of consecutive iterations, a part of
the prey population is discarded and replaced with a new population generated
using Sobol's [7] quasi-random sequence generator, as shown below:
if Nchng>10
Rank prey population by dominance.
Discard weakest fw fraction of the prey population.
Set variable limits suitable to the order of magnitude of the remaining prey
and generate fw x N new prey to replace the discarded ones.
Here, Nchng is the consecutive number of generations without any change
in the objective value of the strongest prey by a relative tolerance of 10^−3.
46. Different Versions of SOMPP
Version 6 – Version 5 with dominance-based selection in active
neighborhoods: Here, the relative strength of the prey in an active locality is
determined on the basis of the dominance criterion instead of the weighted f
value. In the case of unconstrained problems, this has no additional influence
because the dominance is merely based on the actual objective value.
However, in case of constrained problems, this modification helps significantly
in directing solutions into the feasible region first, before the process of
minimization takes over.
47. Numerical Experiments
All six versions of SOMPP are implemented using the C++ programming language. The
objective functions are evaluated by the corresponding external executable files.
The C++ code simulating SOMPP is called 'PPsingle_cnstrnt.cpp'. It compiles and
runs successfully on both Windows and Linux workstations, using Microsoft Visual
C++ .NET on the former and KDevelop 3.1.1 on the latter.
Unconstrained Single-Objective Test Functions:
The basic SOMPP (Version 1) and the final SOMPP (Version 6) were both tested
on ten well-known unconstrained single-objective test problems [18]. An additional
termination criterion, based on the relative error of the best solution falling below
10^−10, was also used, where
relative error = |Mincomp − Minanal| / |Minanal|, if Minanal ≠ 0
relative error = |Mincomp − Minanal|, if Minanal = 0
Parameter: Value
Population size (# preys): 10×m
Crossover probability: 1.0
Mutation probability: 0.25
Max. allowed no. of function evaluations: 10000
K (mutation): 6
L (hypercube): 10
fw: 0.9
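The relative-error termination measure can be sketched in C++ (names are illustrative; the absolute value is assumed on both branches):

```cpp
// Sketch of the relative-error measure: relative when the analytical
// minimum is nonzero, absolute when it is zero.
#include <cmath>

double relative_error(double min_computed, double min_analytical) {
    if (min_analytical != 0.0)
        return std::fabs((min_computed - min_analytical) / min_analytical);
    return std::fabs(min_computed - min_analytical);
}
```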
48. Test Case Results
(Plots: relative error vs. number of function evaluations for the Ackley's Path,
De Jong 1, Easom, Goldstein-Price, Michalewicz, and Rastrigin functions,
using SOMPP Version 1 and SOMPP Version 6)
49. Test Case Results
(Plots: relative error vs. number of function evaluations for the Griewank,
Miele Cantrell, Rosenbrock, and Schwefel functions, using SOMPP Version 1
and SOMPP Version 6)
50. Animation of Convergence
Convergence of solutions for the Miele Cantrell
single-objective function
51. Numerical Experiments
Constrained/Unconstrained Single Objective Test Problems by Hock &
Schittkowski: SOMPP Version-1 was also tested on 293 constrained
and unconstrained single-objective test cases with known analytic solutions.
These 293 test cases were derived from the collection of 395
linear/nonlinear test cases formulated by Hock and Schittkowski [19] and
Schittkowski [20]. The number of variables involved in these 293 cases
ranges from 2 to 100, as shown below. The numbers of inequality and equality
constraints range from 0 to 38 and 0 to 6, respectively.
Parameter: Value
Population size (# preys): 10×m
Crossover probability: 1.0
Mutation probability: 0.1
Max. allowed no. of function evaluations: 20000
K (mutation): 2
L (hypercube): 4
(Plot: number of variables for each of the 293 test problems)
53. (Plots: number of function evaluations vs. test problem run, in four panels)
# function evaluations made for each of the 293 test problems using SOMPP Version 1
54. Numerical Experiments
13 Single Objective Test Cases from Hock-Schittkowski:
Running all 293 test problems in series is extremely
time-consuming computationally. Consequently, a set of 13
test problems was chosen from among these 293 cases.
These 13 test cases involve numbers of variables ranging from
2 to 50 (with or without specified limits), numbers of equality
constraints ranging from 0 to 6 and numbers of inequality
constraints ranging from 0 to 38, thereby exhibiting varying
degrees and natures of complexity.
Parameter Value
Population size (# preys) 10xm
Crossover probability 1.0
Mutation probability 0.25
Max. allowed no. of 20000
function evaluations
K (mutation) 3
L (hypercube) 6
55. Test Case Results
(Plots: relative error, total constraint violation, and number of function
evaluations vs. test problem run for the 13 selected test problems,
comparing SOMPP Versions 2 through 6)
58. [Figure: Total Constraint Violation (0–1000) for each of the 293 test problems using SOMPP Version 6, shown in four panels vs. TP runs]
59. [Figure: number of function evaluations (5000–20000) made for each of the 293 test problems using SOMPP Version 6, shown in four panels vs. TP runs]
60. The improved performance of SOMPP Version-6 becomes more evident
from the following histogram.
Here, frequency refers to the number of test runs that converged to that
particular order of magnitude of relative error.
It is seen from figure 44 that, in the case of SOMPP Version-6, noticeably more
test cases have converged to relative errors of orders of magnitude less
than 1.0 (higher histogram bars for log(relative error) < 0).
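The binning described above can be sketched as follows. This is an illustrative reconstruction of how such a histogram's frequencies could be computed, not code from the thesis; the function name and bucket labels are assumptions.

```python
import math

def error_histogram(relative_errors):
    """Count test runs per order of magnitude of relative error.

    Each bucket is floor(log10(error)), so runs landing in buckets
    below 0 converged to a relative error less than 1.0, matching the
    histogram interpretation on this slide.
    """
    counts = {}
    for err in relative_errors:
        if err <= 0:
            bucket = "exact"  # zero error has no finite logarithm
        else:
            bucket = math.floor(math.log10(err))
        counts[bucket] = counts.get(bucket, 0) + 1
    return counts

# Runs with negative buckets converged below a relative error of 1.0.
hist = error_histogram([0.003, 0.05, 0.7, 2.0, 15.0])
```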
61. Conclusion
1. The modified predator-prey (MPP) algorithm provides a means of searching for
optimal solutions that is simple to execute, produces reliable solutions,
and is computationally inexpensive.
2. Pertinent analysis results show that this algorithm is competent in producing
dependable optimal solutions and, in certain cases, even outperforms the
most well-known algorithms presently available in the literature.
3. The performance of the constraint handling technique in driving solutions into
the feasible domain at the expense of a reasonable number of function evaluations
is also appreciable.
4. MPP employs the concept of a weighted sum of objectives without any
normalization of the objectives, which leads to a relatively poor distribution of
Pareto solutions in certain complex multi-objective cases. Nevertheless, the
inclusion of the concept of sectional convergence, using biased weighting of
objectives and careful hypercube sizing, ensures a desirable distribution of the
Pareto solutions even for these poorly behaved cases.
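For reference, the classical weighted-sum scalarization referred to in point 4 can be written as follows; this is the standard textbook form in generic notation, not necessarily the exact formulation used in the thesis:

```latex
% Weighted-sum scalarization of m objectives: minimizing F for a fixed
% weight vector yields one Pareto point; biasing the weights toward one
% objective steers the search to that section of the Pareto front.
F(\mathbf{x}) = \sum_{k=1}^{m} w_k \, f_k(\mathbf{x}),
\qquad w_k \ge 0, \qquad \sum_{k=1}^{m} w_k = 1
```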
62. Conclusion
5. Single-objective optimization problems posed without explicit decision variable
limits (i.e., unbounded problems) are likely to diverge. This issue was
addressed by the relatively nascent concept of the epidemic operator [28].
6. Equality constraints pose severe threats to convergence, especially in
problems with a large number of design variables, because they create an
extremely constricted feasible region in a high-dimensional search domain.
However, SOMPP handles such problems with acceptable
accuracy, without the application of a computationally expensive penalty
function method.
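One widely used penalty-free alternative is feasibility-dominance comparison, sketched below as an illustration of how constrained candidates can be ranked without a penalty function. This is a generic scheme (Deb-style feasibility rules), not necessarily the exact constraint handling used in SOMPP; the function names and the equality tolerance are assumptions.

```python
def total_violation(g_values, h_values, tol=1e-4):
    """Aggregate violation for constraints g_i(x) <= 0 and |h_j(x)| <= tol."""
    v = sum(max(0.0, g) for g in g_values)
    v += sum(max(0.0, abs(h) - tol) for h in h_values)
    return v

def better(a, b):
    """Feasibility-dominance comparison of two candidates.

    Each candidate is a tuple (objective, g_values, h_values).
    A feasible candidate beats an infeasible one; two infeasible
    candidates are ranked by total violation; two feasible
    candidates are ranked by objective value.
    """
    va = total_violation(a[1], a[2])
    vb = total_violation(b[1], b[2])
    if va == 0.0 and vb == 0.0:
        return a if a[0] <= b[0] else b
    if va == 0.0 or vb == 0.0:
        return a if va == 0.0 else b
    return a if va <= vb else b
```

Because infeasible candidates lose to feasible ones regardless of objective value, selection pressure alone drives the population into the feasible region, with no penalty coefficients to tune.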
7. The MPP algorithm presents a concordant application of the basic traits of
evolutionary algorithms, the classical weighted sum approach, and certain ingenious
techniques, such as sectional convergence, the hypercube operator, and the epidemic
operator, to single- and multi-objective problems (constrained and
unconstrained).
8. A combination of such distinct features is rare in the optimization literature and
provides a foundation for constructing robust composite optimization algorithms
with features adaptive to both the problem and the progress of the algorithm
through the function space towards the Pareto front.