Recent Advances in Real-Parameter
Evolutionary Algorithms
             Presenters: P. N. Suganthan & Qin Kai
             School of Electrical and Electronic Engineering
             Nanyang Technological University, Singapore

             Some Software Resources Available from:
             http://www.ntu.edu.sg/home/epnsugan
             SEAL’06 October 2006




RP-EAs & Related Issues Covered
 Benchmark Test Functions
 Methods Modeled After Conventional GAs
 CMA-ES / Covariance Matrix Adapted Evolution Strategy
 PSO / Particle Swarm Optimization
 DE / Differential Evolution
 EDA / Estimation of Distribution Algorithms
 CEC’05, CEC’06 benchmarking

 Coverage is biased in favor of our own research and the results of
 the CEC-05/06 competitions, while overlap with other
 SEAL tutorial presentations is kept small.

                           -2-
Benchmark Test Functions

             Resources available from
     http://www.ntu.edu.sg/home/epnsugan
             (limited to our own work)


             From Prof Xin Yao’s group
  http://www.cs.bham.ac.uk/research/projects/ecb/
            Includes diverse problems.


                        -3-




Why do we require benchmark problems?

  Why do we need test functions?
    To evaluate a novel optimization algorithm’s
    property on different types of landscapes
    Compare different optimization algorithms
  Types of benchmarks
    Bound constrained problems
    Constrained problems
    Single / Multi-objective problems
    Static / Dynamic optimization problems
    Multimodal problems (niching, crowding, etc.)

                        -4-
Shortcomings in Bound-Constrained Benchmarks
   Some properties of benchmark functions may make them unrealistic
   or may be exploited by some algorithms:
       Global optimum having the same parameter values for different
       variables / dimensions
       Global optimum at the origin
       Global optimum lying in the center of the search range
       Global optimum on the bound
       Local optima lying along the coordinate axes
       no linkage among the variables / dimensions or the same linkages
       over the whole search range
       Repetitive landscape structure over the entire space

   Do real-world problems possess these properties?
   Liang et al. 2006c (Natural Computation) has more details.

                                 -5-




How to Solve?
     Shift the global optimum to a random position so that the
     global optimum has different parameter values in different
     dimensions

     Rotate the functions as below:
                      F(x) = f(R * x)
       where R is an orthogonal rotation matrix

     Use different kinds of benchmark functions, different rotation
     matrices to compose a single test problem.

     Mix different properties of different basic test functions
     together to destroy repetitive structures and to construct
     novel Composition Functions
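The two fixes above can be sketched in a few lines. This is an illustrative 2-D example (the function name, shift point, and rotation angle are arbitrary choices, not part of any benchmark suite): the optimum is moved to a random position and the coordinate system is rotated by an orthogonal matrix R.

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def shift_and_rotate(f, shift, theta):
    """Wrap a 2-D base function f as F(x) = f(R*(x - shift)): the
    optimum moves to `shift` and the axes are rotated by the
    orthogonal matrix R (here a plane rotation by angle theta)."""
    c, s = math.cos(theta), math.sin(theta)
    def F(x):
        z0, z1 = x[0] - shift[0], x[1] - shift[1]     # translate the optimum
        return f([c * z0 - s * z1, s * z0 + c * z1])  # evaluate f(R*z)
    return F

F = shift_and_rotate(sphere, shift=[1.5, -0.7], theta=0.9)
```

(The sphere function is rotation-invariant, so the rotation only matters for separable base functions; it is kept here to show the construction.)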


                                 -6-
Novel Composition Test Functions
   Compose the standard benchmark functions to construct a
   more challenging function with a randomly located global
   optimum and several randomly located deep local optima
   with different linkage properties over the search space.

   Gaussian functions are used to combine these benchmark
   functions and to blur individual functions’ structures mainly
   around the transition regions.

   More details in Liang et al. 2005.
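A minimal sketch of such a Gaussian-weighted composition, in the spirit of the construction just described (the weighting formula and constants here are illustrative, not the exact published definition): each shifted basic function dominates near its own optimum, and the weights blur the individual structures around the transition regions.

```python
import math

def sphere(x):
    return sum(v * v for v in x)

def griewank(x):
    s = sum(v * v for v in x) / 4000.0
    p = 1.0
    for j, v in enumerate(x):
        p *= math.cos(v / math.sqrt(j + 1))
    return s - p + 1.0

def compose(funcs, optima, sigma=1.0):
    """Gaussian-weighted composition of shifted basic functions."""
    def F(x):
        d2 = [sum((xi - oi) ** 2 for xi, oi in zip(x, o)) for o in optima]
        w = [math.exp(-d / (2.0 * len(x) * sigma ** 2)) for d in d2]
        tot = sum(w)
        return sum((wi / tot) * f([xi - oi for xi, oi in zip(x, o)])
                   for wi, f, o in zip(w, funcs, optima))
    return F

F = compose([sphere, griewank], optima=[[0.0, 0.0], [3.0, 3.0]])
```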




                              -7-




Novel Composition Test Functions






                              -8-
Novel Composition Test Functions




                    -9-




Novel Composition Test Functions




                   -10-
A couple of Examples
     Many composition functions are available from our homepage




Composition Function 1 (F1):          Composition Function 2 (F2):
Made of Sphere Functions              Made of Griewank’s Functions

Similar analysis is needed for other benchmarks too
                               -11-




           Methods Modeled After
             Conventional GAs
           Some resources available from
            http://www.iitk.ac.in/kangal/




                               -12-
Major Steps in Standard GAs

   Population initialization
   Fitness evaluation
   Selection
   Crossover (or recombination)
   Mutation
   Replacement




                          -13-




Mean-Centric Vs Parent-Centric Recombination


     Individual recombination
     preserves the mean of parents
     (mean-centric) – works well if
     the population is randomly
     initialized within the search
     range and the global optimum
     is near the centre of the
     search range.
     Individual recombination
     biases offspring towards a
     particular parent, but with
     equal probability to each
     parent (parent-centric)

                          -14-
Mean-Centric Recombination [Deb-01]
      SPX : Simplex crossover (Tsutsui et al. 1999)
      UNDX : Unimodal Normally distributed crossover
      (Ono et al. 1997, Kita 1998)
      BLX : Blend crossover (Eshelman et al 1993)
      Arithmetic crossover (Michalewicz et al 1991)
      Please refer to [Deb-01] for details.
Parent-Centric Recombination
      SBX (Simulated Binary crossover) : variable-wise
      Vector-wise parent-centric recombination (PCX)

      And many more …

                                  -15-




Minimal Generation Gap (MGG) Model
 1.   Select μ parents randomly
 2.   Generate λ offspring using a recombination scheme
 3.   Choose two parents at random from the population
       One is replaced with the best of the λ offspring and the other with a
       solution chosen by roulette-wheel selection from the combined set of
       the λ offspring and the two parents

      Steady-state and elitist GA
      Typical values to use: population N = 300, λ=200,
      μ=3 (UNDX) and μ=n+1 (SPX) (n: dimensions)

      Satoh, H, Yamamura, M. and Kobayashi, S.: Minimal generation
      gap model for GAs considering both exploration and exploitation.
      Proc. of the IIZUKA 96, pp. 494-497 (1997).


                                  -16-
Generalized Generation Gap (G3) Model
 1.   Select the best parent and μ-1 other parents randomly
 2.   Generate λ offspring using a recombination scheme
 3.   Choose two parents at random from the population
 4.   Form a combination of two parents and λ offspring, choose best
      two solutions and replace the chosen two parents
      Parameters to tune: λ, μ and N
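The G3 steps above can be sketched as follows. This is a minimal sketch for minimization; the `recombine` function is a stand-in for a real operator such as PCX or UNDX (here simply the parent mean plus Gaussian noise), and all constants are illustrative.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def recombine(parents):
    """Stand-in for PCX/UNDX: child = mean of parents + small noise."""
    D = len(parents[0])
    mean = [sum(p[d] for p in parents) / len(parents) for d in range(D)]
    return [m + random.gauss(0.0, 0.1) for m in mean]

def g3_step(pop, f, lam=4, mu=3):
    """One G3 generation. pop is a list of (vector, f-value) pairs."""
    # Step 1: the best parent plus mu-1 random parents
    best = min(range(len(pop)), key=lambda i: pop[i][1])
    others = random.sample([i for i in range(len(pop)) if i != best], mu - 1)
    parents = [pop[i][0] for i in [best] + others]
    # Step 2: lam offspring via recombination
    offspring = [(c, f(c)) for c in (recombine(parents) for _ in range(lam))]
    # Steps 3-4: two random slots get the best two of {2 parents, lam offspring}
    r1, r2 = random.sample(range(len(pop)), 2)
    pool = sorted([pop[r1], pop[r2]] + offspring, key=lambda t: t[1])
    pop[r1], pop[r2] = pool[0], pool[1]

random.seed(1)
pop = [(x, sphere(x)) for x in
       ([random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20))]
best0 = min(fv for _, fv in pop)
for _ in range(300):
    g3_step(pop, sphere)
best1 = min(fv for _, fv in pop)
```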




                                -17-




Mutation
  In RP-EAs, the main difference between crossover and
  mutation may be the number of parents used for the
  operation: mutation uses just one, while crossover uses
  two or more (excluding DE’s terminology).
  There are several mutation operators developed for real
  parameters
     Random mutation : the same as re-initialization of a variable
     Gaussian mutation: Xnew = Xold + N(0, σ)
     Cauchy mutation with EP (Yao et al. 1999)
     Lévy mutation / adaptive Lévy mutation with EP (Lee & Yao,
     2004)
  Making use of these RP-crossover (recombination), RP-mutation
  operators, RP-EAs can be developed in a manner similar to the
  Binary GA.
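The listed mutation operators can be sketched directly (a minimal sketch; all step sizes are illustrative, and the Cauchy deviate is sampled via the inverse CDF):

```python
import math
import random

def gaussian_mutation(x, sigma=0.1):
    """Xnew = Xold + N(0, sigma), applied per variable."""
    return [v + random.gauss(0.0, sigma) for v in x]

def cauchy_mutation(x, scale=0.1):
    """Cauchy deviates have heavier tails than Gaussian ones,
    allowing occasional long jumps."""
    return [v + scale * math.tan(math.pi * (random.random() - 0.5)) for v in x]

def random_mutation(x, lo, hi):
    """Re-initialize one randomly chosen variable within its bounds."""
    y = list(x)
    y[random.randrange(len(y))] = random.uniform(lo, hi)
    return y

random.seed(0)
g = gaussian_mutation([0.0] * 4)
c = cauchy_mutation([0.0] * 4)
r = random_mutation([0.0] * 4, -1.0, 1.0)
```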
                                -18-
Covariance Matrix Adapted Evolution
              Strategy CMAES

      By Nikolaus Hansen & collaborators

        Extensive Tutorials / Codes available from
       http://www.bionik.tu-berlin.de/user/niko/cmaesintro.html
http://www.bionik.tu-berlin.de/user/niko/cmaes_inmatlab.html


                                  -19-




     Basic Evolution Strategy [BFM-02]

     1. Randomly generate an initial population of M solutions and compute
        the fitness values of these M solutions.
     2. Use all vectors as parents to create nb offspring vectors by
        Gaussian mutation.
     3. Calculate the fitness of the nb vectors, and prune the population to
        the M fittest vectors.
     4. If the stopping condition is satisfied, go to the next step; otherwise
        go to Step 2.
     5. Choose the fittest member of the last generation’s population as the
        optimal solution.




                                  -20-
Covariance Matrix Adapted Evolution Strategy CMAES

    The basic ES generates solutions around the parents
    CMAES keeps track of the evolution path.
    It updates mean location of the population, covariance
    matrix and the standard deviation of the sampling normal
    distribution after every generation.
    The solutions are ranked, which is not a straightforward
    step in multi-objective / constrained problems.
    Ranked solutions contribute according to their ranks to the
    updating of the mean position, covariance matrix, and the
    step-size parameter.
    Covariance analysis reveals the interactions / couplings
    among the parameters.
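The full CMA-ES update is beyond a slide, but the rank-weighted mean update it builds on can be sketched. This heavily simplified version keeps the covariance fixed at the identity and omits the evolution-path, covariance, and step-size adaptations that define the real algorithm; the weights and population sizes are the usual log-decreasing defaults.

```python
import math
import random

def es_step(mean, sigma, f, lam=12):
    """One generation of a rank-weighted (mu/mu_w, lambda) ES with
    identity covariance. CMA-ES additionally uses the same ranking to
    adapt the covariance matrix C and the step size sigma."""
    mu = lam // 2
    # log-decreasing recombination weights for the mu best samples
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    s = sum(w)
    w = [wi / s for wi in w]
    # sample lam solutions around the mean and rank them by fitness
    pop = sorted(([m + sigma * random.gauss(0, 1) for m in mean]
                  for _ in range(lam)), key=f)
    # weighted recombination of the mu best becomes the new mean
    return [sum(w[i] * pop[i][d] for i in range(mu))
            for d in range(len(mean))]

random.seed(0)
m = [3.0, 3.0]
for _ in range(100):
    m = es_step(m, 0.3, lambda x: sum(v * v for v in x))
```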

                             -21-




 CMAES Iterative Equations [Hansen 2006]




                             -22-
CMAES [HO-04]

   Performs well on rotated / linked problems with bound
   constraints.
   It learns global linkages between parameters.
   The cumulation operation integrates a degree of
   forgetting.
   The CMAES implementation is self-adaptive.
   CMAES is not highly effective in solving constrained
   problems, multi-objective problems, etc.
   CMAES is not as simple as PSO / DE.




                           -23-




Outline of PSO
  PSO
  PSO variants
  CLPSO
  DMS-L-PSO
  Constrained Optimization
  Constraint-Handling Techniques
  DMS-C-PSO
  PSO for Multi-Objective Optimization




                           -24-
Particle Swarm Optimizer




    Introduced by Kennedy and Eberhart in 1995
    (Eberhart & Kennedy,1995; Kennedy & Eberhart,1995)
    Emulates flocking behavior of birds to solve optimization problems
    Each solution in the landscape is a particle
    All particles have fitness values and velocities



                                           -25-




Particle Swarm Optimizer
 Two versions of PSO
    Global version (may not be adequate alone for multimodal
    problems): learning from the personal best (pbest) and the
    best position achieved by the whole population (gbest)
      Vi^d ← Vi^d + c1 * rand1i^d * (pbesti^d − Xi^d) + c2 * rand2i^d * (gbest^d − Xi^d)
      Xi^d ← Xi^d + Vi^d          i – particle counter & d – dimension counter
    Local version: learning from the pbest and the best position
    achieved in the particle's neighborhood population (lbest)
      Vi^d ← Vi^d + c1 * rand1i^d * (pbesti^d − Xi^d) + c2 * rand2i^d * (lbestk^d − Xi^d)
      Xi^d ← Xi^d + Vi^d

 The random numbers (rand1 & rand2) should be
 generated for each dimension of each particle in every
 iteration.
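The global-version update can be sketched as follows (a minimal sketch with the usual c1 = c2 = 2.0; a fresh rand1/rand2 pair is drawn per dimension per particle, and Vmax clamping is omitted for brevity):

```python
import random

def pso_step(X, V, pbest, gbest, c1=2.0, c2=2.0):
    """One global-version PSO velocity/position update."""
    for i in range(len(X)):
        for d in range(len(X[i])):
            r1, r2 = random.random(), random.random()
            V[i][d] = (V[i][d]
                       + c1 * r1 * (pbest[i][d] - X[i][d])
                       + c2 * r2 * (gbest[d] - X[i][d]))
            X[i][d] += V[i][d]

random.seed(0)
X, V = [[0.0, 0.0]], [[0.0, 0.0]]
pso_step(X, V, pbest=[[1.0, 1.0]], gbest=[1.0, 1.0])
```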
                                           -26-
Particle Swarm Optimizer
    where c1 and c2 are the acceleration constants;
    rand1i^d and rand2i^d are two random numbers in the range [0,1];
    Xi = (Xi^1, Xi^2, ..., Xi^D) represents the position of the ith particle;
    Vi = (Vi^1, Vi^2, ..., Vi^D) represents the rate of the position change
    (velocity) of particle i;
    pbesti = (pbesti^1, pbesti^2, ..., pbesti^D) represents the best
    previous position (the position giving the best fitness value) of
    the ith particle;
    gbest = (gbest^1, gbest^2, ..., gbest^D) represents the best previous
    position of the population;
    lbestk = (lbestk^1, lbestk^2, ..., lbestk^D) represents the best previous
    position achieved within the neighborhood of the particle.



                                    -27-




PSO variants
 Modifying the Parameters
    Inertia weight ω (Shi & Eberhart, 1998; Shi & Eberhart, 2001; Eberhart
    & Shi, 2001, …)
    Constriction coefficient (Clerc,1999; Clerc & Kennedy, 2002)
    Time varying acceleration coefficients (Ratnaweera et al. 2004)
    Linearly decreasing Vmax (Fan & Shi, 2001)
    ……
 Using Topologies
    Extensive experimental studies (Kennedy, 1999; Kennedy & Mendes,
    2002, …)
    Dynamic neighborhood (Suganthan,1999; Hu and Eberhart, 2002;
    Peram et al. 2003)
    Combine the global version and local version together (Parsopoulos
    and Vrahatis, 2004 ) named as the unified PSO.
    Fully informed PSO or FIPS (Mendes & Kennedy 2004) and so on …


                                    -28-
PSO variants and Applications
 Hybrid PSO Algorithms
    PSO + selection operator (Angeline,1998)
    PSO + crossover operator (Lovbjerg, 2001)
    PSO + mutation operator (Lovbjerg & Krink, 2002; Blackwell &
    Bentley,2002; … …)
    PSO + cooperative approach (Bergh & Engelbrecht, 2004)
     …
 Various Optimization Scenarios & Applications
      Binary Optimization (Kennedy & Eberhart, 1997; Agrafiotis et al. 2002)
      Constrained Optimization (Parsopoulos et al. 2002; Hu & Eberhart,
      2002; … )
      Multi-objective Optimization (Ray et al. 2002; Coello et al. 2002/04; … )
      Dynamic Tracking (Eberhart & Shi 2001; … )
      Yagi-Uda antenna (Baskar et al. 2005b), Photonic FBG design (Baskar
      et al. 2005a), FBG sensor network design (Liang et al. June 2006)

                                        -29-




Comprehensive Learning PSO
     Comprehensive Learning Strategy :

             Vi^d ← w * Vi^d + c * randi^d * ( pbest_fi(d)^d − Xi^d )

     where fi = [ fi(1), fi(2), ..., fi(D) ] defines which particle’s pbest particle
     i should follow. pbest_fi(d)^d can be the corresponding dimension of any
     particle’s pbest, including its own pbest, and the decision depends
     on the probability Pc, referred to as the learning probability, which can
     take different values for different particles.

     For each dimension of particle i, we generate a random number. If
     this random number is larger than Pci, this dimension will learn
     from its own pbest, otherwise it will learn from another randomly
     chosen particle’s pbest.
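The per-dimension exemplar assignment just described can be sketched as follows (a minimal sketch for minimization; the size-2 tournament between two other particles' pbests follows the CLPSO description, while the function name and arguments are illustrative):

```python
import random

def assign_exemplars(i, D, pbest_fit, Pc):
    """Build the exemplar index vector f_i for particle i: per
    dimension, with probability Pc learn from the fitter of two
    randomly chosen other particles' pbests (size-2 tournament),
    otherwise from the particle's own pbest."""
    n = len(pbest_fit)
    fi = []
    for _ in range(D):
        if random.random() < Pc:   # learn from another particle
            a, b = random.sample([j for j in range(n) if j != i], 2)
            fi.append(a if pbest_fit[a] < pbest_fit[b] else b)
        else:                      # learn from its own pbest
            fi.append(i)
    return fi

random.seed(2)
fit = [5.0, 1.0, 3.0, 4.0, 2.0]
fi_all = assign_exemplars(0, 6, fit, Pc=1.0)
fi_own = assign_exemplars(0, 6, fit, Pc=0.0)
```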




                                        -30-
Comprehensive Learning PSO
   A tournament selection (size 2) procedure is employed to
   choose the exemplar for each dimension of each particle.

   To ensure that a particle learns from good exemplars and to
   minimize the time wasted on poor directions, we allow the
   particle to learn from the exemplars until the particle ceases
   improving for a certain number of generations called the
   refreshing gap m (7 generations), after which we re-assign
   the dimensions of this particle.




                             -31-




Comprehensive Learning PSO
 We observe three main differences between the CLPSO
 and the original PSO:
   Instead of using the particle’s own pbest and gbest as the
   exemplars, all particles’ pbests can potentially be used as
   exemplars to guide a particle’s flying direction.
   Instead of learning from the same exemplar particle for all
   dimensions, each dimension of a particle can in general learn
   from a different particle’s pbest for some generations. In other
   words, in any one iteration, each dimension of a particle may
   learn from the corresponding dimension of a different
   particle’s pbest.
   Instead of learning from two exemplars (gbest and pbest) at
   the same time in every generation as in the original PSO, each
   dimension of a particle learns from just one exemplar for some
   generations.

                             -32-
Comprehensive Learning PSO

   Experiments have been conducted on 16 test functions
   which can be grouped into four classes. The results
   demonstrated good performance of the CLPSO in
   solving multimodal problems when compared with eight
   other recent variants of the PSO.



   (The details can be found in Liang et al. 2006a. Matlab
   codes are also available for CLPSO and the 8 PSO
   variants.)




                          -33-




Dynamic Multi-Swarm PSO
   Constructed based on the local version of PSO with a
   novel neighborhood topology.


   This new neighborhood structure has two important
   characteristics:
      Small Sized Swarms
      Randomized Regrouping Schedule




                          -34-
Dynamic Multi-Swarm PSO


                                Regroup




   The population is divided into small swarms randomly.
   These sub-swarms use their own particles to search for better
   solutions and converge to good solutions.
   The whole population is regrouped into new sub-swarms
   periodically. New sub-swarms continue the search.
   This process is continued until a stop criterion is satisfied
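The periodic random regrouping can be sketched in a few lines (a minimal sketch; the function name is illustrative):

```python
import random

def regroup(n_particles, swarm_size):
    """Randomly re-partition particle indices into small sub-swarms;
    DMS-PSO calls this periodically so that information gained by
    each sub-swarm is exchanged across the whole population."""
    idx = list(range(n_particles))
    random.shuffle(idx)
    return [idx[k:k + swarm_size] for k in range(0, n_particles, swarm_size)]

random.seed(3)
groups = regroup(12, 3)
```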



                                          -35-




Dynamic Multi-Swarm PSO
   Adaptive Self-Learning Strategy
   Each particle has a corresponding Pci. Every R generations,
   keep_idi is decided by Pci: if the generated random number is
   larger than or equal to Pci, this dimension will be set to the value of
   its own pbest and keep_idi(d) is set to 1. Otherwise keep_idi(d) is set to
   0 and it will learn from the lbest and its own pbest as in the PSO with
   constriction coefficient:

 If keep_idi^d = 0
      Vi^d ← 0.729 * Vi^d + 1.49445 * rand1i^d * ( pbesti^d − Xi^d )
             + 1.49445 * rand2i^d * ( lbestk^d − Xi^d )
      Vi^d = min( Vmax^d, max( −Vmax^d, Vi^d ) )
      Xi^d ← Xi^d + Vi^d
 Otherwise
      Xi^d ← pbesti^d               (somewhat similar to DE & CPSO)
                                          -36-
Dynamic Multi-Swarm PSO
 Adaptive Self-Learning Strategy
    The Pc values are assumed normally distributed with mean mean(Pc)
    and a standard deviation of 0.1.
   Initially, mean(Pc) is set at 0.5 and different Pc values conforming to
   this normal distribution are generated for each individual in the
   current population.
   During every generation, the Pc values associated with the particles
   which find new pbest are recorded.
   When the sub-swarms are regrouped, the mean of normal distribution
   of Pc is recalculated according to all the recorded Pc values
   corresponding to successful movements during this period. The
   record of the successful Pc values will be cleared once the normal
   distribution’s mean is recalculated.
   As a result, the proper Pc value range for the current problem can be
   learned to suit the particular problem.
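The Pc adaptation above can be sketched as follows (a minimal sketch; the clipping to [0, 1] is an added safeguard not stated on the slide, and the function name is illustrative):

```python
import random

def resample_pc(successful_pcs, n, default_mean=0.5, std=0.1):
    """At each regrouping, re-estimate mean(Pc) from the Pc values
    that produced new pbests, then draw a fresh Pc per particle from
    N(mean, 0.1); the caller clears the record afterwards."""
    mean = (sum(successful_pcs) / len(successful_pcs)
            if successful_pcs else default_mean)
    return [min(1.0, max(0.0, random.gauss(mean, std))) for _ in range(n)]

random.seed(4)
pcs = resample_pc([0.9] * 10, 2000)
```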


                                -37-




DMS-PSO with Local Search
     Since DMS-PSO achieves a larger diversity, its
     convergence rate is reduced. To alleviate this problem
     and to enhance local search capability, a local search is
     incorporated:
       Every L generations, the pbests of five randomly chosen
       particles will be used as the starting points and the Quasi-
       Newton method (fminunc(…,…,…)) is employed to do the local
       search with the L_FEs as the maximum FEs.
        At the end of the search, all particles form a single swarm and
        the algorithm becomes the global PSO version. The best solution
        achieved so far is refined using the Quasi-Newton method every L
        generations with 5*L_FEs as the maximum FEs.
        If the local search results in improvements, the nearest pbest is
        replaced / updated.


                                -38-
Constrained Optimization
   Optimization of constrained problems is an important
   area in the optimization field.
   In general, the constrained problems can be
   transformed into the following form:

   Minimize              f (x), x = [ x1 , x2 ,..., xD ]
   subject to:
                        gi (x) ≤ 0, i = 1,..., q
                        h j (x) = 0, j = q + 1,..., m

   q is the number of inequality constraints and m-q is the
   number of equality constraints.


                                         -39-




Constrained Optimization
   For convenience, the equality constraints can be transformed into
   inequality form:
                           | h j ( x) | −ε ≤ 0

  where ε is the allowed tolerance.

   Then, the constrained problem can be expressed as
   Minimize        f (x), x = [ x1 , x2 ,..., xD ]

   subject to      G j (x) ≤ 0, j = 1,..., m,
                   G1,..., q (x) = g1,..., q (x),  Gq +1,..., m (x) = | hq +1,..., m (x) | − ε


   If we denote the feasible region by F and the whole search space
   by S, then x ∈ F if x ∈ S and all constraints are satisfied. In this
   case, x is a feasible solution.
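The transformation can be sketched as follows (a minimal sketch; constraints are represented as plain Python callables, and the tolerance value is illustrative):

```python
def to_inequalities(gs, hs, eps=1e-4):
    """Turn inequality constraints g_i(x) <= 0 plus equality
    constraints h_j(x) = 0 into a single list of G_j(x) <= 0 by
    replacing each h_j with |h_j(x)| - eps <= 0."""
    return list(gs) + [lambda x, h=h: abs(h(x)) - eps for h in hs]

# one inequality g(x) = x0 - 1 <= 0, one equality h(x) = x0 + x1 = 0
G = to_inequalities([lambda x: x[0] - 1.0], [lambda x: x[0] + x[1]])
```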


                                         -40-
Constraint-Handling (CH) Techniques
   Penalty Functions:
      Static Penalties (Homaifar et al.,1994;…)
      Dynamic Penalty (Joines & Houck,1994; Michalewicz&
      Attia,1994;…)
      Adaptive Penalty (Eiben et al. 1998; Coello, 1999; Tessema &
      Gary Yen 2006, …)
      …

   Superiority of feasible solutions
      Start with a population of feasible individuals (Michalewicz,
      1992; Hu & Eberhart, 2002; …)
      Feasible favored comparing criterion (Ray, 2002; Takahama &
      Sakai, 2005; … )
      Specially designed operators (Michalewicz, 1992; …)
      …


                              -41-




Constraint-Handling (CH) Techniques
   Separation of objective and constraints
      Stochastic Ranking (Runarsson & Yao, TEC, Sept 2000)
      Co-evolution methods (Liang 2006e)
      Multi-objective optimization techniques (Coello, 2000;
      Mezura-Montes & Coello, 2002;… )
      Feasible solution search followed by optimization of
      objective (Venkatraman & Gary Yen, 2005)
      …

   While most CH techniques are modular (i.e. we can pick one CH
   technique and one search method independently), there are also
   CH techniques embedded as an integral part of the EA.
   More details in the tutorial presented by Prof Z. Michalewicz

                              -42-
DMS-PSO for Constrained Optimization

    Novel Constraint-Handling Mechanism
        Suppose that there are m constraints. The population is
        divided into n sub-swarms with sn members in each sub-
        swarm, so the population size is ps (ps = n*sn). n is a
        positive integer and ‘n = m’ is not required.

        The objective and constraints are assigned to the sub-
        swarms adaptively according to the difficulties of the
        constraints.

        In this way, the aim is to obtain a population of feasible
        individuals with high fitness values.



                                           -43-




DMS-PSO’s Constraint-Handling Mechanism
 How to assign the objective and constraints to each sub-swarm?

    Define
                 (a > b) = 1 if a > b, 0 otherwise

                          ps
                 pi = (   Σ  ( gi(xj) > 0 )  ) / ps ,   i = 1, 2, ..., m
                         j=1

    Thus         fp = 1 − mean(p),   p = [ p1, p2, ..., pm ]
                        m
                 fp +   Σ  ( pi / m ) = 1
                       i=1
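The probabilities pi and fp above can be computed directly (a minimal sketch; constraints are plain Python callables and the function name is illustrative):

```python
def assignment_probs(G, pop):
    """pi = fraction of the ps particles violating constraint i, and
    fp = 1 - mean(p), so that fp + sum_i(pi / m) = 1; fp and pi/m
    then serve as roulette weights when assigning the objective or
    one constraint to each sub-swarm."""
    ps, m = len(pop), len(G)
    p = [sum(1 for x in pop if G[i](x) > 0) / ps for i in range(m)]
    fp = 1.0 - sum(p) / m
    return fp, p

fp, p = assignment_probs([lambda x: x[0] - 1.0, lambda x: -x[0]],
                         [[0.5], [2.0]])
```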




                                           -44-
DMS-PSO’s Constraint-Handling Mechanism
 For each sub-swarm,
      Use roulette selection according to fp and pi / m to assign the
      objective function or a single constraint as its target.
      If sub-swarm i is assigned to improve constraint j, set obj(i) = j; if
      sub-swarm i is assigned to improve the objective function, set
      obj(i) = 0.


      Assign swarm members to this sub-swarm: sort the unassigned
      particles according to obj(i), and assign the best and the sn−1 worst
      particles to sub-swarm i.




                                                     -45-




DMS-PSO’s Comparison Criteria
         1. If obj(i) = obj(j) = k (particles i and j handling the same constraint k),
         particle i wins if
                        Gk(xi) < Gk(xj) with Gk(xj) > 0
                   or   V(xi) < V(xj) and Gk(xi), Gk(xj) ≤ 0
                   or   f(xi) < f(xj) and V(xi) == V(xj)

         2. If obj(i) = obj(j) = 0 (particles i and j handling f(x)) or obj(i) ≠ obj(j) (i
         and j handling different objectives), particle i wins if
                        V(xi) < V(xj)
                   or   f(xi) < f(xj) and V(xi) == V(xj)
         where
                               m
                        V(x) = Σ  weighti · Gi(x) · ( Gi(x) ≥ 0 )
                              i=1

                                                    m
                        weighti = ( 1 / Gi_max ) /  Σ ( 1 / Gi_max ) ,  i = 1, 2, ..., m
                                                   i=1
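Criterion 2 and the weighted violation V(x) can be sketched as follows (a minimal sketch; constraints and weights are supplied by the caller, and the function names are illustrative):

```python
def total_violation(x, G, weight):
    """V(x) = sum_i weight_i * G_i(x), counting only the violated
    constraints (G_i(x) > 0)."""
    v = 0.0
    for w, g in zip(weight, G):
        gx = g(x)
        if gx > 0:
            v += w * gx
    return v

def wins(fi, Vi, fj, Vj):
    """Criterion 2: particle i wins on smaller total violation,
    with ties on violation broken by the objective value."""
    return Vi < Vj or (Vi == Vj and fi < fj)

G = [lambda x: x[0] - 1.0, lambda x: -x[0]]
weight = [0.5, 0.5]
V2 = total_violation([2.0], G, weight)
```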

                                                     -46-
DMS-PSO for Constrained Optimization
    Step 1: Initialization -
    Initialize ps particles (position X and velocity V), calculate f(X), Gj(X)
    (j=1,2...,m) for each particle.

    Step 2: Divide the population into sub-swarms and assign obj for
    each sub-swarm using the novel constraint-handling mechanism;
    calculate the mean value of Pc (in the first generation,
    mean(Pc) = 0.5) and a Pc for each particle. Then clear the Pc record.

    Step 3: Update the particles according to their objectives; update
    pbest and gbest of each particle according to the same
    comparison criteria, record the Pc value if pbest is updated.




                                   -47-




DMS-PSO for Constrained Optimization
  Step 5: Local Search-
  Every L generations, randomly choose 5 particles’ pbest and start local
  search with Sequential Quadratic Programming (SQP) method using
  these solutions as start points (fmincon(…,…,…) function in Matlab is
  employed). The maximum number of fitness evaluations for each local
  search is L_FEs.

  Step 6: If FEs≤0.7*Max_FEs, go to Step 3. Otherwise go to Step 7.

  Step 7: Merge the sub-swarms into one swarm and continue PSO
  (Global Single Swarm). Every L generations, start a local search using
  gbest as the start point with 5*L_FEs as the maximum FEs. Stop the
  search when FEs ≥ Max_FEs




                                   -48-
Multi-Objective Optimization
   Mathematically, we can use the following formula to
   express the multi-objective optimization problems (MOP):
   Minimize y = f (x) = ( f1 (x), f2 (x), ..., fm (x) ),  x ∈ [Xmin, Xmax]
   subject to   g j (x) ≤ 0, j = 1,..., z
                hk (x) = 0, k = q + 1,..., m

   The objective of multi-objective optimization is to find a
   set of solutions which can express the Pareto-optimal set
   well, thus there are two goals for the optimization:
       1) Convergence to the Pareto-optimal set
       2) Diversity of solutions in the Pareto-optimal set



                                       -49-




Extending PSO for MOP
  Mainly used techniques:
      How to update pbest and gbest (or lbest)
        Execute non-domination comparison with pbest or gbest
        Execute non-domination comparison among all particles’
        pbests and their offspring in the entire population
      How to choose gbest (or lbest)
        Choose gbest (or lbest) from the recorded non-
        dominated solutions
        Choose good local guides
      How to keep diversity
        Crowding distance sorting
        Subpopulation
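Crowding distance sorting, the first diversity-keeping technique listed above, can be sketched as in NSGA-II (a minimal sketch over a list of objective vectors; boundary points get infinite distance so they are always retained):

```python
def crowding_distance(front):
    """Crowding distance for each point of one nondominated front,
    given as a list of objective vectors (as in NSGA-II)."""
    n = len(front)
    if n < 3:
        return [float('inf')] * n
    m = len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue
        for pos in range(1, n - 1):
            dist[order[pos]] += ((front[order[pos + 1]][k]
                                  - front[order[pos - 1]][k]) / (hi - lo))
    return dist

d = crowding_distance([[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]])
```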


                                       -50-
Extending CLPSO and DMS-PSO for MOP
    Combine an external archive which is used to record the
    non-dominated solutions found so far.

    Use Non-dominated Sorting and Crowding distance Sorting
    which have been used in NSGAII (Deb et al., 2002) to sort
    the members in the external archive.

    Choose exemplars (CLPSO) or lbest (DMS-PSO) from the
    non-dominated solutions recorded in the external archive.

    Experiments show that MO-CLPSO and DMS-MO-PSO are
    both capable of converging to the true Pareto optimal front
    and maintaining a good diversity along the Pareto front.

 More details in Huang et al. Feb 2006 on MOCLPSO.
 Also please refer to SEAL-06 tutorials by Profs X. D. Li & G. Yen.

                               -51-




Outline
   Motivation for Differential Evolution (DE)

   Classic DE

   DE Variants

   Self-adaptive DE (SaDE)

   Trends in DE Research

   Real-parameter Estimation of Distribution Algorithms
   (rEDA)


                               -52-
Motivation for DE
  DE, proposed by Price and Storn in 1995 [PS95], was motivated by the attempts
to use Genetic Annealing [P94] to solve the Chebychev polynomial fitting
problem.
  Genetic annealing is a population-based, combinatorial optimization algorithm
that implements a thermodynamic annealing criterion via thresholds. Although
successfully applied to many combinatorial tasks, genetic annealing could not
solve the Chebychev problem well.
  Price modified genetic annealing by using floating-point encoding instead of
bit-string encoding, arithmetic operations instead of logical ones, population-driven
differential mutation instead of bit-inversion mutation, and removed the annealing
criterion. Storn suggested creating separate parent and children populations. With
these changes, the Chebychev problem could be solved effectively and efficiently.
  DE is closely related to many other multi-point derivative free search methods
[PSL05] such as evolutionary strategies, genetic algorithms, Nelder and Mead
direct search and controlled random search.
                                     -53-




DE at a glance
    Characteristics
        Population-based stochastic direct search
        Self-referential mutation
        Simple but powerful
        Reliable, robust and efficient
        Easy parallelization
        Floating-point encoding
    Basic components
        Initialization
        Trial vector generation
             Mutation
             Recombination
        Replacement
    Feats
        DE demonstrated promising performance in 1st and 2nd
        ICEO [BD96][P97]

                                     -54-
Insight into classic DE (DE/rand/1/bin)
Initialization
A population Px,0 of Np D-dimensional parameter vectors xi,0=[x1,i,0,…,xD,i,0],
i=1,…,Np is randomly generated within the prescribed lower and upper bounds bL=
[b1,L,…,bD,L] and bU=[b1,U,…,bD,U]
Example: the initial value (at generation g=0) of the jth parameter of the ith vector is
generated by: xj,i,0 = randj[0,1] ·(bj,U-bj,L) + bj,L, j=1,…,D, i=1,…,Np

Trial vector generation
At the gth generation, a trial population Pu,g consisting of Np D-dimensional trial
vectors ui,g=[u1,i,g,…,uD,i,g] is generated via mutation and recombination operations
applied to the current population Px,g

Differential mutation: with respect to each vector xi,g in the current population,
called target vector, a mutant vector vi,g is generated by adding a scaled, randomly
sampled, vector difference to a basis vector randomly selected from the current
population

                                          -55-




Insight into classic DE (DE/rand/1/bin)
Example: at the gth generation, the ith mutant vector vi,g with respect to ith target
vector xi,g in the current population is generated by vi,g = xr0,g + F·(xr1,g-xr2,g),
i≠r0≠r1≠r2, mutation scale factor F∈(0,1+)
Discrete recombination: with respect to each target vector xi,g in the current
population, a trial vector ui,g is generated by crossing the target vector xi,g with the
corresponding mutant vector vi,g under a pre-specified crossover rate Cr∈[0,1]
Example: at the gth generation, the ith trial vector ui,g with respect to ith target
vector xi,g in the current population is generated by:
                    uj,i,g = vj,i,g   if randj[0,1] ≤ Cr or j = jrand
                             xj,i,g   otherwise
Replacement
If the trial vector ui,g has a better objective function value than that of its
corresponding target vector xi,g, it replaces the target vector in the (g+1)th
generation; otherwise the target vector remains in the (g+1)th generation
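The three stages above can be sketched end-to-end (a minimal sketch for minimization; bound handling after mutation is omitted for brevity, and the population size, F, Cr, and generation count are illustrative defaults):

```python
import random

def de_rand_1_bin(f, bounds, Np=20, F=0.5, Cr=0.9, gens=100):
    """Classic DE/rand/1/bin on a box given as bounds = [(lo, hi), ...]:
    initialization, trial-vector generation, replacement."""
    D = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(Np)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(Np):
            # base vector r0 and difference pair r1, r2, all distinct from i
            r0, r1, r2 = random.sample([j for j in range(Np) if j != i], 3)
            jrand = random.randrange(D)  # guarantees one mutant component
            u = [pop[r0][d] + F * (pop[r1][d] - pop[r2][d])
                 if (random.random() <= Cr or d == jrand) else pop[i][d]
                 for d in range(D)]
            fu = f(u)
            if fu <= fit[i]:             # replacement: keep the better one
                pop[i], fit[i] = u, fu
    best = min(range(Np), key=lambda i: fit[i])
    return pop[best], fit[best]

random.seed(0)
x, fx = de_rand_1_bin(lambda x: sum(v * v for v in x), [(-5, 5), (-5, 5)])
```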


                                          -56-
Illustration of classic DE
(Figure: the target vector xi,g, the base vector xr0,g and two randomly
selected vectors xr1,g, xr2,g in the x1-x2 plane)
                                -58-
Illustration of classic DE
(Figure: the four operating vectors xi,g, xr0,g, xr1,g, xr2,g in 2D
continuous space)
                                 -59-




Illustration of classic DE
(Figure: mutation. The scaled difference F·(xr1,g − xr2,g) is added to the
base vector xr0,g to produce the mutant vector vi,g.)
                                 -60-
Illustration of classic DE
(Figure: crossover. The trial vector ui,g is formed by mixing components of
the target vector xi,g with those of the mutant vector vi,g.)
                                -61-




Illustration of classic DE
(Figure: replacement. The winner of the one-to-one comparison between ui,g
and xi,g survives as xi,g+1.)

                                -62-
Differential vector distribution




        (Figure: a population of 5 vectors and the 20 difference vectors generated from it)


Most important characteristic of DE: self-referential mutation!
ES: fixed probability distribution function with adaptive step-size
DE: adaptive distribution of difference vectors with fixed step-size

                                    -63-




DE variants
Modification of different components of DE can result in many DE
variants:
Initialization
Uniform distribution and Gaussian distribution

Trial vector generation
  Base vector selection
      Random selection without replacement: r0=ceil(randi[0,1]·Np)
      Permutation selection: r0=permute[i]
      Random offset selection: r0=(i+rg)%Np (e.g. rg=2)
      Biased selection: global best, local best and better
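The non-biased base-vector selection schemes above can be sketched as follows (scheme names and the function are illustrative; the "random" scheme corresponds to r0 = ceil(randi[0,1]·Np), here in 0-based form):

```python
import random

def base_indices(np_size, scheme, rg=2, rng=random):
    # Base-vector selection schemes for a whole population of size Np
    if scheme == "random":       # with replacement, any vector may serve twice
        return [rng.randrange(np_size) for _ in range(np_size)]
    if scheme == "permutation":  # r0 = permute[i]: each vector is a base exactly once
        perm = list(range(np_size))
        rng.shuffle(perm)
        return perm
    if scheme == "offset":       # r0 = (i + rg) % Np
        return [(i + rg) % np_size for i in range(np_size)]
    raise ValueError(scheme)
```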


                                    -64-
DE variants
  Differential mutation
       One difference vector: F·(xr1- xr2)
       Two difference vector: F·(xr1- xr2)+F·(xr3- xr4)
       Mutation scale factor F
             Crucial role: balance exploration and exploitation
            Dimension dependence: jitter (rotation variant) and dither
          (rotation invariant)
             Randomization: different distributions of F
DE/rand/1:            Vi,G = Xr1,G + F·(Xr2,G − Xr3,G)
DE/best/1:            Vi,G = Xbest,G + F·(Xr1,G − Xr2,G)
DE/current-to-best/1: Vi,G = Xi,G + F·(Xbest,G − Xi,G) + F·(Xr1,G − Xr2,G)
DE/rand/2:            Vi,G = Xr1,G + F·(Xr2,G − Xr3,G + Xr4,G − Xr5,G)
DE/best/2:            Vi,G = Xbest,G + F·(Xr1,G − Xr2,G + Xr3,G − Xr4,G)
                                              -65-
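The mutation strategies listed above can be sketched in one helper (a minimal sketch; the function name and strategy strings are ours):

```python
import random

def mutate(pop, i, best, strategy, f, rng=random):
    """Classic DE mutation strategies; r[0]..r[4] are distinct random
    indices different from the target index i (requires Np >= 6)."""
    r = rng.sample([k for k in range(len(pop)) if k != i], 5)
    x = lambda k: pop[k]
    d = range(len(pop[0]))
    if strategy == "rand/1":
        return [x(r[0])[j] + f * (x(r[1])[j] - x(r[2])[j]) for j in d]
    if strategy == "best/1":
        return [best[j] + f * (x(r[0])[j] - x(r[1])[j]) for j in d]
    if strategy == "current-to-best/1":
        return [pop[i][j] + f * (best[j] - pop[i][j])
                + f * (x(r[0])[j] - x(r[1])[j]) for j in d]
    if strategy == "rand/2":
        return [x(r[0])[j] + f * (x(r[1])[j] - x(r[2])[j]
                + x(r[3])[j] - x(r[4])[j]) for j in d]
    if strategy == "best/2":
        return [best[j] + f * (x(r[0])[j] - x(r[1])[j]
                + x(r[2])[j] - x(r[3])[j]) for j in d]
    raise ValueError(strategy)
```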




DE variants
 Recombination
      Discrete recombination (crossover) (rotation variant)
            One point and multi-point
            Exponential
             Binomial (uniform)
      Arithmetic recombination
            Line recombination (rotation invariant)
            Intermediate recombination (rotation variant)
            Extended intermediate recombination (rotation variant)
                   (Figure: offspring produced by discrete, line and intermediate
                   recombination of parents xa and xb in the x1-x2 plane)
                                              -66-
DE variants
 Crossover rate CR∈[0,1]
   Decomposable functions favor small CR; indecomposable functions favor large CR

  Degenerate cases in the trial vector generation
For example, in classic DE, r1=r2, r0=r1, r0=r2, i=r0, i=r1, i=r2
Better to generate mutually exclusive indices for target vector, base vector
and vectors constituting the difference vector

Replacement
  One-to-one replacement
  Neighborhood replacement



                                      -67-




Motivation for self-adaptation in DE
The performance of DE on different problems depends on:
  Population size
  Strategy and the associated parameter setting to generate trial vectors
  Replacement scheme

It is hard to choose a unique combination to successfully solve any problem at
hand
   Population size usually depends on the problem scale and complexity
   During evolution, different strategies coupled with specific parameter settings
   may favor different search stages
   Replacement schemes influence the population diversity
   Trial and error scheme may be a waste of computational time & resources
Automatically adapt the configuration in DE so as to generate effective
trial vectors during evolution

                                      -68-
Related works
Practical guideline [SP95], [SP97], [CDG99], [BO04], [PSL05],[GMK02]: for example,
Np∈[5D,10D]; Initial choice of F=0.5 and CR=0.1/0.9; Increase NP and/or F if premature
convergence happens. Conflicting conclusions with respect to different test functions.
Fuzzy adaptive DE [LL02]: use fuzzy logical controllers whose inputs incorporate the relative
function values and individuals of successive generations to adapt the mutation and crossover
parameters.
Self-adaptive Pareto DE [A02]: encode crossover rate in each individual, which is
simultaneously evolved with other parameters. Mutation scale factor is generated for each
variable according to Gaussian distribution N(0,1).
Zaharie [Z02]: theoretically study the DE behavior so as to adapt the control parameters of DE
according to the evolution of population diversity.
Self-adaptive DE (1) [OSE05]: encode mutation scale factor in each individual, which is
simultaneously evolved with other parameters. Crossover rate is generated for each variable
according to Gaussian distribution N(0.5,0.15).
DE with self-adaptive population [T06]: population size, mutation scale factor and crossover
rate are all encoded into each individual.
Self-adaptive DE (2) [BGBMZ06]: encode mutation scale factor and crossover rate in each
individual, which are reinitialized according to two new probability variables.
                                          -69-




Self-adaptive DE (SaDE)
DE with strategy and parameter self-adaptation (QS05, HQS06)
Strategy adaptation: select one strategy from a pool of candidate
strategies with the probability proportional to its previous successful rate
to generate effective trial vectors during a certain learning period
Steps:
     Initialize selection probability pi=1/num_st, i=1,…,num_st for each strategy
     According to the current probabilities, we employ stochastic universal
     selection to assign one strategy to each target vector in the current population
     For each strategy, define vectors nsi and nfi, i=1,…num_st to store the number
     of trial vectors successfully entering the next generation or discarded by
     applying such strategy, respectively, within a specified number of generations,
     called “learning period (LP)”
     Once the current number of generations is over LP, the first element of nsi and
     nfi with respect to the earliest generation will be removed and the behavior in
     current generation will update nsi and nfi
                                          -70-
Self-adaptive DE (SaDE)
    The nsi and nfi memories form a sliding window over the learning period: at each
    generation beyond LP, the oldest entries nsi(g−LP) and nfi(g−LP) are dropped and the
    counts of the current generation are appended.
    The selection probability pi is updated by pi = Σ nsi / (Σ nsi + Σ nfi), where the
    sums run over the last LP generations. Go to 2nd step
Parameter adaptation
Mutation scale factor (F): for each target vector in the current population, we
randomly generate the F value according to a normal distribution N(0.5,0.3). Therefore,
about 99.7% of the F values fall within the range [–0.4, 1.4] (μ ± 3σ)

Crossover rate (CRj): when applying strategy j with respect to a target vector, the
corresponding CRj value is generated according to an assumed distribution, and those
CRj values that have generated trial vectors successfully entering the next generation
are recorded and updated every LP generations so as to update the parameters of the
CRj distribution. We hereby assume that each CRj, j=1,…,num_st is normally
distributed with its mean and standard deviation initialized to 0.5 and 0.1, respectively
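The probability update and F sampling above can be sketched as follows (a simplified sketch; the slides use stochastic universal selection on top of these probabilities, and the 0.01 floor for strategies with no recorded trials is our assumption):

```python
import random

def update_strategy_probs(ns, nf):
    """SaDE-style strategy adaptation: ns[k] and nf[k] hold the
    per-generation success/failure counts of strategy k over the last
    LP generations; probabilities are proportional to the success rates."""
    rates = []
    for s_hist, f_hist in zip(ns, nf):
        s, f = sum(s_hist), sum(f_hist)
        rates.append(s / (s + f) if s + f > 0 else 0.01)  # small floor
    total = sum(rates)
    return [r / total for r in rates]

def sample_f(rng=random):
    # F ~ N(0.5, 0.3); roughly 99.7% of draws land in [-0.4, 1.4]
    return rng.gauss(0.5, 0.3)
```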

                                                -71-




Instantiations
          In CEC’05, we use 2 strategies:
    DE/rand/1/bin:            Vi,G = Xr1,G + F·(Xr2,G − Xr3,G)
    DE/current-to-best/2/bin: Vi,G = Xi,G + F·(Xbest,G − Xi,G) + F·(Xr1,G − Xr2,G + Xr3,G − Xr4,G)
    LP = 50

          In CEC’06, we employ 4 strategies:
    DE/rand/1/bin:            Vi,G = Xr1,G + F·(Xr2,G − Xr3,G)
    DE/rand/2/bin:            Vi,G = Xr1,G + F·(Xr2,G − Xr3,G + Xr4,G − Xr5,G)
    DE/current-to-best/2/bin: Vi,G = Xi,G + F·(Xbest,G − Xi,G) + F·(Xr1,G − Xr2,G + Xr3,G − Xr4,G)
    DE/current-to-rand/1:     Vi,G = Xi,G + F·(Xr1,G − Xi,G) + F·(Xr2,G − Xr3,G)
    LP = 50

                                                -72-
Local search enhancement

To improve convergence speed, we apply a local search
procedure every 500 generations:


  To apply local search, we choose n = 0.05·Np individuals,
which include the individual having the best objective function
value and the n-1 individuals randomly selected from the top 50%
individuals in the current population


 We perform the local search by applying the Quasi-Newton
method to the selected n individuals

                                       -73-




SaDE for constrained optimization
Modification of the replacement procedure
Each trial vector ui,g is compared with its corresponding target vector
xi,g in the current population in terms of both the objective function
value and the degree of constraint violation

ui,g will replace xi,g if any of the following conditions is satisfied:
  ui,g is feasible while xi,g is not
  ui,g and xi,g are both feasible, and ui,g has a better or equal objective
function value than that of xi,g
  ui,g and xi,g are both infeasible, but ui,g has a smaller degree of
overall constraint violation
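The three replacement conditions above can be sketched as a comparison helper (names are illustrative; v denotes the overall constraint violation, with v = 0 meaning feasible, and minimization is assumed):

```python
def trial_wins(trial_f, trial_v, target_f, target_v):
    # Feasibility-based replacement rules for constrained DE
    if trial_v == 0 and target_v > 0:
        return True                    # feasible trial vs. infeasible target
    if trial_v == 0 and target_v == 0:
        return trial_f <= target_f     # both feasible: better-or-equal objective
    if trial_v > 0 and target_v > 0:
        return trial_v < target_v      # both infeasible: smaller violation
    return False                       # infeasible trial vs. feasible target
```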

                                       -74-
SaDE for constrained optimization

The overall constraint violation is evaluated by a weighted
average constraint violation as follows:

        v(x) = Σi=1..m wi·Gi(x) / Σi=1..m wi

 where wi = 1/Gmaxi is a weighting factor and Gmaxi is the maximum
 violation of the ith constraint obtained so far
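A direct sketch of this weighted average (the function name is ours; g_vals holds the Gi(x) values and g_max the running maxima Gmaxi):

```python
def overall_violation(g_vals, g_max):
    # v(x) = sum_i w_i * G_i(x) / sum_i w_i, with w_i = 1 / Gmax_i
    weights = [1.0 / m for m in g_max]
    return sum(w * g for w, g in zip(weights, g_vals)) / sum(weights)
```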




                                 -75-




Overview of DE research trends




 Number of publications per year in ISI web of knowledge database


                                 -76-
Overview of DE research trends
DE Applications
 Digital Filter Design
 Multiprocessor Synthesis
 Neural Network Learning
 Diffraction Grating Design
 Crystallographic Characterization
 Beam Weight Optimization in Radiotherapy
 Heat Transfer Parameter Estimation in a Trickle Bed Reactor
 Electricity Market Simulation
 Scenario-Integrated Optimization of Dynamic Systems
 Optimal Design of Shell-and-Tube Heat Exchangers
 Optimization of an Alkylation Reaction
 Optimization of Thermal Cracker Operation
 Optimization of Non-Linear Chemical Processes
 Optimum Planning of Cropping Patterns
 Optimization of Water Pumping Systems
 Optimal Design of Gas Transmission Networks
 Differential Evolution for Multi-Objective Optimization
 Bioinformatics


                                       -77-




DE Resources
Books




    1999 [CDG99]                 2004 [BO04]              2005 [PSL05]


Website http://www.icsi.berkeley.edu/~storn/code.htm

Matlab code of SaDE http://www.ntu.edu.sg/home/epnsugan

Blog http://dealgorithm.blogspot.com/
                                       -78-
Estimation of distribution algorithm (EDA)
New approach to evolutionary computation, proposed in 1996 [MP96] and
described in detail in [LL01, LLIB06]
Characteristics:
 Population-based
 No crossover and mutation operators
  In each generation, the underlying probability distribution of the
selected individuals is estimated by certain machine learning model,
which can capture the dependence among parameters
  New population of individuals is sampled from the estimated
probability distribution model
 Repeat the two previous steps until a stopping criterion is met

                                     -79-




Real-parameter EDA (rEDA)
 Without parameter dependencies
     Stochastic Hill-Climbing with Learning by Vectors of Normal Distribution
   (SHCLVND) [RK96]
    Population Based Incremental Learning – Continuous (PBILc) [SD98]
    Univariate Marginal Distribution Algorithm - Continuous (UMDAc) [LELP00]
 Bivariate parameter dependencies
    Mutual Information Maximization for Input Clustering - Continuous - Gaussian
   (MIMICGc) [LELP00]
 Multiple parameter dependencies
      Estimation of Multivariate Normal Algorithm: EMNAglobal, EMNAa, EMNAi
    [LLB01, LL01]
      Estimation of Gaussian Network Algorithm: EGNAee, EGNABGe, EGNABIC
    [LLB01, LL01]
    Iterated Density Evolutionary Algorithm (IDEA) [BT00]
    Real-coded Bayesian Optimization Algorithm (rBOA) [ARG04]

                                     -80-
rEDA resources
Books




            2001 [LL01]              2006 [LLIB06]
Websites
http://medal.cs.umsl.edu/
http://www.iba.k.u-tokyo.ac.jp/english/EDA.htm
http://www.evolution.re.kr/

                              -81-




 Benchmarking Evolutionary Algorithms


    CEC05 comparison results (Single obj. + bound const.)

    CEC06 comparison results (Single obj + general const.)




                              -82-
CEC’05 Comparison Results
 Algorithms involved in the comparison:
    BLX-GL50 (Garcia-Martinez & Lozano, 2005 ): Hybrid Real-Coded Genetic
    Algorithms with Female and Male Differentiation
    BLX-MA (Molina et al., 2005): Adaptive Local Search Parameters for Real-Coded
    Memetic Algorithms
    CoEVO (Posik, 2005): Mutation Step Co-evolution
    DE (Ronkkonen et al.,2005):Differential Evolution
    DMS-L-PSO: Dynamic Multi-Swarm Particle Swarm Optimizer with Local Search
    EDA (Yuan & Gallagher, 2005): Estimation of Distribution Algorithm
    G-CMA-ES (Auger & Hansen, 2005): A restart Covariance Matrix Adaptation
    Evolution Strategy with increasing population size
    K-PCX (Sinha et al., 2005): A Population-based, Steady-State real-parameter
    optimization algorithm with parent-centric recombination operator, a polynomial
    mutation operator and a niched -selection operation.
    L-CMA-ES (Auger & Hansen, 2005): A restart local search Covariance Matrix
    Adaptation Evolution Strategy
    L-SaDE (Qin & Suganthan, 2005): Self-adaptive Differential Evolution algorithm
    with Local Search
    SPC-PNX (Ballester et al.,2005): A steady-state real-parameter GA with PNX
    crossover operator

                                    -83-




CEC’05 Comparison Results
 Problems: 25 minimization problems (Suganthan et al. 2005)
 Dimensions: D=10, 30
 Runs / problem: 25
 Max_FES: 10000*D (Max_FES_10D= 100000; for 30D=300000; for
 50D=500000)
 Initialization: Uniform random initialization within the search space, except
 for problems 7 and 25, for which initialization ranges are specified. The
 same initializations are used for the comparison pairs (problems 1, 2, 3 & 4,
 problems 9 & 10, problems 15, 16 & 17, problems 18, 19 & 20, problems 21,
 22 & 23, problems 24 & 25).
 Global Optimum: All problems, except 7 and 25, have the global optimum
 within the given bounds and there is no need to perform search outside of
 the given bounds for these problems. 7 & 25 are exceptions without a
 search range and with the global optimum outside of the specified
 initialization ranges.


                                    -84-
CEC’05 Comparison Results
 Termination: Terminate before reaching Max_FES if the error in the
 function value is 10^-8 or less.
 Ter_Err: 10^-8 (termination error value)
 Successful Run: A run during which the algorithm achieves the fixed
 accuracy level within the Max_FES for the particular dimension.
 Success Rate= (# of successful runs) / total runs
 Success Performance = mean (FEs for successful runs)*(# of total runs) /
 (# of successful runs)
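These definitions can be sketched directly (an illustrative helper, not part of the benchmark suite; fes_successful lists the FEs consumed by each successful run):

```python
def success_metrics(fes_successful, total_runs):
    # Success Rate = (# successful runs) / total runs
    # Success Performance = mean(FEs for successful runs) * total runs
    #                       / (# successful runs)
    n_succ = len(fes_successful)
    if n_succ == 0:
        return 0.0, float("inf")
    rate = n_succ / total_runs
    sp = (sum(fes_successful) / n_succ) * total_runs / n_succ
    return rate, sp
```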




                                        -85-




CEC’05 Comparison Results
                    Success Rates of the 11 algorithms for 10-D




  *In the comparison, only the problems in which at least one algorithm
   succeeded once are considered.
                                        -86-
CEC’05 Comparison Results




                                                                        First three algorithms:
                                                                        1.     G-CMA-ES
                                                                        2.     DE
                                                                        3.     DMS-L-PSO




 Empirical distribution over all successful functions for 10-D (SP here
 means the Success Performance for each problem. SP=mean (FEs for
 successful runs)*(# of total runs) / (# of successful runs). SPbest is the
 minimal FES of all algorithms for each problem.)
                                           -87-




CEC’05 Comparison Results
                      Success Rates of the 11 algorithms for 30-D




      *In the comparison, only the problems in which at least one algorithm
       succeeded once are considered. -88-
CEC’05 Comparison Results




                                                                      First three algorithms:
                                                                      1.     G-CMA-ES
                                                                      2.     DMS-L-PSO
                                                                      3.     L-CMA-ES




 Empirical distribution over all successful functions for 30-D (SP here
 means the Success Performance for each problem. SP=mean (FEs for
 successful runs)*(# of total runs) / (# of successful runs). SPbest is the
 minimal FES of all algorithms for each problem.)
                                           -89-




CEC’06 Comparison Results
 Involved Algorithms
      DE (Zielinski & Laur, 2006): Differential Evolution
      DMS-C-PSO (Liang & Suganthan, 2006): Dynamic Multi-Swarm Particle
      Swarm Optimizer with the New Constraint-Handling Mechanism
      ε- DE [TS06] Constrained Differential Evolution with Gradient-Based
      Mutation and Feasible Elites
      GDE (Kukkonen & Lampinen, 2006) : Generalized Differential Evolution
      jDE-2 (Brest & Zumer, 2006 ): Self-adaptive Differential Evolution
      MDE (Mezura-Montes, et al. 2006): Modified Differential Evolution
      MPDE (Tasgetiren & Suganthan,2006): Multi-Populated DE Algorithm
      PCX (Ankur Sinha, et al, 2006): A Population-Based, Parent Centric
      Procedure
      PESO+ (Muñoz-Zavala et al, 2006): Particle Evolutionary Swarm Optimization
      Plus
      SaDE (Huang et al, 2006 ): Self-adaptive Differential Evolution Algorithm



                                           -90-
CEC’06 Comparison Results
        Problems: 24 minimization problems with constraints
        (Liang, 2006b)
        Runs / problem: 25 (total runs)
        Max_FES:            500,000
        Feasible Rate = (# of feasible runs) / total runs
        Success Rate = (# of successful runs) / total runs
        Success Performance = mean (FEs for successful
        runs)*(# of total runs) / (# of successful runs)
          The above three quantities are computed for each problem
          separately.
          Feasible Run: A run during which at least one feasible
          solution is found in Max_FES.
          Successful Run: A run during which the algorithm finds a
          feasible solution x satisfying f (x) − f (x*) ≤ 0.0001


                                       -91-




CEC’06 Comparison Results

Algorithms’ Parameters
DE        NP, F, CR
DMS-PSO   ω, c1, c2, Vmax, n, ns, R, L, L_FES
ε_DE      N, F, CR, Tc, Tmax, cp, Pg, Rg, Ne
GDE       NP, F, CR
jDE-2     NP, F, CR, k, l
MDE       µ, CR, Max_Gen, λ, Fα, Fβ
MPDE      F, CR, np1, np2
PCX       N, λ, r (a different N is used for g02)
PESO+     ω, c1, c2, n (not sensitive to ω, c1, c2)
SaDE      NP, LP, LS_gap



                                       -92-
CEC’06 Comparison Results
             Success Rate for Problems 1-11




                         -93-




CEC’06 Comparison Results
        Success Rate for Problems 12-19,21,23,24




                         -94-
CEC’06 Comparison Results
                                                                                             1st   ε_DE
                                                                                             2nd   DMS-PSO
                                                                                             3rd   MDE, PCX
                                                                                             5th   SaDE
                                                                                             6th   MPDE
                                                                                             7th   DE
                                                                                             8th   jDE-2
                                                                                             9th   GDE, PESO+




 Empirical distribution over all functions (SP here means the Success Performance for each
 problem. SP=mean (FEs for successful runs)*(# of total runs) / (# of successful runs).
 SPbest is the minimal FES of all algorithms for each problem.)
                                                  -95-




Acknowledgement
    Thanks to the SEAL’06 organizers for giving us this
    opportunity

    Our research into real-parameter evolutionary algorithms is
    financially supported by A*Star (Agency for Science,
    Technology and Research), Singapore

    Thanks to Jane Liang Jing for her contributions in PSO,
    and CEC05/06 benchmarking studies.

    Some slides on G3-PCX were from Prof Kalyanmoy Deb


                                                  -96-
References
 [A02] H. A. Abbass, "Self-adaptive pareto differential evolution," Proc. of the 2002 IEEE Congress on Evolutionary
    Computation, pp. 831-836, Honolulu, USA, 2002.
Agrafiotis, D. K. and Cedeno, W. (2002), "Feature selection for structure-activity correlation using binary particle
    swarms," Journal of Medicinal Chemistry, 45, 1098-1107.
[ARG04] C. W. Ahn, R. S. Ramakrishna, and D. E. Goldberg, “Real-coded Bayesian optimization algorithm:
       bringing the strength of BOA into the continuous world,” LNCS (GECCO'04), 3102: 840-851, 2004.
Angeline, P. J. (1998) "Using selection to improve particle swarm optimization," Proc. of the IEEE Congress on
    Evolutionary Computation (CEC 1998), Anchorage, Alaska, USA.
Auger, A. and Hansen, N. (2005), "A restart CMA evolution strategy with increasing population size," the 2005
    IEEE Congress on Evolutionary Computation, Vol.2, pp.1769 - 1776, 2005
Auger, A. and Hansen, N. (2005), "Performance evaluation of an advanced local search evolutionary algorithm,"
    2005 IEEE Congress on Evolutionary Computation, Vol.2, pp.1777 - 1784, 2005
[BO04] B. V. Babu, G. Onwubolu (eds.), New Optimization Techniques in Engineering, Springer, Berlin, 2004.
[BFM-02] T. Back, D. B. Fogel and Z. Michalewicz, Handbook of Evolutionary Computation, 1st edition, IOP
    Publishing Ltd., UK, 2002.
Ballester, P. J., Stephenson, J., Carter, J. N. and Gallagher, K. (2005), "Real-Parameter Optimization Performance
    Study on the CEC-2005 benchmark with SPC-PNX," 2005 IEEE Congress on Evolutionary Computation,
    Vol.1, pp.498-505, 2005
S. Baskar, R. T. Zheng, A. Alphones, N. Q. Ngo, P. N. Suganthan (2005a), “Particle Swarm Optimization for the
    Design of Low-Dispersion Fiber Bragg Gratings,” IEEE Photonics Technology Letters, 17(3):615-617, 2005.
S. Baskar, A. Alphones, P. N. Suganthan, J. J. Liang (2005b), “Design of Yagi-Uda Antennas Using Particle Swarm
    Optimization with new learning strategy”, IEE Proc. on Antenna and Propagation 152(5):340-346 OCT. 2005
Bergh van den F. and Engelbrecht, A. P. (2004) "A cooperative approach to particle swarm optimization," IEEE
    Transactions on Evolutionary Computation; 8 (3):225~239
[BD96] H. Bersini, M. Dorigo, L. Gambarella, S. Langerman and G. Seront, "Results of the first international
    contest on evolutionary optimization," P. IEEE Int. Conf. on Evolutionary Computation, Nagoya, Japan, 1996.
Blackwell, T. M. and Bentley, P. J. (2002 ) "Don't push me! Collision-avoiding swarms," Proc. of the IEEE
    Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA.
Brest, J., Zumer, V. and Maucec, M. S. (2006) "Self-Adaptive Differential Evolution Algorithm in Constrained
    Real-Parameter Optimization," in Proc. of the sixth IEEE Conference on Evolutionary Computation, 2006.
[BGBMZ06] J. Brest, S. Greiner, B. Boškovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in
    differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on
    Evolutionary Computation, 2006
 [BT00] P. A. N. Bosman and D. Thierens, “Continuous iterated density estimation evolutionary algorithms within
    the IDEA framework,” in M. Pelikan and A. O. Rodriguez (eds.), P. of the Optimization by Building and Using
    Probabilistic Models OBUPM Workshop at GECCO’00, pp.197-200, San Francisco, CA, Morgan Kauffmann.
Clerc, M. (1999), "The swarm and the queen: towards a deterministic and adaptive particle swarm optimization,"
    Proc of the IEEE Congress on Evolutionary Computation (CEC 1999), pp. 1951-1957, 1999.
Clerc M. and Kennedy, J. (2002), "The particle swarm: explosion, stability, and convergence in a multidimensional
    complex space," IEEE Transactions on Evolutionary Computation, vol. 6, pp. 58-73, 2002.
 Coello Coello A. C., Constraint-handling using an evolutionary multiobjective optimization technique. Civil
    Engineering and Environmental Systems, 17:319-346, 2000.
Coello Coello A. C. and Lechuga, M. S. 2002 "MOPSO: a proposal for multiple objective particle swarm
    optimization," Proc. of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA.
[CDG99] D. Corne, M. Dorigo, and F. Glover (eds.), New Ideas in Optimization, McGraw-Hill, 1999.
[DK05] Das, S. and Konar, A. (2005), "An improved differential evolution scheme for noisy optimization
    problems," Lecture Notes in Computer Science, 3776: 417-421, 2005.
Deb, K.,Pratap, A., Agarwal, S. and Meyarivan (2002), T. "A fast and elitist multi-objective genetic algorithms:
    NSGA-II," IEEE Trans. on Evolutionary Computation. 6(2): 182-197, 2002
[Deb-01] K. Deb. Multi-Objective Optimization using Evolutionary Algorithms, Wiley, Chichester, 2001
    (http://www.iitk.ac.in/kangal/papers).
Eberhart R. C. and Shi, Y. 2001 "Tracking and optimizing dynamic systems with particle swarms," Proceedings of
    the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea. pp. 94-97.
Eberhart, R. C. and Shi, Y. (2001), "Particle swarm optimization: developments, applications and resources,"
    Proc. of the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea.
Eberhart, R.C and Kennedy, J. (1995), "A new optimizer using particle swarm theory," Proc. of the Sixth
    International Symposium on Micromachine and Human Science, Nagoya, Japan, pp. 39-43.
Eiben, A. E., Hauw van der J. K. and Hemert van J. I. (1998), "Adaptive penalties for evolutionary graph coloring,"
    in Artificial Evolution '97, pp. 95-106, Berlin.
Coello Coello A. Carlos (1999), "Self-Adaptation Penalties for GA-based Optimization," in Proceedings of the 1999
    Congress on Evolutionary Computation, Washington, D.C., Vol. 1, pp. 573-580.
Fan, H. Y. and Shi, Y. (2001), "Study on Vmax of particle swarm optimization," Proc. of the Workshop on Particle
    Swarm Optimization 2001, Indianapolis, IN.
[FJ06] V. Feoktistov and S. Janaqi, “New energetic selection principle in differential evolution” ENTERPRISE
    INFORMATION SYSTEMS VI :151-157, 2006
[GMK02] R. Gaemperle, S. D. Mueller and P. Koumoutsakos, “A Parameter Study for Differential Evolution”, A.
    Grmela, N. E. Mastorakis, editors, Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation,
    WSEAS Press, pp. 293-298, 2002.
Garcia-Martinez C. and Lozano M. (2005), "Hybrid real-coded genetic algorithms with female and male
    differentiation," the 2005 IEEE Congress on Evolutionary Computation, Vol.1, pp. 896 - 903, 2005.
N. Hansen (2006), ‘Covariance Matrix Adaptation (CMA) Evolution Strategy,’ Tutorial at PPSN-06, Reykjavik,
    Iceland, http://www.bionik.tu-berlin.de/user/niko/cmaesintro.html
[HO-04] N. Hansen and A. Ostermeier, “Completely derandomized self-adaptation in evolution strategies”,
    Evolutionary Computation, 9(2), pp. 159-195, 2001.
Homaifar, A., Lai, S. H. Y. and Qi X., (1994)"Constrained Optimization via Genetic Algorithms," Simulation,
    62(4):242-254.
Hu X. and Eberhart, R. C. (2002) "Multiobjective optimization using dynamic neighborhood particle swarm
    optimization," Proc. IEEE Congress on Evolutionary Computation (CEC 2002), Hawaii, pp. 1677-1681, 2002.
Hu X. and Eberhart, R. C. (2002)" Solving constrained nonlinear optimization problems with particle swarm
    optimization". Proc. of World Multiconf. on Systemics, Cybernetics and Informatics (SCI 2002), Orlando, USA.
[HQS06] Huang, V. L., Qin, A. K. and Suganthan, P. N. (2006), "Self-adaptive Differential Evolution Algorithm for
    Constrained Real-Parameter Optimization," Proc. of IEEE Congress on Evolutionary Computation, 2006.
V. L. Huang, P. N. Suganthan and J. J. Liang, “Comprehensive Learning Particle Swarm Optimizer for Solving
    Multiobjective Optimization Problems”, Int. J of Intelligent Systems, Vol. 21, No. 2, pp. 209-226, Feb. 2006.
[IKL03] J. Ilonen, J.-K. Kamarainen and J. Lampinen, “Differential Evolution Training Algorithm for Feed-Forward
    Neural Networks,” In: Neural Processing Letters vol. 7, no. 1, pp. 93-105. 2003.
[IL04] A. Iorio and X. Li. “Solving Rotated Multi-objective Optimization Problems Using Differential Evolution,”
    Australian Conference on Artificial Intelligence 2004: 861-872
Joines, J. and Houck, C. (1994), "On the use of non-stationary penalty functions to solve nonlinear constrained
    optimization problems with GAs," Proc. IEEE Conf. on Evolutionary Computation, pp. 579-584, Orlando, Florida.
Kennedy, J. and Mendes, R. (2002), "Population structure and particle swarm performance," Proc. of the IEEE
    Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA.
Kennedy, J. (1999) "Small worlds and mega-minds: effects of neighborhood topology on particle swarm
    performance," Proc. IEEE Congress on Evolutionary Computation (CEC 1999), pp. 1931-1938.
Kennedy, J. and Eberhart, R. C. 1997 "A discrete binary version of the particle swarm algorithm," Proc. of the
    World Multiconference on Systemics, Cybernetics and Informatics, Piscataway, NJ, pp. 4104-4109, 1997.
Kennedy, J. and Eberhart, R. C. (1995), "Particle swarm optimization," Proc. of IEEE International Conference on
    Neural Networks, Piscataway, NJ, pp. 1942-1948.
Kukkonen S. and Lampinen, J. (2006) "Constrained Real-Parameter Optimization with Generalized Differential
    Evolution," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006.
[LELP00] P. Larrañaga, R. Etxeberria, J. A. Lozano, and J. M. Peña, “Optimization in continuous domains by
    learning and simulation of Gaussian networks,” in Proc. of the Workshop in Optimization by Building and using
    Probabilistic Models. A Workshop within the GECCO 2000, pp. 201-204, Las Vegas, Nevada, USA.
[LLB01] P. Larrañaga, J. A. Lozano and E. Bengoetxea, “Estimation of distribution algorithms based on
    multivariate normal and Gaussian networks,” Technical Report KZZA-IK-1-01, Department of Computer
    Science and Artificial Intelligence, University of the Basque Country, 2001.
[LL01] P. Larrañaga and J. A. Lozano. Estimation of Distribution Algorithms: A New Tool for Evolutionary
    Computation, Kluwer Academic Publishers, 2001.
Laskari, E. C., Parsopoulos, K. E. and Vrahatis, M. N. 2002, "Particle swarm optimization for minimax problems,"
    Proc. of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA.
Chang-Yong Lee, Xin Yao, “Evolutionary programming using mutations based on the Levy probability
    distribution,” IEEE Transactions on Evolutionary Computation, Volume 8, Issue 1, Feb. 2004, Page(s): 1-13.
[L05] Li, X. (2005), "Efficient Differential Evolution using Speciation for Multimodal Function Optimization",
    Proc. of GECCO'05, eds. Beyer, Hans-Georg et al., Washington DC, USA, 25-29 June, p.873-880.
Liang, J. J., Suganthan, P. N., Qin, A. K. and Baskar, S (2006a). "Comprehensive Learning Particle Swarm
      Optimizer for Global Optimization of Multimodal Functions," IEEE Transactions on Evolutionary
      Computation, June 2006.
Liang, J. J. Runarsson Thomas Philip, Mezura-Montes Efren, Clerc, Maurice, Suganthan, P. N., Coello Coello A.
      Carlos and Deb, K. (2006b) "Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on
      Constrained Real-Parameter Optimization," TR, Nanyang Technological University, Singapore, March 2006.
J. J. Liang, S. Baskar, P. N. Suganthan and A. K. Qin (2006c), “Performance Evaluation of Multiagent Genetic
    Algorithm”, Natural Computing, Volume 5, Number 1, pp. 83-96, March 2006.
J. J. Liang, P. N. Suganthan, C. C. Chan, V. L. Huang (2006d), “Wavelength detection in FBG sensor network using
      tree search dynamic multi-swarm particle swarm optimizer”, IEEE Photonics Tech. Lets, 18(12):1305-1307.
J. J. Liang & P. N. Suganthan (2006e), "Dynamic Multi-Swarm Particle Swarm Optimizer with a Novel
    Constraint-Handling Mechanism," in Proc. of IEEE Congress on Evolutionary Computation, pp. 9-16, 2006.
J. J. Liang, P. N. Suganthan and K. Deb, "Novel composition test functions for numerical global optimization,"
      Proc. of IEEE International Swarm Intelligence Symposium, pp. 68-75, 2005.
[LL02] J. Liu and J. Lampinen. “Adaptive Parameter Control of Differential Evolution,” In Proceedings of the 8th
      International Conference on Soft Computing (MENDEL 2002), pages 19–26, 2002.
Lovbjerg, M., Rasmussen, T. K. and Krink, T. (2001) "Hybrid particle swarm optimiser with breeding and
      subpopulations," Proc. of the Genetic and Evolutionary Computation Conference 2001 (GECCO 2001).
Lovbjerg M. and Krink, T. (2002)"Extending particle swarm optimisers with self-organized criticality," Proc. of the
      IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA.
[LLIB06] J. A. Lozano, P. Larrañaga, I. Inza, and E. Bengoetxea (eds.), Towards a New Evolutionary Computation:
      Advances on Estimation of Distribution Algorithms Series: Studies in Fuzzy & Soft Computing, Springer, 2006.
Mezura-Montes, E. , Velazquez-Reyes J. and Coello Coello A. Carlos (2006) "Modified Differential Evolution for
      Constrained Optimization," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006.
E. Mezura-Montes and C. A. C. Coello, "A Numerical Comparison of some Multiobjective-Based Techniques to
    Handle Constraints in Genetic Algorithms", Technical Report EVOCINV-03-2002, Evolutionary Computation
      Group at CINVESTAV, México, 2002.
Michalewicz, Z. and Attia, N. (1994), "Evolutionary Optimization of Constrained Problems," in Proceedings of the
      third Annual Conference on Evolutionary Programming, pp. 98-108.
Michalewicz, Z.(1992), "Genetic Algorithms + Data Structures = Evolution Programs," Springer-Verlag.
Molina, D., Herrera, F. and Lozano, M. (2005)"Adaptive Local Search Parameters for Real-Coded Memetic
      Algorithms," the 2005 IEEE Congress on Evolutionary Computation, Vol.1, pp.888 - 895, 2005
[MP96] H. Mühlenbein and G. Paaß, “From recombination of genes to the estimation of distributions I. Binary
      parameters,” in Proc. of Parallel Problem Solving from Nature - PPSN IV. LNCS 1411, pp. 178-187, 1996.
Munoz-Zavala A. E., Hernandez-Aguirre A., Villa-Diharce E. R. and Botello-Rionda, S. (2006) "PESO+ for
      Constrained optimization," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006.
[OSE05] G. H. M. Omran, A. Salman, and A. P. Engelbrecht, "Self-adaptive differential evolution," International
      Conference on Computational Intelligence and Security (CIS'05), Xi'an, China, December 2005.
Parsopoulos, K. E. and Vrahatis, M. N. (2004), "UPSO- A Unified Particle Swarm Optimization Scheme," Lecture
      Series on Computational Sciences, pp. 868-873.
Parsopoulos, K. E. and Vrahatis, M. N. 2002 "Particle swarm optimization method for constrained optimization
      problems," Proc. of the Euro-International Symposium on Computational Intelligence.
Peram, T., Veeramachaneni, K. and Mohan, C. K. (2003), "Fitness-distance-ratio based particle swarm
    optimization," Proc. IEEE Swarm Intelligence Symposium, Indianapolis, Indiana, USA, pp. 174-181.
Posik, P. (2005), "Real-Parameter Optimization Using the Mutation Step Co-evolution," the 2005 IEEE Congress on
      Evolutionary Computation, Vol.1, pp.872 - 879, 2005
[PSL05] K. V. Price, R. Storn, J. Lampinen, Differential Evolution - A Practical Approach to Global Optimization,
      Springer, Berlin, 2005.
[P99] K. V. Price, “An Introduction to Differential Evolution”. In: Corne, D., Dorigo, M.,and Glover, F. (eds.): New
      Ideas in Optimization. McGraw-Hill, London (UK):79-108,1999.
[P97] K. V. Price, "Differential evolution vs. the functions of the 2nd ICEO," in Proc. of 1997 IEEE International
      Conference on Evolutionary Computation (ICEC’97), pp. 153-157, Indianapolis, USA, 1997.
[P94] K. V. Price, "Genetic annealing," Dr. Dobb's Journal, Miller Freeman, San Mateo, Ca., pp. 127-132, 1994.
[QS05] Qin, A. K. and Suganthan, P. N. (2005), "Self-adaptive differential evolution algorithm for numerical
      optimization," the 2005 IEEE Congress on Evolutionary Computation, Vol.2, pp.1785-1791, 2005
Ratnaweera, A., Halgamuge, S. and Watson, H. (2002) ,"Particle swarm optimization with self-adaptive acceleration
      coefficients," Proc. of Int. Conf. on Fuzzy Systems and Knowledge Discovery 2002 (FSKD 2002), pp. 264-268,
Ratnaweera, A., Halgamuge, S. and Watson, H. (2004), "Self-organizing hierarchical particle swarm optimizer with
    time varying accelerating coefficients," IEEE Transactions on Evolutionary Computation, 8(3): 240-255.
Ray, T. and Liew, K. M. (2002), "A swarm metaphor for multiobjective design optimization," Engineering
    Optimization, vol. 34, pp. 141-153, 2002.
Ray, T. (2002) "Constraint Robust Optimal Design using a Multiobjective Evolutionary Algorithm," in Proceedings
     of Congress on Evolutionary Computation 2002 (CEC'2002), vol.1, pp. 419-424,
[RKP05] Ronkkonen, J. , Kukkonen, S. and Price, K.V. , (2005), "Real-Parameter Optimization with Differential
     Evolution," the 2005 IEEE Congress on Evolutionary Computation, Vol.1, pp.506 - 513, 2005
[RK96] S. Rudlof and M. Köppen, “Stochastic hill climbing by vectors of normal distributions,” in Proc. of the First
    Online Workshop on Soft Computing (WSC1), Nagoya, Japan, 1996.
Runarsson, T.P., Xin Yao, “Stochastic ranking for constrained evolutionary optimization”, IEEE Transactions on
     Evolutionary Computation, Vol. 4, Issue 3, Sept. 2000 Page(s):284 - 294
 [SD98] M. Sebag and A. Ducoulombier, “Extending population-based incremental learning to continuous search
     spaces,” Proc. Conference on Parallel Problem Solving from Nature - PPSN V, pp. 418-427, Berlin, 1998.
Shi, Y. and Eberhart, R. C. , (1998), "A modified particle swarm optimizer," Proc. of the IEEE Congress on
     Evolutionary Computation (CEC 1998), Piscataway, NJ, pp. 69-73.
Shi, Y. and Eberhart, R. C. (2001), "Particle swarm optimization with fuzzy adaptive inertia weight," Proc. of the
     Workshop on Particle Swarm Optimization, Indianapolis, IN.
Shi, Y. and Krohling, R. A. 2002 "Co-evolutionary particle swarm optimization to solve Min-max problems,"
     Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA.
Sinha, A., Srinivasan A. and Deb, K. (2006) "A Population-Based, Parent Centric Procedure for Constrained Real-
     Parameter Optimization," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006.
Sinha, A., Tiwari, S. and Deb, K. (2005), "A Population-Based, Steady-State Procedure for Real-Parameter
     Optimization," the 2005 IEEE Congress on Evolutionary Computation, Vol.1, pp.514 - 521, 2005
[SP95] R. Storn and K. V. Price, “Differential Evolution - a simple and efficient adaptive scheme for global
     optimization over continuous spaces,” TR-95-012, ICSI, http://http.icsi.berkeley.edu/~storn/litera.html
[SP97] R. Storn and K. V. Price, “Differential evolution-A simple and Efficient Heuristic for Global Optimization
     over Continuous Spaces,” Journal of Global Optimization, 11:341-359,1997.
Suganthan, P. N., Hansen, N., Liang, J. J., Deb, K., Chen, Y.-P., Auger, A. and Tiwari, S. (2005), "Problem Definitions
     and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization," Technical Report,
     Nanyang Technological University, Singapore, May 2005 and KanGAL Report #2005005, IIT Kanpur, India.
Suganthan, P. N.,(1999)"Particle swarm optimizer with neighborhood operator," Proc. of the IEEE Congress on
     Evolutionary Computation (CEC 1999), Piscataway, NJ, pp. 1958-1962.
[TS06] Takahama T. and Sakai, S. (2006) "Constrained Optimization by the Constrained Differential Evolution with
     Gradient-Based Mutation and Feasible Elites," Proc. IEEE Congress on Evolutionary Computation, 2006.
Takahama, T. and Sakai, S. (2005), "Constrained optimization by constrained particle swarm optimizer with ε-level
    control," Proc. IEEE Int. Workshop on Soft Computing as Transdisciplinary Science and Technology
    (WSTST’05), Muroran, Japan, May 2005, pp. 1019-1029.
Tasgetiren, M. Fatih and Suganthan, P. N. (2006) "A Multi-Populated Differential Evolution Algorithm for Solving
     Constrained Optimization Problem," Proc. IEEE Congress on Evolutionary Computation, 2006.
[TPV06] Tasoulis, DK., Plagianakos, VP., and Vrahatis, MN.(2006),“Differential evolution algorithms for finding
     predictive gene subsets in microarray data” Int Fed Info Proc 204: 484-491 2006.
[T06] J. Teo, "Exploring dynamic self-adaptive populations in differential evolution," Soft Computing - A Fusion of
     Foundations, Methodologies and Applications, 10(8): 673-686, 2006.
Tessema, B. and Yen, G. G., “A Self Adaptive Penalty Function Based Algorithm for Constrained Optimization”, IEEE
    Congress on Evolutionary Computation, CEC 2006, 16-21 July 2006, Page(s): 246-253.
Venkatraman, S. and Yen, G. G., “A Generic Framework for Constrained Optimization Using Genetic Algorithms”,
    IEEE Transactions on Evolutionary Computation, Volume 9, Issue 4, Aug. 2005, Page(s): 424-435.
X. Yao, Y. Liu, G. M. Lin, “Evolutionary programming made faster”, IEEE Transactions on Evolutionary
     Computation, Volume 3, Issue 2, July 1999 Page(s):82 - 102
Yuan, B. and Gallagher, M. (2005), "Experimental results for the special session on real-parameter optimization at
     CEC 2005: a simple, continuous EDA," IEEE Congress on Evolutionary Computation, pp.1792 - 1799, 2005
[Z02] D. Zaharie, “Critical values for the control parameters of differential evolution algorithms,” in Proc. of the
    8th Int. Conference on Soft Computing (MENDEL 2002), pp. 62-67, Brno, Czech Republic, June 2002.
[ZX03] W. J. Zhang and X. F. Xie, “DEPSO: hybrid particle swarm with differential evolution operator,” IEEE
    International Conference on Systems, Man & Cybernetics (SMC), Washington DC, USA, 2003: 3816-3821.
Zielinski, K. and Laur, R. (2006), "Constrained Single-Objective Optimization Using Differential Evolution", in
     Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006.

  • 1. Recent Advances in Real-Parameter Evolutionary Algorithms Presenters: P. N. Suganthan & Qin Kai School of Electrical and Electronic Engineering Nanyang Technological University, Singapore Some Software Resources Available from: http://www.ntu.edu.sg/home/epnsugan SEAL’06 October 2006 RP-EAs & Related Issues Covered Benchmark Test Functions Methods Modeled After conventional GAs CMA-ES / Covariance Matrix Adapted Evolution Strategy PSO / Particle Swarm Optimization DE / Differential Evolution EDA / Estimation of Distribution Algorithms CEC’05, CEC’06 benchmarking Coverage is biased in favor of our research and results of the CEC-05/06 competitions while overlap with other SEAL tutorial presentations is reduced. -2-
  • 2. Benchmark Test Functions Resources available from http://www.ntu.edu.sg/home/epnsugan (limited to our own work) From Prof Xin Yao’s group http://www.cs.bham.ac.uk/research/projects/ecb/ Includes diverse problems. -3- Why do we require benchmark problems? Why do we need test functions? To evaluate a novel optimization algorithm’s properties on different types of landscapes To compare different optimization algorithms Types of benchmarks Bound constrained problems Constrained problems Single / Multi-objective problems Static / Dynamic optimization problems Multimodal problems (niching, crowding, etc.) -4-
  • 3. Shortcomings in Bound Constrained Benchmarks Some properties of benchmark functions may make them unrealistic or may be exploited by some algorithms: Global optimum having the same parameter values for different variables / dimensions Global optimum at the origin Global optimum lying in the center of the search range Global optimum on the bound Local optima lying along the coordinate axes No linkage among the variables / dimensions, or the same linkages over the whole search range Repetitive landscape structure over the entire space Do real-world problems possess these properties? Liang et al. 2006c (Natural Computation) has more details. -5- How to Solve? Shift the global optimum to a random position so that the global optimum has different parameter values for different dimensions Rotate the functions as below: F(x) = f(R * x), where R is an orthogonal rotation matrix Use different kinds of benchmark functions and different rotation matrices to compose a single test problem Mix different properties of different basic test functions together to destroy repetitive structures and to construct novel Composition Functions -6-
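The shift-and-rotate recipe above can be sketched in a few lines (a minimal illustration, not the official CEC test-suite code; the function name and the use of a QR factorization to obtain the orthogonal matrix R are our own choices):

```python
import numpy as np

def make_shifted_rotated(f, dim, lower, upper, seed=0):
    """Wrap a basic benchmark f into F(x) = f(R @ (x - o)).

    o is a random shift inside the bounds (so the optimum no longer sits
    at the origin or centre) and R is a random orthogonal matrix that
    introduces linkage among the variables.
    """
    rng = np.random.default_rng(seed)
    o = rng.uniform(lower, upper, size=dim)               # shifted optimum
    # Random orthogonal matrix via QR decomposition of a Gaussian matrix
    R, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return lambda x: f(R @ (np.asarray(x) - o)), o

# Example with the sphere function: the minimum moves from the origin to o.
sphere = lambda z: float(np.sum(z**2))
F, o = make_shifted_rotated(sphere, dim=5, lower=-100, upper=100)
```

Because R is orthogonal, the rotation preserves distances, so the function value at the new optimum o is exactly the basic function's value at its own optimum.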
  • 4. Novel Composition Test Functions Compose the standard benchmark functions to construct a more challenging function with a randomly located global optimum and several randomly located deep local optima with different linkage properties over the search space. Gaussian functions are used to combine these benchmark functions and to blur individual functions’ structures mainly around the transition regions. More details in Liang et al. 2005. -7- Novel Composition Test Functions -8-
  • 5. Novel Composition Test Functions -9- Novel Composition Test Functions -10-
  • 6. A couple of Examples Many composition functions are available from our homepage Composition Function 1 (F1): Composition Function 2 (F2): Made of Sphere Functions Made of Griewank’s Functions Similar analysis is needed for other benchmarks too -11- Methods Modeled After Conventional GAs Some resources available from http://www.iitk.ac.in/kangal/ -12-
  • 7. Major Steps in Standard GAs Population initialization Fitness evaluation Selection Crossover (or recombination) Mutation Replacement -13- Mean-Centric Vs Parent-Centric Recombination Individual recombination preserves the mean of parents (mean-centric) – works well if the population is randomly initialized within the search range and the global optimum is near the centre of the search range. Individual recombination biases offspring towards a particular parent, but with equal probability to each parent (parent-centric) -14-
  • 8. Mean-Centric Recombination [Deb-01] SPX : Simplex crossover (Tsutsui et al. 1999) UNDX : Unimodal Normally distributed crossover (Ono et al. 1997, Kita 1998) BLX : Blend crossover (Eshelman et al. 1993) Arithmetic crossover (Michalewicz et al. 1991) Please refer to [Deb-01] for details. Parent-Centric Recombination SBX (Simulated Binary crossover) : variable-wise Vector-wise parent-centric recombination (PCX) And many more … -15- Minimal Generation Gap (MGG) Model 1. Select μ parents randomly 2. Generate λ offspring using a recombination scheme 3. Choose two parents at random from the population 4. One is replaced with the best of the λ offspring and the other with a solution chosen by roulette-wheel selection from a combination of the λ offspring and the two parents Steady-state and elitist GA Typical values to use: population N = 300, λ=200, μ=3 (UNDX) and μ=n+1 (SPX) (n: dimensions) Satoh, H., Yamamura, M. and Kobayashi, S.: Minimal generation gap model for GAs considering both exploration and exploitation. Proc. of IIZUKA 96, pp. 494-497 (1997). -16-
  • 9. Generalized Generation Gap (G3) Model 1. Select the best parent and μ-1 other parents randomly 2. Generate λ offspring using a recombination scheme 3. Choose two parents at random from the population 4. Form a combination of the two parents and the λ offspring, choose the best two solutions and replace the chosen two parents Parameters to tune: λ, μ and N -17- Mutation In RP-EAs, the main difference between crossover and mutation may be the number of parents used for the operation: in mutation, just 1; in crossover, 2 or more (excluding DE’s terminology). Several mutation operators have been developed for real parameters: Random mutation: the same as re-initialization of a variable Gaussian mutation: Xnew = Xold + N(0, σ) Cauchy mutation with EP (Xin Yao et al. 1999) Levy mutation / adaptive Levy mutation with EP (Lee, Xin Yao, 2004) Making use of these RP-crossover (recombination) and RP-mutation operators, RP-EAs can be developed in a manner similar to the binary GA. -18-
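The three simplest operators above can be sketched as follows (a hedged illustration; the function names and the per-variable application of the Gaussian and Cauchy steps are our own choices — EP implementations typically also self-adapt the step size σ):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_mutation(x, lower, upper):
    """Re-initialize one randomly chosen variable within its bounds."""
    y = np.array(x, dtype=float)
    d = rng.integers(len(y))
    y[d] = rng.uniform(lower, upper)
    return y

def gaussian_mutation(x, sigma=0.1):
    """Xnew = Xold + N(0, sigma), applied to every variable."""
    x = np.asarray(x, dtype=float)
    return x + rng.normal(0.0, sigma, size=x.shape)

def cauchy_mutation(x, gamma=0.1):
    """Heavier-tailed Cauchy steps occasionally make long jumps,
    which helps escape local optima (the idea behind fast EP)."""
    x = np.asarray(x, dtype=float)
    return x + gamma * rng.standard_cauchy(size=x.shape)
```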
  • 10. Covariance Matrix Adapted Evolution Strategy CMAES By Nikolaus Hansen & collaborators Extensive Tutorials / Codes available from http://www.bionik.tu-berlin.de/user/niko/cmaesintro.html http://www.bionik.tu-berlin.de/user/niko/cmaes_inmatlab.html -19- Basic Evolution Strategy [BFM-02] Randomly generate an initial population of M solutions. Compute the fitness values of these M solutions. Use all vectors as parents to create nb offspring vectors by Gaussian mutation. Calculate the fitness of nb vectors, and prune the population to M fittest vectors. Go to the next step if the stopping condition is satisfied. Otherwise, go to Step 2. Choose the fittest one in the population in the last generation as the optimal solution. -20-
  • 11. Covariance Matrix Adapted Evolution Strategy CMAES The basic ES generates solutions around the parents CMAES keeps track of the evolution path. It updates the mean location of the population, the covariance matrix and the standard deviation of the sampling normal distribution after every generation. The solutions are ranked, which is not a straightforward step in multi-objective / constrained problems. Ranked solutions contribute according to their ranks to the updating of the mean position, covariance matrix, and the step-size parameter. Covariance analysis reveals the interactions / couplings among the parameters. -21- CMAES Iterative Equations [Hansen 2006] -22-
  • 12. CMAES [HO-04] Performs well on rotated / linked problems with bound constraints. It learns global linkages between parameters. The cumulation operation integrates a degree of forgetting. The CMAES implementation is self-adaptive. CMAES is not highly effective in solving constrained problems, multi-objective problems, etc. CMAES is not as simple as PSO / DE. -23- Outline of PSO PSO PSO variants CLPSO DMS-L-PSO Constrained Optimization Constraint-Handling Techniques DMS-C-PSO PSO for Multi-Objective Optimization -24-
  • 13. Particle Swarm Optimizer Two versions of PSO Global version (may not be used alone to solve multimodal problems): learning from the personal best (pbest) and the best position achieved by the whole population (gbest): Vi^d ← Vi^d + c1 * rand1i^d * (pbesti^d − Xi^d) + c2 * rand2i^d * (gbest^d − Xi^d), Xi^d ← Xi^d + Vi^d (i – particle counter, d – dimension counter) Local version: learning from the pbest and the best position achieved in the particle's neighborhood (lbest): Vi^d ← Vi^d + c1 * rand1i^d * (pbesti^d − Xi^d) + c2 * rand2i^d * (lbestk^d − Xi^d), Xi^d ← Xi^d + Vi^d The random numbers (rand1 & rand2) should be generated for each dimension of each particle in every iteration. -26-
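One iteration of the global version can be sketched as below (a minimal illustration; the function name and the vectorized NumPy form are our own, and this basic update has no inertia weight or constriction factor — those variants appear on the following slides):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(X, V, pbest, pbest_val, f, c1=2.0, c2=2.0):
    """One global-version PSO iteration. Fresh rand1/rand2 values are
    drawn for each dimension of every particle, as the slide stresses."""
    gbest = pbest[np.argmin(pbest_val)]
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    vals = np.array([f(x) for x in X])
    better = vals < pbest_val                 # update personal bests
    pbest = np.where(better[:, None], X, pbest)
    pbest_val = np.where(better, vals, pbest_val)
    return X, V, pbest, pbest_val

# Minimizing the sphere function with 20 particles in 5 dimensions:
f = lambda x: float(np.sum(x**2))
X = rng.uniform(-10, 10, (20, 5)); V = np.zeros((20, 5))
pbest, pbest_val = X.copy(), np.array([f(x) for x in X])
for _ in range(50):
    X, V, pbest, pbest_val = pso_step(X, V, pbest, pbest_val, f)
```

Note that pbest_val can only improve or stay constant, so the best-so-far value is monotone even when the particles themselves overshoot.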
  • 14. Particle Swarm Optimizer where c1 and c2 in the equation are the acceleration constants; rand1i^d and rand2i^d are two random numbers in the range [0,1]; Xi = (Xi^1, Xi^2, ..., Xi^D) represents the position of the ith particle; Vi = (Vi^1, Vi^2, ..., Vi^D) represents the rate of position change (velocity) of particle i; pbesti = (pbesti^1, pbesti^2, ..., pbesti^D) represents the best previous position (the position giving the best fitness value) of the ith particle; gbest = (gbest^1, gbest^2, ..., gbest^D) represents the best previous position of the population; lbestk = (lbestk^1, lbestk^2, ..., lbestk^D) represents the best previous position achieved by the neighborhood of the particle. -27- PSO variants Modifying the Parameters Inertia weight ω (Shi & Eberhart, 1998; Shi & Eberhart, 2001; Eberhart & Shi, 2001, …) Constriction coefficient (Clerc, 1999; Clerc & Kennedy, 2002) Time varying acceleration coefficients (Ratnaweera et al. 2004) Linearly decreasing Vmax (Fan & Shi, 2001) …… Using Topologies Extensive experimental studies (Kennedy, 1999; Kennedy & Mendes, 2002, …) Dynamic neighborhood (Suganthan, 1999; Hu and Eberhart, 2002; Peram et al. 2003) Combining the global version and local version together (Parsopoulos and Vrahatis, 2004), named the unified PSO Fully informed PSO or FIPS (Mendes & Kennedy 2004) and so on … -28-
  • 15. PSO variants and Applications Hybrid PSO Algorithms PSO + selection operator (Angeline, 1998) PSO + crossover operator (Lovbjerg, 2001) PSO + mutation operator (Lovbjerg & Krink, 2002; Blackwell & Bentley, 2002; … …) PSO + cooperative approach (Bergh & Engelbrecht, 2004) … Various Optimization Scenarios & Applications Binary Optimization (Kennedy & Eberhart, 1997; Agrafiotis et al. 2002) Constrained Optimization (Parsopoulos et al. 2002; Hu & Eberhart, 2002; …) Multi-objective Optimization (Ray et al. 2002; Coello et al. 2002/04; …) Dynamic Tracking (Eberhart & Shi 2001; …) Yagi-Uda antenna (Baskar et al. 2005b), Photonic FBG design (Baskar et al. 2005a), FBG sensor network design (Liang et al. June 2006) -29- Comprehensive Learning PSO Comprehensive Learning Strategy: Vi^d ← w * Vi^d + c * randi^d * (pbest_fi(d)^d − Xi^d), where fi = [fi(1), fi(2), ..., fi(D)] defines which particle's pbest particle i should follow on each dimension d. pbest_fi(d)^d can be the corresponding dimension of any particle's pbest, including its own pbest, and the decision depends on the probability Pc, referred to as the learning probability, which can take different values for different particles. For each dimension of particle i, we generate a random number. If this random number is larger than Pci, this dimension will learn from its own pbest; otherwise it will learn from another randomly chosen particle's pbest. -30-
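The per-dimension exemplar assignment can be sketched as follows (an illustration with our own function name; the size-2 tournament used to pick the "other" particle is described on the next slide):

```python
import numpy as np

def assign_exemplars(i, D, pbest_val, Pc_i, rng):
    """Build f_i for particle i: for each dimension, learn from particle
    i's own pbest with probability 1 - Pc_i; otherwise take the winner
    of a size-2 tournament between two other particles' pbests."""
    n = len(pbest_val)
    f_i = np.empty(D, dtype=int)
    for d in range(D):
        if rng.random() > Pc_i:
            f_i[d] = i                                   # own pbest
        else:
            others = [j for j in range(n) if j != i]
            a, b = rng.choice(others, size=2, replace=False)
            f_i[d] = a if pbest_val[a] < pbest_val[b] else b
    return f_i

# The velocity update then reads, per dimension d:
#   V[i,d] = w * V[i,d] + c * rand() * (pbest[f_i[d], d] - X[i,d])
```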
  • 16. Comprehensive Learning PSO The tournament selection (2) procedure is employed to choose the exemplar for each dimension of each particle. To ensure that a particle learns from good exemplars and to minimize the time wasted on poor directions, we allow the particle to learn from the exemplars until the particle ceases improving for a certain number of generations called the refreshing gap m (7 generations), after which we re-assign the dimensions of this particle. -31- Comprehensive Learning PSO We observe three main differences between the CLPSO and the original PSO: Instead of using particle’s own pbest and gbest as the exemplars, all particles’ pbests can potentially be used as the exemplars to guide a particle’s flying direction. Instead of learning from the same exemplar particle for all dimensions, each dimension of a particle in general can learn from different pbests for different dimensions for some generations. In other words, in any one iteration, each dimension of a particle may learn from the corresponding dimension of different particle’s pbest. Instead of learning from two exemplars (gbest and pbest) at the same time in every generation as in the original PSO, each dimension of a particle learns from just one exemplar for some generations. -32-
  • 17. Comprehensive Learning PSO Experiments have been conducted on 16 test functions which can be grouped into four classes. The results demonstrated good performance of the CLPSO in solving multimodal problems when compared with eight other recent variants of the PSO. (The details can be found in Liang et al. 2006a. Matlab codes are also available for CLPSO and the 8 PSO variants.) -33- Dynamic Multi-Swarm PSO Constructed based on the local version of PSO with a novel neighborhood topology. This new neighborhood structure has two important characteristics: Small Sized Swarms Randomized Regrouping Schedule -34-
  • 18. Dynamic Multi-Swarm PSO Regroup The population is divided into small swarms randomly. These sub-swarms use their own particles to search for better solutions and converge to good solutions. The whole population is regrouped into new sub-swarms periodically. New sub-swarms continue the search. This process is continued until a stop criterion is satisfied -35- Dynamic Multi-Swarm PSO Adaptive Self-Learning Strategy Each particle has a corresponding Pci. Every R generations, keep_idi is decided by Pci. If the generated random number is larger than or equal to Pci, this dimension will be set at the value of its own pbest and keep_idi(d) is set to 1. Otherwise keep_idi(d) is set to 0 and the dimension will learn from the lbest and its own pbest as in the PSO with constriction coefficient: If keep_idi^d = 0: Vi^d ← 0.729 * Vi^d + 1.49445 * rand1i^d * (pbesti^d − Xi^d) + 1.49445 * rand2i^d * (lbestk^d − Xi^d), Vi^d = min(Vmax^d, max(−Vmax^d, Vi^d)), Xi^d ← Xi^d + Vi^d; otherwise Xi^d ← pbesti^d (somewhat similar to DE & CPSO) -36-
  • 19. Dynamic Multi-Swarm PSO Adaptive Self-Learning Strategy Assume Pc is normally distributed in a range with mean mean(Pc) and a standard deviation of 0.1. Initially, mean(Pc) is set at 0.5 and different Pc values conforming to this normal distribution are generated for each individual in the current population. During every generation, the Pc values associated with the particles which find new pbests are recorded. When the sub-swarms are regrouped, the mean of the normal distribution of Pc is recalculated according to all the recorded Pc values corresponding to successful movements during this period. The record of the successful Pc values will be cleared once the normal distribution’s mean is recalculated. As a result, the proper Pc value range for the current problem can be learned to suit the particular problem. -37- DMS-PSO with Local Search Since we achieve a larger diversity using DMS-PSO, the convergence rate is reduced at the same time. In order to alleviate this problem and to enhance local search capabilities, a local search is incorporated: every L generations, the pbests of five randomly chosen particles will be used as the starting points and the Quasi-Newton method (fminunc(…,…,…)) is employed to do the local search with L_FEs as the maximum FEs. At the end of the search, all particles form a single swarm to become a global PSO version. The best solution achieved so far is refined using the Quasi-Newton method every L generations with 5*L_FEs as the maximum FEs. If the local search results in improvements, the nearest pbest is replaced / updated. -38-
  • 20. Constrained Optimization Optimization of constrained problems is an important area in the optimization field. In general, constrained problems can be transformed into the following form: Minimize f(x), x = [x1, x2, ..., xD], subject to: gi(x) ≤ 0, i = 1,...,q; hj(x) = 0, j = q+1,...,m. q is the number of inequality constraints and m-q is the number of equality constraints. -39- Constrained Optimization For convenience, the equality constraints can be transformed into inequality form: |hj(x)| − ε ≤ 0, where ε is the allowed tolerance. Then, the constrained problems can be expressed as: Minimize f(x), x = [x1, x2, ..., xD], subject to Gj(x) ≤ 0, j = 1,...,m, where G1,...,q(x) = g1,...,q(x) and Gq+1,...,m(x) = |hq+1,...,m(x)| − ε. If we denote with F the feasible region and S the whole search space, then x ∈ F if x ∈ S and all constraints are satisfied. In this case, x is a feasible solution. -40-
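This transformation is easy to express in code (a small sketch; the helper names and the default tolerance are our own choices):

```python
def unify_constraints(gs, hs, eps=1e-4):
    """Turn inequality constraints g_i(x) <= 0 and equality constraints
    h_j(x) = 0 into one list of G_j(x) <= 0 via |h_j(x)| - eps <= 0."""
    G = list(gs)
    for h in hs:
        G.append(lambda x, h=h: abs(h(x)) - eps)   # h=h binds each h early
    return G

def is_feasible(x, G):
    """x is feasible iff every unified constraint is satisfied."""
    return all(Gj(x) <= 0 for Gj in G)
```

For example, with g(x) = x - 1 and the equality h(x) = x, only points with x ≤ 1 and |x| ≤ ε remain feasible.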
  • 21. Constraint-Handling (CH) Techniques Penalty Functions: Static Penalties (Homaifar et al.,1994;…) Dynamic Penalty (Joines & Houck,1994; Michalewicz& Attia,1994;…) Adaptive Penalty (Eiben et al. 1998; Coello, 1999; Tessema & Gary Yen 2006, …) … Superiority of feasible solutions Start with a population of feasible individuals (Michalewicz, 1992; Hu & Eberhart, 2002; …) Feasible favored comparing criterion (Ray, 2002; Takahama & Sakai, 2005; … ) Specially designed operators (Michalewicz, 1992; …) … -41- Constraint-Handling (CH) Techniques Separation of objective and constraints Stochastic Ranking (Runarsson & Yao, TEC, Sept 2000) Co-evolution methods (Liang 2006e) Multi-objective optimization techniques (Coello, 2000; Mezura-Montes & Coello, 2002;… ) Feasible solution search followed by optimization of objective (Venkatraman & Gary Yen, 2005) … While most CH techniques are modular (i.e. we can pick one CH technique and one search method independently), there are also CH techniques embedded as an integral part of the EA. More details in the tutorial presented by Prof Z. Michalewicz -42-
  • 22. DMS-PSO for Constrained Optimization Novel Constraint-Handling Mechanism Suppose that there are m constraints. The population is divided into n sub-swarms with sn members in each sub-swarm, and the population size is ps (ps = n*sn). n is a positive integer and ‘n = m’ is not required. The objective and constraints are assigned to the sub-swarms adaptively according to the difficulties of the constraints. In this way, the population is expected to evolve feasible individuals with high fitness values. -43- DMS-PSO’s Constraint-Handling Mechanism How to assign the objective and constraints to each sub-swarm? Define (a > b) = 1 if a > b, and (a > b) = 0 if a ≤ b. Then pi = Σ_{j=1,...,ps} (gi(xj) > 0) / ps, i = 1, 2, ..., m, i.e. the fraction of the population violating the ith constraint, with p = [p1, p2, ..., pm]. The objective's share is fp = 1 − Σ_{i=1,...,m} (pi / m), so that fp + Σ_{i=1,...,m} (pi / m) = 1. -44-
  • 23. DMS-PSO’s Constraint-Handling Mechanism For each sub-swarm: Using roulette selection according to fp and pi/m, assign the objective function or a single constraint as its target. If sub-swarm i is assigned to improve constraint j, set obj(i) = j; if sub-swarm i is assigned to improve the objective function, set obj(i) = 0. Assign swarm members for this sub-swarm: sort the unassigned particles according to obj(i), and assign the best and the sn-1 worst particles to sub-swarm i. -45- DMS-PSO’s Comparison Criteria 1. If obj(i) = obj(j) = k (particles i and j handling the same constraint k), particle i wins if Gk(xi) < Gk(xj) with Gk(xj) > 0; or V(xi) < V(xj) with Gk(xi), Gk(xj) ≤ 0; or f(xi) < f(xj) with V(xi) == V(xj). 2. If obj(i) = obj(j) = 0 (particles i and j handling f(x)) or obj(i) ≠ obj(j) (i and j handling different objectives), particle i wins if V(xi) < V(xj), or f(xi) < f(xj) with V(xi) == V(xj). Here V(x) = Σ_{i=1,...,m} weighti · Gi(x) · (Gi(x) ≥ 0), with weighti = (1/Gi_max) / Σ_{i=1,...,m} (1/Gi_max), i = 1, 2, ..., m. -46-
  • 24. DMS-PSO for Constrained Optimization Step 1: Initialization - Initialize ps particles (position X and velocity V), calculate f(X), Gj(X) (j=1,2...,m) for each particle. Step 2: Divide the population into sub-swarms and assign obj for each sub-swarm using the novel constraint-handling mechanism, calculate the mean value of Pc (except in the first generation, mean(Pc)=0.5), calculate Pc for each particle. Then empty Pc. Step 3: Update the particles according to their objectives; update pbest and gbest of each particle according to the same comparison criteria, record the Pc value if pbest is updated. -47- DMS-PSO for Constrained Optimization Step 5: Local Search- Every L generations, randomly choose 5 particles’ pbest and start local search with Sequential Quadratic Programming (SQP) method using these solutions as start points (fmincon(…,…,…) function in Matlab is employed). The maximum fitness evaluations for each local search is L_FEs. Step 6: If FEs≤0.7*Max_FEs, go to Step 3. Otherwise go to Step 7. Step 7: Merge the sub-swarms into one swarm and continue PSO (Global Single Swarm). Every L generations, start local search using gbest as start points using 5*L_FEs as the Max FEs. Stop search if FEs≥Max_FEs -48-
  • 25. Multi-Objective Optimization Mathematically, we can use the following formula to express multi-objective optimization problems (MOP): Minimize y = f(x) = (f1(x), f2(x), ..., fm(x)), x ∈ [Xmin, Xmax], subject to gj(x) ≤ 0, j = 1,...,q, and hk(x) = 0, k = q+1,...,m. The objective of multi-objective optimization is to find a set of solutions which can express the Pareto-optimal set well; thus there are two goals for the optimization: 1) Convergence to the Pareto-optimal set 2) Diversity of solutions in the Pareto-optimal set -49- Extending PSO for MOP Mainly used techniques: How to update pbest and gbest (or lbest) Execute non-domination comparison with pbest or gbest Execute non-domination comparison among all particles’ pbests and their offspring in the entire population How to choose gbest (or lbest) Choose gbest (or lbest) from the recorded non-dominated solutions Choose good local guides How to keep diversity Crowding distance sorting Subpopulation -50-
  • 26. Extending CLPSO and DMS-PSO for MOP Combine an external archive which is used to record the non-dominated solutions found so far. Use non-dominated sorting and crowding-distance sorting, which have been used in NSGA-II (Deb et al., 2002), to sort the members in the external archive. Choose exemplars (CLPSO) or lbest (DMS-PSO) from the non-dominated solutions recorded in the external archive. Experiments show that MO-CLPSO and DMS-MO-PSO are both capable of converging to the true Pareto optimal front and maintaining a good diversity along the Pareto front. More details in Huang et al. Feb 2006 on MOCLPSO. Also please refer to SEAL-06 tutorials by Profs X. D. Li & G. Yen. -51- Outline Motivation for Differential Evolution (DE) Classic DE DE Variants Self-adaptive DE (SaDE) Trends in DE Research Real-parameter Estimation of Distribution Algorithms (rEDA) -52-
  • 27. Motivation for DE DE, proposed by Price and Storn in 1995 [PS95], was motivated by attempts to use Genetic Annealing [P94] to solve the Chebychev polynomial fitting problem. Genetic annealing is a population-based, combinatorial optimization algorithm that implements a thermodynamic annealing criterion via thresholds. Although successfully applied to solve many combinatorial tasks, genetic annealing could not solve the Chebychev problem well. Price modified genetic annealing by using floating-point encoding instead of a bit-string one, arithmetic operations instead of logical ones, and population-driven differential mutation instead of bit-inversion mutation, and removed the annealing criterion. Storn suggested creating separate parent and children populations. Eventually, the Chebychev problem could be solved effectively and efficiently. DE is closely related to many other multi-point derivative-free search methods [PSL05] such as evolution strategies, genetic algorithms, the Nelder and Mead direct search and controlled random search. -53- DE at a glance Characteristics Population-based stochastic direct search Self-referential mutation Simple but powerful Reliable, robust and efficient Easy parallelization Floating-point encoding Basic components Initialization Trial vector generation Mutation Recombination Replacement Feats DE demonstrated promising performance in the 1st and 2nd ICEO [BD96][P97] -54-
  • 28. Insight into classic DE (DE/rand/1/bin) Initialization A population Px,0 of Np D-dimensional parameter vectors xi,0 = [x1,i,0, …, xD,i,0], i = 1,…,Np is randomly generated within the prescribed lower and upper bounds bL = [b1,L, …, bD,L] and bU = [b1,U, …, bD,U] Example: the initial value (at generation g = 0) of the jth parameter of the ith vector is generated by: xj,i,0 = randj[0,1] · (bj,U − bj,L) + bj,L, j = 1,…,D, i = 1,…,Np Trial vector generation At the gth generation, a trial population Pu,g consisting of Np D-dimensional trial vectors ui,g is generated via mutation and recombination operations applied to the current population Px,g Differential mutation: with respect to each vector xi,g in the current population, called the target vector, a mutant vector vi,g = [v1,i,g, …, vD,i,g] is generated by adding a scaled, randomly sampled vector difference to a base vector randomly selected from the current population -55- Insight into classic DE (DE/rand/1/bin) Example: at the gth generation, the ith mutant vector vi,g with respect to the ith target vector xi,g in the current population is generated by vi,g = xr0,g + F·(xr1,g − xr2,g), i≠r0≠r1≠r2, mutation scale factor F ∈ (0,1+) Discrete recombination: with respect to each target vector xi,g in the current population, a trial vector ui,g is generated by crossing the target vector xi,g with the corresponding mutant vector vi,g under a pre-specified crossover rate Cr ∈ [0,1] Example: at the gth generation, the jth parameter of the ith trial vector is generated by: uj,i,g = vj,i,g if randj[0,1] ≤ Cr or j = jrand; uj,i,g = xj,i,g otherwise Replacement If the trial vector ui,g has a better objective function value than that of its corresponding target vector xi,g, it replaces the target vector in the (g+1)th generation; otherwise the target vector remains in the (g+1)th generation -56-
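The full DE/rand/1/bin generation loop above fits in a dozen lines of NumPy (a hedged sketch, not Storn's reference implementation; the function name and the ≤ comparison in the replacement step are our own choices):

```python
import numpy as np

def de_rand_1_bin(P, fvals, f, F=0.5, Cr=0.9, rng=None):
    """One generation of classic DE/rand/1/bin: differential mutation,
    binomial crossover, then one-to-one replacement."""
    rng = rng or np.random.default_rng()
    Np, D = P.shape
    newP, newv = P.copy(), fvals.copy()
    for i in range(Np):
        # base vector r0 and difference vectors r1, r2, all distinct from i
        r0, r1, r2 = rng.choice([j for j in range(Np) if j != i], 3, replace=False)
        v = P[r0] + F * (P[r1] - P[r2])                 # differential mutation
        jrand = rng.integers(D)
        mask = rng.random(D) <= Cr
        mask[jrand] = True                              # at least one gene from v
        u = np.where(mask, v, P[i])                     # binomial crossover
        fu = f(u)
        if fu <= fvals[i]:                              # one-to-one replacement
            newP[i], newv[i] = u, fu
    return newP, newv
```

Because a target vector is replaced only by a trial vector that is at least as good, the best objective value in the population never worsens from one generation to the next.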
  • 29. Illustration of classic DE (figures in the x1–x2 plane: the target vector xi,g; the base vector xr0,g and two randomly selected vectors xr1,g, xr2,g) -57- -58-
  • 30. Illustration of classic DE (figures: the four operating vectors in 2D continuous space; mutation vi,g = xr0,g + F·(xr1,g − xr2,g)) -59- -60-
  • 31. Illustration of classic DE (figures: crossover producing the trial vector ui,g; replacement yielding xi,g+1) -61- -62-
  • 32. Differential vector distribution A population of 5 vectors 20 generated difference vectors Most important characteristics of DE: self-referential mutation! ES: fixed probability distribution function with adaptive step-size DE: adaptive distribution of difference vectors with fixed step-size -63- DE variants Modification of different components of DE can result in many DE variants: variants Initialization Uniform distribution and Gaussian distribution Trial vector generation Base vector selection Random selection without replacement: r0=ceil(randi[0,1]·Np) Permutation selection: r0=permute[i] Random offset selection: r0=(i+rg)%Np (e.g. rg=2) Biased selection: global best, local best and better -64-
  • 33. DE variants Differential mutation One difference vector: F·(xr1 − xr2) Two difference vectors: F·(xr1 − xr2) + F·(xr3 − xr4) Mutation scale factor F Crucial role: balance exploration and exploitation Dimension dependence: jitter (rotation variant) and dither (rotation invariant) Randomization: different distributions of F DE/rand/1: Vi,G = Xr1,G + F·(Xr2,G − Xr3,G) DE/best/1: Vi,G = Xbest,G + F·(Xr1,G − Xr2,G) DE/current-to-best/1: Vi,G = Xi,G + F·(Xbest,G − Xi,G) + F·(Xr1,G − Xr2,G) DE/rand/2: Vi,G = Xr1,G + F·(Xr2,G − Xr3,G + Xr4,G − Xr5,G) DE/best/2: Vi,G = Xbest,G + F·(Xr1,G − Xr2,G + Xr3,G − Xr4,G) -65- DE variants Recombination Discrete recombination (crossover) (rotation variant) One-point and multi-point Exponential Binomial (uniform) Arithmetic recombination Line recombination (rotation invariant) Intermediate recombination (rotation variant) Extended intermediate recombination (rotation variant) (figure: discrete, line and intermediate recombination of parents xa and xb in the x1–x2 plane) -66-
  • 34. DE variants Crossover rate CR∈[0,1] Decomposable (small CR) and indecomposable functions (large CR) Degenerate cases in the trial vector generation For example, in classic DE, r1=r2, r0=r1, r0=r2, i=r0, i=r1, i=r2 Better to generate mutually exclusive indices for target vector, base vector and vectors constituting the difference vector Replacement One-to-one replacement Neighborhood replacement -67- Motivation for self-adaptation in DE The performance of DE on different problems depends on: Population size Strategy and the associated parameter setting to generate trial vectors Replacement scheme It is hard to choose a unique combination to successfully solve any problem at hand Population size usually depends on the problem scale and complexity During evolution, different strategies coupled with specific parameter settings may favor different search stages Replacement schemes influence the population diversity Trial and error scheme may be a waste of computational time & resources Automatically adapt the configuration in DE so as to generate effective trial vectors during evolution -68-
Related works Practical guideline [SP95], [SP97], [CDG99], [BO04], [PSL05], [GMK02]: for example, Np ∈ [5D,10D]; initial choice of F=0.5 and CR=0.1/0.9; increase NP and/or F if premature convergence happens. Conflicting conclusions have been reached with respect to different test functions. Fuzzy adaptive DE [LL02]: use fuzzy logic controllers whose inputs incorporate the relative function values and individuals of successive generations to adapt the mutation and crossover parameters. Self-adaptive Pareto DE [A02]: encode crossover rate in each individual, which is simultaneously evolved with other parameters. Mutation scale factor is generated for each variable according to Gaussian distribution N(0,1). Zaharie [Z02]: theoretically study the DE behavior so as to adapt the control parameters of DE according to the evolution of population diversity. Self-adaptive DE (1) [OSE05]: encode mutation scale factor in each individual, which is simultaneously evolved with other parameters. Crossover rate is generated for each variable according to Gaussian distribution N(0.5,0.15). DE with self-adaptive population [T06]: population size, mutation scale factor and crossover rate are all encoded into each individual. Self-adaptive DE (2) [BGBMZ06]: encode mutation scale factor and crossover rate in each individual, which are reinitialized according to two new probability variables. -69-
  • 36. Self-adaptive DE (SaDE) The success and failure counts nsi(g) and nfi(g), i = 1, …, num_st, are stored for each of the most recent LP generations in a sliding memory: when generation L+1 is recorded, the entries of the earliest generation are dropped. The selection probability pi is updated by pi = Σ nsi / (Σ nsi + Σ nfi), where the sums run over the learning period. Go to the 2nd step. Parameter adaptation Mutation scale factor (F): for each target vector in the current population, we randomly generate the F value according to a normal distribution N(0.5,0.3). Therefore, 99% of F values fall within the range [−0.4,1.4] Crossover rate (CRj): when applying strategy j with respect to a target vector, the corresponding CRj value is generated according to an assumed distribution, and those CRj values that have generated trial vectors successfully entering the next generation are recorded and updated every LP generations so as to update the parameters of the CRj distribution. We hereby assume that each CRj, j = 1,…,num_st is normally distributed with its mean and standard deviation initialized to 0.5 and 0.1, respectively -71- Instantiations In CEC’05, we use 2 strategies: DE/rand/1/bin: Vi,G = Xr1,G + F·(Xr2,G − Xr3,G) DE/current-to-best/2/bin: Vi,G = Xi,G + F·(Xbest,G − Xi,G) + F·(Xr1,G − Xr2,G + Xr3,G − Xr4,G) LP = 50 In CEC’06, we employ 4 strategies: DE/rand/1/bin: Vi,G = Xr1,G + F·(Xr2,G − Xr3,G) DE/rand/2/bin: Vi,G = Xr1,G + F·(Xr2,G − Xr3,G + Xr4,G − Xr5,G) DE/current-to-best/2/bin: Vi,G = Xi,G + F·(Xbest,G − Xi,G) + F·(Xr1,G − Xr2,G + Xr3,G − Xr4,G) DE/current-to-rand/1: Vi,G = Xi,G + F·(Xr1,G − Xi,G) + F·(Xr2,G − Xr3,G) LP = 50 -72-
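The probability update can be sketched as follows (an illustration with our own function name; the inputs are the per-strategy success/failure totals over the learning period, and the normalization across strategies together with the small eps floor are assumptions borrowed from the published SaDE description rather than shown on the slide):

```python
def update_probs(ns, nf, eps=0.01):
    """ns[k], nf[k]: totals of successful / discarded trial vectors for
    strategy k over the learning period LP. The per-strategy success
    rate is S_k = ns_k / (ns_k + nf_k); the rates are then normalized
    so the selection probabilities p_k sum to one (eps avoids a zero
    probability that would lock a strategy out forever)."""
    S = [n / (n + f) if (n + f) > 0 else eps for n, f in zip(ns, nf)]
    S = [max(s, eps) for s in S]
    Z = sum(S)
    return [s / Z for s in S]
```

For instance, a strategy that succeeded 8 times out of 10 attempts will be selected far more often in the next learning period than one that succeeded 2 times out of 10.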
• 37. Local search enhancement
 To improve convergence speed, we apply a local search procedure every 500 generations:
 We choose n = 0.05·Np individuals: the individual with the best objective function value, plus n−1 individuals randomly selected from the top 50% of the current population
 We perform the local search by applying the Quasi-Newton method to the selected n individuals
-73-
SaDE for constrained optimization
 Modification of the replacement procedure
 Each trial vector ui,g is compared with its corresponding target vector xi,g in the current population, in terms of both the objective function value and the degree of constraint violation
 ui,g replaces xi,g if any of the following conditions is satisfied:
   ui,g is feasible while xi,g is not
   ui,g and xi,g are both feasible, and ui,g has an objective function value better than or equal to that of xi,g
   ui,g and xi,g are both infeasible, but ui,g has a smaller degree of overall constraint violation
-74-
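The three replacement conditions above can be sketched as a single comparison function. The interface (scalar objective values and scalar overall violations, with 0 meaning feasible) is our assumption for illustration.

```python
def trial_replaces_target(f_trial, v_trial, f_target, v_target):
    """Feasibility-based replacement: f_* are objective values,
    v_* are overall constraint violations (0 = feasible)."""
    trial_feasible = v_trial == 0
    target_feasible = v_target == 0
    if trial_feasible and not target_feasible:
        return True                   # feasible trial beats infeasible target
    if trial_feasible and target_feasible:
        return f_trial <= f_target    # both feasible: compare objectives
    if not trial_feasible and not target_feasible:
        return v_trial < v_target     # both infeasible: compare violations
    return False                      # infeasible trial vs feasible target
```

Note that a feasible trial vector replaces an infeasible target regardless of objective values, which drives the population toward the feasible region first.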
• 38. SaDE for constrained optimization
 The overall constraint violation is evaluated by a weighted average of the constraint violations:
   v(x) = Σi=1..m wi·Gi(x) / Σi=1..m wi,  with weighting factors wi = 1/Gmaxi
 where Gmaxi is the maximum violation of the ith constraint obtained so far
-75-
Overview of DE research trends
 (Chart: number of DE publications per year in the ISI Web of Knowledge database.)
-76-
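The weighted violation formula on this slide can be implemented directly; the handling of constraints that have never been violated (Gmaxi = 0) is our own assumption here, since the slide does not specify it.

```python
def overall_violation(G, Gmax):
    """v(x) = sum_i w_i * G_i(x) / sum_i w_i with w_i = 1 / Gmax_i.
    G: per-constraint violations G_i(x) >= 0; Gmax: running maxima."""
    # Constraints never violated so far get zero weight (our assumption)
    w = [1.0 / g if g > 0 else 0.0 for g in Gmax]
    total_w = sum(w)
    if total_w == 0:
        return 0.0  # no constraint has ever been violated
    return sum(wi * gi for wi, gi in zip(w, G)) / total_w
```

Dividing each violation by its running maximum normalizes constraints of very different magnitudes before averaging, so no single constraint dominates v(x).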
• 39. Overview of DE research trends
 DE applications:
   Digital filter design
   Optimal design of shell-and-tube heat exchangers
   Multiprocessor synthesis
   Optimization of an alkylation reaction
   Neural network learning
   Optimization of thermal cracker operation
   Diffraction grating design
   Optimization of non-linear chemical processes
   Crystallographic characterization
   Optimum planning of cropping patterns
   Beam weight optimization in radiotherapy
   Optimization of water pumping systems
   Heat transfer parameter estimation in a trickle bed reactor
   Optimal design of gas transmission networks
   Electricity market simulation
   Differential evolution for multi-objective optimization
   Scenario-integrated optimization of dynamic systems
   Bioinformatics
-77-
DE Resources
 Books: 1999 [CDG99], 2004 [BO04], 2005 [PSL05]
 Website: http://www.icsi.berkeley.edu/~storn/code.htm
 Matlab code of SaDE: http://www.ntu.edu.sg/home/epnsugan
 Blog: http://dealgorithm.blogspot.com/
-78-
• 40. Estimation of distribution algorithm (EDA)
 A newer approach to evolutionary computation, proposed in 1996 [MP96] and described in detail in [LL01, LLIB06]
 Characteristics:
   Population-based
   No crossover or mutation operators
   In each generation, the underlying probability distribution of the selected individuals is estimated by a machine learning model, which can capture the dependences among parameters
   A new population of individuals is sampled from the estimated probability distribution model
   The two previous steps are repeated until a stopping criterion is met
-79-
Real-parameter EDA (rEDA)
 Without parameter dependencies
   Stochastic Hill-Climbing with Learning by Vectors of Normal Distributions (SHCLVND) [RK96]
   Population-Based Incremental Learning – Continuous (PBILc) [SD98]
   Univariate Marginal Distribution Algorithm – Continuous (UMDAc) [LELP00]
 Bivariate parameter dependencies
   Mutual Information Maximization for Input Clustering – Continuous – Gaussian (MIMICGc) [LELP00]
 Multiple parameter dependencies
   Estimation of Multivariate Normal Algorithm: EMNAglobal, EMNAa, EMNAi [LLB01, LL01]
   Estimation of Gaussian Network Algorithm: EGNAee, EGNABGe, EGNABIC [LLB01, LL01]
   Iterated Density-Estimation Evolutionary Algorithm (IDEA) [BT00]
   Real-coded Bayesian Optimization Algorithm (rBOA) [ARG04]
-80-
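The estimate-then-sample loop described above can be made concrete with a toy continuous EDA in the spirit of UMDAc: an independent Gaussian is fitted per variable to the selected individuals, then the next population is sampled from it. This is our own illustration, not code from the cited papers, and all parameter choices here are arbitrary defaults.

```python
import random
import statistics

def umda_c(objective, dim, pop_size=50, sel=25, generations=100, bounds=(-5, 5)):
    """Toy UMDAc-style loop: truncation selection, per-variable Gaussian model."""
    pop = [[random.uniform(*bounds) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[:sel]  # truncation selection of the best individuals
        # Estimate an independent normal distribution per variable
        mu = [statistics.mean(ind[d] for ind in elite) for d in range(dim)]
        sd = [statistics.stdev(ind[d] for ind in elite) + 1e-9 for d in range(dim)]
        # Sample the new population from the estimated model
        pop = [[random.gauss(mu[d], sd[d]) for d in range(dim)]
               for _ in range(pop_size)]
    return min(pop, key=objective)
```

Because the model ignores dependences among variables, this sketch corresponds to the "without parameter dependencies" family listed above; MIMIC- and EMNA-style algorithms replace the per-variable Gaussians with richer joint models.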
• 41. rEDA resources
 Books: 2001 [LL01], 2006 [LLIB06]
 Websites:
   http://medal.cs.umsl.edu/
   http://www.iba.k.u-tokyo.ac.jp/english/EDA.htm
   http://www.evolution.re.kr/
-81-
Benchmarking Evolutionary Algorithms
 CEC’05 comparison results (single objective + bound constraints)
 CEC’06 comparison results (single objective + general constraints)
-82-
• 42. CEC’05 Comparison Results
 Algorithms involved in the comparison:
   BLX-GL50 (Garcia-Martinez & Lozano, 2005): Hybrid Real-Coded Genetic Algorithm with Female and Male Differentiation
   BLX-MA (Molina et al., 2005): Adaptive Local Search Parameters for Real-Coded Memetic Algorithms
   CoEVO (Posik, 2005): Mutation Step Co-evolution
   DE (Ronkkonen et al., 2005): Differential Evolution
   DMS-L-PSO: Dynamic Multi-Swarm Particle Swarm Optimizer with Local Search
   EDA (Yuan & Gallagher, 2005): Estimation of Distribution Algorithm
   G-CMA-ES (Auger & Hansen, 2005): A restart Covariance Matrix Adaptation Evolution Strategy with increasing population size
   K-PCX (Sinha et al., 2005): A population-based, steady-state real-parameter optimization algorithm with a parent-centric recombination operator, a polynomial mutation operator and a niched selection operation
   L-CMA-ES (Auger & Hansen, 2005): A restart local search Covariance Matrix Adaptation Evolution Strategy
   L-SaDE (Qin & Suganthan, 2005): Self-adaptive Differential Evolution algorithm with Local Search
   SPC-PNX (Ballester et al., 2005): A steady-state real-parameter GA with the PNX crossover operator
-83-
CEC’05 Comparison Results
 Problems: 25 minimization problems (Suganthan et al., 2005)
 Dimensions: D = 10, 30
 Runs / problem: 25
 Max_FES: 10000·D (100,000 for 10D; 300,000 for 30D; 500,000 for 50D)
 Initialization: uniform random initialization within the search space, except for problems 7 and 25, for which initialization ranges are specified. The same initializations are used for the comparison pairs (problems 1, 2, 3 & 4; problems 9 & 10; problems 15, 16 & 17; problems 18, 19 & 20; problems 21, 22 & 23; problems 24 & 25).
 Global optimum: all problems except 7 and 25 have the global optimum within the given bounds, and there is no need to search outside the given bounds for these problems. Problems 7 & 25 are exceptions without a search range and with the global optimum outside the specified initialization ranges.
-84-
• 43. CEC’05 Comparison Results
 Termination: terminate before reaching Max_FES if the error in the function value is 10^-8 or less
 Ter_Err: 10^-8 (termination error value)
 Successful run: a run during which the algorithm achieves the fixed accuracy level within Max_FES for the particular dimension
 Success Rate = (# of successful runs) / (# of total runs)
 Success Performance = mean(FEs for successful runs) · (# of total runs) / (# of successful runs)
-85-
CEC’05 Comparison Results
 Success rates of the 11 algorithms for 10-D
 *In the comparison, only the problems in which at least one algorithm succeeded at least once are considered.
-86-
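The two evaluation metrics defined above are simple to compute; the sketch below uses our own convention of marking unsuccessful runs with None. Success Performance inflates the mean FE count by the inverse of the success rate, so unreliable algorithms are penalized.

```python
def success_rate(fes_per_run, max_fes):
    """fes_per_run: FEs used per run, or None if accuracy was never reached."""
    successes = [f for f in fes_per_run if f is not None and f <= max_fes]
    return len(successes) / len(fes_per_run)

def success_performance(fes_per_run, max_fes):
    """SP = mean(FEs for successful runs) * (# total runs) / (# successful runs)."""
    successes = [f for f in fes_per_run if f is not None and f <= max_fes]
    if not successes:
        return float("inf")  # no successful run: SP is undefined / infinite
    mean_fes = sum(successes) / len(successes)
    return mean_fes * len(fes_per_run) / len(successes)
```

For example, three successful runs at 1000, 2000 and 3000 FEs out of four total runs give a success rate of 0.75 and SP = 2000 · 4/3 ≈ 2667.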
• 44. CEC’05 Comparison Results
 First three algorithms: 1. G-CMA-ES, 2. DE, 3. DMS-L-PSO
 (Chart: empirical distribution of normalized success performance over all successful functions for 10-D. SP is the Success Performance for each problem, SP = mean(FEs for successful runs) · (# of total runs) / (# of successful runs); SPbest is the smallest SP among all algorithms for each problem.)
-87-
CEC’05 Comparison Results
 Success rates of the 11 algorithms for 30-D
 *In the comparison, only the problems in which at least one algorithm succeeded at least once are considered.
-88-
• 45. CEC’05 Comparison Results
 First three algorithms: 1. G-CMA-ES, 2. DMS-L-PSO, 3. L-CMA-ES
 (Chart: empirical distribution of normalized success performance over all successful functions for 30-D, with SP and SPbest defined as before.)
-89-
CEC’06 Comparison Results
 Involved algorithms:
   DE (Zielinski & Laur, 2006): Differential Evolution
   DMS-C-PSO (Liang & Suganthan, 2006): Dynamic Multi-Swarm Particle Swarm Optimizer with the New Constraint-Handling Mechanism
   ε-DE [TS06]: Constrained Differential Evolution with Gradient-Based Mutation and Feasible Elites
   GDE (Kukkonen & Lampinen, 2006): Generalized Differential Evolution
   jDE-2 (Brest & Zumer, 2006): Self-adaptive Differential Evolution
   MDE (Mezura-Montes et al., 2006): Modified Differential Evolution
   MPDE (Tasgetiren & Suganthan, 2006): Multi-Populated DE Algorithm
   PCX (Sinha et al., 2006): A Population-Based, Parent-Centric Procedure
   PESO+ (Munoz-Zavala et al., 2006): Particle Evolutionary Swarm Optimization Plus
   SaDE (Huang et al., 2006): Self-adaptive Differential Evolution Algorithm
-90-
• 46. CEC’06 Comparison Results
 Problems: 24 minimization problems with constraints (Liang, 2006b)
 Runs / problem: 25 (total runs)
 Max_FES: 500,000
 Feasible Rate = (# of feasible runs) / (# of total runs)
 Success Rate = (# of successful runs) / (# of total runs)
 Success Performance = mean(FEs for successful runs) · (# of total runs) / (# of successful runs)
 The above three quantities are computed for each problem separately.
 Feasible run: a run during which at least one feasible solution is found within Max_FES.
 Successful run: a run during which the algorithm finds a feasible solution x satisfying f(x) − f(x*) ≤ 0.0001
-91-
CEC’06 Comparison Results
 Algorithms’ parameters:
   DE: NP, F, CR
   DMS-PSO: ω, c1, c2, Vmax, n, ns, R, L, L_FES
   ε_DE: N, F, CR, Tc, Tmax, cp, Pg, Rg, Ne
   GDE: NP, F, CR
   jDE-2: NP, F, CR, k, l
   MDE: µ, CR, Max_Gen, λ, Fα, Fβ
   MPDE: F, CR, np1, np2
   PCX: N, λ, r (a different N is used for g02)
   PESO+: ω, c1, c2, n (not sensitive to ω, c1, c2)
   SaDE: NP, LP, LS_gap
-92-
• 47. CEC’06 Comparison Results
 (Chart: success rates for problems 1-11.)
-93-
CEC’06 Comparison Results
 (Chart: success rates for problems 12-19, 21, 23, 24.)
-94-
• 48. CEC’06 Comparison Results
 Ranking: 1st ε_DE; 2nd DMS-PSO; 3rd MDE, PCX; 5th SaDE; 6th MPDE; 7th DE; 8th jDE-2; 9th GDE, PESO+
 (Chart: empirical distribution of normalized success performance over all functions, with SP = mean(FEs for successful runs) · (# of total runs) / (# of successful runs) for each problem and SPbest the smallest SP among all algorithms for each problem.)
-95-
Acknowledgement
 Thanks to the SEAL’06 organizers for giving us this opportunity
 Our research into real-parameter evolutionary algorithms is financially supported by A*Star (Agency for Science, Technology and Research), Singapore
 Thanks to Jane Liang Jing for her contributions to PSO and the CEC’05/06 benchmarking studies
 Some slides on G3-PCX were from Prof Kalyanmoy Deb
-96-
  • 49. References [A02] H. A. Abbass, "Self-adaptive pareto differential evolution," Proc. of 2000 IEEE Congress on Evolutionary Computation, pp. 831-836, Honolulu, USA, 2002. Agrafiotis, D. K.and Cedeno, W. (2002), "Feature selection for structure-activity correlation using binary particle swarms,” Journal of Medicinal Chemistry, 2002, 45, 1098-1107. [ARG04] C. W. Ahn, R. S. Ramakrishna, and D. E. Goldberg, “Real-coded Bayesian optimization algorithm: bringing the strength of BOA into the continuous world,” LNCS (GECCO'04), 3102: 840-851, 2004. Angeline, P. J. (1998) "Using selection to improve particle swarm optimization," Proc. of the IEEE Congress on Evolutionary Computation (CEC 1998), Anchorage, Alaska, USA. Auger, A. and Hansen, N. (2005), "A restart CMA evolution strategy with increasing population size," the 2005 IEEE Congress on Evolutionary Computation, Vol.2, pp.1769 - 1776, 2005 Auger, A. and Hansen, N. (2005), "Performance evaluation of an advanced local search evolutionary algorithm," 2005 IEEE Congress on Evolutionary Computation, Vol.2, pp.1777 - 1784, 2005 [BO04] B. V. Babu, G. Onwubolu (eds.), New Optimization Techniques in Engineering, Springer, Berlin, 2004. [BFM-02] T. Back, D. B. Fogel and Z. Michalewicz, Handbook of Evolutionary Computation, 1st edition, IOP Publishing Ltd., UK, 2002. Ballester, P. J., Stephenson, J., Carter, J. N. and Gallagher, K. (2005), "Real-Parameter Optimization Performance Study on the CEC-2005 benchmark with SPC-PNX," 2005 IEEE Congress on Evolutionary Computation, Vol.1, pp.498-505, 2005 S. Baskar, R. T. Zheng, A. Alphones, N. Q. Ngo, P. N. Suganthan (2005a), “Particle Swarm Optimization for the Design of Low-Dispersion Fiber Bragg Gratings,” IEEE Photonics Technology Letters, 17(3):615-617, 2005. S. Baskar, A. Alphones, P. N. Suganthan, J. J. Liang (2005b), “Design of Yagi-Uda Antennas Using Particle Swarm Optimization with new learning strategy”, IEE Proc. on Antenna and Propagation 152(5):340-346 OCT. 
2005 Bergh van den F. and Engelbrecht, A. P. (2004) "A cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation; 8 (3):225~239 [BD96] H. Bersini, M. Dorigo, L. Gambarella, S. Langerman and G. Seront, "Results of the first international contest on evolutionary optimization," P. IEEE Int. Conf. on Evolutionary Computation, Nagoya, Japan, 1996. Blackwell, T. M. and Bentley, P. J. (2002 ) "Don't push me! Collision-avoiding swarms," Proc. of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA. Brest, J., Zumer, V. and Maucec, M. S. (2006) "Self-Adaptive Differential Evolution Algorithm in Constrained Real-Parameter Optimization," in Proc. of the sixth IEEE Conference on Evolutionary Computation, 2006. [BGBMZ06] J. Brest, S. Greiner, B. Boškovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems," IEEE Transactions on Evolutionary Computation, 2006 [BT00] P. A. N. Bosman and D. Thierens, “Continuous iterated density estimation evolutionary algorithms within the IDEA framework,” in M. Pelikan and A. O. Rodriguez (eds.), P. of the Optimization by Building and Using Probabilistic Models OBUPM Workshop at GECCO’00, pp.197-200, San Francisco, CA, Morgan Kauffmann. Clerc, M. (1999), "The swarm and the queen: towards a deterministic and adaptive particle swarm optimization," Proc of the IEEE Congress on Evolutionary Computation (CEC 1999), pp. 1951-1957, 1999. Clerc M. and Kennedy, J. (2002), The particle swarm-explosion, stability, and convergence in a multidimensional complex space IEEE Transactions on Evolutionary Computation, vol. 6, pp. 58-73, 2002. Coello Coello A. C., Constraint-handling using an evolutionary multiobjective optimization technique. Civil Engineering and Environmental Systems, 17:319-346, 2000. Coello Coello A. C. and Lechuga, M. S. 
2002 "MOPSO: a proposal for multiple objective particle swarm optimization," Proc. of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA. [CDG99] D. Corne, M. Dorigo, and F. Glover (eds.), New Ideas in Optimization, McGraw-Hill, 1999. [DK05] Das, S., and Konar, A.,(2005) "An improved differential evolution scheme for noisy optimization problems."Lecture Notes Comput SC 3776: 417-421 2005. Deb, K.,Pratap, A., Agarwal, S. and Meyarivan (2002), T. "A fast and elitist multi-objective genetic algorithms: NSGA-II," IEEE Trans. on Evolutionary Computation. 6(2): 182-197, 2002 [Deb-01] K. Deb. Multi-Objective Optimization using Evolutionary Algorithms, Wiley, Chichester, 2001 (http://www.iitk.ac.in/kangal/papers). Eberhart R. C. and Shi, Y. 2001 "Tracking and optimizing dynamic systems with particle swarms," Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea. pp. 94-97. Eberhart, R. C. and Shi, Y. (2001), "Particle swarm optimization: developments, applications and resources," Proc. of the IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea. Eberhart, R.C and Kennedy, J. (1995), "A new optimizer using particle swarm theory," Proc. of the Sixth International Symposium on Micromachine and Human Science, Nagoya, Japan, pp. 39-43.
• 50. Eiben, A. E., Hauw van der J. K. and Hermert van J. I. (1998), "Adaptive penalties for evolutionary graph coloring," in Artificial Evolution '97, pp. 95-106, Berlin. Coello Coello A. Carlos (1999), "Self-Adaptation Penalties for GA-based Optimization," in Proceedings of the 1999 Congress on Evolutionary Computation, Washington, D.C., Vol. 1, pp. 573-580. Fan, H. Y. and Shi, Y. (2001), "Study on Vmax of particle swarm optimization," Proc. of the Workshop on Particle Swarm Optimization 2001, Indianapolis, IN. [FJ06] V. Feoktistov and S. Janaqi, "New energetic selection principle in differential evolution," Enterprise Information Systems VI: 151-157, 2006. [GMK02] R. Gaemperle, S. D. Mueller and P. Koumoutsakos, "A Parameter Study for Differential Evolution," A. Grmela, N. E. Mastorakis, editors, Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation, WSEAS Press, pp. 293-298, 2002. Garcia-Martinez C. and Lozano M. (2005), "Hybrid real-coded genetic algorithms with female and male differentiation," the 2005 IEEE Congress on Evolutionary Computation, Vol. 1, pp. 896-903, 2005. N. Hansen (2006), "Covariance Matrix Adaptation (CMA) Evolution Strategy," Tutorial at PPSN-06, Reykjavik, Iceland, http://www.bionik.tu-berlin.de/user/niko/cmaesintro.html [HO-04] N. Hansen, A. Ostermeier, "Completely derandomized self-adaptation in evolution strategies," Evolutionary Computation, 9(2), pp. 159-195, 2004. Homaifar, A., Lai, S. H. Y. and Qi X. (1994), "Constrained Optimization via Genetic Algorithms," Simulation, 62(4):242-254. Hu X. and Eberhart, R. C. (2002) "Multiobjective optimization using dynamic neighborhood particle swarm optimization," Proc. IEEE Congress on Evolutionary Computation (CEC 2002), Hawaii, pp. 1677-1681, 2002. Hu X. and Eberhart, R. C. (2002) "Solving constrained nonlinear optimization problems with particle swarm optimization," Proc. of World Multiconf. on Systemics, Cybernetics and Informatics (SCI 2002), Orlando, USA.
[HQS06] Huang, V. L., Qin, A. K. and Suganthan, P. N. (2006) "Self-adaptive Differential Evolution Algorithm for Constrained Real-Parameter Optimization," Proc. of IEEE Congress on Evolutionary Computation, 2006. V. L. Huang, P. N. Suganthan and J. J. Liang, "Comprehensive Learning Particle Swarm Optimizer for Solving Multiobjective Optimization Problems," Int. J. of Intelligent Systems, Vol. 21, No. 2, pp. 209-226, Feb. 2006. [IKL03] J. Ilonen, J.-K. Kamarainen and J. Lampinen, "Differential Evolution Training Algorithm for Feed-Forward Neural Networks," in: Neural Processing Letters vol. 7, no. 1, pp. 93-105, 2003. [IL04] A. Iorio and X. Li, "Solving Rotated Multi-objective Optimization Problems Using Differential Evolution," Australian Conference on Artificial Intelligence 2004: 861-872. Joines J. and Houck, C. (1994) "On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs," P. IEEE Conf. on Evolutionary Computation, pp. 579-584, Orlando, Florida. Kennedy, J. and Mendes, R. (2002), "Population structure and particle swarm performance," Proc. of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA. Kennedy, J. (1999) "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," Proc. IEEE Congress on Evolutionary Computation (CEC 1999), pp. 1931-1938. Kennedy, J. and Eberhart, R. C. (1997) "A discrete binary version of the particle swarm algorithm," Proc. of the World Multiconference on Systemics, Cybernetics and Informatics, Piscataway, NJ, pp. 4104-4109, 1997. Kennedy, J. and Eberhart, R. C. (1995), "Particle swarm optimization," Proc. of IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942-1948. Kukkonen S. and Lampinen, J. (2006) "Constrained Real-Parameter Optimization with Generalized Differential Evolution," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006. [LELP00] P. Larrañaga, R. Etxeberria, J. A.
Lozano, and J. M. Peña, “Optimization in continuous domains by learning and simulation of Gaussian networks,” in Proc. of the Workshop in Optimization by Building and using Probabilistic Models. A Workshop within the GECCO 2000, pp. 201-204, Las Vegas, Nevada, USA. [LLB01] P. Larrañaga, J. A. Lozano and E. Bengoetxea, “Estimation of distribution algorithms based on multivariate normal and Gaussian networks,” Technical Report KZZA-IK-1-01, Department of Computer Science and Artificial Intelligence, University of the Basque Country, 2001. [LL01] P. Larrañaga and J. A. Lozano. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation, Kluwer Academic Publishers, 2001. Laskari, E. C., Parsopoulos, K. E. and Vrahatis, M. N. 2002, "Particle swarm optimization for minimax problems," Proc. of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA. Chang-Yong Lee, Xin Yao, “Evolutionary programming using mutations based on the Levy probability distribution’, IEEE Transactions on Evolutionary Computation, Volume 8, Issue 1, Feb. 2004 Page(s):1 - 13 [L05] Li, X. (2005), "Efficient Differential Evolution using Speciation for Multimodal Function Optimization", Proc. of GECCO'05, eds. Beyer, Hans-Georg et al., Washington DC, USA, 25-29 June, p.873-880.
• 51. Liang, J. J., Suganthan, P. N., Qin, A. K. and Baskar, S. (2006a), "Comprehensive Learning Particle Swarm Optimizer for Global Optimization of Multimodal Functions," IEEE Transactions on Evolutionary Computation, June 2006. Liang, J. J., Runarsson, Thomas Philip, Mezura-Montes, Efren, Clerc, Maurice, Suganthan, P. N., Coello Coello A. Carlos and Deb, K. (2006b) "Problem Definitions and Evaluation Criteria for the CEC 2006 Special Session on Constrained Real-Parameter Optimization," TR, Nanyang Technological University, Singapore, March 2006. J. J. Liang, S. Baskar, P. N. Suganthan and A. K. Qin (2006c), "Performance Evaluation of Multiagent Genetic Algorithm," Natural Computing, Volume 5, Number 1, pp. 83-96, March 2006. J. J. Liang, P. N. Suganthan, C. C. Chan, V. L. Huang (2006d), "Wavelength detection in FBG sensor network using tree search dynamic multi-swarm particle swarm optimizer," IEEE Photonics Technology Letters, 18(12):1305-1307. J. J. Liang & P. N. Suganthan (2006e), "Dynamic Multi-Swarm Particle Swarm Optimizer with a Novel Constraint-Handling Mechanism," in Proc. of IEEE Congress on Evolutionary Computation, pp. 9-16, 2006. J. J. Liang, P. N. Suganthan and K. Deb, "Novel composition test functions for numerical global optimization," Proc. of IEEE International Swarm Intelligence Symposium, pp. 68-75, 2005. [LL02] J. Liu and J. Lampinen, "Adaptive Parameter Control of Differential Evolution," in Proceedings of the 8th International Conference on Soft Computing (MENDEL 2002), pages 19-26, 2002. Lovbjerg, M., Rasmussen, T. K. and Krink, T. (2001) "Hybrid particle swarm optimiser with breeding and subpopulations," Proc. of the Genetic and Evolutionary Computation Conference 2001 (GECCO 2001). Lovbjerg M. and Krink, T. (2002) "Extending particle swarm optimisers with self-organized criticality," Proc. of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA. [LLIB06] J. A. Lozano, P. Larrañaga, I. Inza, and E.
Bengoetxea (eds.), Towards a New Evolutionary Computation: Advances on Estimation of Distribution Algorithms Series: Studies in Fuzzy & Soft Computing, Springer, 2006. Mezura-Montes, E. , Velazquez-Reyes J. and Coello Coello A. Carlos (2006) "Modified Differential Evolution for Constrained Optimization," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006. E. Mezura-Montes and C. A. C. Coello, "A Numerical Comparison of some Multiobjective-Based Techniques to Handle Constraints in Genetic Algorithms", Technical Report EVOCINV-0302002, Evolutionary Computation Group at CINVESTAV, México, 2002. Michalewicz, Z. and Attia, N. (1994), "Evolutionary Optimization of Constrained Problems," in Proceedings of the third Annual Conference on Evolutionary Programming, pp. 98-108. Michalewicz, Z.(1992), "Genetic Algorithms + Data Structures = Evolution Programs," Springer-Verlag. Molina, D., Herrera, F. and Lozano, M. (2005)"Adaptive Local Search Parameters for Real-Coded Memetic Algorithms," the 2005 IEEE Congress on Evolutionary Computation, Vol.1, pp.888 - 895, 2005 [MP96] H. Mühlenbein and G. Paaß, “From recombination of genes to the estimation of distributions I. Binary parameters,” in Proc. of Parallel Problem Solving from Nature - PPSN IV. LNCS 1411, pp. 178-187, 1996. Munoz-Zavala A. E., Hernandez-Aguirre A., Villa-Diharce E. R. and Botello-Rionda, S. (2006) "PESO+ for Constrained optimization," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006. [OSE05] G. H. M. Omran, A. Salman, and A. P. Engelbrecht, "Self-adaptive differential evolution," International Conference on Computational Intelligence and Security (CIS'05), Xi'an, China, December 2005. Parsopoulos, K. E. and Vrahatis, M. N. (2004), "UPSO- A Unified Particle Swarm Optimization Scheme," Lecture Series on Computational Sciences, pp. 868-873. Parsopoulos, K. E. and Vrahatis, M. N. 
2002 "Particle swarm optimization method for constrained optimization problems," Proc. of the Euro-International Symposium on Computational Intelligence. Peram, T. , Veeramachaneni, K. , and Mohan, C. K. , (2003 )"Fitness-distance-ratio based particle swarm optimization," Proc. IEEE Swarm Intelligence System, Indianapolis, Indiana, USA., pp. 174-181. Posik, P. (2005), "Real-Parameter Optimization Using the Mutation Step Co-evolution," the 2005 IEEE Congress on Evolutionary Computation, Vol.1, pp.872 - 879, 2005 [PSL05] K. V. Price, R. Storn, J. Lampinen, Differential Evolution - A Practical Approach to Global Optimization, Springer, Berlin, 2005. [P99] K. V. Price, “An Introduction to Differential Evolution”. In: Corne, D., Dorigo, M.,and Glover, F. (eds.): New Ideas in Optimization. McGraw-Hill, London (UK):79-108,1999. [P97] K. V. Price, "Differential evolution vs. the functions of the 2nd ICEO," in Proc. of 1997 IEEE International Conference on Evolutionary Computation (ICEC’97), pp. 153-157, Indianapolis, USA, 1997. [P94] K. V. Price, "Genetic annealing," Dr. Dobb's Journal, Miller Freeman, San Mateo, Ca., pp. 127-132, 1994. [QS05] Qin, A. K. and Suganthan, P. N. (2005), "Self-adaptive differential evolution algorithm for numerical optimization," the 2005 IEEE Congress on Evolutionary Computation, Vol.2, pp.1785-1791, 2005 Ratnaweera, A., Halgamuge, S. and Watson, H. (2002) ,"Particle swarm optimization with self-adaptive acceleration coefficients," Proc. of Int. Conf. on Fuzzy Systems and Knowledge Discovery 2002 (FSKD 2002), pp. 264-268,
• 52. Ratnaweera, A., Halgamuge, S. and Watson, H. (2004), "Self-organizing hierarchical particle swarm optimizer with time varying accelerating coefficients," IEEE Transactions on Evolutionary Computation, 8(3): 240-255. Ray T. and Liew, K. M. (2002), "A swarm metaphor for multiobjective design optimization," Engineering Optimization, vol. 34, pp. 141-153, 2002. Ray, T. (2002) "Constraint Robust Optimal Design using a Multiobjective Evolutionary Algorithm," in Proceedings of Congress on Evolutionary Computation 2002 (CEC'2002), vol. 1, pp. 419-424. [RKP05] Ronkkonen, J., Kukkonen, S. and Price, K. V. (2005), "Real-Parameter Optimization with Differential Evolution," the 2005 IEEE Congress on Evolutionary Computation, Vol. 1, pp. 506-513, 2005. [RK96] S. Rudlof and M. Köppen, "Stochastic hill climbing by vectors of normal distributions," in Proc. of the First Online Workshop on Soft Computing (WSC1), Nagoya, Japan, 1996. Runarsson, T. P., Xin Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Transactions on Evolutionary Computation, Vol. 4, Issue 3, Sept. 2000, pp. 284-294. [SD98] M. Sebag and A. Ducoulombier, "Extending population-based incremental learning to continuous search spaces," Proc. Conference on Parallel Problem Solving from Nature - PPSN V, pp. 418-427, Berlin, 1998. Shi, Y. and Eberhart, R. C. (1998), "A modified particle swarm optimizer," Proc. of the IEEE Congress on Evolutionary Computation (CEC 1998), Piscataway, NJ, pp. 69-73. Shi, Y. and Eberhart, R. C. (2001) "Particle swarm optimization with fuzzy adaptive inertia weight," Proc. of the Workshop on Particle Swarm Optimization, Indianapolis, IN. Shi, Y. and Krohling, R. A. (2002) "Co-evolutionary particle swarm optimization to solve Min-max problems," Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2002), Honolulu, Hawaii USA. Sinha, A., Srinivasan A. and Deb, K.
(2006) "A Population-Based, Parent Centric Procedure for Constrained Real- Parameter Optimization," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006. Sinha, S. Tiwari and Deb, K. (2005), "A Population-Based, Steady-State Procedure for Real-Parameter Optimization," the 2005 IEEE Congress on Evolutionary Computation, Vol.1, pp.514 - 521, 2005 [SP95] R. Storn and K. V. Price, “Differential Evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces,” TR-95-012, ICSI, http://http.icsi.berkeley.edu/~storn/litera.html [SP97] R. Storn and K. V. Price, “Differential evolution-A simple and Efficient Heuristic for Global Optimization over Continuous Spaces,” Journal of Global Optimization, 11:341-359,1997. Suganthan P. N., Hansen, N. Liang, J. J. Deb, K. Chen, Y.-P. Auger, A. and Tiwari, S. (2005), "Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization," Technical Report, Nanyang Technological University, Singapore, May 2005 and KanGAL Report #2005005, IIT Kanpur, India. Suganthan, P. N.,(1999)"Particle swarm optimizer with neighborhood operator," Proc. of the IEEE Congress on Evolutionary Computation (CEC 1999), Piscataway, NJ, pp. 1958-1962. [TS06] Takahama T. and Sakai, S. (2006) "Constrained Optimization by the Constrained Differential Evolution with Gradient-Based Mutation and Feasible Elites," Proc. IEEE Congress on Evolutionary Computation, 2006. Takahama T. and S. Sakai, (2005) "Constrained optimization by constrained particle swarm optimizer with -level control," Proc. IEEE Int. Workshop on Soft Computing as Transdisciplinary Science and Technology (WSTST’05), Muroran, Japan, May 2005, pp. 1019–1029. Tasgetiren, M. Fatih and Suganthan, P. N. (2006) "A Multi-Populated Differential Evolution Algorithm for Solving Constrained Optimization Problem," Proc. IEEE Congress on Evolutionary Computation, 2006. 
[TPV06] Tasoulis, D. K., Plagianakos, V. P., and Vrahatis, M. N. (2006), "Differential evolution algorithms for finding predictive gene subsets in microarray data," Int Fed Info Proc 204: 484-491, 2006. [T06] J. Teo, "Exploring dynamic self-adaptive populations in differential evolution," Soft Computing - A Fusion of Foundations, Methodologies and Applications, 10(8): 673-686, 2006. Tessema, B., Yen, G. G., "A Self Adaptive Penalty Function Based Algorithm for Constrained Optimization," IEEE Congress on Evolutionary Computation, CEC 2006, 16-21 July 2006, pp. 246-253. Venkatraman, S., Yen, G. G., "A Generic Framework for Constrained Optimization Using Genetic Algorithms," IEEE Transactions on Evolutionary Computation, Volume 9, Issue 4, Aug. 2005, pp. 424-435. X. Yao, Y. Liu, G. M. Lin, "Evolutionary programming made faster," IEEE Transactions on Evolutionary Computation, Volume 3, Issue 2, July 1999, pp. 82-102. Yuan, B. and Gallagher, M. (2005), "Experimental results for the special session on real-parameter optimization at CEC 2005: a simple, continuous EDA," IEEE Congress on Evolutionary Computation, pp. 1792-1799, 2005. [Z02] D. Zaharie, "Critical values for the control parameters of differential evolution algorithms," in Proc. of the 8th Int.l Conference on Soft Computing (MENDEL 2002), pp. 62-67, Brno, Czech Republic, June 2002. [ZX03] W. J. Zhang and X. F. Xie, "DEPSO: hybrid particle swarm with differential evolution operator," IEEE International Conference on Systems, Man & Cybernetics (SMCC), Washington D C, USA, 2003: 3816-3821. Zielinski K. and Laur, R. (2006) "Constrained Single-Objective Optimization Using Differential Evolution," in Proceedings of the sixth IEEE Conference on Evolutionary Computation, 2006.