1. Evolutionary Algorithms and
Civil Engineering
BY
VAMSIDHAR TANKALA
&
SHREY MODI
DEPARTMENT OF CIVIL ENGINEERING
2. Consider the following problem
Minimize the function:
f(x1, x2, x3, x4) = x1² + x2² + x3² + x4²
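This unconstrained quadratic has its minimum at the origin, so it makes a convenient benchmark. A minimal sketch in Python, minimizing it by plain gradient descent (the step size and iteration count are illustrative choices, not from the slides):

```python
# Sketch: minimize f(x1..x4) = sum of squared components by gradient descent.
# Step size and iteration count are illustrative, not from the slides.

def f(x):
    """Objective: sum of squares; minimum value 0 at the origin."""
    return sum(xi ** 2 for xi in x)

def gradient_descent(x, step=0.1, iters=100):
    for _ in range(iters):
        # Gradient of sum(xi^2) is (2*x1, ..., 2*x4).
        x = [xi - step * 2 * xi for xi in x]
    return x

x_opt = gradient_descent([3.0, -1.0, 2.0, 0.5])
print(f(x_opt))  # close to 0
```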
3. Various methods
Mathematical differentiation and other plotting
techniques.
A computer program based on these techniques can
be easily formulated.
What are the issues to be considered ?
- Computational time
- Complexity of the problem – Increase the
parameters and observe the computational time
- Smoothness of the function: if the function is not
rugged, these techniques work efficiently
4. Need for Evolutionary Techniques
Difficulties faced by the traditional techniques
Rugged landscape of the problem
Presence of many discontinuities
Simulation of the real world applications where
mathematical formulations are not available :
“BLACK-BOX APPROACHES”
One example : Dynamic traffic simulation.
5. Need for evolutionary procedures
“Genetic Algorithms are good at taking
large, potentially huge search spaces and
navigating them, looking for optimal
combinations of things, solutions you
might not otherwise find in a lifetime.”
- Salvatore Mangano
Computer Design, May 1995
6. Brief introduction to GAs
Directed search algorithms based on the mechanics of
biological evolution
Developed by John Holland, University of Michigan
(1970’s)
To understand the adaptive processes of natural systems
To design artificial systems software that retains the robustness of
natural systems
Provide efficient, effective techniques for optimization
and machine learning applications
Widely-used today in business, scientific and engineering
circles
7. Genetic Algorithm
Outline of the steps involved in GA
Encoding
Initialization
Reproduction
Selection
Termination Criteria
8. Deb’s example
Consider a simple can design problem
A cylindrical can is considered to have only two
parameters: the diameter d and the height h.
The can needs to have a volume of at least 300 ml,
and the objective of the design is to minimize the
cost of the can material.
9. Objective function
Minimize f(d, h) = c (πd²/2 + πdh)
where c is the cost of the can material per unit area,
subject to g1(d, h) = πd²h/4 ≥ 300,
with variable bounds d_min ≤ d ≤ d_max and h_min ≤ h ≤ h_max.
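The objective and constraint translate directly into code. A sketch in Python; the unit material cost c is an assumed value for illustration:

```python
import math

# Sketch of Deb's can problem. C is an assumed cost per unit area of tin,
# chosen only for illustration.
C = 0.0654

def cost(d, h, c=C):
    # Surface area: two circular ends (pi*d^2/2) plus the side wall (pi*d*h).
    return c * (math.pi * d * d / 2 + math.pi * d * h)

def is_feasible(d, h, v_min=300.0):
    # Volume constraint g1: pi*d^2*h/4 >= 300 ml (1 ml = 1 cm^3).
    return math.pi * d * d * h / 4 >= v_min

print(is_feasible(8.0, 10.0))  # volume ≈ 502.7 cm^3, so True
print(round(cost(8.0, 10.0), 2))
```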
12. A Sample random generation
[Figure: a handful of randomly generated can designs, each labelled with its material cost, e.g. 30 and 40]
13. Selection Operator
Identify good (usually above-average) solutions in a
population.
Make multiple copies of good solutions.
Eliminate bad solutions from the population so that
multiple copies of good solutions can be placed in
the population.
21. Step 1: Encoding Problem
How to encode a solution of the problem
into a chromosome?
Types of Encoding
Binary coding 1 0 0 1 1 1 0 1
Difficult to apply directly
Not a natural coding
Real number coding 2.3352 5.3252 6.2895 4.1525
Mainly for constrained optimization problems
Integer coding 3 5 1 2 4 8 7 6
For combinatorial optimization problems
Ex. Quadratic Assignment Problems
22. Step 1: Encoding Problem (Cont.)
Coding space and solution space:
- Decoding maps a chromosome in the coding space to a point in the solution space.
- Encoding maps a solution back to a chromosome.
- Genetic operations act on the coding space; evaluation and selection act on the solution space.
23. Step 1: Encoding Problem (Cont.)
• Critical issues with encoding
Feasibility of a chromosome
- the solution decoded from a chromosome lies in the feasible region of the problem
Legality of a chromosome
- the chromosome represents a solution to the problem
Uniqueness of the mapping between chromosomes and solutions to the problem
- 1-to-n mapping (undesired)
- n-to-1 mapping (undesired)
- 1-to-1 mapping (desired): one chromosome represents only one solution to the problem
24. Step 1: Encoding Problem (Cont.)
[Figure: an undesired coding maps part of the coding space into the infeasible region of the solution space; a desired coding maps the coding space into the feasible space]
25. Step 2: Initialization
Create initial population of solutions
Randomly
Local search
Feasible Solutions
For optimization problem
Minimize: F(x1, x2, x3)
Binary encoding (four bits per variable):
1 0 1 1 | 0 0 1 1 | 1 0 0 1
  x1       x2        x3
27. Step 3: Reproduction
Crossover operation (based on crossover probability):
- Select parents from the population based on the crossover probability.
- Randomly select two crossover points along the strings.
- Perform the crossover operation on the selected strings, producing Offspring 1 and Offspring 2 from Parent 1 and Parent 2.
Crossover is mainly a local search operation.
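The two-point crossover step described above can be sketched as follows; the random two-cut scheme follows the slide, while the function names and string length are illustrative:

```python
import random

# Two-point crossover on binary strings: two cut points are chosen at
# random and the middle segments of the parents are swapped.
def two_point_crossover(p1, p2, rng=random):
    a, b = sorted(rng.sample(range(1, len(p1)), 2))
    c1 = p1[:a] + p2[a:b] + p1[b:]  # offspring 1
    c2 = p2[:a] + p1[a:b] + p2[b:]  # offspring 2
    return c1, c2

rng = random.Random(0)
o1, o2 = two_point_crossover([1] * 8, [0] * 8, rng)
print(o1, o2)
```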
30. Step 3: Reproduction (Cont.)
Mutation operation (based on mutation
probability pm)
each bit of every individual is modified with probability
pm
main operator for global search (looking at new areas of
the search space)
pm usually small {0.001,…,0.01}
rule of thumb pm = 1/no. of bits in chromosome
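The bit-wise mutation rule can be sketched directly: each bit flips independently with probability pm:

```python
import random

# Bit-flip mutation: each bit of the individual is flipped (0 <-> 1)
# independently with probability pm.
def mutate(bits, pm, rng=random):
    return [1 - b if rng.random() < pm else b for b in bits]

rng = random.Random(42)
child = mutate([0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0], pm=1 / 12, rng=rng)
print(child)
```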
31. Step 3: Reproduction (Cont.)
Example: take the i-th solution string from the population for minimizing F(x1, x2, x3):
0 0 1 0 1 0 1 1 0 1 0 0
Let pm = 1/12 ≈ 0.083. Generate a random number in [0, 1] for each bit, e.g.:
0.12 0.57 0.62 0.31 0.01 0.73 0.83 0.63 0.02 0.26 0.94 0.63
Bits whose random number is less than pm (here the 5th and 9th) are selected and flipped, giving the mutated string:
0 0 1 0 0 0 1 1 1 1 0 0
32. Step 4: Selection (“Survival of the fittest”)
Selection directs the search towards promising regions in the search space.
Basic issues involved in the selection phase:
Sampling space: parents and offspring
- Regular sampling space: all offspring + a few parents = pop_size
The population (pop_size) undergoes the crossover and mutation operations to produce the offspring.
33. Step 4: Selection (“Survival of the fittest”)
(Cont.)
Basic issues involved in the selection phase:
Sampling space:
- Enlarged sampling space: all offspring + all parents
34. Step 4: Selection (“Survival of the fittest”)
(Cont.)
Sampling mechanism: how to select chromosomes from the sampling space.
Basic approaches:
Stochastic sampling – Roulette wheel selection:
- Survival probability is proportional to the fitness value.
- Selection probability for the k-th individual:
  p_k = f_k / Σ_{j=1}^{pop_size} f_j
  where f_k is the fitness value of the k-th individual.
- Based on p_k, the cumulative probability is calculated and a roulette wheel is constructed; the k-th individual occupies a zone proportional to p_k.
- Randomly generate a number in [0, 1] and select the individual whose zone contains it.
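The roulette-wheel rule (p_k = f_k / Σ f_j, cumulative probabilities, random number in [0, 1]) can be sketched as:

```python
import itertools
import random

# Roulette-wheel (fitness-proportionate) selection: p_k = f_k / sum_j f_j.
def roulette_select(fitness, rng=random):
    total = sum(fitness)
    # Cumulative probabilities define each individual's zone on the wheel.
    cum = list(itertools.accumulate(f / total for f in fitness))
    r = rng.random()  # random number in [0, 1)
    for k, c in enumerate(cum):
        if r <= c:
            return k
    return len(fitness) - 1  # guard against floating-point round-off

rng = random.Random(1)
picks = [roulette_select([1.0, 3.0, 6.0], rng) for _ in range(3000)]
print(picks.count(2) / 3000)  # roughly 0.6, matching p_2 = 6/10
```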
35. Step 4: Selection (“Survival of the fittest”)
(Cont.)
Deterministic sampling:
- select the best pop_size individuals from the parents and offspring
- no duplication of individuals
Mixed sampling:
- both random and deterministic samplings are done
Step 5: Termination Criteria
Repeat the above steps until the termination criteria are satisfied.
Termination criteria:
- maximum number of generations
- no improvement in fitness values for a fixed number of generations
36. Summary of Genetic Algorithms
Begin
{
initialize population;
evaluate population;
while (TerminationCriteriaNotSatisfied)
{
select parents for reproduction;
perform Crossover and mutation;
evaluate population;
}
}
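The pseudocode above can be turned into a runnable sketch. The problem (maximize the number of 1s in a bit string), the binary-tournament selection, and all parameter values are illustrative choices, not from the slides:

```python
import random

# Compact GA matching the slide's loop: initialize, evaluate, then
# select parents, perform crossover and mutation, and re-evaluate until
# the termination criterion (a generation limit here) is met.
rng = random.Random(0)
N_BITS, POP, GENS, PM = 20, 30, 60, 1 / 20  # PM = 1 / no. of bits

def fitness(ind):
    return sum(ind)  # OneMax: count of 1-bits

def tournament(pop):
    # Binary tournament: the fitter of two random individuals wins.
    a, b = rng.choice(pop), rng.choice(pop)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = rng.randrange(1, N_BITS)  # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(ind):
    return [1 - g if rng.random() < PM else g for g in ind]

pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best))
```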
37. Issues for GA Practitioners
Choosing basic implementation issues:
Encoding
Population size, Mutation rate, Crossover rate …..
Selection, Deletion policies
Types of Crossover, Mutation operators
Termination Criteria
Performance, scalability
Solution is only as good as the evaluation function
(often hardest part)
38. Benefits of Genetic Algorithms
Concept is easy to understand
Modular, separate from application
Supports multi-objective optimization
Good for “noisy” environments
Always an answer; answer gets better with time
Inherently parallel; easily distributed
Many ways to speed up and improve a GA-based
application as knowledge about problem domain is
gained
Easy to exploit previous or alternate solutions
Flexible building blocks for hybrid applications
Substantial history and range of use
39. When to Use a GA
Alternate solutions are too slow or overly
complicated
Need an exploratory tool to examine new
approaches
Problem is similar to one that has already been
successfully solved by using a GA
Want to hybridize with an existing solution
Benefits of the GA technology meet key problem
requirements
40. Some GA Application Types
Domain: Application Types
Control: gas pipeline, pole balancing, missile evasion, pursuit
Design: semiconductor layout, aircraft design, keyboard configuration, communication networks
Scheduling: manufacturing, facility scheduling, resource allocation
Robotics: trajectory planning
Machine Learning: designing neural networks, improving classification algorithms, classifier systems
Signal Processing: filter design
Game Playing: poker, checkers, prisoner's dilemma
Combinatorial Optimization: set covering, travelling salesman, routing, bin packing, graph colouring and partitioning
41. Sample Applications in Civil Engineering
Transportation Engineering
Brief discussion of following areas:
- Dynamic traffic simulation.
- Aggregate blending.
- Back-calculation of pavement layer moduli.
Numerous applications in structural, environmental,
geotechnical and water resources engineering.
Research articles on applications of GA in civil
engineering are available in abundance.
43. Inspiration
Ants are practically blind but they still manage to find their way
to the food. How do they do it?
These observations inspired a new type of algorithm called ant
algorithms (or ant systems).
Result of research on computational intelligence approaches to
combinatorial optimization.
The algorithm is modeled after the natural behavior of ants.
45. Natural behavior of ant
[Figure: nest, food, obstacle]
An obstacle has blocked the path of the ants.
46. Natural behavior of ant
[Figure: nest, food, obstacle]
What to do? Every ant flips a coin and chooses a path.
47. Natural behavior of ant
[Figure: nest, food, obstacle]
Finally, after some time, the shorter path is reinforced.
48. Natural Ants
Almost Blind.
Incapable of achieving complex tasks alone.
Rely on the phenomena of swarm intelligence for survival.
Capable of establishing shortest-route paths from their colony to feeding
sources and back.
Use stigmergic communication via pheromone trails.
49. Natural Ants
Follow existing pheromone trails with high probability.
What emerges is a form of autocatalytic behavior: the more ants follow a
trail, the more attractive that trail becomes for being followed.
The probability of a path choice increases with the number of times the
same path was chosen before.
51. Stigmergic
A term coined by the French biologist Pierre-Paul Grassé; it means
interaction through the environment.
Two individuals interact indirectly when one of them modifies the
environment and the other responds to the new environment at a later
time. This is stigmergy.
55. Basic Requirements
Ant algorithms are based on the shortest-path-finding behaviour
that ants use when searching for food, so their implementation requires:
- The problem must either be in graph form or be expressible in graph form.
- The problem must be finite (i.e. it must have a start and an end).
56. Ant Algorithms
Ant systems are a population based approach. In this
respect it is similar to genetic algorithms.
Each ant is a simple agent with the following
characteristics:
It probabilistically chooses the next node to visit.
It uses a tabu list to avoid revisiting nodes.
After the completion of tour it lays pheromone trail on each visited
edge.
57. Flowchart of Ant algorithms
Initialize Ants → Find Solutions → Evaluate Solutions → Update Pheromone → Is the termination criterion met?
- Yes: STOP.
- No: probabilistically find new solutions based on the pheromone values, evaluate the solutions, update the pheromone, and test the criterion again.
58. Initialization
59. Initialization (Cont.)
Initially ants are randomly placed on the nodes.
Each edge is initialized with small amount of
pheromones.
Each edge‟s Visibility, a heuristic value equal to
the inverse of distance between the edge, is
initialized.
60. Find Solutions
61. Find Solutions (Cont.)
Each ant probabilistically selects the next node to visit, with the probability of transition from node i to node j at cycle t given by:

P_ij(t) = [τ_ij(t)]^α · [1/d_ij]^β / Σ_{j ∈ allowed} [τ_ij(t)]^α · [1/d_ij]^β

where τ_ij(t) is the quantity of pheromone on edge i-j, d_ij is the distance of edge i-j, α and β are constants, and the allowed nodes are identified using the tabu list.
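The transition rule can be sketched directly from the formula; the pheromone and distance matrices and the values of α and β below are illustrative:

```python
# ACO transition probabilities: P_ij proportional to tau_ij^alpha * (1/d_ij)^beta,
# normalized over the allowed (not-yet-visited) nodes.
def transition_probs(i, allowed, tau, dist, alpha=1.0, beta=2.0):
    weights = {j: (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
               for j in allowed}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

tau = [[0, 1.0, 1.0], [1.0, 0, 1.0], [1.0, 1.0, 0]]    # equal pheromone
dist = [[0, 1.0, 2.0], [1.0, 0, 1.0], [2.0, 1.0, 0]]   # city 1 is closer to 0
probs = transition_probs(0, allowed=[1, 2], tau=tau, dist=dist)
print(probs)  # the closer city (1) gets the higher probability
```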
62. Tabu List
It is used by the ant to avoid revisiting any node.
It stores the nodes already visited by the ant.
63. Pheromone Update
After every ant completes its tour, the pheromone count on each edge is updated using:

τ_ij(t+1) = (1 − ρ) · τ_ij(t) + Σ_k Q / L_k

where τ_ij(t+1) is the quantity of pheromone on edge i-j during cycle t+1, ρ is the evaporation rate, the sum runs over every ant k in the colony that used edge (i, j), L_k is the total distance travelled by ant k during its tour, and Q/L_k is the pheromone laid by each such ant.
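The update rule translates directly into code; the values of ρ and Q below are illustrative:

```python
# Pheromone update for one edge: tau_ij(t+1) = (1 - rho) * tau_ij(t)
# plus Q / L_k for every ant k that used the edge. rho and Q are
# illustrative control-parameter values.
def update_pheromone(tau_ij, tour_lengths_using_edge, rho=0.5, Q=100.0):
    deposit = sum(Q / L for L in tour_lengths_using_edge)
    return (1 - rho) * tau_ij + deposit

# Two ants used this edge, with tour lengths 50 and 100.
new_tau = update_pheromone(2.0, [50.0, 100.0])
print(new_tau)  # (1 - 0.5) * 2.0 + 100/50 + 100/100 = 4.0
```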
64. Termination
The termination criteria commonly used are:
Designated Maximum number of cycles.
Specified CPU time limit.
Maximum number of cycles between two
improvements of the global best solution.
65. Control Parameters
Number of ants
Pheromone weight (α)
Visibility weight (β)
Pheromone persistence (ρ)
Number of cycles
66. Ant Algorithms - Applications
Travelling Salesman Problem (TSP)
Facility Layout Problem - which can be shown to
be a Quadratic Assignment Problem (QAP)
Vehicle Routing
Stock Cutting (at Nottingham)
67. ANT COLONY
APPLICATION TO
TRAVELING SALESMAN
PROBLEM – AN EXAMPLE
ILLUSTRATION
68. Ant Colony Algorithms and TSP
Ant Colony Optimization was initially designed
for Traveling Salesman Problem.
At the start of the algorithm one ant is placed in
each city.
Assuming that the TSP is being represented as a
fully connected graph, each edge has an
intensity of trail on it. This represents the
pheromone trail laid by the ants.
69. Ant Colony Algorithms and TSP
The distance to the next town, is known as the
visibility, nij, and is defined as 1/dij, where, dij, is
the distance between cities i and j.
When an ant decides which town to move to
next, it does so with a probability that is based
on the visibility for that city and the amount of
trail intensity on the connecting edge.
70. Ant Colony Algorithms and TSP
At each cycle pheromone evaporation takes
place.
The evaporation rate, 1 − ρ, is a value between 0
and 1.
In order to stop ants visiting the same city in the
same tour a data structure, Tabu, is maintained.
75. Variants
Best and Worst Ant System
The best ant receives reward while the worst ant is punished.
If the search gets stuck at a local optimum, a restart is employed.
Maximum and Minimum Ant System
Upper and lower bounds are imposed on the pheromone
levels.
The search starts with pheromone levels at the maximum.
Rank Based Ant System
The ants are sorted with respect to the fitness of the tours they find.
Their pheromone levels are adjusted accordingly.
Elitist Ant System
The best tour found at each step receives an extra pheromone.
76. Concluding remarks on Ant algorithms
Ant algorithms are inspired by real ant colony.
The probability of an ant following a certain route is a function of:
- pheromone intensity
- visibility
- evaporation
Ant algorithms are very suitable for problems
having graphical structures.
78. Inspiration
It was inspired by swarms in nature, such as flocks of
birds, schools of fish, etc.
The PSO algorithm was originally developed to
imitate the motion of a flock of birds.
Particle Swarm Optimization (PSO) applies the concept
of social interaction to problem solving.
79. Particle Swarm Algorithms
It was developed in 1995 by James Kennedy and Russ Eberhart.
PSO is a robust stochastic optimization technique based on the movement
and intelligence of swarms.
In PSO, a swarm of n individuals communicates, directly or indirectly,
the search directions (gradients) found so far.
It has been applied successfully to a wide variety of search and optimization
problems
80. PSO Formulation
The algorithm uses a set of particles flying over a
search space to locate a global optimum.
A particle encodes a candidate solution to a problem
at hand.
During an iteration of PSO, each particle updates its
position according to its previous experience and the
experience of its neighbors.
81. Fundamentals of PSO
A particle (individual) is composed of:
Three vectors:
The x-vector records the current position (location) of the particle in the
search space,
The p-vector (pbest) records the location of the best solution found so far
by the particle, and
The v-vector contains a gradient (direction) for which particle will travel in
if undisturbed
82. PSO: Generic Algorithm Schema
Start: initialize the swarm with random position vectors (x0) and velocity vectors (v0).
For each particle:
- Evaluate its fitness.
- If fitness(x_t) is better than fitness(pbest), set pbest = x_t.
- If fitness(x_t) is better than fitness(gbest), set gbest = x_t.
- Update the velocity: v_{t+1} = W·v_t + c1·rand(0,1)·(pbest − x_t) + c2·rand(0,1)·(gbest − x_t)
- Update the position: x_{t+1} = x_t + v_{t+1}
If the termination criterion is not met, repeat for the next iteration; otherwise output gbest.
Here gbest is the global best position, pbest the particle's own best position, c1 and c2 the acceleration coefficients, and W the inertia weight.
83. Algorithm Implementation
The basic concept of PSO lies in accelerating each particle
toward the best position found by it so far (pbest) and the
global best position (gbest) obtained so far by any particle,
with a random weighted acceleration at each time step.
This is done by simply adding the v-vector to the x-vector to
get another x-vector (Xi = Xi + Vi).
Once the particle computes the new Xi it then evaluates its
new location. If x-fitness is better than p-fitness, then
pbest = Xi and p-fitness = x-fitness.
84. Psychosocial compromise
[Figure: a particle's current position x and velocity v, its best position so far (pbest), and the global best position attained (gbest)]

v_{t+1} = W·v_t + c1·rand(0,1)·(pbest − x_t) + c2·rand(0,1)·(gbest − x_t)

where gbest is the global best position, pbest the particle's own best position, c1 and c2 the acceleration coefficients, and W the inertia weight.
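The velocity and position updates above are enough for a complete minimal PSO. The sketch below minimizes f(x, y) = x² + y²; the values of W, c1, c2, the swarm size and the iteration count are illustrative:

```python
import random

# Minimal PSO on f(x, y) = x^2 + y^2, using the slide's update rules:
# v <- W*v + c1*rand*(pbest - x) + c2*rand*(gbest - x);  x <- x + v.
rng = random.Random(3)
W, C1, C2, N, ITERS = 0.7, 1.5, 1.5, 20, 100  # illustrative parameters

def f(p):
    return sum(c * c for c in p)

xs = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(N)]
vs = [[0.0, 0.0] for _ in range(N)]
pbest = [list(p) for p in xs]          # each particle's best position so far
gbest = min(pbest, key=f)              # global best position so far

for _ in range(ITERS):
    for i in range(N):
        for d in range(2):
            vs[i][d] = (W * vs[i][d]
                        + C1 * rng.random() * (pbest[i][d] - xs[i][d])
                        + C2 * rng.random() * (gbest[d] - xs[i][d]))
            xs[i][d] += vs[i][d]       # position update x <- x + v
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = list(xs[i])     # update personal best
    gbest = min(pbest, key=f)          # update global best

print(f(gbest))  # close to 0
```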
85. Initial parameters
Swarm size
Position of particles.
Velocity of particles.
Maximum number of iterations.
86. Control Parameters
Swarm size
Inertial Weight W
Acceleration Coefficients c1 and c2
Number of iterations
87. Inertia Weight W
A large inertia weight (w) facilitates a global search
while a small inertia weight facilitates a local search.
- Larger W: greater global search ability
- Smaller W: greater local search ability
88. Acceleration Coefficients
Determine the inclination of the search:
- c1 larger than c2: greater local search ability
- c2 larger than c1: greater global search ability
89. Comparison with Evolutionary Algorithms (EAs)
Unlike EAs, in PSO there is no selection operator.
PSO does not implement survival of the fittest
strategy and all individuals are kept as members
of the population throughout the course.
91. Encoding Schema
Generally PSO is applied over problems involving real
variables.
However, through the use of proper encoding schema it can
be applied to solve hard combinatorial optimization
problems like Traveling Salesman Problem, Knapsack
Problem, Node Coloring, Sequencing and Scheduling.
92. Encoding Schema
For TSP, each particle’s position is coded in the form
of a one dimensional string whose dimensions equals
the number of cities that are to be visited.
The particles are randomly initialized with rank
vectors or priority numbers.
Example string representation for a TSP with 6 cities (priority numbers):
5 9 2 4 6 3
93. Decoding
Encoded string: 5 9 2 4 6 3
Find the smallest priority number (here 2, in the third position); the city with the least priority is assigned the first place in the tour, so the decoded string reads: _ _ 1 _ _ _
94. Decoding
Encoded string: 5 9 N 4 6 3 (N marks a priority that has already been decoded)
The next smallest priority number (here 3, in the last position) is assigned the next place in the tour: _ _ 1 _ _ 2
This is repeated till all cities are assigned.
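The priority-number decoding amounts to sorting the city indices by their priority values, smallest first. A sketch:

```python
# Decode a priority-number string into a TSP visiting order: the city
# with the smallest priority number is visited first, the next smallest
# second, and so on.
def decode(priorities):
    return sorted(range(len(priorities)), key=lambda i: priorities[i])

order = decode([5, 9, 2, 4, 6, 3])
print(order)  # [2, 5, 3, 0, 4, 1]: the city at index 2 (priority 2) is first
```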
96. Solution Strategy by Particle Swarm Algorithm
Randomly initialize the particles' positions (ranks) and velocities.
Decode the particles and evaluate the objective.
Store the initial position in each particle's memory.
Modify the velocity using the cognitive and social components, and update the position.
Decode the particles' positions and evaluate the objective.
If the position of a particle is better than the position stored in its memory, update the memory.
Update the global best if a better particle is obtained.
Repeat the process till the required number of iterations is complete.
The particle with the best position is the output.
100. Results of PSO on TSP with 10 Nodes
PSO took a relatively long time to evolve the optimal solution.
101. Concluding remarks on “Particle Swarm”
Fast convergence thus time requirement is less.
Global as well as Local search component.
Dependence on parameter tuning is less.
More effective on problems involving real values.
Chances of early convergence due to high
convergence speed.
103. Artificial Immune Systems
AIS are adaptive systems inspired by theoretical immunology
and observed immune functions, principles and models, which
are applied to complex problem domains (de Castro and Timmis)
A recently developed evolutionary technique inspired by theory
of Immunology
A way to study the response of immune system,
when a non-self Antigen pattern is recognized
by Antibody
104. Biological Immune System
The efficiency of the acquired response depends upon the ability
of the antibodies to recognize the antigens, which in turn depends upon:
- Generalization: on the order of 10^16 antigens are recognized with fewer than 100 antibody genes
- Screening: self/non-self discrimination
- Memory: the ability to remember previous infections
105. Artificial Immune System
History of Artificial Immune System
Initially developed from the theory of “immunology”
in mid 1980’s
In 1990, first use of immune algorithm to solve
optimization problem
In mid 1990: Application to Computer Security
In mid 1990: Machine Learning
106. Artificial Immune System
Artificial Immune System: an optimization view
[Figure: building blocks combine into an entire solution; the objective functions and constraints define the feasible solutions]
107. Artificial Immune Systems
Basic Elements
Immune Systems: To protect the body from the
foreign matters
Antigen: Any foreign disease causing elements
Antibody: Utilized to identify, bind and eliminate
antigens
108. General Framework for AIS - The AIS Cycle
Initialization → Population → Evaluation → Selection → Cloning & Hypermutation → (back to the population)
109. Artificial Immune System
AIS: a generic framework built up in layers, from the application
domain through the representation and the affinity measures to the
immune algorithm itself.
110. Flow of the Algorithm
A population P of individuals is cloned, and each clone is hypermutated, forming a clone pool. The P best individuals are probabilistically selected from the pool, and the best individuals are stored in a repository of good solutions. When the search stagnates, good solutions from the repository are sent back into the current population.
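The clone, hypermutate and select loop above can be sketched as a minimal clonal-selection routine; the OneMax affinity function and all parameter values are illustrative stand-ins, not from the slides:

```python
import random

# Minimal clone-and-hypermutate loop in the spirit of the AIS cycle:
# evaluate, clone each antibody, hypermutate the clones, then keep the
# best individuals from parents plus clones.
rng = random.Random(0)
N_BITS, POP, CLONES, GENS, PM = 16, 10, 5, 40, 2 / 16

def affinity(ab):
    return sum(ab)  # OneMax as a stand-in affinity measure

def hypermutate(ab):
    # A higher mutation rate than a typical GA: "hypermutation".
    return [1 - g if rng.random() < PM else g for g in ab]

pop = [[rng.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    clone_pool = [hypermutate(list(ab)) for ab in pop for _ in range(CLONES)]
    # Keep the POP best individuals from parents plus clones (elitist).
    pop = sorted(pop + clone_pool, key=affinity, reverse=True)[:POP]

print(affinity(pop[0]))
```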
111. Artificial Immune System
Artificial Immune System: An Assessment
Advantages
General Purpose AIS tools
Easily Extensible
Potential for distribution
Disadvantages
Parameter Sensitive
Computationally Expensive
112. Artificial Immune System
Distinctive Features & Their Applications:
Features Applications
Learning & Adaptation Security
Immunological Memory Pattern Recognition
Self/Non-self Classification Heuristic Optimization
Self Organizing Modeling & Agents Application
Localization & Circulation Clustering
Autonomous/Decentralized Concept Learning & Recommender System
113. Artificial Immune System
AIS: Potential Area of optimization
Fault & Anomaly Detection
Data Mining (machine learning, pattern recognition)
Agent Based systems
Autonomous Control
Information Security System
Scheduling
114. Dynamic traffic simulation
CALIBRATION OF MESOSCOPIC TRAFFIC
SIMULATION USING POPULATION-BASED
EVOLUTIONARY ALGORITHMS
- a methodology to calibrate dynamic traffic
simulation models with real data acquired from
traffic counts and from travel time measurements
acquired from GPS devices
115. Brief outlook
To use a tool called METROPOLIS.
No mathematical function is involved, hence the need
for simulation arises: a simulation of real-world
conditions.
A simulation of real-time traffic is processed in
the model, which gives different indicators as output.
One of the indicators is the travel time along the
defined paths in a network.
Initially tested on toy networks (small example
networks).
116. Computational Details
Programmed in the following way:
- A GUI platform developed in Java which works like
a compiler for optimization.
- The main features of the compiler are:
* Any EA can be embedded.
* Any problem can be optimized.
ALGORITHM ↔ PLATFORM ↔ OPTIMISATION PROBLEM
117. Overall framework
The platform randomly generates the traffic variables and distributes them to several nodes (NODE-1 to NODE-4); each node calculates the fitness, the fitness values are returned to the platform, and the platform collects the results.