Stochastic Optimization
1. In the Name of God
Isfahan University of Technology
Department of Electrical and Computer Engineering
Title: Stochastic Optimization
Course instructor: Dr. Naghmeh Sadat Moayedian
By: Mohammad Reza Jabbari
July 2018
3. 1. Introduction
Definition
Many decision problems in business and social systems can be modeled using mathematical optimization, which seeks to maximize or minimize some objective that is a function of the decisions.
The feasible decisions are constrained by limits in resources,
minimum requirements, etc.
Objectives and constraints are functions of the variables and
problem data, such as costs, production rates, sales, or capacities.
4. Optimization Problems
Deterministic Optimization Problems are formulated with known parameters.
Stochastic Optimization Problems are mathematical programs where some of the data incorporated into the objective or constraints are uncertain.
Real-world problems almost invariably include some unknown and uncertain parameters; Robust Optimization is a related approach for handling them.
5. When some of the data are random, the optimal solutions and the optimal value of the optimization problem are themselves random, following some probability distribution.
A distribution of optimal decisions is generally unimplementable.
Ideally, we would like one decision and one optimal objective value.
What do you tell your boss?
6. Problem Solving Methods
1. Probabilistically Constrained Models: try to find a decision which ensures that a set of constraints will hold with a certain probability. The constraints may be posed as disjoint probabilistic constraints or as joint probabilistic constraints.
2. Recourse Models: one logical way to pose the problem is to require that we make one decision now and minimize the expected costs (or utilities) of the consequences of that decision. Recourse models may be two-stage or multi-stage.
7. The purpose of both methods is to transform the Stochastic Optimization Problem into a Deterministic Equivalent Problem (DEP), so that the DEP can be solved using linear or nonlinear optimization methods.
8. 2. Applications
Sustainability and power planning
Supply chain management
Network optimization
Logistics
Financial management
Location analysis
etc.
9. Example 1:
Suppose that a factory can simultaneously produce two products from two raw materials, in amounts x1 and x2, subject to the constraints below:
1. The production cost of each unit of the first and second product is c1 = 2 and c2 = 3, respectively.
2. The maximum storage capacity for the raw materials is b = 100.

minimize z = 2 x1 + 3 x2
subject to
x1 + x2 ≤ 100
2 x1 + 6 x2 ≥ 180
3 x1 + 3 x2 ≥ 162
x1 ≥ 0, x2 ≥ 0

Optimal solution: x1* = 36, x2* = 18, p* = 126.
Note that the assumption that parameters such as the minimum demand or the cost per unit of raw material are known exactly is not realistic. Therefore, each of these parameters and coefficients can be a random variable:
h1 = 180 + ξ1,  a21 = 2 + η1
h2 = 162 + ξ2,  a32 = 3 + η2
where
ξ1 ~ N(0, 12²),  η1 ~ U(−0.8, 0.8)
ξ2 ~ N(0, 9²),   η2 ~ Exponential(λ = 2.5)
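The deterministic version of Example 1 is small enough to verify with a few lines of code. The sketch below (an illustration, not part of the original slides) enumerates the vertices of the feasible region and picks the cheapest one, which is how a small LP can be checked by hand:

```python
from itertools import combinations

# Boundary lines of Example 1's constraints, written as a*x1 + b*x2 = c:
# x1 + x2 <= 100, 2x1 + 6x2 >= 180, 3x1 + 3x2 >= 162, x1 >= 0, x2 >= 0.
lines = [(1, 1, 100), (2, 6, 180), (3, 3, 162), (1, 0, 0), (0, 1, 0)]

def feasible(x1, x2, tol=1e-9):
    return (x1 + x2 <= 100 + tol and 2*x1 + 6*x2 >= 180 - tol
            and 3*x1 + 3*x2 >= 162 - tol and x1 >= -tol and x2 >= -tol)

vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        continue  # parallel boundary lines, no intersection point
    x1 = (c1*b2 - c2*b1) / det
    x2 = (a1*c2 - a2*c1) / det
    if feasible(x1, x2):
        vertices.append((x1, x2))

# An LP optimum, when it exists, is attained at a vertex of the feasible set.
x1, x2 = min(vertices, key=lambda v: 2*v[0] + 3*v[1])
print(x1, x2, 2*x1 + 3*x2)
```

This recovers the slide's solution x1* = 36, x2* = 18 with optimal value 126.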
10. The random variables ξ1, ξ2 and η2 are unbounded. Therefore, we restrict them to intervals at a reliability level (99%):
ξ1 ∈ [−30.94, 30.94],  ξ2 ∈ [−23.18, 23.18]
η1 ∈ [−0.8, 0.8],      η2 ∈ [0, 1.84]
Define the random vector w = (ξ1, ξ2, η1, η2) and rewrite the problem:
minimize z = 2 x1 + 3 x2
subject to
x1 + x2 ≤ 100
α(w) x1 + 6 x2 ≥ h1(w)
3 x1 + β(w) x2 ≥ h2(w)
x1 ≥ 0, x2 ≥ 0
Before the random parameters are realized (i.e., before a particular realization of w is observed), the optimal value of the problem cannot be calculated!
11. 3. Probabilistically Constrained Models
Definition:
Instead of insisting that the constraints of the problem hold for all values of the random variables, suppose that the constraints hold with a certain reliability level. This case is called an optimization problem with probabilistic constraints.

1. Joint probabilistic constraints:
P{ Σ_{j=1}^{n} a_ij(w) x_j ≥ b_i(w), i = 1, 2, …, m } ≥ α,  α ∈ [0, 1]

2. Disjoint probabilistic constraints:
P{ Σ_{j=1}^{n} a_ij(w) x_j ≥ b_i(w) } ≥ α_i,  i = 1, 2, …, m,  α_i ∈ [0, 1]
13. Joint Probabilistic Constraints
In optimization problems with joint probabilistic constraints, conditions are often imposed on the probability distribution in order to obtain a DEP. The purpose of these conditions is to guarantee convexity of the feasible set or of the objective function; the key concept is logarithmic concavity.
Suppose that the coefficients a_ij are deterministic and the parameters b_i (i = 1, …, m1) are random and independent, with logarithmically concave probability measures P_i and corresponding probability distribution functions F_i:

P{ Ax ≥ b } ≥ α
⇔ Π_{i=1}^{m1} P_i{ A_i x ≥ b_i } ≥ α
⇔ Π_{i=1}^{m1} F_i(A_i x) ≥ α
⇔ Σ_{i=1}^{m1} ln(F_i(A_i x)) ≥ ln(α)

To show that this equivalent constraint is convex, we just need to show that each probability distribution function F_i is logarithmically concave.
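The log form of the joint constraint is easy to evaluate numerically. The sketch below (with made-up data, not from the slides) checks Σ ln F_i(A_i x) ≥ ln(α) for two constraints whose right-hand sides are independent Gaussians, whose CDFs are log-concave:

```python
import math
from statistics import NormalDist

phi = NormalDist()  # standard normal CDF helper

# Hypothetical data: two constraints A_i x >= b_i whose right-hand sides
# are independent Gaussians b_i ~ N(mu_i, sigma_i^2).
A = [[1.0, 2.0], [3.0, 1.0]]
mu = [5.0, 6.0]
sigma = [1.0, 2.0]
x = [3.0, 2.0]   # candidate decision (made up for illustration)
alpha = 0.9      # required joint reliability level

# F_i(A_i x) = P{b_i <= A_i x} = Phi((A_i x - mu_i) / sigma_i)
log_prob = 0.0
for Ai, m, s in zip(A, mu, sigma):
    Aix = sum(a * xi for a, xi in zip(Ai, x))
    log_prob += math.log(phi.cdf((Aix - m) / s))

# Joint chance constraint in log form: sum_i ln F_i(A_i x) >= ln(alpha)
satisfied = log_prob >= math.log(alpha)
print(satisfied, math.exp(log_prob))
```

Because each ln F_i(A_i x) is concave in x for a log-concave F_i, the feasible set defined by this inequality is convex.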
14. Disjoint Probabilistic Constraints
maximize z = Σ_{j=1}^{n} c_j x_j
subject to
P{ Σ_{j=1}^{n} a_ij x_j ≤ b_i } ≥ 1 − α_i,  i = 1, …, m
α_i ∈ (0, 1),  x_j ≥ 0,  i = 1, …, m,  j = 1, …, n

In fact, the i-th constraint is satisfied with probability at least 1 − α_i.
In order to transform the above problem into a deterministic equivalent problem, seven states arise, depending on which of the parameters of the problem are random. By examining the four independent states, the remaining states can be obtained as combinations of the previous ones.
15. State 1:
• Assume that only the c_j are random variables, with means E{c_j}:
maximize z = Σ_{j=1}^{n} E{c_j} x_j
subject to
Σ_{j=1}^{n} a_ij x_j ≤ b_i,  i = 1, …, m
x_j ≥ 0,  j = 1, …, n
• In this case, it is enough to substitute the c_j with their means.
16. State 2:
• Assume that the a_ij are correlated random variables with means E{a_ij}, variances Var{a_ij}, and covariances Cov(a_ij, a_i′j′).
• Also, we assume the problem is in the following form:
maximize z = Σ_{j=1}^{n} c_j x_j
subject to
P{ Σ_{j=1}^{n} a_ij x_j ≤ b_i } ≥ 1 − α_i,  i = 1, …, m
α_i ∈ (0, 1),  x_j ≥ 0,  i = 1, …, m,  j = 1, …, n

If we define T_i = Σ_{j=1}^{n} a_ij x_j, then
E{T_i} = Σ_{j=1}^{n} E{a_ij} x_j,  i = 1, …, m
Var{T_i} = Σ_{m=1}^{n} Var{a_im} x_m² + Σ_{m=1}^{n} Σ_{k=1, k≠m}^{n} Cov(a_im, a_ik) x_m x_k
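The variance formula above is just the quadratic form xᵀ C x, where C is the covariance matrix of the row (a_i1, …, a_in). A small numeric sketch (with a made-up covariance matrix) confirms the two expressions agree:

```python
# Var(T_i) for T_i = sum_j a_ij x_j, computed two ways.
# Hypothetical 3x3 covariance matrix C (symmetric, positive semidefinite).
C = [[4.0, 1.0, 0.5],
     [1.0, 9.0, 2.0],
     [0.5, 2.0, 1.0]]
x = [1.0, 2.0, 3.0]
n = len(x)

# Slide formula: sum_m Var(a_im) x_m^2 + sum_{m != k} Cov(a_im, a_ik) x_m x_k
var_formula = sum(C[m][m] * x[m] ** 2 for m in range(n)) + \
    sum(C[m][k] * x[m] * x[k] for m in range(n) for k in range(n) if k != m)

# Quadratic form x^T C x
var_quad = sum(x[m] * C[m][k] * x[k] for m in range(n) for k in range(n))

print(var_formula, var_quad)
```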
18. State 3:
• Assume that the b_i are Gaussian random variables with means E{b_i} and variances Var{b_i}.
• Also, we assume the problem is in the following form:
maximize z = Σ_{j=1}^{n} c_j x_j
subject to
P{ Σ_{j=1}^{n} a_ij x_j ≤ b_i } ≥ α_i,  i = 1, …, m
α_i ∈ (0, 1),  x_j ≥ 0,  i = 1, …, m,  j = 1, …, n
• As in the previous state:
( Σ_{j=1}^{n} a_ij x_j − E{b_i} ) / sqrt(Var{b_i}) ≤ K_{α_i},  i = 1, …, m
⇔ Σ_{j=1}^{n} a_ij x_j ≤ E{b_i} + K_{α_i} sqrt(Var{b_i}),  i = 1, …, m
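The deterministic equivalent for State 3 can be computed directly, taking K_{α_i} to be the standard normal quantile Φ⁻¹(1 − α_i). The sketch below (with hypothetical numbers) builds the right-hand side E{b} + K_α sqrt(Var{b}) and verifies that, at that boundary, the chance constraint holds with probability exactly α:

```python
from statistics import NormalDist

phi = NormalDist()  # standard normal

# Hypothetical data: b ~ N(100, 5^2), required reliability alpha = 0.95.
mu_b, sigma_b, alpha = 100.0, 5.0, 0.95

# K_alpha = Phi^{-1}(1 - alpha); deterministic equivalent right-hand side:
K = phi.inv_cdf(1.0 - alpha)
rhs = mu_b + K * sigma_b   # sum_j a_ij x_j must not exceed this value

# Check: if sum_j a_ij x_j equals rhs exactly, then
# P{sum <= b} = P{b >= rhs} = 1 - Phi((rhs - mu_b) / sigma_b) = alpha.
prob = 1.0 - phi.cdf((rhs - mu_b) / sigma_b)
print(rhs, prob)
```

Note that K_α is negative for α > 0.5, so the deterministic right-hand side is tighter (smaller) than E{b} alone.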
19. State 4:
• In this state, assume that both the b_i and the a_ij are random variables with means E{b_i}, E{a_ij} and variances Var{b_i}, Var{a_ij}.
• Also, we assume the problem is in the following form:
maximize z = Σ_{j=1}^{n} c_j x_j
subject to
P{ Σ_{j=1}^{n} a_ij x_j − b_i ≤ 0 } ≥ 1 − α_i,  i = 1, …, m
α_i ∈ (0, 1),  x_j ≥ 0,  i = 1, …, m,  j = 1, …, n

If we define h_i ≜ Σ_{j=1}^{n} a_ij x_j − b_i, then
E{h_i} = Σ_{j=1}^{n} E{a_ij} x_j − E{b_i},  i = 1, …, m
Var{h_i} = x̃ᵀ D_i x̃,  i = 1, …, m, where x̃ = (x1, …, xn, −1)ᵀ and D_i is the covariance matrix of (a_i1, …, a_in, b_i):

D_i =
| Var(a_i1)       Cov(a_i1, a_i2)  ⋯  Cov(a_i1, a_in)  Cov(a_i1, b_i) |
| Cov(a_i2, a_i1) Var(a_i2)        ⋯  Cov(a_i2, a_in)  Cov(a_i2, b_i) |
| ⋮               ⋮                ⋱  ⋮                ⋮              |
| Cov(a_in, a_i1) Cov(a_in, a_i2)  ⋯  Var(a_in)        Cov(a_in, b_i) |
| Cov(b_i, a_i1)  Cov(b_i, a_i2)   ⋯  Cov(b_i, a_in)   Var(b_i)       |
20. • Based on the Central Limit Theorem, h_i is approximately Gaussian.
P{h_i ≤ 0} = 1 − Q( −E{h_i} / sqrt(Var{h_i}) ) ≥ 1 − α_i
⇔ −E{h_i} / sqrt(Var{h_i}) ≥ K_{α_i}
⇔ E{h_i} + K_{α_i} sqrt(Var{h_i}) ≤ 0
• Therefore, the last inequality is the deterministic equivalent constraint.
The next three states are combinations of the four previous states and can easily be obtained:
• State 5: the c_j and a_ij are random variables
• State 6: the c_j and b_i are random variables
• State 7: the c_j, a_ij and b_i are random variables
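The State 4 deterministic equivalent can be sanity-checked by Monte Carlo simulation. The sketch below (independent Gaussian data, all numbers made up for illustration) evaluates E{h} + K_α sqrt(Var{h}) ≤ 0 and compares it with an empirical estimate of P{h ≤ 0}:

```python
import math
import random
from statistics import NormalDist

random.seed(0)

# Hypothetical independent Gaussian data for one constraint with n = 2:
# a_i1 ~ N(2, 0.3^2), a_i2 ~ N(3, 0.4^2), b_i ~ N(40, 2^2), alpha_i = 0.05.
mu_a, sd_a = [2.0, 3.0], [0.3, 0.4]
mu_b, sd_b = 40.0, 2.0
alpha = 0.05
x = [4.0, 5.0]  # candidate decision

# h = a_i1 x_1 + a_i2 x_2 - b_i (independence: no covariance terms needed)
Eh = sum(m * xi for m, xi in zip(mu_a, x)) - mu_b
Vh = sum((s * xi) ** 2 for s, xi in zip(sd_a, x)) + sd_b ** 2
K = NormalDist().inv_cdf(1.0 - alpha)      # K_alpha = Phi^{-1}(1 - alpha)
det_ok = Eh + K * math.sqrt(Vh) <= 0.0     # deterministic equivalent holds?

# Monte Carlo estimate of P{h <= 0} for comparison.
N = 100_000
hits = sum(
    1 for _ in range(N)
    if sum(random.gauss(m, s) * xi for m, s, xi in zip(mu_a, sd_a, x))
       - random.gauss(mu_b, sd_b) <= 0
)
print(det_ok, hits / N)
```

When the deterministic constraint holds, the empirical probability should be at least 1 − α.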
24. 4. Recourse Models
Definition:
Recourse models are those in which some decisions, or recourse actions, can be taken after the uncertainty is disclosed. Stochastic optimization with recourse thus involves both anticipative variables and adaptive variables.
The set of decisions is then divided into two groups:
• A number of decisions have to be taken before the experiment. All these decisions are called first-stage decisions, and the period when these decisions are taken is called the first stage.
• A number of decisions can be taken after the experiment. They are called second-stage decisions. The corresponding period is called the second stage.
25. Two-Stage Program with Fixed Recourse
The classical two-stage stochastic linear program with fixed recourse (originated by Dantzig [1955] and Beale [1955]) is the problem of finding:
minimize z = cᵀx + E_ξ { min_y q(w)ᵀ y(w) }
subject to
Ax = b
T(w) x + W y(w) = h(w)
x ≥ 0,  y(w) ≥ 0
where:
• x: the first-stage decisions (n1 × 1)
• y: the second-stage decisions (n2 × 1)
• W: the recourse matrix (m2 × n2)
• ξᵀ(w) = ( q(w)ᵀ, h(w)ᵀ, T_1·(w), …, T_{m2}·(w) ), where T_i·(w) denotes the i-th row of T(w)
Each component of q, T, and h is thus a possible random variable. For a given realization w, the second-stage problem data q(w), h(w) and T(w) become known.
26. Example 3:
Suppose we have to decide on the number of units of product X to produce.
• Producing each unit of X costs $2.
• Customer demand is random, with a discrete distribution: demand D_s occurs with probability p_s (s = 1, …, S).
• Customer demand must be met. After demand is realized, we will have the opportunity to buy units of X from an external supplier to cover any shortfall; this purchase costs $3 per unit.
Question: how much should we produce now, while we do not know the future demand of our customers?
We assume S = 2, with D1 = 500, p1 = 0.6 and D2 = 700, p2 = 0.4. The production decision is made in the first stage; any additional purchase is made in the second stage, once the demand scenario is known.
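With the data of Example 3, the two-stage problem is small enough to solve by evaluating the expected cost of every candidate first-stage decision. The sketch below does exactly that (it assumes the $2 production and $3 purchase costs and the two scenarios given on the slide):

```python
# Two-stage recourse for Example 3: produce x now at $2/unit; after demand
# D_s is revealed, buy the shortfall max(D_s - x, 0) at $3/unit.
scenarios = [(500, 0.6), (700, 0.4)]

def expected_cost(x):
    return 2 * x + sum(p * 3 * max(D - x, 0) for D, p in scenarios)

# Search over candidate first-stage decisions.
best_x = min(range(0, 701), key=expected_cost)
print(best_x, expected_cost(best_x), expected_cost(580))
```

The search finds that producing 500 units (expected cost 1240) beats producing the expected demand of 580 units (expected cost 1304), illustrating why the expected-value solution is misleading.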
27. Misleading solution: take the expected amount of demand as the production quantity.
Correct solution: the optimal production quantity is obtained through stochastic optimization.
Decision variables:
• x1 (first stage): the number of units of X produced now.
• y2s (second stage): the number of units of X purchased in the second stage under the realized demand scenario D_s (s = 1, …, S).
29. If we choose the expected value of demand: x* = 580.
If D = 700 → optimal cost = 1350.
30. 5. Conclusion
Stochastic programming problems are formulated as mathematical programming tasks with the objective and constraints defined as expectations of some random functions, or as probabilities of some sets of scenarios.
Expectations are given by multivariate integrals (when scenarios are continuously distributed) or by finite sums (when scenarios are discretely distributed).