LION11 1 / 40
The use of grossone in optimization: a survey and some
recent results
R. De Leone
School of Science and Technology
Università di Camerino
June 2017
Outline of the talk
■ Single and Multi Objective Linear Programming
■ Nonlinear Optimization
■ Some recent results
Single and Multi Objective Linear Programming
Topics: Linear Programming and the Simplex Method; Preliminary results
and notations; BFS and associated basis; Convergence of the Simplex
Method; Lexicographic Rule; Lexicographic rule and RHS perturbation;
Lexicographic rule, RHS perturbation, and ①; Lexicographic
multi-objective Linear Programming; Non–Preemptive grossone-based
scheme; Theoretical results; The gross-simplex Algorithm.
Linear Programming and the Simplex Method
min_x cᵀx
subject to Ax = b, x ≥ 0
The simplex method, proposed by George Dantzig in 1947:
■ starts at a corner point (a Basic Feasible Solution, BFS)
■ verifies if the current point is optimal
■ if not, moves along an edge to a new corner point
until the optimal corner point is identified or it discovers that the problem
has no solution.
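The three steps above can be sketched in code. This is an illustrative dense-tableau simplex for min cᵀx subject to Ax = b, x ≥ 0, assuming a starting basis is supplied; it uses the plain ratio test (no anti-cycling rule yet), and the interface is invented for illustration, not taken from the talk.

```python
from fractions import Fraction

def simplex(A, b, c, basis):
    """Minimal tableau simplex for min c^T x, Ax = b, x >= 0.
    `basis` is a list of m column indices forming a starting BFS."""
    m, n = len(A), len(A[0])
    # Augmented tableau [A | b], in exact rational arithmetic
    T = [[Fraction(v) for v in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    c = [Fraction(v) for v in c]
    basis = list(basis)

    def pivot(r, col):
        piv = T[r][col]
        T[r] = [v / piv for v in T[r]]
        for i in range(m):
            if i != r and T[i][col] != 0:
                f = T[i][col]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]

    for r, col in enumerate(basis):      # canonical form for the starting basis
        pivot(r, col)
    while True:
        # Reduced costs c_j - c_B^T (A_B^{-1} A_j): optimality test
        y = [sum(c[basis[i]] * T[i][j] for i in range(m)) for j in range(n)]
        reduced = [c[j] - y[j] for j in range(n)]
        entering = next((j for j in range(n) if reduced[j] < 0), None)
        if entering is None:             # optimal corner point reached
            x = [Fraction(0)] * n
            for i, j in enumerate(basis):
                x[j] = T[i][-1]
            return x
        # Ratio test: leaving row minimizes b_i / T[i][entering] over positive entries
        rows = [i for i in range(m) if T[i][entering] > 0]
        if not rows:
            raise ValueError("problem is unbounded")
        r = min(rows, key=lambda i: T[i][-1] / T[i][entering])
        pivot(r, entering)               # move along an edge to a new corner point
        basis[r] = entering
```

For example, `simplex([[1, 1, 1, 0], [1, 3, 0, 1]], [4, 6], [-3, -2, 0, 0], [2, 3])` returns the vertex `[4, 0, 0, 2]`.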
Preliminary results and notations
Let
X = {x ∈ ℝⁿ : Ax = b, x ≥ 0}
where A ∈ ℝ^{m×n}, b ∈ ℝᵐ, m ≤ n.
A point x̄ ∈ X is a Basic Feasible Solution (BFS) iff the columns of A
corresponding to positive components of x̄ are linearly independent.
Let x̄ be a BFS and define Ī = I(x̄) := {j : x̄ⱼ > 0}; then
rank(A.Ī) = |Ī|. Note: |Ī| ≤ m.
Vertex points, extreme points, and basic feasible solutions coincide:
BFS ≡ Vertex ≡ Extreme Point
BFS and associated basis
Assume now
rank(A) = m ≤ n
A basis B is a subset of m linearly independent columns of A:
B ⊆ {1, . . . , n}, det(A.B) ≠ 0, N = {1, . . . , n} − B
Let x̄ be a BFS.
Non-degenerate BFS: if |{j : x̄ⱼ > 0}| = m, the BFS is said to be
non-degenerate, and there is a single basis B := {j : x̄ⱼ > 0}
associated with x̄.
Degenerate BFS: if |{j : x̄ⱼ > 0}| < m, the BFS is said to be degenerate,
and there is more than one basis B₁, B₂, . . . , Bₗ associated with x̄,
each satisfying {j : x̄ⱼ > 0} ⊆ Bᵢ.
Let x̄ be a BFS and let B be a basis associated with x̄. Then
x̄_N = 0,  x̄_B = A.B⁻¹ b ≥ 0
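The relation x̄_B = A.B⁻¹ b ≥ 0 can be checked numerically. A minimal sketch with hypothetical data (the matrix, RHS, and basis below are invented for illustration):

```python
from fractions import Fraction

def solve(M, rhs):
    """Solve the square system M y = rhs by Gauss-Jordan elimination (exact rationals)."""
    n = len(M)
    aug = [[Fraction(M[i][j]) for j in range(n)] + [Fraction(rhs[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * p for a, p in zip(aug[r], aug[col])]
    return [aug[i][-1] for i in range(n)]

# Hypothetical data: A is 2x4, basis B = {0, 1}
A = [[2, 1, 1, 0],
     [1, 3, 0, 1]]
b = [5, 10]
B = [0, 1]
A_B = [[A[i][j] for j in B] for i in range(len(A))]   # columns of A indexed by B
x_B = solve(A_B, b)                                    # basic components x_B = A.B^{-1} b
x = [Fraction(0)] * len(A[0])                          # nonbasic components are zero
for pos, j in enumerate(B):
    x[j] = x_B[pos]
# x is a BFS associated with B iff x_B >= 0
```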
Convergence of the Simplex Method
Convergence of the simplex method is ensured if all bases visited by the
method are non-degenerate.
In the presence of degenerate BFSs, the simplex method may not terminate
(cycling).
Hence, specific anti-cycling procedures must be implemented (Bland's
rule, the lexicographic rule).
Lexicographic Rule
At each iteration of the simplex method we choose the leaving variable
using the lexicographic rule
Let B₀ be the initial basis and N₀ = {1, . . . , n} − B₀.
After reordering the columns, we can always assume that A has the form
A = [ A.B₀ | A.N₀ ]
Let
ρ̄ = min_{i : Ā_{i jr} > 0} (A.B⁻¹ b)_i / Ā_{i jr}
If this minimum is attained at only one index, that index gives the leaving
variable.
OTHERWISE
Among the indices i for which
(A.B⁻¹ b)_i / Ā_{i jr} = ρ̄
we choose the index attaining
min_{i : Ā_{i jr} > 0} (A.B⁻¹ A.B₀)_{i1} / Ā_{i jr}
If the minimum is attained at only one index, that index gives the leaving
variable.
OTHERWISE
Among the indices attaining the minimum value, choose the index for which
min_{i : Ā_{i jr} > 0} (A.B⁻¹ A.B₀)_{i2} / Ā_{i jr}
is attained, and proceed in the same way.
This procedure terminates with a single index, since the rows of the
matrix A.B⁻¹ A.B₀ are linearly independent.
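The tie-breaking procedure can be sketched compactly: for each candidate row i, build the ratio vector ((A.B⁻¹b)_i / Ā_{i jr}, (A.B⁻¹A.B₀)_{i1} / Ā_{i jr}, . . .) and take its lexicographic minimum. Python tuple comparison is already lexicographic, so this is a direct (illustrative, not the authors') implementation:

```python
from fractions import Fraction

def lexicographic_leaving_row(inv_b, inv_A_B0, pivot_col):
    """Select the leaving row by the lexicographic rule.

    inv_b     : list, entries (A.B^{-1} b)_i
    inv_A_B0  : list of lists, rows of A.B^{-1} A.B0
    pivot_col : list, entries Ā_{i jr} of the entering column
    """
    candidates = [i for i in range(len(pivot_col)) if pivot_col[i] > 0]

    def key(i):
        d = Fraction(pivot_col[i])
        # Ratio vector; Python compares tuples lexicographically,
        # which is exactly the rule described above.
        return tuple(Fraction(v) / d for v in [inv_b[i]] + list(inv_A_B0[i]))

    return min(candidates, key=key)
```

For instance, when two rows tie on the first ratio, the second component of the vector decides; the recursion of the slides becomes a single tuple comparison.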
Lexicographic rule and RHS perturbation
The procedure outlined above is equivalent to perturbing each component
of the RHS vector b by a very small quantity.
If this perturbation is small enough, the new linear programming problem
is non-degenerate, and the simplex method produces exactly the same pivot
sequence as the lexicographic pivot rule.
However, it is very difficult to determine how small this perturbation
must be. More often a symbolic perturbation is used (with higher
computational costs).
Lexicographic rule and RHS perturbation and ①
Replace bᵢ with
bᵢ + Σ_{j ∈ B₀} A_{ij} ①⁻ʲ
Let
e = ( ①⁻¹, ①⁻², . . . , ①⁻ᵐ )ᵀ
and
b̄ = A.B⁻¹ (b + A.B₀ e) = A.B⁻¹ b + A.B⁻¹ A.B₀ e
Therefore
b̄ᵢ = (A.B⁻¹ b)ᵢ + Σ_{k=1}^{m} (A.B⁻¹ A.B₀)_{ik} ①⁻ᵏ
and
min_{i : Ā_{i jr} > 0} [ (A.B⁻¹ b)ᵢ + Σ_{k=1}^{m} (A.B⁻¹ A.B₀)_{ik} ①⁻ᵏ ] / Ā_{i jr}
  = min_{i : Ā_{i jr} > 0} [ (A.B⁻¹ b)ᵢ / Ā_{i jr}
      + (A.B⁻¹ A.B₀)_{i1} / Ā_{i jr} ①⁻¹ + . . . + (A.B⁻¹ A.B₀)_{im} / Ā_{i jr} ①⁻ᵐ ]
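Because ①⁻ᵏ dominates ①⁻ᵏ⁻¹ for every finite coefficient, comparing two such sums reduces to comparing their coefficient tuples lexicographically. A minimal sketch, representing truncated gross-numbers a₀①⁰ + a₁①⁻¹ + . . . + aₘ①⁻ᵐ as coefficient tuples (my own illustration, not the authors' implementation):

```python
from fractions import Fraction

class GrossNumber:
    """Truncated gross-number a0*①^0 + a1*①^{-1} + ... + am*①^{-m},
    stored as its coefficient tuple (a0, ..., am)."""
    def __init__(self, *coeffs):
        self.c = tuple(Fraction(a) for a in coeffs)
    def _padded(self, n):
        return self.c + (Fraction(0),) * (n - len(self.c))
    def __lt__(self, other):
        n = max(len(self.c), len(other.c))
        # ①^{-k} dominates ①^{-k-1}: comparison is lexicographic
        return self._padded(n) < other._padded(n)
    def __eq__(self, other):
        n = max(len(self.c), len(other.c))
        return self._padded(n) == other._padded(n)
    def __truediv__(self, scalar):
        # division by a finite (purely real) scalar, as in the ratio test
        return GrossNumber(*(a / Fraction(scalar) for a in self.c))

# Two perturbed ratios with equal finite parts: the ①^{-1} terms decide
r1 = GrossNumber(2, 1)        # 2 + ①^{-1}
r2 = GrossNumber(4, 3) / 2    # 2 + (3/2)*①^{-1}
```

Here `r1 < r2`, even though the finite parts coincide: the tie is broken exactly as in the lexicographic rule.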
Lexicographic multi-objective Linear Programming
lexmax_x ( c¹ᵀx, c²ᵀx, . . . , cʳᵀx )
subject to Ax = b, x ≥ 0
The set
S := {x : Ax = b, x ≥ 0}
is bounded and non-empty.
Preemptive Scheme
Start by considering the first objective function alone:
max_x c¹ᵀx
subject to Ax = b, x ≥ 0
Let x*¹ be an optimal solution and β₁ = c¹ᵀx*¹.
Then solve
max_x c²ᵀx
subject to Ax = b, c¹ᵀx = c¹ᵀx*¹, x ≥ 0
Repeat this scheme until either the last problem has been solved or a
unique solution has been determined.
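The preemptive scheme can be sketched generically. To keep the sketch self-contained, the LP solves are replaced by enumeration over an explicitly given vertex set (legitimate for illustration because, with a bounded feasible set, each stage attains its optimum at a vertex); the interface is invented, not taken from the talk.

```python
def preemptive_lexmax(objectives, vertices):
    """Preemptive lexicographic maximization over an enumerated vertex set.

    objectives : list of cost vectors c^1, ..., c^r
    vertices   : finite vertex set of {x : Ax = b, x >= 0};
                 enumerating it stands in for an LP solver here.
    Maximize c^1, then c^2 among the optima of c^1, and so on.
    """
    dot = lambda c, x: sum(ci * xi for ci, xi in zip(c, x))
    candidates = list(vertices)
    for c in objectives:
        best = max(dot(c, x) for x in candidates)
        # keep only the optimal face for this objective
        candidates = [x for x in candidates if dot(c, x) == best]
        if len(candidates) == 1:      # unique solution: stop early
            break
    return candidates
```

In practice each stage would be a single-objective LP with the extra equality constraint cⁱᵀx = βᵢ, exactly as in the scheme above.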
Non–Preemptive Scheme
There always exists a finite scalar M ∈ ℝ such that the solution of the
lexicographic problem can be obtained by solving the single-objective LP
max_x c̃ᵀx
subject to Ax = b, x ≥ 0
where c̃ = Σ_{i=1}^{r} M^{−i+1} cⁱ.
Non–Preemptive grossone-based scheme
Solve the LP
max_x c̃ᵀx
subject to Ax = b, x ≥ 0
where
c̃ = Σ_{i=1}^{r} ①^{−i+1} cⁱ
Note that
c̃ᵀx = c¹ᵀx ①⁰ + c²ᵀx ①⁻¹ + . . . + cʳᵀx ①^{−(r−1)}
The main advantage of this scheme is that it does not require the
specification of a real scalar value M.
M. Cococcioni, M. Pappalardo, Y.D. Sergeyev
Theoretical results
max_x c̃ᵀx
subject to Ax = b, x ≥ 0
If the LP above has a solution, then it always has a solution that is a
vertex.
All optimal solutions of the lexicographic problem are feasible for the
above problem and have the same objective value.
Any optimal solution of the lexicographic problem is optimal for the above
problem, and vice versa.
The dual problem is
min_π bᵀπ
subject to Aᵀπ ≥ c̃
If x̄ is feasible for the primal problem and π̄ is feasible for the dual
problem, then
c̃ᵀx̄ ≤ bᵀπ̄
If x* is feasible for the primal problem, π* is feasible for the dual
problem, and
c̃ᵀx* = bᵀπ*
then x* is primal optimal and π* is dual optimal.
The gross-simplex Algorithm
Main issues:
1) Solve
A.Bᵀ π = c̃_B
Use the LU decomposition of A.B. Note: no divisions by gross-numbers are
required.
2) Calculate the reduced cost vector
c̄_N = c̃_N − A.Nᵀ π
Also in this case, only multiplications and additions of gross-numbers are
required.
For example (components are gross-numbers):
c̄_N = ( 7.331 ①⁻¹ + 0.331 ①⁻²,  4 ①⁰ − 3.331 ①⁻¹ − 0.33 ①⁻² )
c̄_N = ( 3.67 ①⁻¹ + 0.17 ①⁻²,  4 ①⁰ + 0.33 ①⁻¹ − 0.17 ①⁻² )
Nonlinear Optimization
Topics: The case of Equality Constraints; Penalty Functions; Exactness of
a Penalty Function; Introducing ①; Convergence Results; Examples;
Inequality Constraints; Modified LICQ condition; The importance of CQs;
Conjugate Gradient Method; Variable Metric Method for convex nonsmooth
optimization; Matrix Updating scheme.
The case of Equality Constraints
min_x f(x)
subject to h(x) = 0
where f : ℝⁿ → ℝ and h : ℝⁿ → ℝᵏ.
The Lagrangian is
L(x, π) := f(x) + Σ_{j=1}^{k} πⱼ hⱼ(x) = f(x) + πᵀh(x)
Penalty Functions
A penalty function P : ℝⁿ → ℝ satisfies the following condition:
P(x) = 0 if x belongs to the feasible region, P(x) > 0 otherwise.
For example:
P(x) = Σ_{j=1}^{k} |hⱼ(x)|
P(x) = Σ_{j=1}^{k} hⱼ²(x)
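The two penalties can be written down directly. A minimal sketch for a hypothetical constraint system h(x) = (x₁ + x₂ − 1, x₁ − x₂) invented for illustration:

```python
def P_l1(h_vals):
    """l1 penalty: sum of |h_j(x)| -- zero exactly on the feasible set, non-smooth."""
    return sum(abs(v) for v in h_vals)

def P_l2sq(h_vals):
    """Squared l2 penalty: sum of h_j(x)^2 -- smooth, also zero on the feasible set."""
    return sum(v * v for v in h_vals)

def h(x):
    # hypothetical equality constraints h(x) = 0
    return (x[0] + x[1] - 1, x[0] - x[1])

# At the feasible point (0.5, 0.5) both penalties vanish;
# at an infeasible point they are strictly positive.
```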
Exactness of a Penalty Function
The optimal solution of the constrained problem
min_x f(x)
subject to h(x) = 0
can be obtained by solving the following unconstrained minimization
problem
min_x f(x) + (1/σ) P(x)
for a sufficiently small but fixed σ > 0.
P(x) = Σ_{j=1}^{k} |hⱼ(x)|
is a non-smooth function!
Introducing ①
Let
P(x) = Σ_{j=1}^{k} hⱼ²(x)
and solve
min_x f(x) + ① P(x) =: φ(x, ①)
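Numerically, the role of the infinite weight ① can be mimicked by letting a finite penalty weight M grow. A toy worked example (my own, not from the talk): for f(x) = x² and h(x) = x − 1, stationarity of x² + M(x − 1)² gives 2x + 2M(x − 1) = 0, i.e. x = M/(1 + M), which tends to the constrained optimum x* = 1 as M → ∞, illustrating how the penalty term comes to dominate.

```python
def penalized_minimizer(M):
    """Closed-form minimizer of x^2 + M*(x - 1)^2 over the reals
    (toy problem: min x^2 subject to x = 1)."""
    # stationarity: 2x + 2M(x - 1) = 0  =>  x = M / (1 + M)
    return M / (1.0 + M)

# As M grows, the penalized minimizer approaches the constrained optimum x* = 1
for M in (1.0, 1e3, 1e6):
    print(M, penalized_minimizer(M))
```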
Convergence Results
min_x f(x)
subject to h(x) = 0        (1)

min_x f(x) + (①/2)‖h(x)‖²        (2)

Let

x∗ = x∗0 + ①⁻¹x∗1 + ①⁻²x∗2 + ...

be a stationary point for (2) and assume that the LICQ condition holds at x∗0; then the pair (x∗0, π∗ = h^(1)(x∗)) is a KKT point of (1).
Example 1
min_x (1/2)x1² + (1/6)x2²
subject to x1 + x2 = 1

The pair (x∗, π∗) with x∗ = (1/4, 3/4)ᵀ, π∗ = −1/4 is a KKT point.
f(x) + ①P(x) = (1/2)x1² + (1/6)x2² + (①/2)(1 − x1 − x2)²
First-Order Optimality Conditions:

x1 + ①(x1 + x2 − 1) = 0
(1/3)x2 + ①(x1 + x2 − 1) = 0
x∗1 = ① / (1 + 4①),    x∗2 = 3① / (1 + 4①)
x∗1 = 1/4 − ①⁻¹(1/16 − (1/64)①⁻¹ + ...)
x∗2 = 3/4 − ①⁻¹(3/16 − (3/64)①⁻¹ + ...)
x∗1 + x∗2 − 1 = [1/4 − (1/16)①⁻¹ + (1/64)①⁻² − ...] + [3/4 − (3/16)①⁻¹ + (3/64)①⁻² − ...] − 1
= −(1/16)①⁻¹ − (3/16)①⁻¹ + (4/64)①⁻² − ...
= −(1/4)①⁻¹ + (1/16)①⁻² − ...

and h^(1)(x∗) = −1/4 = π∗.
Example 2
min x1 + x2
subject to x1² + x2² − 2 = 0

L(x, π) = x1 + x2 + π(x1² + x2² − 2)

The optimal solution is x∗ = (−1, −1)ᵀ and the pair (x∗, π∗ = 1/2) satisfies the KKT conditions.
φ(x, ①) = x1 + x2 + (①/2)(x1² + x2² − 2)²

First-Order Optimality Conditions:

1 + 2①x1(x1² + x2² − 2) = 0
1 + 2①x2(x1² + x2² − 2) = 0

The solution is given by

x1 = −1 − (1/8)①⁻¹ + C①⁻²
x2 = −1 − (1/8)①⁻¹ + C①⁻²
Moreover,

x1² + x2² − 2 = 2[1 + (1/64)①⁻² + C²①⁻⁴ + (1/4)①⁻¹ − 2C①⁻² − (1/4)C①⁻³] − 2
= (1/2)①⁻¹ + (1/32 − 4C)①⁻² − (1/2)C①⁻³ + 2C²①⁻⁴

so that h^(1)(x∗) = 1/2 = π∗.
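The same finite-t simulation (t standing in for ①, an assumption as before) applied to the symmetric first-order condition 1 + 2t·x(2x² − 2) = 0 confirms x → −1 and a recovered multiplier t(2x² − 2) → 1/2 = π∗:

```python
def bisect(g, lo, hi, iters=200):
    """Plain bisection for a sign change of g on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t = 1e6                                   # large finite stand-in for ①
g = lambda x: 1.0 + 2.0 * t * x * (2.0 * x * x - 2.0)
x = bisect(g, -1.1, -1.0)                 # x1 = x2 = x by symmetry
multiplier = t * (2.0 * x * x - 2.0)      # approximates pi* = 1/2
```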
Inequality Constraints
min_x f(x)
subject to g(x) ≤ 0
h(x) = 0

where f: IRⁿ → IR, g: IRⁿ → IRᵐ, h: IRⁿ → IRᵏ.

L(x, π, µ) := f(x) + Σ_{i=1}^{m} µᵢ gᵢ(x) + Σ_{j=1}^{k} πⱼ hⱼ(x) = f(x) + µᵀg(x) + πᵀh(x)
Modified LICQ condition
Let x⁰ ∈ IRⁿ. The Modified LICQ (MLICQ) condition is said to hold at x⁰ if the vectors

∇gᵢ(x⁰), i : gᵢ(x⁰) ≥ 0,    ∇hⱼ(x⁰), j = 1, ..., k

are linearly independent.
Convergence Results
min_x f(x)
subject to g(x) ≤ 0
h(x) = 0

min_x f(x) + (①/2)‖max{0, g(x)}‖² + (①/2)‖h(x)‖²

x∗ = x∗0 + ①⁻¹x∗1 + ①⁻²x∗2 + ...

⇓ (MLICQ)

(x∗0, µ∗ = g^(1)(x∗), π∗ = h^(1)(x∗)) is a KKT point.
The importance of CQs
min x1 + x2
subject to (x1² + x2² − 2)² = 0

L(x, π) = x1 + x2 + π(x1² + x2² − 2)²

The optimal solution is x∗ = (−1, −1)ᵀ.
φ(x, ①) = x1 + x2 + (①/2)(x1² + x2² − 2)⁴

First-Order Optimality Conditions:

1 + 4①x1(x1² + x2² − 2)³ = 0
1 + 4①x2(x1² + x2² − 2)³ = 0
Let the solution of the above system be

x∗1 = x∗2 = A + B①⁻¹ + C①⁻²

with A, B, and C ∈ IR. Now

4①x∗1 = 4A① + 4B + 4C①⁻¹

and

1 + 4①x∗1 [(x∗1)² + (x∗2)² − 2]³ = 1 + (4A① + 4B + 4C①⁻¹)(2A² − 2 + 4AB①⁻¹ + D①⁻²)³.

If 2A² − 2 ≠ 0, there is still a term of the order ① unless A = 0.

If 2A² − 2 = 0, a term ①⁻¹ can be factored out:

1 + (4A① + 4B + 4C①⁻¹) ①⁻³ (4AB + D①⁻¹)³

and the finite term cannot be equal to 0.

When Constraint Qualification conditions do not hold, the solution of ∇F(x) = 0 does not provide a KKT pair for the constrained problem.
Conjugate Gradient Method
Data: Set k = 0, y₀ = 0, r₀ = b − Ay₀. If r₀ = 0, then STOP. Else, set p₀ = r₀.

Step k: Compute αk = rkᵀpk / pkᵀApk,
  y_{k+1} = yk + αk pk,
  r_{k+1} = rk − αk Apk.
If r_{k+1} = 0, then STOP.
Else, set βk = −r_{k+1}ᵀApk / pkᵀApk = ‖r_{k+1}‖² / ‖rk‖², and
  p_{k+1} = r_{k+1} + βk pk,  k = k + 1.
Go to Step k.
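A direct transcription of the scheme in plain Python (a sketch; A is assumed symmetric positive definite, stored as a list of rows):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, tol=1e-12, max_iter=1000):
    """Solve A y = b by the conjugate gradient scheme above."""
    n = len(b)
    y = [0.0] * n
    r = list(b)                        # r0 = b - A*y0 with y0 = 0
    if dot(r, r) ** 0.5 <= tol:
        return y
    p = list(r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = dot(r, p) / dot(p, Ap)
        y = [yi + alpha * pi for yi, pi in zip(y, p)]
        r_new = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if dot(r_new, r_new) ** 0.5 <= tol:
            return y
        beta = dot(r_new, r_new) / dot(r, r)   # = ||r_{k+1}||^2 / ||r_k||^2
        p = [ri + beta * pi for ri, pi in zip(r_new, p)]
        r = r_new
    return y

y = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
# exact solution: (1/11, 7/11); CG reaches it in two steps
```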
pkᵀApk
When the matrix A is positive definite,

λ_min(A)‖pk‖² ≤ pkᵀApk

and pkᵀApk is bounded away from zero. If the matrix A is not positive definite, such a bound does not hold, and pkᵀApk can potentially be 0.
R. De Leone, G. Fasano, Y.D. Sergeyev
Use

pkᵀApk = s①,

where s = O(①⁻¹) if Step k is a non-degenerate CG step, and s = O(①⁻²) if Step k is a degenerate CG step.
Variable Metric Method for convex nonsmooth optimization
x^{k+1} = x^k − αk (B^k)⁻¹ ξ_a^k

where ξ_a^k is the current aggregate subgradient and B^k is a positive definite, diagonal, variable-metric n × n matrix approximating the Hessian. Then

B^{k+1} = B^k + Δ^k

and

B^{k+1} δ^k ≈ γ^k

with γ^k = g^{k+1} − g^k (difference of subgradients) and δ^k = x^{k+1} − x^k. The focus is on the updating technique for the matrix B^k.
Matrix Updating scheme
min_B ‖Bδ^k − γ^k‖
subject to B_ii ≥ ε
B_ij = 0, i ≠ j
B^{k+1}_ii = max{ε, γ^k_i / δ^k_i}
M. Gaudioso, G. Giallombardo, M. Mukhametzhanov

γ̄^k_i = γ^k_i if |γ^k_i| > ε, and γ̄^k_i = ①⁻¹ otherwise
δ̄^k_i = δ^k_i if |δ^k_i| > ε, and δ̄^k_i = ①⁻¹ otherwise
b^k_i = ①⁻¹ if 0 < γ̄^k_i / δ̄^k_i ≤ ε, and b^k_i = γ̄^k_i / δ̄^k_i otherwise

B^{k+1}_ii = max{①⁻¹, b^k_i}
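A sketch of this safeguarded update, with the infinitesimal ①⁻¹ modeled by a tiny positive float (a simulation assumption: in the actual scheme the safeguard is a genuine infinitesimal and can never drown out finite information):

```python
INV_GROSSONE = 1e-12   # finite stand-in for the infinitesimal ①⁻¹ (assumption)
EPS = 1e-6             # the threshold epsilon of the scheme

def safeguard(v):
    """gamma-bar / delta-bar: keep v if clearly nonzero, else ①⁻¹."""
    return v if abs(v) > EPS else INV_GROSSONE

def update_diagonal(gamma, delta):
    """B^{k+1}_{ii} = max(①⁻¹, b_i), with b_i = ①⁻¹ when the safeguarded
    ratio lies in (0, eps], and the ratio itself otherwise."""
    diag = []
    for g, d in zip(gamma, delta):
        ratio = safeguard(g) / safeguard(d)
        b = INV_GROSSONE if 0.0 < ratio <= EPS else ratio
        diag.append(max(INV_GROSSONE, b))
    return diag

# curvature present, flat direction, and wrong-sign direction:
diag = update_diagonal([2.0, 0.0, -1.0], [1.0, 1.0, 0.0])
# every diagonal entry stays strictly positive
```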
Some recent results
Outline of the talk
Single and Multi
Objective Linear
Programming
Nonlinear Optimization
Some recent results
Quadratic Problems
∇F (x) = 0
A Generic Algorithm
Convergence
Gradient Method
Example A
Example B
Conclusions (?)
LION11 31 / 40
Quadratic Problems
min_x (1/2)xᵀMx + qᵀx
subject to Ax = b
x ≥ 0
KKT conditions:

Mx + q − Aᵀu − v = 0
Ax − b = 0
x ≥ 0, v ≥ 0, xᵀv = 0
min (1/2)xᵀMx + qᵀx + (①/2)‖Ax − b‖₂² + (①/2)‖max{0, −x}‖₂² =: F(x)
∇F(x) = Mx + q + ①Aᵀ(Ax − b) − ① max{0, −x}
x = x^(0) + ①⁻¹x^(1) + ①⁻²x^(2) + ...
b = b^(0) + ①⁻¹b^(1) + ①⁻²b^(2) + ...
A ∈ IR^{m×n}, rank(A) = m
∇F(x) = 0
0 = Mx + q + ①Aᵀ[A(x^(0) + ①⁻¹x^(1) + ①⁻²x^(2) + ...) − b^(0) − ①⁻¹b^(1) − ①⁻²b^(2) − ...] − ① max{0, −x^(0) − ①⁻¹x^(1) − ①⁻²x^(2) − ...}
Looking at the ① terms:

Ax^(0) − b^(0) = 0
max{0, −x^(0)} = 0 and hence x^(0) ≥ 0
Looking at the ①⁰ terms:

Mx^(0) + q + Aᵀ(Ax^(1) − b^(1)) − v = 0

where v_j = max{0, −x^(1)_j} only for the indices j for which x^(0)_j = 0; otherwise v_j = 0.
Set

u = Ax^(1) − b^(1)
v_j = 0 if x^(0)_j ≠ 0, and v_j = max{0, −x^(1)_j} otherwise.

Then

Mx^(0) + q + Aᵀu − v = 0
v ≥ 0, vᵀx^(0) = 0
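The mechanism can be checked on a tiny instance (the data M = I, q = (−1.5, −1), A = (1 1), b = 1 are an illustrative choice, and ① is again simulated by a large finite t; the solution is strictly positive, so the max-term is inactive and ∇F(x) = 0 is a linear system):

```python
def solve_grad_zero(t):
    """For M = I, q = (-1.5, -1.0), A = (1 1), b = 1 and x > 0, the
    stationarity condition grad F(x) = 0 reads
        (1+t)*x1 +     t*x2 = 1.5 + t
            t*x1 + (1+t)*x2 = 1.0 + t
    (Cramer's rule; the max{0,-x} term is inactive for this instance)."""
    a, bb, c, d = 1.0 + t, t, t, 1.0 + t
    r1, r2 = 1.5 + t, 1.0 + t
    det = a * d - bb * c
    x1 = (r1 * d - bb * r2) / det
    x2 = (a * r2 - r1 * c) / det
    return x1, x2

t = 1e6
x1, x2 = solve_grad_zero(t)
u = t * (x1 + x2 - 1.0)   # recovers the equality multiplier (here 0.75)
```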
A Generic Algorithm
min_x f(x)

f(x) = ①f^(1)(x) + f^(0)(x) + ①⁻¹f^(−1)(x) + ...
∇f(x) = ①∇f^(1)(x) + ∇f^(0)(x) + ①⁻¹∇f^(−1)(x) + ...
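In a finite simulation, numbers of this form can be stored as a few grossdigits and compared lexicographically (a minimal sketch; the three-term truncation and the class name are choices made here, not part of the talk):

```python
from functools import total_ordering

@total_ordering
class Gross:
    """Truncated grossone number c1*① + c0 + cm1*①⁻¹ (three grossdigits)."""
    def __init__(self, c1=0.0, c0=0.0, cm1=0.0):
        self.c = (float(c1), float(c0), float(cm1))
    def __add__(self, other):
        return Gross(*(a + b for a, b in zip(self.c, other.c)))
    def __eq__(self, other):
        return self.c == other.c
    def __lt__(self, other):
        # ① dominates any finite part, which dominates any infinitesimal
        return self.c < other.c

f_val = Gross(1.0, -5.0)        # e.g. ①*f1(x) + f0(x) with f1 = 1, f0 = -5
big_finite = Gross(0.0, 1e9)    # a huge, purely finite value
# lexicographic order: the ①-term always wins
```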
At iteration k:

If ∇f^(1)(x^k) = 0 and ∇f^(0)(x^k) = 0, STOP.
Otherwise, find x^{k+1} such that:

If ∇f^(1)(x^k) ≠ 0:

f^(1)(x^{k+1}) ≤ f^(1)(x^k) − σ(‖∇f^(1)(x^k)‖)
f^(0)(x^{k+1}) ≤ max_{0≤j≤lk} f^(0)(x^{k−j}) − σ(‖∇f^(0)(x^k)‖)
If ∇f^(1)(x^k) = 0:

f^(0)(x^{k+1}) ≤ f^(0)(x^k) − σ(‖∇f^(0)(x^k)‖)
f^(1)(x^{k+1}) ≤ max_{0≤j≤mk} f^(1)(x^{k−j})

m₀ = 0, m_{k+1} ≤ max{m_k + 1, M}
l₀ = 0, l_{k+1} ≤ max{l_k + 1, L}

σ(·) is a forcing function.

Non-monotone optimization technique: Zhang-Hager, Grippo-Lampariello-Lucidi, Dai.
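The acceptance tests can be sketched as a small predicate (the function names, memory length M, and the quadratic forcing function are illustrative assumptions):

```python
def forcing(tau):
    # a forcing function sigma(.): positive for tau > 0, sigma(t_k) -> 0
    # implies t_k -> 0
    return 1e-4 * tau ** 2

def accepts(f_hist, f_new, grad_norm, M=5):
    """Non-monotone test: f_new must beat the worst of the last M values
    by a sigma(||grad||) margin."""
    ref = max(f_hist[-M:])
    return f_new <= ref - forcing(grad_norm)

# improving on the recent worst value passes, even after an uphill step
ok = accepts([1.0, 0.9, 1.1, 0.8], 0.9, grad_norm=1.0)
bad = accepts([0.8], 0.85, grad_norm=1.0)
```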
Convergence
Case 1: ∃ k̄ such that ∇f^(1)(x^k) = 0 for all k ≥ k̄. Then

f^(1)(x^{k+1}) ≤ max_{0≤j≤mk} f^(1)(x^{k−j}), k ≥ k̄

and hence

max_{0≤i≤M} f^(1)(x^{k̄+Ml+i}) ≤ max_{0≤i≤M} f^(1)(x^{k̄+M(l−1)+i})

and

f^(0)(x^{k+1}) ≤ f^(0)(x^k) − σ(‖∇f^(0)(x^k)‖), k ≥ k̄.

Assuming that the level sets of f^(1) and f^(0) at x⁰ are compact, the sequence has at least one accumulation point x∗, and every accumulation point satisfies ∇f^(1)(x∗) = 0 and ∇f^(0)(x∗) = 0.
Case 2: ∃ a subsequence {jk} such that ∇f^(1)(x^{jk}) ≠ 0. Then

f^(1)(x^{jk+1}) ≤ f^(1)(x^{jk}) − σ(‖∇f^(1)(x^{jk})‖).

Again,

max_{0≤i≤M} f^(1)(x^{jk+Mt+i}) ≤ max_{0≤i≤M} f^(1)(x^{jk+M(t−1)+i}) − σ(‖∇f^(1)(x^{jk})‖)

and hence ∇f^(1)(x^{jk}) → 0. Moreover,

max_{0≤i≤L} f^(0)(x^{jk+Lt+i}) ≤ max_{0≤i≤L} f^(0)(x^{jk+L(t−1)+i}) − σ(‖∇f^(0)(x^{jk})‖)

and hence ∇f^(0)(x^{jk}) → 0.
Gradient Method
At iteration k, calculate ∇f(x^k).

If ∇f^(1)(x^k) ≠ 0:

x^{k+1} = x^k − α∗∇f^(1)(x^k) − β∗∇f^(0)(x^k), where (α∗, β∗) minimizes f(x^k − α∇f^(1)(x^k) − β∇f^(0)(x^k)) over α ≥ 0, β ≥ 0.

If ∇f^(1)(x^k) = 0:

x^{k+1} = x^k − α∗∇f^(0)(x^k), where α∗ minimizes f^(0)(x^k − α∇f^(0)(x^k)) over α ≥ 0.
Example A
min_x (1/2)x1² + (1/6)x2²
subject to x1 + x2 − 1 = 0

f(x) = (1/2)x1² + (1/6)x2² + (①/2)(1 − x1 − x2)²

x⁰ = (4, 1)ᵀ → x¹ = (0.31, 0.69)ᵀ → x² = (−0.1, 0.39)ᵀ → x³ = (0.26, 0.74)ᵀ → x⁴ = (−0.12, 0.38)ᵀ → x⁵ = (0.25, 0.75)ᵀ
Example B
min_x x1 + x2
subject to x1² + x2² − 2 = 0

f(x) = x1 + x2 + (①/2)(x1² + x2² − 2)²

x⁰ = (0.25, 0.75)ᵀ → x¹ = (−1.22, −0.72)ᵀ → x² = (−7.39, −6.89)ᵀ → x³ = (1.04, 0.95)ᵀ → x⁴ = (−7.10, −7.19)ᵀ → x⁵ = (−1, −1)ᵀ
Conclusions (?)
■ The use of ① is extremely beneficial in various aspects of Linear and Nonlinear Optimization
■ Difficult problems in NLP can be approached in a simpler way using ①
■ A new convergence theory for standard algorithms (gradient, Newton's, Quasi-Newton) needs to be developed in this new framework
Thanks for your attention

Main

  • 1. LION11 1 / 40 The use of grossone in optimization: a survey and some recent results R. De Leone School of Science and Technology Universit`a di Camerino June 2017
  • 2. Outline of the talk Outline of the talk Single and Multi Objective Linear Programming Nonlinear Optimization Some recent results LION11 2 / 40 Single and Multi Objective Linear Programming Nonlinear Optimization Some recent results
  • 3. Single and Multi Objective Linear Programming Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 3 / 40
  • 4. Linear Programming and the Simplex Method Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 4 / 40 min x cT x subject to Ax = b x ≥ 0 The simplex method proposed by George Dantzig in 1947 ■ starts at a corner point (a Basic Feasible Solution, BFS) ■ verifies if the current point is optimal ■ if not, moves along an edge to a new corner point until the optimal corner point is identified or it discovers that the problem has no solution.
  • 5. Preliminary results and notations Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 5 / 40 Let X = {x ∈ IRn : Ax = b, x ≥ 0} where A ∈ IRm×n, b ∈ IRm, m ≤ n. A point ¯x ∈ X is a Basic Feasible Solution (BFS) iff the columns of A corresponding to positive components of ¯x are linearly independent.
  • 6. Preliminary results and notations Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 5 / 40 Let X = {x ∈ IRn : Ax = b, x ≥ 0} where A ∈ IRm×n, b ∈ IRm, m ≤ n. A point ¯x ∈ X is a Basic Feasible Solution (BFS) iff the columns of A corresponding to positive components of ¯x are linearly independent. Let ¯x be a BFS and define ¯I = I(¯x) := {j : ¯xj > 0} then rank(A.¯I) = |¯I|. Note: |¯I| ≤ m
  • 7. Preliminary results and notations Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 5 / 40 Let X = {x ∈ IRn : Ax = b, x ≥ 0} where A ∈ IRm×n, b ∈ IRm, m ≤ n. A point ¯x ∈ X is a Basic Feasible Solution (BFS) iff the columns of A corresponding to positive components of ¯x are linearly independent. Let ¯x be a BFS and define ¯I = I(¯x) := {j : ¯xj > 0} then rank(A.¯I) = |¯I|. Note: |¯I| ≤ m Vertex Point, Extreme Points and Basic Feasible Solution Point coin- cide BFS ≡ Vertex ≡ Extreme Point
  • 8. BFS and associated basis Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 6 / 40 Assume now rank(A) = m ≤ n A base B is a subset of m linearly independent columns of A. B ⊆ {1, . . . , n} , det(A.B) = 0 N = {1, . . . , n} − B Let ¯x be a BFS. .
  • 9. BFS and associated basis Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 6 / 40 Assume now rank(A) = m ≤ n A base B is a subset of m linearly independent columns of A. B ⊆ {1, . . . , n} , det(A.B) = 0 N = {1, . . . , n} − B Let ¯x be a BFS. . If |{j : ¯xj > 0}| = m the BFS is said to be non–degenerate and there is only a single base B := {j : ¯xj > 0} associated to ¯x Non-degenerate BFS
  • 10. BFS and associated basis Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 6 / 40 Assume now rank(A) = m ≤ n A base B is a subset of m linearly independent columns of A. B ⊆ {1, . . . , n} , det(A.B) = 0 N = {1, . . . , n} − B Let ¯x be a BFS. . If |{j : ¯xj > 0}| < m the BFS is said to be degenerate and there are more than one base B1, B2, . . . , Bl associated to ¯x with {j : ¯xj > 0} ⊆ Bi Degenerate BFS
  • 11. BFS and associated basis Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 6 / 40 Assume now rank(A) = m ≤ n A base B is a subset of m linearly independent columns of A. B ⊆ {1, . . . , n} , det(A.B) = 0 N = {1, . . . , n} − B Let ¯x be a BFS. Let B a base associated to ¯x. Then ¯xN = 0, ¯xB = A−1 .B b ≥ 0
  • 12. BFS and associated basis Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 6 / 40 Assume now rank(A) = m ≤ n A base B is a subset of m linearly independent columns of A. B ⊆ {1, . . . , n} , det(A.B) = 0 N = {1, . . . , n} − B Let ¯x be a BFS. Let B a base associated to ¯x.
  • 13. Convergence of the Simplex Method Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 7 / 40 Convergence of the simplex method is ensured if all basis visited by the method are nondegenerate
  • 14. Convergence of the Simplex Method Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 7 / 40 Convergence of the simplex method is ensured if all basis visited by the method are nondegenerate In presence of degenerate BFS, the Simplex method may not terminate (cycling)
  • 15. Convergence of the Simplex Method Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 7 / 40 Convergence of the simplex method is ensured if all basis visited by the method are nondegenerate In presence of degenerate BFS, the Simplex method may not terminate (cycling) ⇓ Hence, specific anti-cycling procedures must be implemented (Bland’s rule, lexicographic order)
  • 16. Lexicographic Rule Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 8 / 40 At each iteration of the simplex method we choose the leaving variable using the lexicographic rule
  • 17. Lexicographic Rule Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 8 / 40 Let B0 be the initial base and N0 = {1, . . . , n} − B0. We can always assume, after columns reordering, that A has the form A = A.Bo ... A.No
  • 18. Lexicographic Rule Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 8 / 40 Let ¯ρ = min i: ¯Aijr >0 (A.−1 B b)i ¯Aijr if such minimum value is reached in only one index this is the leaving variable. OTHERWISE
  • 19. Lexicographic Rule Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 8 / 40 Among the indices i for which min i: ¯Aijr >0 (A.−1 B b)i ¯Aijr = ¯ρ we choose the index for which min i: ¯Aijr >0 (A.−1 B A.Bo)i1 ¯Aijr If the minimum is reached by only one index this is the leaving variable. OTHERWISE
  • 20. Lexicographic Rule Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 8 / 40 Among the indices reaching the minimum value, choose the index for which min i: ¯Aijr >0 (A.−1 B A.Bo)i2 ¯Aijr Proceed in the same way. This procedure will terminate providing a single index since the rows of the matrix (A.−1 B A.Bo) are linearly independent.
  • 21. Lexicographic rule and RHS perturbation Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 9 / 40 The procedure outlined in the previous slides is equivalent to perturb each component of the RHS vector b by a very small quantity. If this perturbation is small enough, the new linear programming problem is nondegerate and the simplex method produces exactly the same pivot sequence as the lexicographic pivot rule However, is very difficult to determine how small this perturbation must be. More often a symbolic perturbation is used (with higher computational costs)
  • 22. Lexicographic rule and RHS perturbation and ① Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 10 / 40 Replace bi with bi with bi + j∈Bo Aij①−j .
  • 23. Lexicographic rule and RHS perturbation and ① Outline of the talk Single and Multi Objective Linear Programming Linear Programming and the Simplex Method Preliminary results and notations BFS and associated basis Convergence of the Simplex Method Lexicographic Rule Lexicographic rule and RHS perturbation Lexicographic rule and RHS perturbation and ① Lexicographic multi-objective Linear Programming Non–Preemptive grossone-based scheme Theoretical results The gross-simplex Algorithm Nonlinear Optimization Some recent resultsLION11 10 / 40 Replace bi with bi with bi + j∈Bo Aij①−j . Let e =      ①−1 ①−2 ... ①−m      and b = A.−1 B (b + A.Boe) = A.−1 B b + A.−1 B A.Boe.
• 24. Lexicographic rule and RHS perturbation and ①
Therefore b̄_i = (A_{.B}^{-1} b)_i + Σ_{k=1}^{m} (A_{.B}^{-1} A_{.Bo})_{ik} ①^{-k}, and the minimum-ratio test becomes
min_{i: Ā_{i j_r} > 0} [ (A_{.B}^{-1} b)_i + Σ_{k=1}^{m} (A_{.B}^{-1} A_{.Bo})_{ik} ①^{-k} ] / Ā_{i j_r}
= min_{i: Ā_{i j_r} > 0} [ (A_{.B}^{-1} b)_i / Ā_{i j_r} + ((A_{.B}^{-1} A_{.Bo})_{i1} / Ā_{i j_r}) ①^{-1} + ... + ((A_{.B}^{-1} A_{.Bo})_{im} / Ā_{i j_r}) ①^{-m} ].
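The perturbed ratio test above lends itself to a direct computational reading: if only the powers ①^0, ①^{-1}, ..., ①^{-m} can appear, a gross-number reduces to the tuple of its coefficients, and comparing two such numbers is plain lexicographic tuple comparison. A minimal sketch (the data and helper names are illustrative, not from the talk):

```python
# Assumption: only the finite powers 0, -1, ..., -m of grossone appear, so a
# gross-number can be stored as the tuple of its coefficients; comparing two
# such numbers is then ordinary lexicographic tuple comparison, which is
# exactly what the perturbed minimum-ratio test needs.

def gross_ratio(b_bar_row, pivot_entry):
    """Divide a gross-valued RHS entry (tuple of coefficients of 1, ①^-1, ...)
    by a positive real pivot entry; the result is again a coefficient tuple."""
    return tuple(c / pivot_entry for c in b_bar_row)

def lexicographic_ratio_test(b_bar, pivot_col):
    """Return the leaving-row index: among rows with a positive pivot entry,
    pick the lexicographically smallest perturbed ratio."""
    candidates = [(gross_ratio(b_bar[i], pivot_col[i]), i)
                  for i in range(len(pivot_col)) if pivot_col[i] > 0]
    return min(candidates)[1]

# Two rows tied on the finite part; the ①^-1 coefficients break the tie.
b_bar = [(2.0, 1.0, 0.0),   # 2 + ①^-1
         (2.0, 0.5, 0.0),   # 2 + 0.5 ①^-1  -> lexicographically smaller ratio
         (5.0, 0.0, 1.0)]
pivot_col = [1.0, 1.0, 1.0]
print(lexicographic_ratio_test(b_bar, pivot_col))  # row 1 wins the tie
```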
• 25. Lexicographic multi-objective Linear Programming
lexmax_x { c1^T x, c2^T x, ..., cr^T x }
subject to Ax = b, x ≥ 0.
The set S := {x : Ax = b, x ≥ 0} is bounded and non-empty.
• 26. Lexicographic multi-objective Linear Programming — Preemptive Scheme
Start by considering the first objective function alone:
max_x c1^T x subject to Ax = b, x ≥ 0.
Let x*1 be an optimal solution and β1 = c1^T x*1.
• 27. Lexicographic multi-objective Linear Programming — Preemptive Scheme
Then solve
max_x c2^T x subject to Ax = b, c1^T x = c1^T x*1, x ≥ 0.
• 28. Lexicographic multi-objective Linear Programming — Preemptive Scheme
Repeat the above scheme until either the last problem is solved or a unique solution has been determined.
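The preemptive scheme can be illustrated on a tiny instance by enumerating all basic feasible solutions and filtering them objective by objective. This sketch (all data hypothetical) uses brute-force vertex enumeration, which is only viable for very small problems:

```python
import itertools
import numpy as np

# Hypothetical small instance: feasible set {Ax = b, x >= 0} in R^4.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([2.0, 1.0])
c1 = np.array([1.0, 1.0, 0.0, 0.0])   # first-priority objective
c2 = np.array([0.0, 1.0, 0.0, 0.0])   # second-priority objective

def basic_feasible_solutions(A, b, tol=1e-9):
    """Enumerate all BFS by trying every basis (brute force, tiny problems only)."""
    m, n = A.shape
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < tol:
            continue                       # singular: not a basis
        xB = np.linalg.solve(B, b)
        if (xB >= -tol).all():             # feasibility of the basic solution
            x = np.zeros(n)
            x[list(cols)] = xB
            yield x

# Preemptive scheme: optimize c1 first, then break ties with c2.
vertices = list(basic_feasible_solutions(A, b))
best1 = max(c1 @ v for v in vertices)
tied = [v for v in vertices if abs(c1 @ v - best1) < 1e-9]
best = max(tied, key=lambda v: c2 @ v)
print(best1, c2 @ best)
```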
• 29. Lexicographic multi-objective Linear Programming — Non–Preemptive Scheme
• 30. Lexicographic multi-objective Linear Programming — Non–Preemptive Scheme
There always exists a finite scalar M ∈ IR such that the solution of the lexicographic problem can be obtained by solving the single-objective LP problem
max_x ˜c^T x subject to Ax = b, x ≥ 0,
where ˜c = Σ_{i=1}^{r} M^{-i+1} c^i.
• 31. Non–Preemptive grossone-based scheme
Solve the LP
max_x ˜c^T x subject to Ax = b, x ≥ 0,
where ˜c = Σ_{i=1}^{r} ①^{-i+1} c^i.
Note that ˜c^T x = (c1^T x) ①^0 + (c2^T x) ①^{-1} + ... + (cr^T x) ①^{-r+1}.
• 32. Non–Preemptive grossone-based scheme
The main advantage of this scheme is that it does not require the specification of a real scalar value M.
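Because ˜c^T x collects the objective values as coefficients of decreasing powers of ①, comparing two gross-valued objectives is exactly lexicographic comparison of the plain objective vectors, with no finite weight M to choose. A sketch over a finite candidate set (points and costs are made up for illustration):

```python
# Sketch of the non-preemptive idea on a finite candidate set: the gross-cost
# value  (c1^T x) ①^0 + (c2^T x) ①^-1 + ...  orders points exactly as the
# tuple (c1^T x, c2^T x, ...) under lexicographic comparison.
# (Candidate points and objectives below are hypothetical.)

def gross_objective(x, objectives):
    """Tuple of objective values = coefficients of ①^0, ①^-1, ... in ˜c^T x."""
    return tuple(sum(ci * xi for ci, xi in zip(c, x)) for c in objectives)

objectives = [(1, 1), (0, 1)]            # c1, c2
candidates = [(2, 0), (1, 1), (0, 2)]    # e.g. vertices of a feasible set

# Python's tuple comparison is lexicographic, mirroring the ① ordering.
best = max(candidates, key=lambda x: gross_objective(x, objectives))
print(best, gross_objective(best, objectives))  # (0, 2) attains value (2, 2)
```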
• 33. Non–Preemptive grossone-based scheme
(M. Cococcioni, M. Pappalardo, Y.D. Sergeyev)
• 34. Theoretical results
max_x ˜c^T x subject to Ax = b, x ≥ 0.
• 35. Theoretical results
If the LP above has a solution, there is always a solution that is a vertex. All optimal solutions of the lexicographic problem are feasible for the above problem and have the same objective value. Any optimal solution of the lexicographic problem is optimal for the above problem, and vice versa.
• 36. Theoretical results
The dual problem is
min_π b^T π subject to A^T π ≥ ˜c.
If x̄ is feasible for the primal problem and π̄ is feasible for the dual problem, then ˜c^T x̄ ≤ b^T π̄.
• 37. Theoretical results
If x* is feasible for the primal problem, π* is feasible for the dual problem, and ˜c^T x* = b^T π*, then x* is primal optimal and π* is dual optimal.
• 38. The gross-simplex Algorithm
Main issues:
1) Solve A_{.B}^T π = ˜c_B using an LU decomposition of A_{.B}. Note: no divisions by gross-numbers are required.
2) Calculate the reduced cost vector ¯˜c_N = ˜c_N − A_{.N}^T π. Also in this case only multiplications and additions of gross-numbers are required, e.g.
¯˜c_N = ( 7.331 ①^{-1} + 0.331 ①^{-2},  4,  −3.331 ①^{-1} − 0.33 ①^{-2} ),
¯˜c_N = ( 3.67 ①^{-1} + 0.17 ①^{-2},  4 ①^0 + 0.33 ①^{-1} − 0.17 ①^{-2} ).
• 39. Nonlinear Optimization
Outline: The case of Equality Constraints · Penalty Functions · Exactness of a Penalty Function · Introducing ① · Convergence Results · Example 1 · Example 2 · Inequality Constraints · Modified LICQ condition · Convergence Results · The importance of CQs · Conjugate Gradient Method · p_k^T A p_k · Variable Metric Method for convex nonsmooth optimization · Matrix Updating scheme.
• 40. The case of Equality Constraints
min_x f(x) subject to h(x) = 0,
where f: IR^n → IR and h: IR^n → IR^k.
L(x, π) := f(x) + Σ_{j=1}^{k} π_j h_j(x) = f(x) + π^T h(x).
• 41. Penalty Functions
A penalty function P: IR^n → IR satisfies the following condition:
P(x) = 0 if x belongs to the feasible region, P(x) > 0 otherwise.
• 42. Penalty Functions
Typical choices:
P(x) = Σ_{j=1}^{k} |h_j(x)|   or   P(x) = Σ_{j=1}^{k} h_j^2(x).
• 43. Exactness of a Penalty Function
The optimal solution of the constrained problem min_x f(x) subject to h(x) = 0 can be obtained by solving the unconstrained minimization problem
min_x f(x) + (1/σ) P(x)
for sufficiently small but fixed σ > 0.
• 44. Exactness of a Penalty Function
Exactness holds for P(x) = Σ_{j=1}^{k} |h_j(x)|.
• 45. Exactness of a Penalty Function
P(x) = Σ_{j=1}^{k} |h_j(x)| is, however, a non-smooth function!
• 46. Introducing ①
Let P(x) = Σ_{j=1}^{k} h_j^2(x) and solve
min_x f(x) + ① P(x) =: φ(x, ①).
• 47. Convergence Results
min_x f(x) subject to h(x) = 0   (1)
• 48. Convergence Results
min_x f(x) + (1/2) ① ||h(x)||²   (2)
• 49. Convergence Results
Let x* = x*0 + ①^{-1} x*1 + ①^{-2} x*2 + ... be a stationary point for (2) and assume that the LICQ condition holds at x*0. Then:
• 50. Convergence Results
Let x* = x*0 + ①^{-1} x*1 + ①^{-2} x*2 + ... be a stationary point for (2) and assume that the LICQ condition holds at x*0. Then the pair (x*0, π* = h^{(1)}(x*)) is a KKT point of (1).
• 51. Example 1
min_x (1/2) x1² + (1/6) x2² subject to x1 + x2 = 1.
The pair (x*, π*) with x* = (1/4, 3/4)^T and π* = −1/4 is a KKT point.
• 52. Example 1
f(x) + ① P(x) = (1/2) x1² + (1/6) x2² + (1/2) ① (1 − x1 − x2)².
• 53. Example 1
First-order optimality conditions:
x1 + ① (x1 + x2 − 1) = 0
(1/3) x2 + ① (x1 + x2 − 1) = 0.
• 54. Example 1
x*1 = ① / (1 + 4①),   x*2 = 3① / (1 + 4①).
• 55. Example 1
x*1 = 1/4 − ①^{-1} (1/16 − (1/64) ①^{-1} + ...)
x*2 = 3/4 − ①^{-1} (3/16 − (3/64) ①^{-1} + ...).
• 56. Example 1
x*1 + x*2 − 1 = [1/4 − (1/16) ①^{-1} + (1/64) ①^{-2} − ...] + [3/4 − (3/16) ①^{-1} + (3/64) ①^{-2} − ...] − 1
= −(1/4) ①^{-1} + (1/16) ①^{-2} − ...,
and h^{(1)}(x*) = −1/4 = π*.
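A quick numeric check of Example 1, using a large finite M as a stand-in for ① (an approximation, not the grossone arithmetic itself): the first-order conditions form a 2×2 linear system whose solution approaches (1/4, 3/4), while the multiplier estimate M·(x1 + x2 − 1) approaches π* = −1/4:

```python
import numpy as np

# First-order conditions of  f(x) + (M/2)(1 - x1 - x2)^2  as a linear system:
#   (1+M) x1 +       M x2 = M
#       M x1 + (1/3+M) x2 = M
M = 1e6
H = np.array([[1.0 + M, M],
              [M, 1.0 / 3.0 + M]])
rhs = np.array([M, M])
x = np.linalg.solve(H, rhs)

pi_est = M * (x[0] + x[1] - 1.0)   # multiplier estimate M * h(x)
print(x, pi_est)                    # x near (1/4, 3/4), pi_est near -1/4
```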
• 57. Example 2
min x1 + x2 subject to x1² + x2² − 2 = 0.
L(x, π) = x1 + x2 + π (x1² + x2² − 2).
The optimal solution is x* = (−1, −1)^T and the pair (x*, π* = 1/2) satisfies the KKT conditions.
• 58. Example 2
φ(x, ①) = x1 + x2 + (①/2) (x1² + x2² − 2)².
First-order optimality conditions:
1 + 2① x1 (x1² + x2² − 2) = 0
1 + 2① x2 (x1² + x2² − 2) = 0.
The solution is given by
x1 = −1 − (1/8) ①^{-1} + C ①^{-2}
x2 = −1 − (1/8) ①^{-1} + C ①^{-2}.
• 59. Example 2
Moreover
x1² + x2² − 2 = (1/2) ①^{-1} + (1/32 − 4C) ①^{-2} − (1/2) C ①^{-3} + ...
• 60. Inequality Constraints
min_x f(x) subject to g(x) ≤ 0, h(x) = 0,
where f: IR^n → IR, g: IR^n → IR^m, h: IR^n → IR^k.
L(x, π, µ) := f(x) + Σ_{i=1}^{m} µ_i g_i(x) + Σ_{j=1}^{k} π_j h_j(x) = f(x) + µ^T g(x) + π^T h(x).
• 61. Modified LICQ condition
Let x0 ∈ IR^n. The Modified LICQ (MLICQ) condition is said to hold at x0 if the vectors
∇g_i(x0), i: g_i(x0) ≥ 0, and ∇h_j(x0), j = 1, ..., k,
are linearly independent.
• 62. Convergence Results
min_x f(x) subject to g(x) ≤ 0, h(x) = 0
min_x f(x) + (①/2) ||max{0, g(x)}||² + (①/2) ||h(x)||²
Let x* = x*0 + ①^{-1} x*1 + ①^{-2} x*2 + ... be a stationary point of the penalized problem; then, under MLICQ:
• 63. Convergence Results
Under MLICQ, the triple (x*0, µ* = g^{(1)}(x*), π* = h^{(1)}(x*)) is a KKT point of the constrained problem.
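A one-dimensional illustration of the inequality-constrained penalty (my own toy problem, with a large finite M standing in for ①): for min x subject to 1 − x ≤ 0, the penalized stationary point and the multiplier estimate M·max(0, g(x)) recover the KKT pair (x* = 1, µ* = 1) in the limit:

```python
# Toy problem (hypothetical, not from the talk):  min x  s.t.  g(x) = 1 - x <= 0.
# Quadratic penalty with a large finite M standing in for ①:
#   phi(x) = x + (M/2) * max(0, 1 - x)^2
# Its stationarity condition 1 - M*max(0, 1 - x) = 0 gives x = 1 - 1/M.
M = 1e6
x = 1.0 - 1.0 / M                      # stationary point of the penalized problem

deriv = 1.0 - M * max(0.0, 1.0 - x)    # should be (numerically) zero
mu_est = M * max(0.0, 1.0 - x)         # multiplier estimate, approaches mu* = 1
print(x, deriv, mu_est)
```

As M grows, x approaches the constrained minimizer 1 from below while the multiplier estimate stays at 1, mirroring the (x*0, µ* = g^{(1)}(x*)) statement above.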
• 64. The importance of CQs
min x1 + x2 subject to (x1² + x2² − 2)² = 0.
L(x, π) = x1 + x2 + π (x1² + x2² − 2)².
The optimal solution is x* = (−1, −1)^T (here the constraint gradient vanishes on the whole feasible set, so LICQ fails).
• 65. The importance of CQs
φ(x, ①) = x1 + x2 + (①/2) (x1² + x2² − 2)^4.
First-order optimality conditions:
1 + 4① x1 (x1² + x2² − 2)³ = 0
1 + 4① x2 (x1² + x2² − 2)³ = 0.
• 66. The importance of CQs
Let the solution of the above system be x*1 = x*2 = A + B ①^{-1} + C ①^{-2} with A, B, C ∈ IR. Now 4① x*1 = 4A① + 4B + 4C ①^{-1} and
1 + 4① x*1 ((x*1)² + (x*2)² − 2)³ = 1 + (4A① + 4B + 4C ①^{-1}) (2A² − 2 + 4AB ①^{-1} + D ①^{-2})³.
• 68. The importance of CQs
If 2A² − 2 ≠ 0, there is still a term of the order ①, unless A = 0.
• 69. The importance of CQs
If 2A² − 2 = 0, a term ①^{-1} can be factored out:
1 + (4A① + 4B + 4C ①^{-1}) ①^{-3} (4AB + D ①^{-1})³,
and the finite term cannot be equal to 0.
• 70. The importance of CQs
When Constraint Qualification conditions do not hold, the solution of ∇F(x) = 0 does not provide a KKT pair for the constrained problem.
• 71. Conjugate Gradient Method
Data: set k = 0, y0 = 0, r0 = b − A y0. If r0 = 0, STOP; else set p0 = r0.
Step k: compute
αk = r_k^T p_k / p_k^T A p_k,  y_{k+1} = y_k + αk p_k,  r_{k+1} = r_k − αk A p_k.
If r_{k+1} = 0, STOP. Else set
βk = −r_{k+1}^T A p_k / p_k^T A p_k = ||r_{k+1}||² / ||r_k||²,
p_{k+1} = r_{k+1} + βk p_k,  k = k + 1, and go to Step k.
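The algorithm on the slide translates directly into code; this is a standard CG implementation for a symmetric positive definite system (no grossone machinery yet):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12, max_iter=None):
    """Plain CG for A y = b with A symmetric positive definite,
    mirroring the slide's notation (r_k residual, p_k search direction)."""
    n = len(b)
    max_iter = max_iter or n
    y = np.zeros(n)
    r = b - A @ y
    p = r.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Ap = A @ p
        alpha = (r @ p) / (p @ Ap)          # alpha_k = r_k^T p_k / p_k^T A p_k
        y = y + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)    # the ||r_{k+1}||^2 / ||r_k||^2 form
        p = r_new + beta * p
        r = r_new
    return y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
y = conjugate_gradient(A, b)
print(y)   # solves A y = b (exactly after at most n = 2 steps)
```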
  • 73. pkᵀApk
When the matrix A is positive definite, λmin(A)‖pk‖² ≤ pkᵀApk, so pkᵀApk is bounded below by a positive quantity whenever pk ≠ 0. If A is not positive definite, no such bound holds, since possibly pkᵀApk = 0.
R. De Leone, G. Fasano, Y.D. Sergeyev: use pkᵀApk = s①, where s = O(①⁻¹) if Step k is a non-degenerate CG step, and s = O(①⁻²) if Step k is a degenerate CG step.
  • 74. Variable Metric Method for convex nonsmooth optimization
xk+1 = xk − αk (Bk)⁻¹ ξak, where ξak is the current aggregate subgradient and Bk is a positive definite variable-metric n × n matrix approximating the Hessian.
Then Bk+1 = Bk + ∆k with Bk+1 δk ≈ γk, where γk = gk+1 − gk (difference of subgradients), δk = xk+1 − xk, and Bk is kept diagonal.
The focus is on the updating technique for the matrix Bk.
  • 76. Matrix Updating scheme
min_B ‖Bδk − γk‖ subject to Bii ≥ ε, Bij = 0 for i ≠ j,
whose solution is Bk+1ii = max{ε, γki / δki}.
  • 77. Matrix Updating scheme
min_B ‖Bδk − γk‖ subject to Bii ≥ ε, Bij = 0 for i ≠ j
M. Gaudioso, G. Giallombardo, M. Mukhametzhanov:
γ̄ki = γki if |γki| > ε, and γ̄ki = ①⁻¹ otherwise; δ̄ki = δki if |δki| > ε, and δ̄ki = ①⁻¹ otherwise.
  • 78. Matrix Updating scheme
min_B ‖Bδk − γk‖ subject to Bii ≥ ε, Bij = 0 for i ≠ j
bki = ①⁻¹ if 0 < γ̄ki / δ̄ki ≤ ε, and bki = γ̄ki / δ̄ki otherwise; Bk+1ii = max{①⁻¹, bki}.
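A minimal sketch of this diagonal update, assuming the γ̄ki / δ̄ki ratio reading of the slide. Plain floats cannot represent ①⁻¹, so it is modelled here by the tiny constant `inv_gross` (in grossone arithmetic the infinitesimal would be carried exactly); the function name `update_diagonal_metric` is an invention of this sketch:

```python
import numpy as np

def update_diagonal_metric(gamma, delta, eps=1e-6, inv_gross=1e-12):
    """Diagonal variable-metric update of slides 77-78; inv_gross stands
    in for the infinitesimal ①^{-1} (an approximation of this sketch)."""
    # replace negligible components by ①^{-1} so no ratio degenerates
    g = np.where(np.abs(gamma) > eps, gamma, inv_gross)
    d = np.where(np.abs(delta) > eps, delta, inv_gross)
    ratio = g / d
    # small positive curvature estimates are pushed down to ①^{-1}
    b = np.where((ratio > 0) & (ratio <= eps), inv_gross, ratio)
    # final safeguard keeps every diagonal entry positive
    return np.maximum(inv_gross, b)
```

The last line mirrors Bk+1ii = max{①⁻¹, bki}: even a negative ratio (negative curvature along component i) is replaced by the infinitesimal, so the metric stays positive definite.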
  • 79. Some recent results
  • 81. Quadratic Problems
min_x ½xᵀMx + qᵀx subject to Ax = b, x ≥ 0
KKT conditions:
Mx + q − Aᵀu − v = 0
Ax − b = 0
x ≥ 0, v ≥ 0, xᵀv = 0
  • 84. Quadratic Problems
min_x ½xᵀMx + qᵀx subject to Ax = b, x ≥ 0
min ½xᵀMx + qᵀx + (①/2)‖Ax − b‖²₂ + (①/2)‖max{0, −x}‖²₂ =: F(x)
∇F(x) = Mx + q + ①Aᵀ(Ax − b) − ① max{0, −x}
x = x(0) + ①⁻¹x(1) + ①⁻²x(2) + . . .
b = b(0) + ①⁻¹b(1) + ①⁻²b(2) + . . .
A ∈ IRm×n with rank(A) = m
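The expansions x = x(0) + ①⁻¹x(1) + . . . can be manipulated mechanically by storing the coefficients of each power of ①. A toy sketch — the class `Gross` is invented here for illustration, not a full grossone arithmetic:

```python
from collections import defaultdict

class Gross:
    """A number  sum_k c_k ①^k  stored as {exponent: coefficient}.
    A toy illustration of gross-number arithmetic, not a full implementation."""
    def __init__(self, coeffs):
        self.c = {k: v for k, v in coeffs.items() if v != 0}
    def __add__(self, other):
        out = defaultdict(float)
        for k, v in list(self.c.items()) + list(other.c.items()):
            out[k] += v
        return Gross(out)
    def __mul__(self, other):
        out = defaultdict(float)
        for k1, v1 in self.c.items():
            for k2, v2 in other.c.items():
                out[k1 + k2] += v1 * v2   # ①^a · ①^b = ①^(a+b)
        return Gross(out)

x = Gross({0: 2.0, -1: 3.0})   # x = 2 + 3·①⁻¹, i.e. x(0) = 2, x(1) = 3
one = Gross({1: 1.0})          # ① itself
print((one * x).c)             # {1: 2.0, 0: 3.0}  →  ①·x = 2① + 3
```

Multiplying by ① shifts every exponent up by one, which is exactly how the ①-order and finite-order terms separate in the derivation of ∇F(x) = 0 on the next slides.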
  • 86. ∇F(x) = 0
0 = Mx + q + ①Aᵀ(A(x(0) + ①⁻¹x(1) + ①⁻²x(2) + . . .) − b(0) − ①⁻¹b(1) − ①⁻²b(2) − . . .) − ① max{0, −x(0) − ①⁻¹x(1) − ①⁻²x(2) − . . .}
Looking at the ① terms:
Ax(0) − b(0) = 0, max{0, −x(0)} = 0, and hence x(0) ≥ 0.
  • 87. ∇F(x) = 0
0 = Mx + q + ①Aᵀ(A(x(0) + ①⁻¹x(1) + ①⁻²x(2) + . . .) − b(0) − ①⁻¹b(1) − ①⁻²b(2) − . . .) − ① max{0, −x(0) − ①⁻¹x(1) − ①⁻²x(2) − . . .}
Looking at the ①⁰ terms:
Mx(0) + q + Aᵀ(Ax(1) − b(1)) − v = 0,
where vj = max{0, −x(1)j} only for the indices j for which x(0)j = 0, and vj = 0 otherwise.
  • 88. ∇F(x) = 0
0 = Mx + q + ①Aᵀ(A(x(0) + ①⁻¹x(1) + ①⁻²x(2) + . . .) − b(0) − ①⁻¹b(1) − ①⁻²b(2) − . . .) − ① max{0, −x(0) − ①⁻¹x(1) − ①⁻²x(2) − . . .}
Set u = Ax(1) − b(1) and vj = 0 if x(0)j ≠ 0, vj = max{0, −x(1)j} otherwise. Then
Mx(0) + q + Aᵀu − v = 0, v ≥ 0, vᵀx(0) = 0.
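These relations can be checked numerically on a tiny QP by replacing ① with a large finite weight `Omega` — an approximation made only for this check; grossone arithmetic would keep the orders separate exactly. The instance below, min ½‖x‖² subject to x1 + x2 = 1, x ≥ 0, has solution x* = (0.5, 0.5) > 0, so v = 0 and the max{0, −x} penalty stays inactive:

```python
import numpy as np

# Toy instance:  M = I, q = 0, A = [1 1], b = 1
M, q = np.eye(2), np.zeros(2)
A, b = np.array([[1.0, 1.0]]), np.array([1.0])

Omega = 1e8                       # large finite stand-in for ① (assumption)
# Stationarity of F:  (M + Omega A^T A) x = Omega A^T b - q
x = np.linalg.solve(M + Omega * (A.T @ A), Omega * A.T @ b - q)

u = Omega * (A @ x - b)           # finite-order multiplier, ~ A x(1) - b(1)
# KKT stationarity from slide 88:  M x(0) + q + A^T u - v = 0  with v = 0
residual = M @ x + q + A.T @ u
```

The recovered multiplier u tends to −0.5 as Omega grows, and the stationarity residual is numerically zero, matching Mx(0) + q + Aᵀu − v = 0 with v = 0.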
  • 89. A Generic Algorithm
min_x f(x)
f(x) = ①f(1)(x) + f(0)(x) + ①⁻¹f(−1)(x) + . . .
∇f(x) = ①∇f(1)(x) + ∇f(0)(x) + ①⁻¹∇f(−1)(x) + . . .
  • 91. A Generic Algorithm
min_x f(x)
At iteration k: if ∇f(1)(xk) = 0 and ∇f(0)(xk) = 0, STOP.
  • 93. A Generic Algorithm
min_x f(x)
At iteration k, otherwise find xk+1 such that, if ∇f(1)(xk) ≠ 0:
f(1)(xk+1) ≤ f(1)(xk) − σ(‖∇f(1)(xk)‖)
f(0)(xk+1) ≤ max over 0 ≤ j ≤ lk of f(0)(xk−j) − σ(‖∇f(0)(xk)‖)
  • 94. A Generic Algorithm
min_x f(x)
At iteration k, otherwise find xk+1 such that, if ∇f(1)(xk) = 0:
f(0)(xk+1) ≤ f(0)(xk) − σ(‖∇f(0)(xk)‖)
f(1)(xk+1) ≤ max over 0 ≤ j ≤ mk of f(1)(xk−j)
  • 95. A Generic Algorithm
min_x f(x)
m0 = 0, mk+1 ≤ min{mk + 1, M}; l0 = 0, lk+1 ≤ min{lk + 1, L}.
σ(·) is a forcing function.
Non-monotone optimization techniques: Zhang–Hager, Grippo–Lampariello–Lucidi, Dai.
  • 96. Convergence
Case 1: ∃ k̄ such that ∇f(1)(xk) = 0 for all k ≥ k̄.
Then f(1)(xk+1) ≤ max over 0 ≤ j ≤ mk of f(1)(xk−j) for k ≥ k̄, and hence
max over 0 ≤ i ≤ M of f(1)(xk̄+Ml+i) ≤ max over 0 ≤ i ≤ M of f(1)(xk̄+M(l−1)+i),
and f(0)(xk+1) ≤ f(0)(xk) − σ(‖∇f(0)(xk)‖) for k ≥ k̄.
Assuming that the level sets of f(1) and f(0) at x0 are compact, the sequence has at least one accumulation point x*, and any accumulation point satisfies ∇f(1)(x*) = 0 and ∇f(0)(x*) = 0.
  • 98. Convergence
Case 2: ∃ a subsequence jk such that ∇f(1)(xjk) ≠ 0.
Then f(1)(xjk+1) ≤ f(1)(xjk) − σ(‖∇f(1)(xjk)‖).
Again, max over 0 ≤ i ≤ M of f(1)(xjk+Mt+i) ≤ max over 0 ≤ i ≤ M of f(1)(xjk+M(t−1)+i) − σ(‖∇f(1)(xjk)‖), and hence ∇f(1)(xjk) → 0.
Moreover, max over 0 ≤ i ≤ L of f(0)(xjk+Lt+i) ≤ max over 0 ≤ i ≤ L of f(0)(xjk+L(t−1)+i) − σ(‖∇f(0)(xjk)‖), and hence ∇f(0)(xjk) → 0.
  • 99. Gradient Method
At iteration k calculate ∇f(xk).
If ∇f(1)(xk) ≠ 0:
xk+1 = arg min over α ≥ 0, β ≥ 0 of f(xk − α∇f(1)(xk) − β∇f(0)(xk))
If ∇f(1)(xk) = 0:
xk+1 = arg min over α ≥ 0 of f(0)(xk − α∇f(0)(xk))
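A sketch of one iteration of this method. Two liberties are taken: values of f = ①f(1) + f(0) are compared lexicographically via Python tuples, which mirrors grossone ordering, and the exact minimisation over (α, β) ≥ 0 is replaced by a coarse grid search; the name `lex_gradient_step` and the step grid are inventions of this sketch:

```python
import numpy as np

def lex_gradient_step(x, grad1, grad0, f1, f0, steps=np.linspace(0.0, 1.0, 41)):
    """One iteration of the gradient method of slide 99 (sketch only).
    (f1(y), f0(y)) tuples order candidates as ①f1(y) + f0(y) would."""
    g1, g0 = grad1(x), grad0(x)
    if np.linalg.norm(g1) > 0:
        # move along both gradients, picking the lexicographically best point
        candidates = [x - a * g1 - b * g0 for a in steps for b in steps]
        return min(candidates, key=lambda y: (f1(y), f0(y)))
    # infinite-order gradient vanishes: line search on f(0) alone
    return min([x - a * g0 for a in steps], key=f0)
```

With exact grossone values the tuple comparison would be the comparison of f itself; in floating point it makes f(0) matter only on exact ties in f(1), which is the price of this finite sketch.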
  • 100. Example A
min_x ½x1² + (1/6)x2² subject to x1 + x2 − 1 = 0
f(x) = ½x1² + (1/6)x2² + ½①(1 − x1 − x2)²
x0 = (4, 1) → x1 = (0.31, 0.69) → x2 = (−0.1, 0.39) → x3 = (0.26, 0.74) → x4 = (−0.12, 0.38) → x5 = (0.25, 0.75)
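The iterates above approach x* = (0.25, 0.75), which can be confirmed by solving the stationarity condition of f with ① replaced by a large finite weight `Omega` — a finite approximation made only for this check:

```python
import numpy as np

# Example A:  min ½x1² + (1/6)x2²  s.t.  x1 + x2 = 1
H = np.diag([1.0, 1.0 / 3.0])       # Hessian of ½x1² + (1/6)x2²
a = np.array([[1.0, 1.0]])          # constraint row: x1 + x2 = 1
Omega = 1e8                         # finite stand-in for ① (assumption)
# stationarity of the penalised f:  (H + Omega a^T a) x = Omega a^T * 1
x = np.linalg.solve(H + Omega * (a.T @ a), Omega * a.T @ np.array([1.0]))
```

In closed form x1 = Omega/(1 + 4·Omega) and x2 = 3·x1, so x → (0.25, 0.75) as Omega grows, matching the limit of the slide's iterates.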
  • 101. Example B
min_x x1 + x2 subject to x1² + x2² − 2 = 0
f(x) = x1 + x2 + ½①(x1² + x2² − 2)²
x0 = (0.25, 0.75) → x1 = (−1.22, −0.72) → x2 = (−7.39, −6.89) → x3 = (1.04, 0.95) → x4 = (−7.10, −7.19) → x5 = (−1, −1)
  • 102. Conclusions (?)
■ The use of ① is extremely beneficial in various aspects of Linear and Nonlinear Optimization
■ Difficult problems in NLP can be approached in a simpler way using ①
■ A new convergence theory for standard algorithms (gradient, Newton's, quasi-Newton) needs to be developed in this new framework
  • 103. Thanks for your attention