Linear Optimization (MATH 2062)
Dereje Tigabu (MSc.)
Department of Mathematics
Debark University
October 7, 2020
Chapter 4
Simplex Method
"Checking the results of a decision against its expectations shows executives what their strengths are, where they need to improve, and where they lack knowledge or information." - Peter Drucker
Outline of the Chapter
1 Introduction
2 Linear Programs in Standard Form
3 Basic Feasible Solutions
4 Fundamental Theorem of Linear Programming
5 Algebra of the Simplex Method
6 The simplex Algorithm
7 Degeneracy and Finiteness of Simplex Algorithm
8 Finding a Starting Basic Feasible Solution
Two-phase method
Big-M method
9 Some Complications and Their Resolution
Unrestricted Variables
Tie for Entering Basic Variable (Key Column)
Tie for Leaving Basic Variable (Key Row) Degeneracy
10 Types of Linear Programming Solutions
Alternative (Multiple) Optimal Solutions
Unbounded Solution
Infeasible Solution
1. Introduction
We shall discuss a procedure called the simplex method for solving
an LP model of such problems.
This method was developed by G. B. Dantzig in 1947.
For LP problems with several variables, we may not be able to graph
the feasible region, but the optimal solution will still lie at an extreme
point of the many-sided, multidimensional figure (called an
n-dimensional polyhedron) that represents the feasible solution space.
The simplex method examines these extreme points in a systematic
manner, repeating the same set of steps of the algorithm until an
optimal solution is found.
It is also called the iterative method.
Since the number of extreme points of the feasible solution space is finite, the method assures an improvement in the value of the objective function as we move from one iteration (extreme point) to another, and it reaches the optimal solution in a finite number of steps.
The method also indicates when an unbounded solution is reached.
Slack variables: If a constraint is of the "≤" form, then we add a variable to the left-hand side of the inequality to make it an equality. These variables are called slack variables.
Remark: A slack variable represents an unused resource, either in the form of time on a machine, labour hours, money, warehouse space, or any number of such resources in various business problems. Since these variables yield no profit, they are added to the original objective function with zero coefficients.
Surplus variables: If a constraint is of the "≥" form, then we subtract a variable from the left-hand side of the inequality to make it an equality. These variables are called surplus variables.
Remark: A surplus variable represents the amount by which the solution values exceed a resource. These variables are also called negative slack variables. Surplus variables, like slack variables, carry a zero coefficient in the objective function.
Conditions for Application of Simplex Method
1 The right-hand side bi of each constraint should be non-negative. If an LPP has a constraint with a negative right-hand side, we convert it to a positive one by multiplying both sides by -1.
2 Each of the decision variables of the problem should be non-negative. If one of the choice variables is not restricted to be non-negative, we cannot apply the simplex method directly. Therefore non-negativity of the variables is a necessary condition for application of the simplex method.
3 The inequality constraints on resources or any other activities must be converted into equations by adding slack variables (≤ type inequality) or by subtracting surplus variables (≥ type inequality) on the left of the inequality.
2. Linear Programs in Standard Form
The standard form has the following characteristics.
(i) All the constraints should be expressed as equations by adding slack or surplus and/or artificial variables.
(ii) The right-hand side of each constraint should be non-negative; if it is not, this can be achieved by multiplying both sides of the constraint by -1.
Minimization and Maximization Problems
Max(Z) = −Min(−Z).
After the optimization of the new problem is completed, the optimal objective value of the old problem is -1 times the optimal objective value of the new problem.
That is, let Z′ = −Z; after the optimal Z′ value is found, recover Z = −Z′.
Consider an LP model,
Max/Min Z = cT x
s.t.: Ax (≤, ≥) b
x ≥ 0
The standard form of the linear programming problem is expressed as
Optimize (Max or Min) Z = CT X + 0S
Subject to AX ± S = b
and X, S ≥ 0,
where CT = (c1, c2, ..., cn) is a row vector, X = (x1, x2, ..., xn)T,
b = (b1, b2, ..., bm)T and S = (s1, s2, ..., sm)T are column vectors, and
A =
[ a11 a12 . . . a1n ]
[ a21 a22 . . . a2n ]
[  .    .   . . .   .  ]
[ am1 am2 . . . amn ]
is the m × n matrix of coefficients of the variables x1, x2, ..., xn in the constraints.
Example 1
Transform the following LPP into standard form of LPP.
Maximize Z = x1 − x2 + x3
Subject to x1 + x2 − 3x3 ≥ 4
2x1 − 4x2 + x3 ≥ −5
x1 + 2x2 − 2x3 ≤ 3
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0
Solution:
Since the second constraint has a negative value on the right-hand side, we multiply both of its sides by (-1) and then add surplus/slack variables:
Maximize Z = x1 − x2 + x3 + 0x4 + 0x5 + 0x6
subject to x1 + x2 − 3x3 − x4 = 4
−2x1 + 4x2 − x3 + x5 = 5
x1 + 2x2 − 2x3 + x6 = 3
x1, x2, x3, x4, x5, x6 ≥ 0
Therefore, the above problem is now in standard LPP form.
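As a small illustration of this conversion (not part of the original slides; the helper name to_standard_form and the variable names are invented for the example), the following Python sketch builds the standard-form coefficient matrix for Example 1 with NumPy by flipping the sign of the second constraint and then appending one surplus or slack column per row.

import numpy as np

def to_standard_form(A, b, senses):
    """Convert A x (<=, >=) b into equalities by adding slack/surplus columns.

    senses[i] is '<=' or '>='. A row with a negative right-hand side is first
    multiplied by -1 (which also flips its inequality direction), then a slack
    (+1) or surplus (-1) column is appended for each constraint.
    """
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    senses = list(senses)
    for i in range(len(b)):
        if b[i] < 0:                                   # make the RHS non-negative
            A[i], b[i] = -A[i], -b[i]
            senses[i] = '>=' if senses[i] == '<=' else '<='
    extra = np.zeros((len(b), len(b)))
    for i, s in enumerate(senses):
        extra[i, i] = 1.0 if s == '<=' else -1.0       # slack or surplus column
    return np.hstack([A, extra]), b

# Example 1 from this slide
A = [[1, 1, -3], [2, -4, 1], [1, 2, -2]]
b = [4, -5, 3]
A_std, b_std = to_standard_form(A, b, ['>=', '>=', '<='])
print(A_std)   # second row becomes -2x1 + 4x2 - x3 + x5 = 5
print(b_std)   # [4. 5. 3.]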
Variables Unrestricted In Sign
The difference of two non-negative variables is a variable unrestricted
in sign.
Let x1 and x2 be two non-negative variables. The difference of these
two variables is a variable x3 i.e x3 = x1 − x2 which is unrestricted in
sign.
If x1 > x2, then x3 > 0.
If x1 < x2, then x3 < 0.
If x1 = x2 , then x3 = 0.
Example 2.
Write down the following LPP, where the variables are non-negative in
standard LPP form
Maximize Z = 2x1 + 3x2 − x3
Subject to 4x1 + x2 + x3 ≥ 4
7x1 + 4x2 − x3 ≤ 25,
x1, x3 ≥ 0, x2 unrestricted sign.
Solution: Since x2 is unrestricted in sign, we replace x2 by x2 = x2′ − x2′′, where x2′, x2′′ ≥ 0.
Maximize Z = 2x1 + 3x2′ − 3x2′′ − x3 + 0x4 + 0x5
Subject to 4x1 + x2′ − x2′′ + x3 − x4 = 4
7x1 + 4x2′ − 4x2′′ − x3 + x5 = 25,
x1, x3, x4, x5, x2′, x2′′ ≥ 0
3. Basic Feasible Solutions
Consider the system Ax = b and x ≥ 0, where A is an m × n matrix
and b is any m vector.
Suppose that rank(A, b) = rank(A) = m. After possibly rearranging the columns of A, let A = [B, N], where B is an m × m invertible matrix and N is an m × (n − m) matrix.
The point X = (XB, XN)T, where XB = B−1b and XN = 0, is called a basic solution of the system.
If XB ≥ 0, then X is called a basic feasible solution of the system.
Here B is called the basic matrix (or simply the basis ) and N is
called the non basic matrix.
The components of XB are called basic variables, and the
components XN are called non basic variables.
If XB > 0, then X is called a non degenerate basic feasible
solution and if at least one component of XB is zero, then X is called
a degenerate basic feasible solution.
Example 1
Consider the polyhedral set defined by the following inequalities. Find
the basic solution.
(a)
x1 + x2 ≤ 6
x2 ≤ 3
x1, x2 ≥ 0
(b)
x + y ≤ 3
2x − y ≤ 4
x, y ≥ 0
Solution
(a) By introducing the slack variables x3 and x4 , the problem is put in the
following standard format:
x1 + x2 + x3 = 6
x2 + x4 = 3
x1, x2, x3, x4 ≥ 0
Note that the constraint matrix is
A = [a1 a2 a3 a4] =
[ 1 1 1 0 ]
[ 0 1 0 1 ]
From the foregoing definition, finding basic feasible solutions corresponds to finding a 2 × 2 basis matrix B with non-negative B−1b. The following are the possible ways of extracting B out of A.
B = [a1, a2] = [ 1 1 ; 0 1 ], XB = (x1, x2)T = B−1b = [ 1 −1 ; 0 1 ](6, 3)T = (3, 3)T, XN = (x3, x4)T = (0, 0)T.
B = [a1, a4] = [ 1 0 ; 0 1 ], XB = (x1, x4)T = B−1b = [ 1 0 ; 0 1 ](6, 3)T = (6, 3)T, XN = (x2, x3)T = (0, 0)T.
B = [a2, a3] = [ 1 1 ; 1 0 ], XB = (x2, x3)T = B−1b = [ 0 1 ; 1 −1 ](6, 3)T = (3, 3)T, XN = (x1, x4)T = (0, 0)T.
B = [a2, a4] = [ 1 0 ; 1 1 ], XB = (x2, x4)T = B−1b = [ 1 0 ; −1 1 ](6, 3)T = (6, −3)T, XN = (x1, x3)T = (0, 0)T. (Since x4 = −3 < 0, this basic solution is not feasible.)
B = [a3, a4] = [ 1 0 ; 0 1 ], XB = (x3, x4)T = B−1b = [ 1 0 ; 0 1 ](6, 3)T = (6, 3)T, XN = (x1, x2)T = (0, 0)T.
We have four basic feasible solutions, namely
X1 = (3, 3, 0, 0)T, X2 = (6, 0, 0, 3)T, X3 = (0, 3, 3, 0)T, X4 = (0, 0, 6, 3)T.
These basic feasible solutions, projected into R2, that is, into the (x1, x2) space, give rise to the following four points:
(3, 3), (6, 0), (0, 3), (0, 0).
These points are precisely the extreme points of the feasible region.
In this example, the possible number of basic feasible solutions is bounded by the number of ways of extracting two columns out of four columns to form the basis. Therefore, the number of basic feasible solutions is less than or equal to
C(4, 2) = 4! / (2!(4−2)!) = 6.
Out of these six possibilities, one point violates the non-negativity of B−1b. Furthermore, a1 and a3 could not have been used to form a basis since a1 = a3 = (1, 0)T are linearly dependent, and hence the matrix [ 1 1 ; 0 0 ] does not qualify as a basis. This leaves four basic feasible solutions.
(b) By introducing the slack variables z and w, the problem is put in the following standard format:
x + y + z = 3
2x − y + w = 4
x, y, z, w ≥ 0
In matrix form,
[ 1 1 1 0 ; 2 −1 0 1 ] (x, y, z, w)T = (3, 4)T
Here there are C(4, 2) = 6 possible square submatrices obtainable from the given system.
If we select columns 3 and 4 and assign zero value to the variables associated with columns 1 and 2, then z = 3, w = 4. So (0, 0, 3, 4) is a basic solution; x and y are non-basic variables, and z and w are basic variables:
B = [ 1 0 ; 0 1 ], XB = (z, w)T = (3, 4)T and XN = (x, y)T = (0, 0)T.
Again, take columns 1 and 2 and assign zero value to the variables associated with columns 3 and 4; then B = [ 1 1 ; 2 −1 ], XB = B−1b, and we follow the same fashion as above.
In general, the number of basic feasible solutions is less than or equal to
C(n, m) = n! / (m!(n−m)!)
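The enumeration just described is easy to mechanize. The script below (illustrative only; the helper name enumerate_basic_solutions is invented) tries every pair of columns of the system in part (a), solves B xB = b whenever the pair is non-singular, and reports which basic solutions are feasible.

import numpy as np
from itertools import combinations

def enumerate_basic_solutions(A, b):
    """List all basic solutions of A x = b, x >= 0, marking feasibility."""
    m, n = A.shape
    results = []
    for cols in combinations(range(n), m):
        B = A[:, cols]
        if abs(np.linalg.det(B)) < 1e-12:   # dependent columns cannot form a basis
            continue
        xB = np.linalg.solve(B, b)
        x = np.zeros(n)
        x[list(cols)] = xB
        results.append((cols, x, bool(np.all(xB >= -1e-12))))
    return results

# System from Example 1(a): x1 + x2 + x3 = 6, x2 + x4 = 3
A = np.array([[1.0, 1, 1, 0], [0.0, 1, 0, 1]])
b = np.array([6.0, 3.0])
for cols, x, feasible in enumerate_basic_solutions(A, b):
    print(cols, x, "feasible" if feasible else "infeasible")
# Five bases exist (columns a1 and a3 are dependent); four of them are feasible.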
Example 2: Degenerate basic feasible solutions
Consider the following system of inequalities
x1 + x2 ≤ 6
x2 ≤ 3
x1 + 2x2 ≤ 9
x1, x2 ≥ 0
Solution: After adding the slack variables x3, x4 and x5, we get
x1 + x2 + x3 = 6
x2 + x4 = 3
x1 + 2x2 + x5 = 9
x1, x2, x3, x4, x5 ≥ 0
A = [a1, a2, a3, a4, a5] =
[ 1 1 1 0 0 ]
[ 0 1 0 1 0 ]
[ 1 2 0 0 1 ]
Let us consider the basic feasible solution for B = [a1, a2, a3]:
XB = (x1, x2, x3)T = B−1b = [ 1 1 1 ; 0 1 0 ; 1 2 0 ]−1 (6, 3, 9)T = [ 0 −2 1 ; 0 1 0 ; 1 1 −1 ](6, 3, 9)T = (3, 3, 0)T, XN = (x4, x5)T = (0, 0)T.
Note that this basic feasible solution is degenerate since the basic variable x3 = 0.
Now consider the basic feasible solution with B = [a1, a2, a4]:
XB = (x1, x2, x4)T = B−1b = [ 1 1 0 ; 0 1 1 ; 1 2 0 ]−1 (6, 3, 9)T = [ 2 0 −1 ; −1 0 1 ; 1 1 −1 ](6, 3, 9)T = (3, 3, 0)T, XN = (x3, x5)T = (0, 0)T.
Note that these basic feasible solutions give rise to the same point obtained by B = [a1, a2, a3]. It can also be checked that the other basic feasible solution, with basis B = [a1, a2, a5], is given by
XB = (x1, x2, x5)T = (3, 3, 0)T, XN = (x3, x4)T = (0, 0)T.
Note that all three foregoing bases represent the single extreme point, or basic feasible solution, (x1, x2, x3, x4, x5) = (3, 3, 0, 0, 0). This basic feasible solution is degenerate since each associated basis involves a basic variable at level zero.
4. Fundamental Theorem of Linear Programming
Theorem: For an arbitrary linear program in standard form, the following are true:
i . If there is no optimal solution, then the problem is either
infeasible or unbounded
ii . If a feasible solution exists, then a basic feasible solution exists
iii . If an optimal solution exists, then a basic optimal solution
exists
Key to the Simplex Method
The key to the simplex method lies in recognizing the optimality of a given
extreme point solution based on local considerations without having to
(globally) enumerate all extreme points or basic feasible solutions.
Consider the following linear programming problem :
LP: Minimize CT X
Subject to AX = b
X ≥ 0
where A is an m × n matrix with rank m. Suppose that we have a basic feasible solution (B−1b, 0)T whose objective value Z0 is given by
Z0 = CT (B−1b, 0)T = (CB^T, CN^T)(B−1b, 0)T = CB^T B−1 b    (3.1)
Now, let XB and XN denote the sets of basic and non-basic variables for the given basis, and let X = (XB, XN)T be an arbitrary feasible solution. Then feasibility requires that XB ≥ 0, XN ≥ 0 and that b = AX = BXB + NXN.
Multiplying by B−1 and rearranging the terms, we get
B−1b = XB + B−1NXN
XB = B−1b − B−1NXN
XB = B−1b − Σ_{j∈J} B−1 aj xj
XB = B−1b − Σ_{j∈J} yj xj
XB = b̄ − Σ_{j∈J} yj xj    (3.2)
where b̄ = B−1b, yj = B−1aj, and J is the current set of the indices of the non-basic variables.
Letting Z denote the objective function value, we get
Z = CT X = (CB^T, CN^T)(XB, XN)T = CB^T XB + CN^T XN
= CB^T (B−1b − Σ_{j∈J} B−1 aj xj) + Σ_{j∈J} cj xj
= CB^T B−1b − Σ_{j∈J} CB^T B−1 aj xj + Σ_{j∈J} cj xj
= Z0 − Σ_{j∈J} zj xj + Σ_{j∈J} cj xj
= Z0 − Σ_{j∈J} (zj − cj) xj    (3.3)
where zj = CB^T B−1 aj for each non-basic variable.
Using the foregoing transformation, the linear programming problem may be written as
Minimize Z = Z0 − Σ_{j∈J} (zj − cj) xj
subject to Σ_{j∈J} yj xj + XB = B−1b
xj ≥ 0, j ∈ J, and XB ≥ 0    (3.4)
5. Algebra of the Simplex Method
Consider the representation of the linear programming LP in the non
basic variable space written in equality form as in equation (3.4).
If (zj − cj) ≤ 0 for all j ∈ J, then xj = 0 for j ∈ J and XB = B−1b is optimal for the LP. Otherwise, while holding the other non-basic variables fixed at zero, the simplex method considers increasing one non-basic variable, say xk.
Naturally we would like zk − ck to be positive, and perhaps the most positive of all the zj − cj, j ∈ J.
Now, fixing xj = 0 for j ∈ J − {k}, we obtain from equation (3.4) that
z = z0 − (zk − ck)xk (3.5)
and
(xB1, xB2, ..., xBr, ..., xBm)T = (b̄1, b̄2, ..., b̄r, ..., b̄m)T − (y1k, y2k, ..., yrk, ..., ymk)T xk    (3.6)
If yik ≤ 0, then xBi does not decrease as xk increases, and so xBi continues to be non-negative.
If yik > 0, then xBi will decrease as xk increases.
In order to satisfy non-negativity, xk is increased until the first point at which some basic variable xBr drops to zero.
Examining equation (3.6), it is then clear that the first basic variable dropping to zero corresponds to the minimum of b̄i / yik over the positive yik. More precisely, we can increase xk until
xk = b̄r / yrk = minimum over 1 ≤ i ≤ m of { b̄i / yik : yik > 0 }    (3.7)
In the absence of degeneracy, b̄r > 0 and hence xk = b̄r / yrk > 0. From equation (3.5) and the fact that zk − ck > 0, it then follows that z < z0 and the objective function strictly improves.
As xk increases from level 0 to b̄r / yrk, a new feasible solution is obtained.
Substituting xk = b̄r / yrk in equation (3.6) gives the following point:
xBi = b̄i − (yik / yrk) b̄r, i = 1, 2, ..., m
xk = b̄r / yrk    (3.8)
and all other xj variables are zero.
From equation (3.8), xBr = 0 and hence at most m variables are positive.
The corresponding columns in A are aB1, aB2, ..., aBr−1, ak, aBr+1, ..., aBm.
Note that these columns are linearly independent, since yrk ≠ 0. (Recall that if aB1, aB2, ..., aBm are linearly independent, and if ak replaces aBr, then the new columns are linearly independent if and only if yrk ≠ 0.)
Therefore, the point given by equation (3.8) is a basic feasible solution.
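To make this algebra concrete, here is a small sketch (with hypothetical helper names written for these notes, not taken from the slides) of one revised-simplex iteration for a minimization problem in standard form: it computes zj − cj for the non-basic columns, picks the most positive one as the entering variable, applies the minimum-ratio test (3.7), and swaps the basis indices.

import numpy as np

def simplex_iteration(A, b, c, basis):
    """One pivot of the simplex method for: minimize c'x  s.t.  Ax = b, x >= 0.

    basis -- list of m column indices of the current basic feasible solution.
    Returns (basis, status) with status 'optimal', 'unbounded' or 'pivoted'.
    """
    m, n = A.shape
    B_inv = np.linalg.inv(A[:, basis])
    b_bar = B_inv @ b                          # current values of the basic variables
    cB = c[basis]
    z_minus_c = cB @ B_inv @ A - c             # z_j - c_j for every column (0 for basic ones)
    nonbasic = [j for j in range(n) if j not in basis]
    k = max(nonbasic, key=lambda j: z_minus_c[j])       # entering variable
    if z_minus_c[k] <= 1e-12:
        return basis, 'optimal'
    y_k = B_inv @ A[:, k]
    if np.all(y_k <= 1e-12):                   # nothing blocks xk: objective unbounded
        return basis, 'unbounded'
    ratios = [b_bar[i] / y_k[i] if y_k[i] > 1e-12 else np.inf for i in range(m)]
    r = int(np.argmin(ratios))                 # minimum-ratio test, eq. (3.7)
    basis = basis.copy()
    basis[r] = k                               # xk enters, x_{Br} leaves
    return basis, 'pivoted'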
Recall that the basic variable xBr that first drops to zero is called a blocking variable because it blocks the further increase of xk. Thus, xk enters the basis and xBr leaves the basis.
To summarize, we have algebraically described an iteration, that is, the process of transforming from one basis to an adjacent basis.
This is done by increasing the value of a non-basic variable xk with positive zk − ck and adjusting the current basic variables.
In the process, the variable xBr drops to zero.
The variable xk hence enters the basis and xBr leaves the basis.
In the absence of degeneracy the objective function value strictly
decreases, and hence the basic feasible solutions generated are
distinct.
Because there exists only a finite number of basic feasible solutions,
the procedure would terminate in a finite number of steps.
Example 1.
Minimize x1 + x2
subject to x1 + 2x2 ≤ 4
x2 ≤ 1
x1, x2 ≥ 0
Solution: Introducing the slack variables x3 and x4 to put the problem in
a standard form.
This leads to the following constraint matrix A :
A = [a1, a2, a3, a4] =
1 2 1 0
0 1 0 1
Consider the basic feasible solution corresponding to B = [a1, a2].
In other words, x1 and x2 are the basic variables, while x3 and x4 are
the nonbasic variables.
The representation of the problem in this non-basic variable space, as in equation (3.4) with J = {3, 4}, may be obtained as follows.
First, compute
B−1 = [ 1 2 ; 0 1 ]−1 = [ 1 −2 ; 0 1 ], CB^T B−1 = (1, 1)[ 1 −2 ; 0 1 ] = (1, −1).
Hence y3 = B−1a3 = [ 1 −2 ; 0 1 ](1, 0)T = (1, 0)T,
y4 = B−1a4 = [ 1 −2 ; 0 1 ](0, 1)T = (−2, 1)T,
and XB = b̄ = B−1b = [ 1 −2 ; 0 1 ](4, 1)T = (2, 1)T.
Also, z0 = CB^T B−1b = (1, −1)(4, 1)T = 3,
z3 − c3 = CB^T B−1a3 − c3 = (1, −1)(1, 0)T − 0 = 1,
z4 − c4 = CB^T B−1a4 − c4 = (1, −1)(0, 1)T − 0 = −1.
Hence, the required representation of the problem is
Minimize 3 − x3 + x4
subject to x3 − 2x4 + x1 = 2
x4 + x2 = 1
x1, x2, x3, x4 ≥ 0
Since z3 − c3 > 0, the objective function improves by increasing x3. The modified solution is given by
XB = B−1b − B−1a3 x3
(x1, x2)T = (2, 1)T − (1, 0)T x3
The maximum value of x3 is 2 (any larger value of x3 would force x1 to be negative).
Therefore the new basic feasible solution is (x1, x2, x3, x4) = (0, 1, 2, 0).
Here, x3 enters the basis and x1 leaves the basis.
Note that the new point has an objective value equal to 1, which is an improvement over the previous objective value of 3. The improvement is precisely (z3 − c3) x3 = 2.
Remark: CB is the coefficient of the basic variable in the objective of the
LP.
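The quantities computed above are easy to verify numerically. The short script below (illustrative only, with made-up variable names) recomputes B−1, y3, y4, b̄ and the reduced costs z3 − c3 and z4 − c4 for this example.

import numpy as np

A = np.array([[1.0, 2, 1, 0],    # x1 + 2x2 + x3      = 4
              [0.0, 1, 0, 1]])   #       x2      + x4 = 1
b = np.array([4.0, 1.0])
c = np.array([1.0, 1.0, 0.0, 0.0])

basis = [0, 1]                   # B = [a1, a2]
B_inv = np.linalg.inv(A[:, basis])
cB = c[basis]

print(B_inv @ b)                    # b_bar = [2, 1]
print(B_inv @ A[:, 2])              # y3 = [1, 0]
print(B_inv @ A[:, 3])              # y4 = [-2, 1]
print(cB @ B_inv @ b)               # z0 = 3
print(cB @ B_inv @ A[:, 2] - c[2])  # z3 - c3 = 1
print(cB @ B_inv @ A[:, 3] - c[3])  # z4 - c4 = -1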
Termination with an optimal solution
Consider the following problem, where A is an m × n matrix with rank m.
Minimize CT X
Subject to AX = b
X ≥ 0
Suppose that X∗ is a basic feasible solution with basis B; that is, X∗ = (B−1b, 0)T.
Let Z∗ denote the objective value at X∗, that is, Z∗ = CB^T B−1b. Suppose further that zj − cj ≤ 0 for all non-basic variables, and hence there are no non-basic variables that are eligible to enter the basis. Let X be any feasible solution with objective function value Z. Then, from equation (3.3), we have
Z∗ − Z = Σ_{j∈J} (zj − cj) xj    (3.9)
Because zj − cj ≤ 0 and xj ≥ 0 for all variables, Z∗ ≤ Z, and so X∗ is an optimal basic feasible solution. In other words, if (zj − cj) ≤ 0 for all j ∈ J, then the current basic feasible solution is optimal.
Example 2.
Minimize 2x1 − x2
subject to −x1 + x2 ≤ 2
2x1 + x2 ≤ 6
x1, x2 ≥ 0
Introducing the slack variables x3 and x4. This leads to the following
constraints
−x1 + x2 + x3 = 2
2x1 + x2 + x4 = 6
x1, x2, x3, x4 ≥ 0
Consider the basic feasible solution with basis
B = [a1, a2] = [ −1 1 ; 2 1 ] and B−1 = [ −1/3 1/3 ; 2/3 1/3 ].
XB = B−1b − B−1NXN
(xB1, xB2)T = (x1, x2)T = [ −1/3 1/3 ; 2/3 1/3 ](2, 6)T − [ −1/3 1/3 ; 2/3 1/3 ][ 1 0 ; 0 1 ](x3, x4)T
= (4/3, 10/3)T − (−1/3, 2/3)T x3 − (1/3, 1/3)T x4    (3.10)
Currently, x3 = x4 = 0, x1 = 4/3 and x2 = 10/3. Note that
z3 − c3 = CB^T B−1a3 − c3 = (2, −1)[ −1/3 1/3 ; 2/3 1/3 ](1, 0)T − 0 = −4/3
z4 − c4 = CB^T B−1a4 − c4 = (2, −1)[ −1/3 1/3 ; 2/3 1/3 ](0, 1)T − 0 = 1/3
Hence, the objective improves by holding x3 non-basic and introducing x4 into the basis.
Then x3 is kept at zero level, x4 is increased, and x1 and x2 are modified according to equation (3.10). We see that x4 can be increased to 4, at which point x1 drops to zero.
Any further increase of x4 would violate the non-negativity of x1, and so x1 is the blocking variable.
With x4 = 4 and x3 = 0, the modified values of x1 and x2 are 0 and 2, respectively.
The new basic feasible solution is
(x1, x2, x3, x4) = (0, 2, 0, 4)
Note that a4 replaces a1; that is, x1 drops from the basis and x4 enters the basis.
The new set of basic and non-basic variables and their values are given as:
XB = (x4, x2)T = (4, 2)T, XN = (x3, x1)T = (0, 0)T
Moving from the old to the new basic feasible solution is illustrated in the figure given below: note that as x4 increases by one unit, x1 decreases by 1/3 unit and x2 decreases by 1/3 unit; that is, we move in the direction (−1/3, −1/3) in the (x1, x2) space.
This continues until we are blocked by the non-negativity restriction x1 ≥ 0.
At this point x1 drops to zero and leaves the basis.
The simplex Algorithm
The key solution concepts
The simplex method focuses on corner point feasible(CPF) solutions.
The simplex method is an iterative algorithm(a systematic solution
procedure that keeps repeating a fixed series of steps, called an
iteration, until a desired result has been obtained ) with the following
structure.
Whenever possible, the initialization of the simplex method chooses
the origin point(all decision variables equal to zero) to be the initial
CPF solution
Given a CPF solution, it is much quicker computationally to gather
information about its adjacent CPF solutions than about other CPF
solutions. Therefore, each time the simplex method performs an
iteration to move from the current CPF solution to a better one, it
always chooses a CPF solution that is adjacent to the current one.
After the current CPF solution is identified, the simplex method examines each of the edges of the feasible region that emanate from this CPF solution. Each of these edges leads to an adjacent CPF solution at the other end, but the simplex method does not even take the time to solve for the adjacent CPF solutions; instead, it simply identifies the rate of improvement in Z that would be obtained by moving along the edge, and then chooses to move along the one with the largest positive rate of improvement.
A positive rate of improvement in Z implies that the adjacent CPF
solution is better than the current one, whereas a negative rate of
improvement in Z implies that the adjacent CPF solution is worse.
Therefore, the optimality test consists simply of checking whether any of the edges give a positive rate of improvement in Z. If none do, then the current CPF solution is optimal.
The Simplex Method in Tabular Form
Steps
Initialization:
1 Convert (Transform) all the constraints to equality by introducing
slack, surplus, and artificial variables as follows
Constraint type Variable to be added
≤ +slack(s)
≥ -surplus(s)+artificial(A)
= + artificial(A)
The artificial variable refers to the kind of variable which is introduced in
the linear program model to obtain the initial basic feasible solution. It is
utilized for the equality constraints and for the greater than or equal
inequality constraints.
2 Construct the initial simplex tableau. Its columns are the decision variables X1, . . ., Xn, the slack/surplus variables S1, . . ., Sn, the artificial variables A1, . . ., An and the right-hand side b(RHS). The Cj row on top carries the objective coefficients (c1, . . ., cn for the decision variables and 0 for the slack and artificial variables); the CB column carries the objective coefficients of the current basic variables (BV); each basic-variable row carries its constraint coefficients and bi; and the bottom row carries the Zj − Cj values together with the current Z value.
3 Test for optimality:
Case 1: In a maximization problem the current bfs is optimal if every element in the last row of the simplex tableau is non-negative.
Case 2: In a minimization problem the current bfs is optimal if every element in the last row of the simplex tableau is non-positive.
Iteration :
Step 1: Determine the entering basic variable by selecting the variable (automatically a non-basic variable) with the most negative value (in case of maximization) or the most positive value (in case of minimization) in the last row (Zj − Cj row). Put a box around the column below this variable, and call it the "pivot column".
Step 2 : Determine the leaving basic variable by applying the
minimum ratio test as following:
1 Pick out each coefficients in the pivot column that is strictly positive
2 Divide each of these coefficients into the right hand side entry for the
same row
3 Identify the row that has the smallest of these ratios
4 The basic variable for that row is the leaving variable, so replace that
variable by the entering variable in the basic variable column of the next
simplex tableau. Put a box around this row and call it the ”pivot row”.
Step 3: Solve for the new basic feasible solution by using elementary
row operation(multiply or divide a row by a nonzero constant, add or
subtract a multiple of one row another row)to construct a new
simplex tableau, and then return the optimality test.
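Putting the initialization, optimality test and iteration steps together, the sketch below is one possible tableau-style implementation for maximization problems whose constraints are all of the ≤ type with b ≥ 0 (so the slack variables give the starting basis). The function name simplex_max and its interface are invented for these notes; they are not a standard library routine.

import numpy as np

def simplex_max(c, A, b):
    """Maximize c'x subject to A x <= b, x >= 0, with b >= 0 (tableau form)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    # Initial tableau [A | I | b]; the slack variables form the starting basis.
    T = np.hstack([A, np.eye(m), b.reshape(-1, 1)])
    cost = np.concatenate([c, np.zeros(m)])
    basis = list(range(n, n + m))
    while True:
        zj_minus_cj = cost[basis] @ T[:, :-1] - cost   # last row of the tableau
        k = int(np.argmin(zj_minus_cj))                # entering variable (most negative)
        if zj_minus_cj[k] >= -1e-9:
            break                                      # optimality test passed
        col = T[:, k]
        if np.all(col <= 1e-9):
            raise ValueError("unbounded solution")
        ratios = np.full(m, np.inf)
        ratios[col > 1e-9] = T[col > 1e-9, -1] / col[col > 1e-9]
        r = int(np.argmin(ratios))                     # leaving variable (minimum ratio)
        T[r] /= T[r, k]                                # elementary row operations
        for i in range(m):
            if i != r:
                T[i] -= T[i, k] * T[r]
        basis[r] = k
    x = np.zeros(n + m)
    x[basis] = T[:, -1]
    return x[:n], float(cost[basis] @ T[:, -1])

# Example 1 below: expected result x = (2, 6), Z = 36.
print(simplex_max([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18]))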
Example 1.
Solve the following problem using the simplex method
Maximize z = 3x1 + 5x2
Subject to x1 ≤ 4
2x2 ≤ 12
3x1 + 2x2 ≤ 18
x1, x2 ≥ 0
Solution: 1. Write the standard form of the above linear programming problem:
Maximize z = 3x1 + 5x2 + 0s1 + 0s2 + 0s3
Subject to x1 + s1 = 4
2x2 + s2 = 12
3x1 + 2x2 + s3 = 18
x1, x2, s1, s2, s3 ≥ 0
2. Initial tableau
C 3 5 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s1 1 0 1 0 0 4
0 s2 0 2 0 1 0 12
0 s3 3 2 0 0 1 18
Zj − Cj -3 -5 0 0 0
The basic feasible solution at the initial tableau is (0, 0, 4, 12, 18), where
x1 = 0, x2 = 0, s1 = 4, s2 = 12, s3 = 18 and z = 0. Here s1, s2, s3 are basic variables and x1 and x2 are non-basic variables. The solution at the initial tableau is associated with the origin, at which all the decision variables are zero.
Optimality test: By investigating the last row of the initial tableau, we find that there are some negative numbers. Therefore, the current solution is not optimal.
Iteration
Step 1: Determine the entering variable by selecting the variable with the most negative value in the last row. From the initial tableau, in the last row (Zj − Cj row), the coefficient of x1 is −3 and the coefficient of x2 is −5; the most negative is −5. Consequently, x2 is the entering variable, and its column is the pivot column.
Step 2: Determining the leaving variable by using the minimum ratio test
as follows:
C 3 5 0 0 0
CB XB x1 x2(Entering) s1 s2 s3 b(RHS)
0 s1 1 0 1 0 0 4
0 s2(Leaving) 0 2 0 1 0 12
0 s3 3 2 0 0 1 18
Zj − Cj -3 -5 0 0 0
Step 3: solving for the new BF solution by using the eliminatory row
operations as follows:
C 3 5 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s1 1 0 1 0 0 4
5 x2 0 1 0 1/2 0 6
0 s3 3 0 0 -1 1 6
Zj − Cj -3 0 0 5/2 0
This solution is not optimal, since there is a negative number in the last row. Applying the same rules, we obtain the following solution:
C 3 5 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s1 0 0 1 1/3 -1/3 2
5 x2 0 1 0 1/2 0 6
3 x1 1 0 0 -1/3 1/3 2
Zj − Cj 0 0 0 3/2 1 36
This solution is optimal, since there is no negative value in the last row. The basic variables are x1 = 2, x2 = 6 and s1 = 2; the non-basic variables are s2 = s3 = 0; and the optimal value is Z = 36.
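As a sanity check (not part of the original slides), the same problem can also be handed to an off-the-shelf solver. SciPy's linprog minimizes, so we negate the objective for this maximization and expect x = (2, 6) with Z = 36.

from scipy.optimize import linprog

# Maximize 3x1 + 5x2  is the same as  minimize -3x1 - 5x2
res = linprog(c=[-3, -5],
              A_ub=[[1, 0], [0, 2], [3, 2]],
              b_ub=[4, 12, 18],
              bounds=[(0, None), (0, None)],
              method="highs")
print(res.x)      # approximately [2. 6.]
print(-res.fun)   # approximately 36.0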
Example 2.
Minimize x1 + x2 − 4x3
subject to x1 + x2 + 2x3 ≤ 9
x1 + x2 − x3 ≤ 2
−x1 + x2 + x3 ≤ 4
x1, x2, x3 ≥ 0
Solution : Introducing the non- negative slack variables s1, s2 and s3. The
problem becomes the following :
Minimize x1 + x2 − 4x3
subject to x1 + x2 + 2x3 + s1 = 9
x1 + x2 − x3 + s2 = 2
−x1 + x2 + x3 + s3 = 4
x1, x2, x3, s1, s2, s3 ≥ 0
Since b ≥ 0, then we can choose our initial basis as B = [a4, a5, a6] = I3,
and we indeed have B−1b = b ≥ 0. This gives the following initial tableau:
Iteration 1
C 1 1 -4 0 0 0 0
CB XB x1 x2 x3 s1 s2 s3 b(RHS)
0 s1 1 1 2 1 0 0 9
0 s2 1 1 -1 0 1 0 2
0 s3 -1 1 1 0 0 1 4
Zj − Cj -1 -1 4 0 0 0
The initial basic feasible solution in the above table is x1 = 0, x2 = 0, x3 = 0, but it is not optimal, since there is a positive element in the last row of the simplex table (Zj − Cj). We next identify the entering and leaving variables.
Iteration 2
C 1 1 -4 0 0 0 0
CB XB x1 x2 x3 s1 s2 s3 b(RHS)
0 s1 1 1 2 1 0 0 9
0 s2 1 1 -1 0 1 0 2
0 s3 -1 1 1 0 0 1 4
Zj − Cj -1 -1 4 0 0 0
From the above simplex table s3 is the leaving variable and x3 is the
entering variable.
C 1 1 -4 0 0 0 0
CB XB x1 x2 x3 s1 s2 s3 b(RHS)
0 s1 3 -1 0 1 0 -2 1
0 s2 0 2 0 0 1 1 6
-4 x3 -1 1 1 0 0 1 4
Zj − Cj 3 -5 0 0 0 -4
The current basic feasible solution in the above table is x1 = 0, x2 = 0, x3 = 4, but it is not optimal, since there is a positive element in the last row of the simplex table (Zj − Cj).
Iteration 3
C 1 1 -4 0 0 0 0
CB XB x1 x2 x3 s1 s2 s3 b(RHS)
0 s1 3 -1 0 1 0 -2 1
0 s2 0 2 0 0 1 1 6
-4 x3 -1 1 1 0 0 1 4
Zj − Cj 3 -5 0 0 0 -4
From the above simplex table s1 is the leaving variable and x1 is the
entering variable.
C 1 1 -4 0 0 0
CB XB x1 x2 x3 s1 s2 s3 b(RHS)
1 x1 1 -1/3 0 1/3 0 -2/3 1/3
0 s2 0 2 0 0 1 1 6
-4 x3 0 2/3 1 1/3 0 1/3 13/3
Zj − Cj 0 -4 0 -1 0 -2
This is an optimal tableau since Zj − Cj ≤ 0 in the last row. The optimal solution
is given by x1 = 1/3, x2 = 0, x3 = 13/3, with z = -17.
Degeneracy and Finiteness of Simplex Algorithm
Degeneracy: a basic feasible solution is degenerate when one or more of the basic variables take the value zero. This may involve
1. exactly one component of the basic feasible solution corresponding to a basic variable being zero, or
2. at least two components of the basic feasible solution corresponding to basic variables being zero.
Example 1:
Maximize 5x1 + 3x2
Subject to x1 + x2 ≤ 2
5x1 + 2x2 ≤ 10
3x1 + 8x2 ≤ 12
x1, x2 ≥ 0
Solution :The standard linear programming problem for the above problem is
Maximize 5x1 + 3x2
Subject to x1 + x2 + s1 = 2
5x1 + 2x2 + s2 = 10
3x1 + 8x2 + s3 = 12
x1, x2, s1, s2, s3 ≥ 0
Now we can construct the simplex tableau
C 5 3 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s1 1 1 1 0 0 2 ←
0 s2 5 2 0 1 0 10
0 s3 3 8 0 0 1 12
Zj − Cj ↑-5 -3 0 0 0
The initial basic feasible solution of the LP problem is x1 = 0, x2 = 0, but
it can not be an optimal solution, since there is a negative element in the
last row of the simplex table. from the above table we can identify the
entering and leaving variables, thus x1 is the entering variable and s1 is the
leaving variable.
By applying elementary row operation we can get the following simplex
tableau
C 5 3 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
5 x1 1 1 1 0 0 2
0 s2 0 -3 -5 1 0 0
0 s3 0 5 -3 0 1 6
Zj − Cj 0 2 5 0 0
Thus the optimal solution is (x1, x2, s1, s2, s3) = (2, 0, 0, 0, 6) and the optimal value is 10. Since a basic variable (s2) takes the value zero, this is a degenerate basic feasible solution; nevertheless the simplex algorithm terminates in a finite number of steps.
Infinite number of solutions
The alternative optimal solution can be obtained by considering the zj − cj
row of the simplex table. That is, zj − cj = 0 for some non-basic variable
columns in the optimal simplex table.
Example
Maximize 10x1 + 20x2
subject to x1 ≤ 10
x2 ≤ 6
2x1 + 4x2 ≤ 36
x1, x2 ≥ 0
Solution: The standard linear programming problem of the above problem
is
Maximize 10x1 + 20x2
subject to x1 + s1 = 10
x2 + s2 = 6
2x1 + 4x2 + s3 = 36
x1, x2, s1, s2, s3 ≥ 0
Now construct the simplex tableau.
C 10 20 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s1 1 0 1 0 0 10
0 s2 0 1 0 1 0 6 ←
0 s3 2 4 0 0 1 36
Zj − Cj -10 ↑ -20 0 0 0
C 10 20 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s1 1 0 1 0 0 10
20 x2 0 1 0 1 0 6
0 s3 2 0 0 -4 1 12←
Zj − Cj ↑ -10 0 0 20 0
C 10 20 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s1 0 0 1 2 -1/2 4
20 x2 0 1 0 1 0 6
10 x1 1 0 0 -2 1/2 6
Zj − Cj 0 0 0 0 5 180
We have basic feasible solution (x1, x2) = (6, 6) with optimal value z =
180. We apply a new simplex step with Zj − Cj = 0 for a non- basic
variable.
C 10 20 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s1 0 0 1 2 -1/2 4
20 x2 0 1 0 1 0 6
10 x1 1 0 0 -2 1/2 6
Zj − Cj 0 0 0 0 5 180
C 10 20 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
0 s2 0 0 1/2 1 -1/4 2
20 x2 0 1 -1/2 0 1/4 4
10 x1 1 0 1 0 0 10
Zj − Cj 0 0 0 0 5 180
Now we have another basic solution, (x1, x2) = (10, 4), but the optimal value remains 180.
In our case, since (x1, x2) = (6, 6) and (x1, x2) = (10, 4) are both optimal solutions, all points of the segment joining them,
(x1, x2) = λ(6, 6) + (1 − λ)(10, 4), λ ∈ [0, 1],
are optimal solutions.
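A quick numerical check of this statement (an illustrative script, not part of the slides): every convex combination of the two optimal extreme points stays feasible and keeps the objective value at 180.

import numpy as np

c = np.array([10.0, 20.0])
A = np.array([[1.0, 0.0], [0.0, 1.0], [2.0, 4.0]])
b = np.array([10.0, 6.0, 36.0])

for lam in np.linspace(0.0, 1.0, 5):
    x = lam * np.array([6.0, 6.0]) + (1 - lam) * np.array([10.0, 4.0])
    assert np.all(A @ x <= b + 1e-9) and np.all(x >= 0)   # still feasible
    print(lam, x, c @ x)                                  # objective is 180 everywhere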
Finding a Starting Basic Feasible Solution
In certain cases, it is difficult to obtain an initial basic feasible solution of
the given LP problem. Such cases arise
1 when the constraints are of the ≤ type,
Σ_{j=1}^{n} aij xj ≤ bi, xj ≥ 0
and the value of a few right-hand side constants is negative (i.e. bi < 0). After adding the non-negative slack variable si (i = 1, 2, . . ., m), the initial solution so obtained will have si = bi < 0 for that particular resource i. This solution is not feasible because it does not satisfy the non-negativity conditions of the slack variables (i.e. si ≥ 0).
2 when the constraints are of the ≥ type,
Σ_{j=1}^{n} aij xj ≥ bi, xj ≥ 0
After adding the surplus (negative slack) variable si, the initial solution so obtained will be −si = bi, or si = −bi:
Σ_{j=1}^{n} aij xj − si = bi, xj ≥ 0, si ≥ 0
This solution is not feasible because it does not satisfy the non-negativity conditions of the surplus variables (i.e. si ≥ 0).
In such a case, artificial variables Ai (i = 1, 2, . . ., m) are added to get an initial basic feasible solution.
The resulting system of equations then becomes:
Σ_{j=1}^{n} aij xj − si + Ai = bi
xj, si, Ai ≥ 0, i = 1, 2, ..., m
These are m simultaneous equations with (n + m + m) variables (n decision variables, m artificial variables and m surplus variables).
An initial basic feasible solution of an LP problem with such constraints can be obtained by setting (n + 2m − m) = (n + m) variables equal to zero.
Thus the new solution to the given LP problem is Ai = bi (i = 1, 2, . . ., m), which is not a solution to the original system of equations because the two systems of equations are not equivalent.
Thus, to get back to the original problem, the artificial variables must be removed from the optimal solution.
There are two methods for removing artificial variables from the solution.
Two-Phase Method
Big-M Method or Method of Penalties
Remark Before the optimal solution is reached, all artificial variables must
be dropped out from the solution mix. This is done by assigning
appropriate coefficients to these variables in the objective function. These
variables are added to those constraints with equality (=) and greater than
or equal to (≥) sign.
Two -phase method
In the first phase of this method, the sum of the artificial variables is
minimized subject to the given constraints in order to get a basic
feasible solution of the LP problem.
The second phase minimizes the original objective function starting
with the basic feasible solution obtained at the end of the first phase.
Since the solution of the LP problem is completed in two phases, this
method is called the two-phase method.
Advantages of the method
No assumptions on the original system of constraints are made, i.e.
the system may be redundant, inconsistent or not solvable in
non-negative numbers.
It is easy to obtain an initial basic feasible solution for Phase I.
The basic feasible solution (if it exists) obtained at the end of phase I
is used as initial solution for Phase II
Steps of the Algorithm: Phase I
Step 1 If all the constraints in the given LP problem are less than or equal
to (≤) type, then Phase II can be directly used to solve the problem.
Otherwise, the necessary number of surplus and artificial variables are
added to convert constraints into equality constraints.
Step 2: Assign a zero coefficient to each of the decision variables (xj) and to the surplus variables, and assign a −1 coefficient to each of the artificial variables. This yields the following auxiliary LP problem:
Maximize Z∗ = Σ_{i=1}^{m} (−1) Ai
subject to the constraints
Σ_{j=1}^{n} aij xj + Ai = bi, i = 1, 2, ..., m
and xj, Ai ≥ 0
Step 3: Apply the simplex algorithm to solve this auxiliary LP problem.
The following three cases may arise at optimality.
(i) Max Z∗ < 0 and at least one artificial variable is present in the basis with a positive value. This means that no feasible solution exists for the original LP problem.
(ii) Max Z∗ = 0 and no artificial variable is present in the basis. This
means that only decision variables(xj ’s) are present in the basis and
hence proceed to Phase II to obtain an optimal basic feasible solution
on the original LP problem.
(iii) Max Z∗ = 0 and at least one artificial variable is present in the basis
at zero value. This means that a feasible solution to the auxiliary LP
problem is also a feasible solution to the original LP problem. In
order to arrive at the basic feasible solution, proceed directly to
Phase II or else eliminate the artificial basic variable and then proceed
to Phase II.
Remark Once an artificial variable has left the basis, it has served
its purpose and can, therefore, be removed from the simplex table.
An artificial variable is never considered for re-entry into the basis.
Phase II
Assign actual coefficients to the variables in the objective function
and zero coefficient to the artificial variables which appear at zero
value in the basis at the end of Phase I.
The last simplex table of Phase I can be used as the initial simplex
table for Phase II.
Then apply the usual simplex algorithm to the modified simplex table
in order to get the optimal solution to the original problem.
Artificial variables that do not appear in the basis may be removed.
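Sketched below is how Phase I and Phase II fit together in code, reusing the hypothetical simplex_iteration helper from the algebra section; all names are illustrative, and the routine assumes the constraints are already in the equality form Ax = b with b ≥ 0 and that no artificial variable remains basic after Phase I.

import numpy as np

def two_phase_simplex(A, b, c, simplex_iteration):
    """Two-phase method for: minimize c'x subject to A x = b (b >= 0), x >= 0."""
    m, n = A.shape
    # Phase I: add one artificial variable per row and minimize their sum.
    A1 = np.hstack([A, np.eye(m)])
    c1 = np.concatenate([np.zeros(n), np.ones(m)])   # sum of artificial variables
    basis = list(range(n, n + m))                    # artificials form the first basis
    status = 'pivoted'
    while status == 'pivoted':
        basis, status = simplex_iteration(A1, b, c1, basis)
    xB = np.linalg.solve(A1[:, basis], b)
    if c1[basis] @ xB > 1e-8:
        raise ValueError("infeasible: Phase I optimum is positive")
    # Phase II: restore the true objective and keep iterating from the Phase I basis
    # (assumes every artificial variable has left the basis).
    status = 'pivoted'
    while status == 'pivoted':
        basis, status = simplex_iteration(A, b, c, basis)
    x = np.zeros(n)
    x[basis] = np.linalg.solve(A[:, basis], b)
    return x, float(c @ x)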
Remark
1 If no artificial vector is present in the basis, then all artificial vectors are at zero level at the optimal stage, and the solution obtained is a basic feasible solution.
2 If some artificial vectors are present in the basis at a positive level at the optimal stage, then the problem has no feasible solution.
3 If all artificial variables are at zero level but at least one artificial vector is still present in the basis at the optimal stage, the solution under test is an optimal solution. Here the converted equations are consistent, but some of the constraints may be redundant; redundancy means that the system has more than enough constraints.
Example 1.
Solve the following Linear programming problems by using two phase
method
Maximize Z = 3x1 − x2
subject to 2x1 + x2 ≥ 2
x1 + 3x2 ≤ 2
x2 ≤ 4
x1, x2 ≥ 0
Solution: Phase I. Convert the above problem to the standard form of an LPP:
Maximize Z = 3x1 − x2 (Phase I objective: Maximize Z∗ = −A1)
subject to 2x1 + x2 − s1 + A1 = 2
x1 + 3x2 + s2 = 2
x2 + s3 = 4
x1, x2, s1, s2, s3, A1 ≥ 0
Now we maximize Z∗ = −A1, driving it to zero in order to remove the variable A1, where A1 is an artificial variable.
Maximize Z∗ = 0x1 − 0x2 + 0s1 + 0s2 + 0s3 − 1A1
subject to 2x1 + x2 − s1 + A1 = 2
x1 + 3x2 + s2 = 2
x2 + s3 = 4
x1, x2, s1, s2, s3, A1 ≥ 0
Cj 0 0 0 0 0 -1
CB XB x1 x2 s1 s2 s3 A1 b(RHS) Min ratio bi /yik
-1 A1 2 1 -1 0 0 1 2 1 ←
0 s2 1 3 0 1 0 0 2 2
0 s3 0 1 0 0 1 0 4 -
Zj − Cj ↑-2 -1 1 0 0 0 Z∗ = −2
Cj 0 0 0 0 0 -1
CB XB x1 x2 s1 s2 s3 A1 b(RHS)
0 x1 1 1/2 -1/2 0 0 × 1
0 s2 0 5/2 1/2 1 0 × 1
0 s3 0 1 0 0 1 × 4
Zj − Cj 0 0 0 0 0 × Z∗ = 0
Since all Zj − Cj ≥ 0 and no artificial vector appears in the basis, we proceed to Phase II.
Phase II
Cj 3 -1 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
3 x1 1 1/2 -1/2 0 0 1
0 s2 0 5/2 1/2 1 0 1 ←
0 s3 0 1 0 0 1 4
Zj − Cj 0 5/2 ↑ -3/2 0 0 Z = 3
Cj 3 -1 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
3 x1 1 3 0 1 0 2
0 s1 0 5 1 2 0 2
0 s3 0 1 0 0 1 4
Zj − Cj 0 10 0 3 0 Z = 6
Since all Zj − Cj ≥ 0, optimal basic feasible solution is obtained, therefore
the solution is Max Z = 6, x1 = 2, x2 = 0.
Example 2.
Maximize Z = 5x1 + 8x2
subject to 3x1 + 2x2 ≥ 3
x1 + 4x2 ≥ 4
x1 + x2 ≤ 5
x1, x2 ≥ 0
Solution: The standard linear programming problem
Maximize Z = 5x1 + 8x2
subject to 3x1 + 2x2 − s1 + A1 = 3
x1 + 4x2 − s2 + A2 = 4
x1 + x2 + s3 = 5
x1, x2, s1, s2, s3, A1, A2 ≥ 0
Auxiliary LPP
Maximize Z∗ = 0x1 + 0x2 + 0s1 + 0s2 + 0s3 − A1 − A2
subject to 3x1 + 2x2 − s1 + A1 = 3
x1 + 4x2 − s2 + A2 = 4
x1 + x2 + s3 = 5
x1, x2, s1, s2, s3, A1, A2 ≥ 0
Phase I
Cj 0 0 0 0 0 -1 -1
CB XB x1 x2 s1 s2 s3 A1 A2 b(RHS) min ratio bi
-1 A1 3 2 -1 0 0 1 0 3 3/2
-1 A2 1 4 0 -1 0 0 1 4 1 →
0 s3 1 1 0 0 1 0 0 5 5
Zj − Cj -4 ↑ −6 1 1 0 0 0 Z∗ = −7
Cj 0 0 0 0 0 -1 -1
CB XB x1 x2 s1 s2 s3 A1 A2 b(RHS) min ratio
-1 A1 5/2 0 -1 1/2 0 1 × 1 2/5 →
0 x2 1/4 1 0 -1/4 0 0 × 1 4
0 s3 3/4 0 0 1/4 1 0 × 4 16/3
Zj − Cj ↑ −5/2 0 1 -1/2 0 0 × Z∗ = −1
Cj 0 0 0 0 0 -1 -1
CB XB x1 x2 s1 s2 s3 A1 A2 b(RHS) min ratio
0 x1 1 0 -2/5 1/5 0 × × 2/5
0 x2 0 1 1/10 -3/10 0 × × 9/10
0 s3 0 0 3/10 1/10 1 × × 37/10
Zj − Cj 0 0 0 0 0 × × Z∗ = 0
Since all Zj − Cj ≥ 0, Max Z∗ = 0, and no artificial vector appears in the basis, we proceed to Phase II.
Phase II
Cj 5 8 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS) min ratio bi /yik
5 x1 1 0 -2/5 1/5 0 2/5 2 →
8 x2 0 1 1/10 -3/10 0 9/10 -
0 s3 0 0 3/10 1/10 1 37/10 37
Zj − Cj 0 0 -6/5 ↑ −7/5 0 Z = 46/5
Cj 5 8 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS) min ratio bi /yik
0 s2 5 0 -2 1 0 2 -
8 x2 3/2 1 -1/2 0 0 3/2 -
0 s3 -1/2 0 1/2 0 1 7/2 7→
Zj − Cj 7 0 ↑ −4 0 0 Z = 12
Cj 5 8 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS) min ratio bi /yik
0 s2 3 0 0 1 4 16
8 x2 1 1 0 0 1 5
0 s1 -1 0 1 0 2 7
Zj − Cj 3 0 0 0 8 Z = 40
Since all Zj − Cj ≥ 0, optimal basic feasible solution is obtained. Therefore
the solution is Max Z = 40, x1 = 0, x2 = 5.
Big - M method
Big-M method is another method of removing artificial variables from
the basis.
In this method, large undesirable (unacceptable penalty) coefficients
to artificial variables are assigned from the point of view of the
objective function.
If the objective function Z is to be minimized, then a very large
positive price (called penalty) is assigned to each artificial variable.
Similarly, if Z is to be maximized, then a very large negative price
(also called penalty) is assigned to each of these variables.
The penalty is supposed to be designated by - M, for a maximization
problem, and + M, for a minimization problem, where M > 0.
The Big-M method for solving an LPP can be summarized in
the following steps:
Step 1: Express the LP problem in the standard form by adding slack
variables, surplus variables and/or artificial variables. Assign a zero
coefficient to both slack and surplus variables. Then assign a very large
coefficient + M (minimization case) and - M (maximization case) to
artificial variable in the objective function.
Step 2: The initial basic feasible solution is obtained by assigning zero
value to decision variables, x1, x2, ..., etc.
Step 3: Calculate the values of zj − cj in last row of the simplex table and
examine these values.
1 If all zj − cj ≤ 0, then the current basic feasible solution is optimal.
2 If, for a column k, zk − ck is most positive and all entries in this column are non-positive, then the problem has an unbounded solution.
3 If one or more zj − cj > 0 (minimization case), then select the
variable to enter into the basis (solution mix) with the largest positive
zj − cj value (largest per unit increase in the objective function
value). This value also represents the opportunity cost of not having
one unit of the variable in the solution. That is,
zk − ck = Max{zj − cj : zj − cj > 0}
Step 4: Determine the key row and key element in the same manner as
discussed in the simplex algorithm.
Step 5: Continue with the procedure to update solution at each iteration
till optimal solution is obtained.
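One way to mechanize the Big-M idea is simply to build the penalized objective and hand it to an ordinary simplex routine. The sketch below (illustrative only; it uses a finite M chosen much larger than the other data, and simplex_iteration is the hypothetical helper shown earlier) sets up Example 1 from the next slide in equality form and solves the minimization equivalent of the maximization problem.

import numpy as np

M = 1e6   # "big M": a penalty far larger than any legitimate cost coefficient

# Example 1: maximize -2x1 - x2, i.e. minimize 2x1 + x2, subject to
# 3x1 + x2 = 3,  4x1 + 3x2 >= 6,  x1 + 2x2 <= 4.
# Variable order: x1, x2, s1 (surplus), s2 (slack), A1, A2 (artificials).
A = np.array([[3.0, 1, 0, 0, 1, 0],
              [4.0, 3, -1, 0, 0, 1],
              [1.0, 2, 0, 1, 0, 0]])
b = np.array([3.0, 6.0, 4.0])
c = np.array([2.0, 1.0, 0.0, 0.0, M, M])   # penalized minimization objective

basis = [4, 5, 3]            # A1, A2 and the slack s2 give an obvious starting basis
status = 'pivoted'
while status == 'pivoted':
    basis, status = simplex_iteration(A, b, c, basis)   # hypothetical helper from earlier
xB = np.linalg.solve(A[:, basis], b)
print(dict(zip(basis, xB)))  # expected: x1 = 3/5, x2 = 6/5, s2 = 1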
Remarks: At any iteration of the simplex algorithm one of the following cases may arise:
1 If at least one artificial variable is a basic variable (i.e., present in the basis) with zero value and the coefficient of M in each zj − cj (j = 1, 2, . . ., n) value is non-positive, then the given LP problem has no solution. That is, the current basic feasible solution is degenerate.
2 If at least one artificial variable is present in the basis with a positive value and the coefficient of M in each zj − cj (j = 1, 2, . . ., n) value is non-positive, then the given LP problem has no optimum basic feasible solution. In this case, the given LP problem has a pseudo-optimum basic feasible solution.
Example 1.
Solve the following linear programming problem by using Big-M method.
Maximize Z = −2x1 − x2
subject to 3x1 + x2 = 3
4x1 + 3x2 ≥ 6
x1 + 2x2 ≤ 4
x1, x2 ≥ 0
Solution : The standard linear programming problem of the above LP
Maximize Z = −2x1 − x2 + 0s1 + 0s2 − MA1 − MA2
subject to 3x1 + x2 + A1 = 3
4x1 + 3x2 − s1 + A2 = 6
x1 + 2x2 + s2 = 4
x1, x2, s1, s2, A1, A2 ≥ 0
Cj -2 -1 0 0 -M -M
CB XB x1 x2 s1 s2 A1 A2 b(RHS) min ratio
-M A1 3 1 0 0 1 0 3 3/3 = 1 →
-M A2 4 3 -1 0 0 1 6 6/4 = 3/2
0 s2 1 2 0 1 0 0 4 4/1 = 4
Zj − Cj ↑ 2 − 7M 1-4M M 0 0 0 Z = -9M
Cj -2 -1 0 0 -M -M
CB XB x1 x2 s1 s2 A1 A2 b(RHS) min ratio
-2 x1 1 1/3 0 0 × 0 1 1/(1/3) = 3
-M A2 0 5/3 -1 0 × 1 2 2/(5/3) = 6/5 →
0 s2 0 5/3 0 1 × 0 3 3/(5/3) = 9/5
Zj − Cj 0 ↑ (−5M + 1)/3 M 0 × 0 Z = −2 − 2M
Cj -2 -1 0 0 -M -M
CB XB x1 x2 s1 s2 A1 A2 b(RHS) min ratio
-2 x1 1 0 1/5 0 × × 3/5
-1 x2 0 1 -3/5 0 × × 6/5
0 s2 0 0 1 1 × × 1
Zj − Cj 0 0 1/5 0 × × Z = -12/5
Since all Zj − Cj ≥ 0, an optimal basic feasible solution is obtained. Therefore the solution is
Max Z = −12/5, x1 = 3/5, x2 = 6/5.
Example 2.
Maximize Z = 3x1 − x2
Subject to 2x1 + x2 ≥ 2
x1 + 3x2 ≤ 3
x2 ≤ 4
x1, x2 ≥ 0
Solution: The standard linear programming problem
Maximize Z = 3x1 − x2 + 0s1 + 0s2 + 0s3 − MA1
Subject to 2x1 + x2 − s1 + A1 = 2
x1 + 3x2 + s2 = 3
x2 + s3 = 4
x1, x2, s1, s2, s3, A1 ≥ 0
Cj 3 -1 0 0 0 -M
CB XB x1 x2 s1 s2 s3 A1 b(RHS) min ratio
-M A1 2 1 -1 0 0 1 2 2/2 = 1 →
0 s2 1 3 0 1 0 0 3 3/1 = 3
0 s3 0 1 0 0 1 0 4 -
Zj − Cj ↑ −2M − 3 -M + 1 M 0 0 0 Z = -2M
Cj 3 -1 0 0 0 -M
CB XB x1 x2 s1 s2 s3 A1 b(RHS) min ratio bi /yik
3 x1 1 1/2 -1/2 0 0 × 1 -
0 s2 0 5/2 1/2 1 0 × 2 2/1/2 = 4 →
0 s3 0 1 0 0 1 × 4 -
Zj − Cj 0 5/2 ↑ −3/2 0 0 × Z = 3
Cj 3 -1 0 0 0 -M
CB XB x1 x2 s1 s2 s3 A1 b(RHS) min ratio bi /yik
3 x1 1 3 0 1 0 × 3
0 s1 0 5 1 2 0 × 4
0 s3 0 1 0 0 1 × 4
Zj − Cj 0 10 0 3 0 × Z = 9
Since all Zj − Cj ≥ 0, an optimal basic feasible solution is obtained. Therefore,
the solution is Max Z = 9, x1 = 3, x2 = 0.
Example 3
Maximize Z = 3x1 + 2x2 + x3
subject to 2x1 + x2 + x3 = 12
3x1 + 4x2 = 11
and x1 is unrestricted, x2, x3 ≥ 0
Solution: The standard linear programming problem. Since x1 is unrestricted, write x1 = x1′ − x1′′ with x1′, x1′′ ≥ 0:
Maximize Z = 3(x1′ − x1′′) + 2x2 + x3 − MA1 − MA2
subject to 2(x1′ − x1′′) + x2 + x3 + A1 = 12
3(x1′ − x1′′) + 4x2 + A2 = 11
x1′, x1′′, x2, x3, A1, A2 ≥ 0
That is,
Maximize Z = 3x1′ − 3x1′′ + 2x2 + x3 − MA1 − MA2
subject to 2x1′ − 2x1′′ + x2 + x3 + A1 = 12
3x1′ − 3x1′′ + 4x2 + A2 = 11
x1′, x1′′, x2, x3, A1, A2 ≥ 0
Cj 3 -3 2 1 -M -M
CB XB x1′ x1′′ x2 x3 A1 A2 b(RHS)
-M A1 2 -2 1 1 1 0 12
-M A2 3 -3 4 0 0 1 11
Zj − Cj ↑ −5M − 3 5M + 3 -5M - 2 -M - 1 0 0 Z = -23M

Cj 3 -3 2 1 -M -M
CB XB x1′ x1′′ x2 x3 A1 A2 b(RHS) min ratio
-M A1 0 0 -5/3 1 1 × 14/3 14/3 →
3 x1′ 1 -1 4/3 0 0 × 11/3 -
Zj − Cj 0 0 5M/3 + 2 ↑ −M − 1 0 × Z = −14M/3 + 11
Cj → 3 -3 2 1 -M -M
CB ↓ XB ↓ x1′ x1′′ x2 x3 A1 A2 b(RHS) min ratio
1 x3 0 0 -5/3 1 × × 14/3
3 x1′ 1 -1 4/3 0 × × 11/3 -
Zj − Cj → 0 0 1/3 0 × × Z = 47/3
Since all Zj − Cj ≥ 0, an optimal basic feasible solution is obtained:
x1′ = 11/3, x1′′ = 0, so x1 = x1′ − x1′′ = 11/3 − 0 = 11/3.
Therefore the solution is Max Z = 47/3, x1 = 11/3, x2 = 0, x3 = 14/3.
Some Complications and Their Resolution
Unrestricted Variables
Let variable xr be unrestricted in sign. We define two new variables, say xr′ and xr′′, such that
xr = xr′ − xr′′, where xr′, xr′′ ≥ 0.
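In code this substitution just means giving the unrestricted variable two non-negative columns with opposite signs. The tiny helper below (hypothetical, written for these notes) appends the extra column to A and the extra coefficient to the objective.

import numpy as np

def split_free_variable(A, c, r):
    """Replace the free variable x_r by x_r' - x_r'' (two non-negative variables).

    Column r keeps the role of x_r'; the appended last column is x_r''.
    When reading a solution back, x_r = x[r] - x[-1].
    """
    A_new = np.hstack([A, -A[:, [r]]])        # x_r'' enters with the negated column
    c_new = np.concatenate([c, [-c[r]]])
    return A_new, c_new

# Example 3 above: x1 is unrestricted in sign.
A = np.array([[2.0, 1, 1], [3.0, 4, 0]])
c = np.array([3.0, 2, 1])
A_new, c_new = split_free_variable(A, c, 0)
print(A_new)   # last column is (-2, -3)^T, the x1'' part
print(c_new)   # [ 3.  2.  1. -3.]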
Tie for Entering Basic Variable (Key Column)
While solving an LP problem using simplex method two or more
columns of simplex table may have same zj − cj value (positive or
negative depending upon the type of LP problem).
In order to break this tie, the selection for key column (entering
variable) can be made arbitrary.
However, the number of iterations required to arrive at the optimal
solution can be minimized by adopting the following rules.
(i) If there is a tie between two decision variables, then the selection can
be made arbitrarily.
(ii) If there is a tie between a decision variable and a slack (or surplus)
variable, then select the decision variable to enter into basis.
(iii) If there is a tie between two slack (or surplus) variables, then the
selection can be made arbitrarily
Tie for Leaving Basic Variable (Key Row) Degeneracy
While solving an LP problem a situation may arise where either the minimum ratio used to identify the basic variable to leave the basis is not unique, or the value of one or more basic variables in XB becomes zero. This causes the problem of degeneracy.
In order to break the tie in the minimum ratios, the selection can be
made arbitrarily.
However, the number of iterations required to arrive at the optimal
solution can be minimized by adopting the following rules.
(i) Divide the coefficients of slack variables in the simplex table where
degeneracy is seen by the corresponding positive numbers of the key
column in the row, starting from left to right.
(ii) Compare the ratios in step (i) from left to right column wise, select the
row that contains the smallest ratio.
Remark When there is a tie between a slack and artificial variable to
leave the basis, preference should be given to the artificial variable for
leaving the basis.
Example
Solve the following LP problem
Maximize Z = 3x1 + 9x2
subject to the constraints
x1 + 4x2 ≤ 8,
x1 + 2x2 ≤ 4
and x1, x2 ≥ 0
Solution: Adding slack variables s1 and s2 to the constraints, the problem
can be expressed as
Maximize Z = 3x1 + 9x2 + 0s1 + 0s2
subject to the constraints
x1 + 4x2 + s1 = 8,
x1 + 2x2 + s2 = 4
and x1, x2, s1, s2 ≥ 0
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 92 / 107
The initial basic feasible solution is given in the table below. Both variables s1 and
s2 are eligible to leave the basis because the minimum ratio is the same, i.e. 2, so
there is a tie between the ratios in rows s1 and s2. This causes the problem of
degeneracy. To obtain the key row and resolve the degeneracy, apply the
following procedure:
cj 3 9 0 0
CB XB x1 x2 s1 s2 b(RHS) min. ratio
0 s1 1 4 1 0 8 8/4=2
0 s2 1 2 0 1 4 4/2=2
Zj − Cj -3 ↑-9 0 0 0
Divide the coefficients of the slack variables s1 and s2 by the corresponding
elements in the key column, as shown in the table below.
x2 column
Row (Key Column) s1 s2
s1 4 1/4=1/4 0/4=0
s2 2 0/2=0 1/2=1/2
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 93 / 107
Comparing these ratios column-wise from left to right, the
minimum ratio (i.e., 0/2 = 0) occurs in the s2-row. Thus, variable s2 is
selected to leave the basis. The new solution is shown in the table below.
cj 3 9 0 0
CB XB x1 x2 s1 s2 b(RHS)
0 s1 -1 0 1 -2 0
9 x2 1/2 1 0 1/2 2
Zj − Cj 3/2 0 0 9/2 18
In the table above, all Zj − Cj ≥ 0. Hence, an optimal solution has been reached.
The optimal basic feasible solution is: x1 = 0, x2 = 2 and Max Z = 18.
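As an added cross-check (assuming SciPy is available), the same LP can be passed to scipy.optimize.linprog, which should confirm Max Z = 18 at x1 = 0, x2 = 2.

```python
# Hedged sketch: numerical cross-check of the degenerate example with SciPy.
from scipy.optimize import linprog

c = [-3, -9]                       # maximize 3x1 + 9x2  ->  minimize -(3x1 + 9x2)
A_ub = [[1, 4],
        [1, 2]]
b_ub = [8, 4]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.x, -res.fun)             # expected: x = [0, 2] and Z = 18
```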
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 94 / 107
Types of Linear Programming Solutions
While solving any LP problem using simplex method, at the stage of
optimal solution, the following three types of solutions may exist:
Alternative (Multiple) Optimal Solutions
Unbounded Solution
Infeasible Solutions
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 95 / 107
Alternative (Multiple) Optimal Solutions
The zj − cj values in the simplex table indicate the contribution to
the objective function value made by each unit of a variable chosen to enter
the basis.
Also, an optimal solution to a maximization (minimization) LP
problem is reached when all zj − cj ≥ 0 (zj − cj ≤ 0).
But, if zj − cj = 0 for a non-basic variable column in the optimal
simplex table and such a non-basic variable is chosen to enter the
basis, then the new optimal solution so obtained will show no
improvement in the value of the objective function.
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 96 / 107
Example
Solve the following LP problem.
Maximize Z = 6x1 + 4x2
subject to the constraints
2x1 + 3x2 ≤ 30,
3x1 + 2x2 ≤ 24,
x1 + x2 ≥ 3
and
x1, x2 ≥ 0
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 97 / 107
Solution: Adding slack variables s1, s2, surplus variable s3 and artificial
variable A1 to the constraint set, the LP problem becomes
Maximize Z = 6x1 + 4x2 + 0s1 + 0s2 + 0s3 − MA1
subject to the constraints
2x1 + 3x2 + s1 = 30,
3x1 + 2x2 + s2 = 24,
x1 + x2 − s3 + A1 = 3
and
x1, x2, s1, s2, s3, A1 ≥ 0
The optimal solution x1 = 8, x2 = 0 with Max Z = 48 for this LP problem
is shown in the table below.
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 98 / 107
Cj 6 4 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS) Ratio
0 s1 0 5/3 1 -2/3 0 14 42/5 ←
0 s3 0 -1/3 0 1/3 1 5
6 x1 1 2/3 0 1/3 0 8 12
zj − cj 0 ↑ 0 0 2 0 48
In the table above, z2 − c2 = 0 corresponds to a non-basic variable, x2. Thus, an
alternative optimal solution can be obtained by entering variable x2 into the
basis and removing the basic variable s1 from the basis. The new solution is shown in
the table below.
Cj 6 4 0 0 0
CB XB x1 x2 s1 s2 s3 b(RHS)
4 x2 0 1 3/5 -2/5 0 42/5
0 s3 0 0 1/5 1/5 1 39/5
6 x1 1 0 -2/5 3/5 0 12/5
zj − cj 0 0 0 2 0 48
The optimal solution shown in the table above is: x1 = 12/5, x2 = 42/5 and Max Z
= 48. Since this optimal solution shows no change in the value of the objective
function, it is an alternative optimal solution. Once again, z3 − c3 = 0 corresponds to the
nonbasic variable s1, which indicates that yet another alternative optimal solution exists.
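As an added illustration, every convex combination of the two optimal vertices is again feasible and gives Z = 48; the short check below (using NumPy, an assumption beyond the slides) verifies this along the whole segment.

```python
# Hedged sketch: any convex combination of the two optimal vertices stays optimal.
import numpy as np

x_a = np.array([8.0, 0.0])         # first optimal BFS  (x1, x2)
x_b = np.array([12/5, 42/5])       # second optimal BFS (x1, x2)
A = np.array([[2.0, 3.0],          # 2x1 + 3x2 <= 30
              [3.0, 2.0],          # 3x1 + 2x2 <= 24
              [-1.0, -1.0]])       # x1 + x2 >= 3 rewritten as -x1 - x2 <= -3
b = np.array([30.0, 24.0, -3.0])
c = np.array([6.0, 4.0])

for lam in np.linspace(0.0, 1.0, 5):
    x = lam * x_a + (1 - lam) * x_b
    assert np.all(A @ x <= b + 1e-9) and np.all(x >= -1e-9)   # feasible
    assert abs(c @ x - 48.0) <= 1e-9                          # same objective
print("every point on the segment between the two vertices is optimal (Z = 48)")
```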
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 99 / 107
Unbounded Solution
In a maximization LP problem, if zj − cj < 0 (zj − cj > 0 in the
minimization case) corresponds to a non-basic variable column in the
simplex table, and all aij values in this column are negative or zero, then the
minimum ratio needed to decide which basic variable leaves the basis cannot be
calculated.
This is because a negative value in the denominator would indicate the entry
of a non-basic variable into the basis with a negative value (an
infeasible solution).
Also, a zero value in the denominator would give an infinite ratio
and would indicate that the value of the non-basic variable
could be increased infinitely without any of the current basic variables
being removed from the basis.
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 100 / 107
Example
Solve the following LP problem.
Maximize Z = 3x1 + 5x2
subject to the constraints
x1 − 2x2 ≤ 6,
3x1 ≤ 10,
x2 ≥ 1
and
x1, x2 ≥ 0
Solution: Adding slack variables s1, s2, surplus variable s3 and artificial
variable A1 to the constraint set.
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 101 / 107
Then the standard form of LP problem becomes
Maximize Z = 3x1 + 5x2 + 0s1 + 0s2 + 0s3 − MA1
subject to the constraints
x1 − 2x2 + s1 = 6,
3x1 + s2 = 10,
x2 − s3 + A1 = 1
and
x1, x2, s1, s2, s3, A1 ≥ 0
The initial solution to this LP problem is shown in the table below.
Cj 3 5 0 0 0 -M
CB XB x1 x2 s1 s2 s3 A1 b(RHS) min. ratio
0 s1 1 -2 1 0 0 0 6 -
0 s2 3 0 0 1 0 0 10 -
-M A1 0 1 0 0 -1 1 1 1 ←
zj − cj -3 ↑ -5-M 0 0 M 0 Z = -M
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 102 / 107
Iteration 1: Since z2 − c2 = −5 − M is the most negative value, the non-basic
variable x2 is chosen to enter the basis in place of the basic variable A1. The new
solution is shown in the table below.
Cj 3 5 0 0 0 -M
CB XB x1 x2 s1 s2 s3 A1 b(RHS)
0 s1 1 0 1 0 -2 2 8
0 s2 3 0 0 1 0 0 10
5 x2 0 1 0 0 -1 1 1
zj − cj -3 0 0 0 -5 M+5 Z = 5
In the table above, z5 − c5 = −5 is the largest negative value, so variable s3 should
enter the basis. But the coefficients in the s3 column are all negative or zero,
so s3 cannot be brought into the basis by the ratio test. However, the value
of s3 can be increased infinitely without removing any of the basic
variables. Further, since s3 is associated with x2 in the third constraint, x2
will also increase infinitely because it can be expressed as
x2 = 1 + s3 − A1. Hence, the solution to the given problem is unbounded.
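As an added practical note, an LP solver reports this situation explicitly; assuming SciPy is available, linprog flags the example above as unbounded (status 3).

```python
# Hedged sketch: a solver flags the unbounded example explicitly.
from scipy.optimize import linprog

c = [-3, -5]                        # maximize 3x1 + 5x2 -> minimize the negative
A_ub = [[1, -2],                    # x1 - 2x2 <= 6
        [3,  0],                    # 3x1      <= 10
        [0, -1]]                    # x2 >= 1 rewritten as -x2 <= -1
b_ub = [6, 10, -1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.status, res.message)      # status 3 indicates an unbounded problem
```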
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 103 / 107
Infeasible Solution
LP models with inconsistent constraints have no feasible solution.
This situation can never occur if all the constraints are of the type ≤
with nonnegative right hand sides because the slacks provide a
feasible solution.
For other types of constraints we use artificial variables.
Although the artificial variables are penalized in the objective function
to force them to zero at the optimum, this can occur only if the
model has a feasible space.
Otherwise, at least one artificial variable will be positive in the
optimum iteration.
If LP problem solution does not satisfy all of the constraints, then
such a solution is called infeasible solution.
Also, an infeasible solution occurs when all zj − cj values satisfy the optimality
condition but at least one artificial variable appears in the
basis with a positive value.
This situation may occur when an LP model is either improperly
formulated or two or more of the constraints are incompatible.
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 104 / 107
Example
Solve the following LP problem.
Maximize Z = 6x1 + 4x2
subject to the constraints
x1 + x2 ≤ 5,
x2 ≥ 8
and
x1, x2 ≥ 0
Solution: Adding slack, surplus and artificial variables, the standard form
of the LP problem becomes
Maximize Z = 6x1 + 4x2 + 0s1 + 0s2 − MA1
subject to the constraints
x1 + x2 + s1 = 5,
x2 − s2 + A1 = 8
and x1, x2, s1, s2, A1 ≥ 0
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 105 / 107
The initial solution to this LP problem is shown in the table below.
Cj 6 4 0 0 -M
CB XB x1 x2 s1 s2 A1 b(RHS) min. ratio
6 x1 1 1 1 0 0 5 5/1←
-M A1 0 1 0 -1 1 8 8/1
zj − cj 0 ↑2-M 6 M 0 30-8M
Iteration 1: Since z2 − c2 = 2 − M < 0, the non-basic variable x2 is chosen to enter
the basis and replace the basic variable x1. The new solution is shown in the table
below.
Cj 6 4 0 0 -M
CB XB x1 x2 s1 s2 A1 b(RHS)
4 x2 1 1 1 0 0 5
-M A1 -1 0 -1 -1 1 3
zj − cj M-2 0 4+M M 0 20-3M
In the table above, since all zj − cj ≥ 0 (M being a very large positive number), the
current solution is optimal. But this solution is not feasible for the given LP problem
because the values of the decision variables, x1 = 0 and x2 = 5, violate the second
constraint, x2 ≥ 8. The presence of the artificial variable A1 = 3 in the basis also
indicates that the optimal solution violates the second constraint (x2 ≥ 8) by 3 units.
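Correspondingly, assuming SciPy is available, a solver reports the inconsistency directly: linprog flags this example as infeasible (status 2).

```python
# Hedged sketch: a solver flags the infeasible example explicitly.
from scipy.optimize import linprog

c = [-6, -4]                        # maximize 6x1 + 4x2 -> minimize the negative
A_ub = [[1, 1],                     # x1 + x2 <= 5
        [0, -1]]                    # x2 >= 8 rewritten as -x2 <= -8
b_ub = [5, -8]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
print(res.status, res.message)      # status 2 indicates an infeasible problem
```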
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 106 / 107
I THANK YOU!!
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 107 / 107

Weitere ähnliche Inhalte

Was ist angesagt?

LP special cases and Duality.pptx
LP special cases and Duality.pptxLP special cases and Duality.pptx
LP special cases and Duality.pptxSnehal Athawale
 
Simplex Method
Simplex MethodSimplex Method
Simplex Methodkzoe1996
 
Unit.3. duality and sensetivity analisis
Unit.3. duality and sensetivity analisisUnit.3. duality and sensetivity analisis
Unit.3. duality and sensetivity analisisDagnaygebawGoshme
 
Duality in Linear Programming
Duality in Linear ProgrammingDuality in Linear Programming
Duality in Linear Programmingjyothimonc
 
Simplex Method
Simplex MethodSimplex Method
Simplex MethodSachin MK
 
Duality in lpp
Duality in lppDuality in lpp
Duality in lppAbu Bashar
 
3. linear programming senstivity analysis
3. linear programming senstivity analysis3. linear programming senstivity analysis
3. linear programming senstivity analysisHakeem-Ur- Rehman
 
Sensitivity analysis in linear programming problem ( Muhammed Jiyad)
Sensitivity analysis in linear programming problem ( Muhammed Jiyad)Sensitivity analysis in linear programming problem ( Muhammed Jiyad)
Sensitivity analysis in linear programming problem ( Muhammed Jiyad)Muhammed Jiyad
 
03 convexfunctions
03 convexfunctions03 convexfunctions
03 convexfunctionsSufyan Sahoo
 
Operation Research (Simplex Method)
Operation Research (Simplex Method)Operation Research (Simplex Method)
Operation Research (Simplex Method)Shivani Gautam
 
NON LINEAR PROGRAMMING
NON LINEAR PROGRAMMING NON LINEAR PROGRAMMING
NON LINEAR PROGRAMMING karishma gupta
 
Roots of equations
Roots of equations Roots of equations
Roots of equations shopnohinami
 
Linear programming graphical method (feasibility)
Linear programming   graphical method (feasibility)Linear programming   graphical method (feasibility)
Linear programming graphical method (feasibility)Rajesh Timane, PhD
 
Integer Programming, Goal Programming, and Nonlinear Programming
Integer Programming, Goal Programming, and Nonlinear ProgrammingInteger Programming, Goal Programming, and Nonlinear Programming
Integer Programming, Goal Programming, and Nonlinear ProgrammingSalah A. Skaik - MBA-PMP®
 
Canonical form and Standard form of LPP
Canonical form and Standard form of LPPCanonical form and Standard form of LPP
Canonical form and Standard form of LPPSundar B N
 

Was ist angesagt? (20)

LP special cases and Duality.pptx
LP special cases and Duality.pptxLP special cases and Duality.pptx
LP special cases and Duality.pptx
 
Simplex Method
Simplex MethodSimplex Method
Simplex Method
 
Unit.3. duality and sensetivity analisis
Unit.3. duality and sensetivity analisisUnit.3. duality and sensetivity analisis
Unit.3. duality and sensetivity analisis
 
Duality in Linear Programming
Duality in Linear ProgrammingDuality in Linear Programming
Duality in Linear Programming
 
Big m method
Big m methodBig m method
Big m method
 
Simplex Method
Simplex MethodSimplex Method
Simplex Method
 
Duality in lpp
Duality in lppDuality in lpp
Duality in lpp
 
3. linear programming senstivity analysis
3. linear programming senstivity analysis3. linear programming senstivity analysis
3. linear programming senstivity analysis
 
Sensitivity analysis in linear programming problem ( Muhammed Jiyad)
Sensitivity analysis in linear programming problem ( Muhammed Jiyad)Sensitivity analysis in linear programming problem ( Muhammed Jiyad)
Sensitivity analysis in linear programming problem ( Muhammed Jiyad)
 
simplex method
simplex methodsimplex method
simplex method
 
03 convexfunctions
03 convexfunctions03 convexfunctions
03 convexfunctions
 
Operation Research (Simplex Method)
Operation Research (Simplex Method)Operation Research (Simplex Method)
Operation Research (Simplex Method)
 
NON LINEAR PROGRAMMING
NON LINEAR PROGRAMMING NON LINEAR PROGRAMMING
NON LINEAR PROGRAMMING
 
Roots of equations
Roots of equations Roots of equations
Roots of equations
 
Big m method
Big   m methodBig   m method
Big m method
 
Transportation problem
Transportation problemTransportation problem
Transportation problem
 
Linear programming graphical method (feasibility)
Linear programming   graphical method (feasibility)Linear programming   graphical method (feasibility)
Linear programming graphical method (feasibility)
 
Simplex Method.pptx
Simplex Method.pptxSimplex Method.pptx
Simplex Method.pptx
 
Integer Programming, Goal Programming, and Nonlinear Programming
Integer Programming, Goal Programming, and Nonlinear ProgrammingInteger Programming, Goal Programming, and Nonlinear Programming
Integer Programming, Goal Programming, and Nonlinear Programming
 
Canonical form and Standard form of LPP
Canonical form and Standard form of LPPCanonical form and Standard form of LPP
Canonical form and Standard form of LPP
 

Ähnlich wie Chapter 4 Simplex Method ppt

LPP, Duality and Game Theory
LPP, Duality and Game TheoryLPP, Duality and Game Theory
LPP, Duality and Game TheoryPurnima Pandit
 
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...IJERA Editor
 
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...IJERA Editor
 
Simplex method - Maximisation Case
Simplex method - Maximisation CaseSimplex method - Maximisation Case
Simplex method - Maximisation CaseJoseph Konnully
 
Lecture5-7_12946_Linear Programming The Graphical Method.pptx
Lecture5-7_12946_Linear Programming The Graphical Method.pptxLecture5-7_12946_Linear Programming The Graphical Method.pptx
Lecture5-7_12946_Linear Programming The Graphical Method.pptxhlKh4
 
MLHEP 2015: Introductory Lecture #4
MLHEP 2015: Introductory Lecture #4MLHEP 2015: Introductory Lecture #4
MLHEP 2015: Introductory Lecture #4arogozhnikov
 
Linearprog, Reading Materials for Operational Research
Linearprog, Reading Materials for Operational Research Linearprog, Reading Materials for Operational Research
Linearprog, Reading Materials for Operational Research Derbew Tesfa
 
Linear Programming Quiz Solution
Linear Programming Quiz SolutionLinear Programming Quiz Solution
Linear Programming Quiz SolutionEd Dansereau
 
Balaji-opt-lecture3-sp13.pptx
Balaji-opt-lecture3-sp13.pptxBalaji-opt-lecture3-sp13.pptx
Balaji-opt-lecture3-sp13.pptxMayurkumarpatil1
 
Quantitativetechniqueformanagerialdecisionlinearprogramming 090725035417-phpa...
Quantitativetechniqueformanagerialdecisionlinearprogramming 090725035417-phpa...Quantitativetechniqueformanagerialdecisionlinearprogramming 090725035417-phpa...
Quantitativetechniqueformanagerialdecisionlinearprogramming 090725035417-phpa...kongara
 
Integer Linear Programming
Integer Linear ProgrammingInteger Linear Programming
Integer Linear ProgrammingSukhpalRamanand
 
Certified global minima
Certified global minimaCertified global minima
Certified global minimassuserfa7e73
 
CHAPTER 6 System Techniques in water resuorce ppt yadesa.pptx
CHAPTER 6 System Techniques in water resuorce ppt yadesa.pptxCHAPTER 6 System Techniques in water resuorce ppt yadesa.pptx
CHAPTER 6 System Techniques in water resuorce ppt yadesa.pptxGodisgoodtube
 

Ähnlich wie Chapter 4 Simplex Method ppt (20)

Graphical method
Graphical methodGraphical method
Graphical method
 
LPP, Duality and Game Theory
LPP, Duality and Game TheoryLPP, Duality and Game Theory
LPP, Duality and Game Theory
 
OI.ppt
OI.pptOI.ppt
OI.ppt
 
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
 
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
Determination of Optimal Product Mix for Profit Maximization using Linear Pro...
 
2_Simplex.pdf
2_Simplex.pdf2_Simplex.pdf
2_Simplex.pdf
 
Simplex method
Simplex methodSimplex method
Simplex method
 
Simplex method - Maximisation Case
Simplex method - Maximisation CaseSimplex method - Maximisation Case
Simplex method - Maximisation Case
 
Lecture5-7_12946_Linear Programming The Graphical Method.pptx
Lecture5-7_12946_Linear Programming The Graphical Method.pptxLecture5-7_12946_Linear Programming The Graphical Method.pptx
Lecture5-7_12946_Linear Programming The Graphical Method.pptx
 
MLHEP 2015: Introductory Lecture #4
MLHEP 2015: Introductory Lecture #4MLHEP 2015: Introductory Lecture #4
MLHEP 2015: Introductory Lecture #4
 
02 basics i-handout
02 basics i-handout02 basics i-handout
02 basics i-handout
 
Simplex algorithm
Simplex algorithmSimplex algorithm
Simplex algorithm
 
Linearprog, Reading Materials for Operational Research
Linearprog, Reading Materials for Operational Research Linearprog, Reading Materials for Operational Research
Linearprog, Reading Materials for Operational Research
 
Linear Programming Quiz Solution
Linear Programming Quiz SolutionLinear Programming Quiz Solution
Linear Programming Quiz Solution
 
Balaji-opt-lecture3-sp13.pptx
Balaji-opt-lecture3-sp13.pptxBalaji-opt-lecture3-sp13.pptx
Balaji-opt-lecture3-sp13.pptx
 
Quantitativetechniqueformanagerialdecisionlinearprogramming 090725035417-phpa...
Quantitativetechniqueformanagerialdecisionlinearprogramming 090725035417-phpa...Quantitativetechniqueformanagerialdecisionlinearprogramming 090725035417-phpa...
Quantitativetechniqueformanagerialdecisionlinearprogramming 090725035417-phpa...
 
Linear Programming
Linear ProgrammingLinear Programming
Linear Programming
 
Integer Linear Programming
Integer Linear ProgrammingInteger Linear Programming
Integer Linear Programming
 
Certified global minima
Certified global minimaCertified global minima
Certified global minima
 
CHAPTER 6 System Techniques in water resuorce ppt yadesa.pptx
CHAPTER 6 System Techniques in water resuorce ppt yadesa.pptxCHAPTER 6 System Techniques in water resuorce ppt yadesa.pptx
CHAPTER 6 System Techniques in water resuorce ppt yadesa.pptx
 

Kürzlich hochgeladen

Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​kaibalyasahoo82800
 
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRStunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRDelhi Call girls
 
Green chemistry and Sustainable development.pptx
Green chemistry  and Sustainable development.pptxGreen chemistry  and Sustainable development.pptx
Green chemistry and Sustainable development.pptxRajatChauhan518211
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bSérgio Sacani
 
VIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PVIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PPRINCE C P
 
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...anilsa9823
 
Orientation, design and principles of polyhouse
Orientation, design and principles of polyhouseOrientation, design and principles of polyhouse
Orientation, design and principles of polyhousejana861314
 
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCESTERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCEPRINCE C P
 
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Sérgio Sacani
 
PossibleEoarcheanRecordsoftheGeomagneticFieldPreservedintheIsuaSupracrustalBe...
PossibleEoarcheanRecordsoftheGeomagneticFieldPreservedintheIsuaSupracrustalBe...PossibleEoarcheanRecordsoftheGeomagneticFieldPreservedintheIsuaSupracrustalBe...
PossibleEoarcheanRecordsoftheGeomagneticFieldPreservedintheIsuaSupracrustalBe...Sérgio Sacani
 
Zoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdfZoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdfSumit Kumar yadav
 
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptxUnlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptxanandsmhk
 
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 60009654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000Sapana Sha
 
GBSN - Microbiology (Unit 1)
GBSN - Microbiology (Unit 1)GBSN - Microbiology (Unit 1)
GBSN - Microbiology (Unit 1)Areesha Ahmad
 
Biological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfBiological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfmuntazimhurra
 
Disentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTDisentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTSérgio Sacani
 
DIFFERENCE IN BACK CROSS AND TEST CROSS
DIFFERENCE IN  BACK CROSS AND TEST CROSSDIFFERENCE IN  BACK CROSS AND TEST CROSS
DIFFERENCE IN BACK CROSS AND TEST CROSSLeenakshiTyagi
 

Kürzlich hochgeladen (20)

Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​
 
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRStunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
 
Green chemistry and Sustainable development.pptx
Green chemistry  and Sustainable development.pptxGreen chemistry  and Sustainable development.pptx
Green chemistry and Sustainable development.pptx
 
The Philosophy of Science
The Philosophy of ScienceThe Philosophy of Science
The Philosophy of Science
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
 
CELL -Structural and Functional unit of life.pdf
CELL -Structural and Functional unit of life.pdfCELL -Structural and Functional unit of life.pdf
CELL -Structural and Functional unit of life.pdf
 
VIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PVIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C P
 
9953056974 Young Call Girls In Mahavir enclave Indian Quality Escort service
9953056974 Young Call Girls In Mahavir enclave Indian Quality Escort service9953056974 Young Call Girls In Mahavir enclave Indian Quality Escort service
9953056974 Young Call Girls In Mahavir enclave Indian Quality Escort service
 
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
 
Orientation, design and principles of polyhouse
Orientation, design and principles of polyhouseOrientation, design and principles of polyhouse
Orientation, design and principles of polyhouse
 
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCESTERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
 
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
 
PossibleEoarcheanRecordsoftheGeomagneticFieldPreservedintheIsuaSupracrustalBe...
PossibleEoarcheanRecordsoftheGeomagneticFieldPreservedintheIsuaSupracrustalBe...PossibleEoarcheanRecordsoftheGeomagneticFieldPreservedintheIsuaSupracrustalBe...
PossibleEoarcheanRecordsoftheGeomagneticFieldPreservedintheIsuaSupracrustalBe...
 
Zoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdfZoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdf
 
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptxUnlocking  the Potential: Deep dive into ocean of Ceramic Magnets.pptx
Unlocking the Potential: Deep dive into ocean of Ceramic Magnets.pptx
 
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 60009654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
9654467111 Call Girls In Raj Nagar Delhi Short 1500 Night 6000
 
GBSN - Microbiology (Unit 1)
GBSN - Microbiology (Unit 1)GBSN - Microbiology (Unit 1)
GBSN - Microbiology (Unit 1)
 
Biological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfBiological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdf
 
Disentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTDisentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOST
 
DIFFERENCE IN BACK CROSS AND TEST CROSS
DIFFERENCE IN  BACK CROSS AND TEST CROSSDIFFERENCE IN  BACK CROSS AND TEST CROSS
DIFFERENCE IN BACK CROSS AND TEST CROSS
 

Chapter 4 Simplex Method ppt

  • 1. Linear Optimization(MATH 2062) Dereje Tigabu(MSc.) Department of Mathematics Debark University October 7, 2020 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 1 / 107
  • 2. Chapter 4 Simplex Method1 1 Checking the results of a decision against its expectations shows executives what their strengths are, where they need to improve, and where they lack knowledge or information. Peter Drucker (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 2 / 107
  • 3. Outline of the Chapter 1 Introduction 2 Linear Programs in Standard Form 3 Basic Feasible Solutions 4 Fundamental Theorem of Linear Programming 5 Algebra of the Simplex Method 6 The simplex Algorithm 7 Degeneracy and Finiteness of Simplex Algorithm 8 Finding a Starting Basic Feasible Solution Two -phase method Big - M method 9 Some Complications and Their Resolution Unrestricted Variables Tie for Entering Basic Variable (Key Column) Tie for Leaving Basic Variable (Key Row) Degeneracy 10 Types of Linear Programming Solutions Alternative (Multiple) Optimal Solutions Unbounded Solution Infeasible Solution (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 3 / 107
  • 4. 1. Introduction We shall discuss a procedure called the simplex method for solving an LP model of such problems. This method was developed by G B Dantzig in 1947. For LP problems with several variables, we may not be able to graph the feasible region, but the optimal solution will still lie at an extreme point of the many-sided, multidimensional figure (called an n-dimensional polyhedron) that represents the feasible solution space. The simplex method examines these extreme points in a systematic manner, repeating the same set of steps of the algorithm until an optimal solution is found. It is also called the iterative method. Since the number of extreme points of the feasible solution space are finite, the method assures an improvement in the value of objective function as we move from one iteration (extreme point) to another and achieve the optimal solution in a finite number of steps. The method also indicates when an unbounded solution is reached. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 4 / 107
  • 5. Slack variables : If the constraints are in the form of ”≤”, then we add variable to the left hand side of inequality to make equality. This variables are called slack variables. Remark: A slack variable represents unused resource, either in the form of time on a machine, labour hours, money, warehouse space or any number of such resources in various business problems. Since these variables yield no profit, therefore such variables are added to the original objective function with zero coefficients. Surplus variables : If the constraints are in the form of ” ≥ ” , then we subtract variables to the left hand side of inequality to make equality. This variables are called the surplus variables. Remark: A surplus variable represents amount by which solution values exceed a resource. These variables are also called negative slack variables. Surplus variables like slack variables carry a zero coefficients in the objective function.(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 5 / 107
  • 6. Conditions for Application of Simplex Method 1 The right hand side of each of the constraint bi should be non-negative. If LPP has a negative constraint, we should convert it to positive by multiplying both sides by -1. 2 Each of the decision variables of the problem should be non-negative. If one of the choice variables is not feasible we can’t apply the Simplex method. Therefore feasibility is the necessity condition for application of the Simplex method. 3 The inequality constraint of resources or any other activities must be converted in to equality equations by adding slack variables (≤ type inequality) or by subtracting surplus variables (≥ type inequality) to the left of the inequality. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 6 / 107
  • 7. 2. Linear Programs in Standard Form The standard form have the following characteristics. (i) All the constraints should be expressed as equations by adding slack or surplus and/ or artificial variables. (ii) The right hand side of each constraint should be made of non-negative, if it is not, this should be done by multiplying both sides of the resulting constraint by -1. Minimization and Maximization Problems Max(Z) = −Min(−Z). After the optimization of the new problem is completed, the objective value of the old problem is -1 times the optimal objective value of the new problem. Let Z = −Z. After the Z value is found, replace Z = −Z . (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 7 / 107
  • 8. Consider an LP model, Max/Min Z = cT X S.t : Ax(≤, ≥)b xi ≥ 0 The standard form of the Linear programming problem is expressed as Optimize(Max or Min)Z = CT X + 0S Subject to AX ± S = b. and X, S ≥ 0. where CT = (c1, c2, ..., cn) is the row vector, X = (x1, x2, ..., xn)T b = (b1, b2, ..., bm)T and S = (s1, s2, ..., sm)T are column vectors. and A =         a11 a12 . . . a1n a21 a22 . . . a2n . . . . . . . . . . . . am1 am2 . . . amn         is the m × n matrix of coefficients of variables of x1, x2, ..., xn in the constraints. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 8 / 107
  • 9. Example 1 Transform the following LPP into standard form of LPP. Maximize Z = x1 − x2 + x3 Subject to x1 + x2 − 3x3 ≥ 4 2x1 − 4x2 + x3 ≥ −5 x1 + 2x2 − 2x3 ≤ 3 x1 ≥ 0, x2 ≥ 0, x3 ≥ 0 Solution: Since the second constraint have a negative value on the right we can multiply both side by (-1) so, Maximize Z = x1 − x2 + x3 + 0x4 + 0x5 + 0x6 subject to x1 + x2 − 3x3 − x4 = 4 −2x1 + 4x2 − x3 + x5 = 5 x1 + 2x2 − 2x3 + x6 = 3 x1, x2, x3, x4, x5, x6 ≥ 0 Therefore, the above problem is standard LPP. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 9 / 107
  • 10. Variables Unrestricted In Sign The difference of two non-negative variables is a variable unrestricted in sign. Let x1 and x2 be two non-negative variables. The difference of these two variables is a variable x3 i.e x3 = x1 − x2 which is unrestricted in sign. If x1 > x2, then x3 > 0. If x1 < x2, then x3 < 0. If x1 = x2 , then x3 = 0. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 10 / 107
  • 11. Example 2. Write down the following LPP, where the variables are non-negative in standard LPP form Maximize Z = 2x1 + 3x2 − x3 Subject to 4x1 + x2 + x3 ≥ 4 7x1 + 4x2 − x3 ≤ 25, x1, x3 ≥ 0, x2 unrestricted sign. Solution: Since x2 is unrestricted sign, we will convert x2 by x2 = x2 − x2 , where x2, x2 ≥ 0. Maximize Z = 2x1 + 3x2 − 3x2 − x3 + 0x4 + 0x5 Subject to 4x1 + x2 − x2 + x3 − x4 = 4 7x1 + 4x2 − 4x2 − x3 + x5 = 25, x1, x3, x4, x5, x2, x2 ≥ 0, (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 11 / 107
  • 12. 3. Basic Feasible Solutions Consider the system Ax = b and x ≥ 0, where A is an m × n matrix and b is any m vector. Suppose that the rank(A, b) = rank(A) = m. After possibly rearrangement of the columns of A, let A = [B, N] where B is an m × m invertible matrix and N is m × (n − m) matrix. The point X = XB XN , where XB = B−1b and XN = 0 is called a basic solution of the system. If XB ≥ 0, then X is called a basic feasible solution of the system. Here B is called the basic matrix (or simply the basis ) and N is called the non basic matrix. The components of XB are called basic variables, and the components XN are called non basic variables. If XB > 0, then X is called a non degenerate basic feasible solution and if at least one component of XB is zero, then X is called a degenerate basic feasible solution. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 12 / 107
  • 13. Example 1 Consider the polyhedral set defined by the following inequalities. Find the basic solution. (a) x1 + x2 ≤ 6 x2 ≤ 3 x1, x2 ≥ 0 (b) x + y ≤ 3 2x − y ≤ 4 x, y ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 13 / 107
  • 14. Solution (a) By introducing the slack variables x3 and x4 , the problem is put in the following standard format: x1 + x2 + x3 = 6 x2 + x4 = 3 x1, x2, x3, x4 ≥ 0 Note that, the constraint matrix A = [a1 a2 a3 a4] = 1 1 1 0 0 1 0 1 From forgoing definition, basic feasible solutions corresponding to finding a 2 × 2 basic matrix B with non-negative B−1b. The following are possible ways of extracting B out of A. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 14 / 107
  • 15. B = [a1, a2] = 1 1 0 1 XB = x1 x2 = B−1b = 1 −1 0 1 6 3 = 3 3 XN = x3 x4 = 0 0 B = [a1, a4] = 1 0 0 1 XB = x1 x4 = B−1b = 1 0 0 1 6 3 = 6 3 XN = x2 x3 = 0 0 B = [a2, a3] = 1 1 1 0 XB = x2 x3 = B−1b = 0 1 1 −1 6 3 = 3 3 XN = x1 x4 = 0 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 15 / 107
  • 16. B = [a2, a4] = 1 0 1 1 XB = x2 x4 = B−1b = 1 0 −1 1 6 3 = 6 −3 XN = x1 x3 = 0 0 B = [a3, a4] = 1 0 0 1 XB = x3 x4 = B−1b = 1 0 0 1 6 3 = 6 3 XN = x1 x2 = 0 0 We have four basic feasible solutions. Namely X1 =     3 3 0 0     , X2 =     6 0 0 3     , X3 =     0 3 3 0     , X4 =     0 0 6 3     (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 16 / 107
  • 17. These basic feasible solutions, projected in R2 that is in the (x1, x2) space give rise to the following four points. 3 3 , 6 0 , 0 3 , 0 0 these points are precisely the extreme points of the feasible region. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 17 / 107
  • 18. In this example, the possible number of basic feasible solutions is bounded by the number of ways of extracting two columns out of four columns to form the basis. therefore, the number of basic feasible solutions is less than or equal to 4 2 = 4! 2!(4−2)! = 6. Out of these six possibilities, one point violates the non-negativity of B−1b. furthermore, a1 and a3 could not have been used to form a basic since a1 = a3 = 1 0 are linearly dependent, and hence the matrix 1 1 0 0 does not qualify as a basis. This leaves four basic feasible solutions. (b) x + y + z = 3 2x − y + w = 4 x, y, z, w ≥ 0 1 1 1 0 2 −1 0 1     x y z w     = 3 4 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 18 / 107
  • 19. Here, 4 2 = 6 possible square matrix obtained from the given system. If we select column 3 and 4 and assign zero value to the variable associated with column 1and 2, z = 3, w = 4. So, (0, 0, 3, 4) is basic solution and x, y is non-basic variable and z and w is basic variable. B = 1 0 0 1 . XB = z w = 3 4 and XN = x y = 0 0 Again take column 1 and 2 and assign zero value to the variable associated with column 3 and 4, B = 1 1 2 −1 and XB = B−1b and follow the same fashion as above. In general, the number of basic feasible solutions is less than or equal to n m = n! m!(n−m)! (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 19 / 107
  • 20. Example 2: Degenerate basic feasible solutions Consider the following system of inequalities x1 + x2 ≤ 6 x2 ≤ 3 x1 + 2x2 ≤ 9 x1, x2 ≥ 0 Solution: After adding the slack variables x3, x4 and x5, we get x1 + x2 + x3 = 6 x2 + x4 = 3 x1 + 2x2 + x5 = 9 x1, x2, x3, x4, x5 ≥ 0 A = [a1, a2, a3, a4, a5] =   1 1 1 0 0 0 1 0 1 0 1 2 0 0 1   (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 20 / 107
  • 21. Let us consider the basic feasible solution for B = [a1, a2, a3] XB =   x1 x2 x3   =   1 1 1 0 1 0 1 2 0   −1   6 3 9   =   0 −2 1 0 1 0 1 1 −1     6 3 9   =   3 3 0  , XN = x4 x5 = 0 0 Note that this basic feasible solution is degenerate since the basic variable x3 = 0. Now consider the basic feasible solution with B = [a1, a2, a4] XB =   x1 x2 x4   =   1 1 0 0 1 1 1 2 0   −1   6 3 9   =   2 0 −1 −1 0 1 1 1 −1     6 3 9   =   3 3 0  , XN = x3 x5 = 0 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 21 / 107
  • 22. Note that these basic feasible solution give rise to the same point obtained by B = [a1, a2, a3]. It can be also checked the other basic feasible solution with basis B = [a1, a2, a5] is give by XB =   x1 x2 x5   =   3 3 0   , XN = x3 x4 = 0 0 Note that all the three foregoing bases represent the single extreme point or basic feasible solution (x1, x2, x3, x4, x5) = (3, 3, 0, 0, 0). This basic feasible solution is degenerate since each associated basis involves a basic variable at level zero. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 22 / 107
  • 23. 4. Fundamental Theorem of Linear Programming Theorem: For an arbitrary linear programming in standard form of the following are true. i . If there is no optimal solution, then the problem is either infeasible or unbounded ii . If a feasible solution exists, then a basic feasible solution exists iii . If an optimal solution exists, then a basic optimal solution exists (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 23 / 107
  • 24. Key to the Simplex Method The key to the simplex method lies in recognizing the optimality of a given extreme point solution based on local considerations without having to (globally) enumerate all extreme points or basic feasible solutions. Consider the following linear programming problem : LP: Minimize CT X Subject to AX = b X ≥ 0 where A is an m × n matrix with rank m, suppose that we have a basic feasible solution B−1b 0 whose objective value Z0 is given by Z0 = CT B−1b 0 = (CT B , CT N ) B−1b 0 = CT B B−1 b (3.1) (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 24 / 107
  • 25. Now, let XB and XN denote the set of basic and non basic variables for the given basis and X = XB XN be an arbitrary feasible solution. Then feasibility requires that XB ≥ 0, XN ≥ 0 and that b = AX = BXB + NXN. Multiplying by B−1 and rearranging the terms, we get B−1 (b = BXB + NXN) B−1 b = XB + B−1 NXN XB = B−1 b − B−1 NXN XB = B−1 b − j∈J B−1 aj xj XB = B−1 b − j∈J yj xj XB = b − j∈J yj xj (3.2) Where b = B−1 b, yj = B−1 aj and J is the current set of the indices of the non basic variables. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 25 / 107
  • 26. Letting Z denote the objective function value. we get Z = CT X = (CT B , CT N ) XB XN = CT B XB + CT N XN = CT B (B−1b − j∈J B−1aj xj ) + j∈J cj xj = CT B (B−1b) − j∈J CT B B−1aj xj + j∈J cj xj = Z0 − j∈J zj xj + j∈J cj xj = Z0 − j∈J(zj − cj )xj (3.3) Where zj = CT B B−1aj for each non basic variable Using the foregoing transformation, the linear programming problem may be written as Minimize Z = Z0 − j∈J (zj − cj )xj subject to j∈J yj xj + XB = B−1 b xj ≥ 0, j ∈ J, and XB ≥ 0 (3.4) (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 26 / 107
  • 27. 5. Algebra of the Simplex Method Consider the representation of the linear programming LP in the non basic variable space written in equality form as in equation (3.4). If (zj − cj ) ≤ 0 for j ∈ J, then xj = 0 for j ∈ J and XB = B−1b is optimal for the LP. otherwise, while holding (p-1) non basic variable fixed at zero, the simplex method considers increasing the remaining variables, say xk. Naturally we would like zk − ck to be positive and perhaps the most positive of all the zj − cj , j ∈ J. Now fixing xj = 0 for j ∈ J − {k} we obtain from equation (3.4)that z = z0 − (zk − ck)xk (3.5) and                xB1 xB2 . . . xBr . . . xBm                =                b1 b2 . . . br . . . bm                −                y1k y2k . . . yrk . . . ymk                xk (3.6) (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 27 / 107
  • 28. If yik ≤ 0, then xBi increases as xk increase and so xBi continues to be non-negative. If yik ≥ 0 then xBi will decrease as xk increases. In order to satisfy non-negativity, xk is increase until the first point at which some basic variable xBr drops to zero. Examining equation (3.6), it is then clear that the first basic variable dropping to zero corresponds to the minimum of bi yik for positive yik. more precisely, we can increase xk until xk = br yrk = minimum1≤i≤m{ bi yik : yik > 0} (3.7) In the absence of degeneracy, br > 0 and hence xk = br yrk > 0, from equation (3.5) and the fact that zk − ck > 0, it then follows z < z0 and the objective function strictly improves. As xk increases from level 0 to br yrk , a new feasible solution obtained. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 28 / 107
  • 29. Substituting xk = br yrk in equation (3.6) gives the following point: xBi = bi − yik yrk br , i = 1, 2, ..., m xk = br yrk (3.8) and all other xj - variables are zero. From equation (3.8), xBr = 0 and hence, at most m variables are positive. The corresponding columns in A are aB1 , aB2 , ..., aBr−1 , ak, aBr+1 , ..., aBm . Note that these columns are linearly independent. since yrk = 0. (Recall that if aB1 , aB1 , ..., aBm are linearly independent, and if ak replaces aBr , then the new columns are linearly independent if and only if yrk = 0. Therefore, the point given by equation (3.8) is a basic feasible solution. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 29 / 107
  • 30. Recall that a basic variable xB that first drops to zero is called a blocking variable because it blocks the further increase of xk. Thus, xk enters the basis and xB leaves the basis. To summarize, we have algebraically described an iteration, that is, the process of transforming from one basis to an adjacent basis. This is done by increasing the value of a nonbasic variable xk with positive zk − ck and adjusting the current basic variables. In the process, the variable xB drops to zero. The variable xk hence enters the basis and xB leaves the basis. In the absence of degeneracy the objective function value strictly decreases, and hence the basic feasible solutions generated are distinct. Because there exists only a finite number of basic feasible solutions, the procedure would terminate in a finite number of steps. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 30 / 107
  • 31. Example 1. Minimize x1 + x2 subject to x1 + 2x2 ≤ 4 x2 ≤ 1 x1, x2 ≥ 0 Solution: Introducing the slack variables x3 and x4 to put the problem in a standard form. This leads to the following constraint matrix A : A = [a1, a2, a3, a4] = 1 2 1 0 0 1 0 1 Consider the basic feasible solution corresponding to B = [a1, a2]. In other words, x1 and x2 are the basic variables, while x3 and x4 are the nonbasic variables. The representation of the problem in this nonbasic variable space as in Equation (3.4) with J= 3, 4 may be obtained as follows. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 31 / 107
  • 32. First, compute: B−1 = 1 2 0 1 −1 = 1 −2 0 1 , CBB−1 = (1, 1) 1 −2 1 1 = (1, −1) Hence y3 = B−1a3 = 1 −2 0 1 1 0 = 1 0 y4 = B−1a4 = 1 −2 0 1 0 1 = −2 1 and XB = b = B−1b = 1 −2 0 1 4 1 = 2 1 Also, z0 = CBB−1b = (1, −1) 4 1 = 3 z3 − c3 = CBB−1a3 − c3 = (1, −1) 1 0 − 0 = 1 z4 − c4 = CBB−1a4 − c4 = (1, −1) 0 1 − 0 = −1 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 32 / 107
  • 33. Hence, the required representation of the problem is Minimize 3 − x3 + x4 subject to x3 − 2x4 + x1 = 2 x4 + x2 = 1 x1, x2, x3, x4 ≥ 0 Since z3 − c3 > 0, then the objective function improves by increasing x3, the modified solution is given by XB = B−1b − B−1a3x3 x1 x2 = 2 1 − 1 0 x3 The maximum value of x3 is 2 (any larger value of x3 will force x1 to be negative). Therefore the new basic feasible solution is (x1, x2, x3, x4) = (0, 1, 2, 0) Here, x3 enters the basis and x1 leaves the basis. Note that the new point has an objective value equal to 1, which is an improvement over the previous objective value of 3. the improvement is precisely (z3 − c3)x3 = 2. Remark: CB is the coefficient of the basic variable in the objective of the LP. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 33 / 107
  • 34. Termination with an optimal solution Consider the following problem, where A is an m × n matrix with rank m. Minimize CT X Subject to AX = b X ≥ 0 Suppose that X∗ is a basic feasible solution with basis B; that is X∗ = B−1b 0 Let Z∗ denote the objective value at X∗, that is Z∗ = CT B B−1b. Suppose further that Zj − Cj ≤ 0, for all non basic variables, and hence there are no non basic variables that are eligible to enter the basis. let x be any feasible solution with objective function value z, then from equation (3.3) we have Z∗ − Z = j∈J(Zj − Cj )xj (3.9) Because Zj − Cj ≤ 0 and Xj ≥ 0 for all variables, then Z∗ ≤ Z, and so X∗ is an optimal basic feasible solution, since if (Zj − Cj ) ≤ 0 for all j ∈ J, then the current basic feasible solution is optimal. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 34 / 107
  • 35. Example 2. Minimize 2x1 − x2 subject to −x1 + x2 ≤ 2 2x1 + x2 ≤ 6 x1, x2 ≥ 0 Introducing the slack variables x3 and x4. This leads to the following constraints −x1 + x2 + x3 = 2 2x1 + x2 + x4 = 6 x1, x2, x3, x4 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 35 / 107
  • 36. Consider the basic feasible solution with basis B = [a1, a2] = −1 1 2 1 andB−1 = −1/3 1/3 2/3 1/3 XB = B−1b − B−1NXN xB1 xB2 = x1 x2 = −1/3 1/3 2/3 1/3 2 6 − −1/3 1/3 2/3 1/3 1 0 0 1 x3 x4 = 4/3 10/3 − −1/3 2/3 x3 − 1/3 1/3 x4 (3.10) Currently, x3 = x4 = 0, x1 = 4/3 and x2 = 10/3, note that z3 − c3 = CBB−1a3 − c3 = (2, −1) −1/3 1/3 2/3 1/3 1 0 − 0 = −4/3 0 1 0 − 0 = −4/3 z4 − c4 = CBB−1a4 − c4 = (2, −1) −1/3 1/3 2/3 1/3 0 1 − 0 = 1/3 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 36 / 107
  • 37. Hence, the objective improves by holding x3 non- basic and introducing x4 in the basis. Then x3 is kept at zero level, x4 is increased and x1 and x2 are modified according to equation (3.10). we see that x4 can be increased to 4, at which x1 drops to zero. Any further increase of x4 results in violating the non-negativity of x1 and so x1 is the blocking variable. With x4 = 4 and x3 = 0, the modified value of x1 and x2 are 0 and 2 respectively. The new basic feasible solution is (x1, x2, x3, x4) = (0, 2, 0, 4) Note that a4 replaces a1, that is x1 drop from the basic and x4 enters the basis. The new set of basic and non basic variables and their values are given as : XB = xB4 xB2 = x4 x1 = 4 2 , XN = x3 x1 = 0 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 37 / 107
  • 38. Moving from old to the new basic feasible solution is illustrated in figure given below, note that as x4 increases by one unit x1 decrease by 1/3 unit and x2 decrease by 1/3 unit, that is we move in the direction (-1/3, -1/3) in the (x1, x2) space. This continues until we are blocked by the non negativity restriction x1 ≥ 0. At this point x1 drops to zero and leaves the basis. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 38 / 107
  • 39. The simplex Algorithm The key solution concepts The simplex method focuses on corner point feasible(CPF) solutions. The simplex method is an iterative algorithm(a systematic solution procedure that keeps repeating a fixed series of steps, called an iteration, until a desired result has been obtained ) with the following structure. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 39 / 107
  • 40. Whenever possible, the initialization of the simplex method chooses the origin point(all decision variables equal to zero) to be the initial CPF solution Given a CPF solution, it is much quicker computationally to gather information about its adjacent CPF solutions than about other CPF solutions. Therefore, each time the simplex method performs an iteration to move from the current CPF solution to a better one, it always chooses a CPF solution that is adjacent to the current one. After the current CPF solution is identified, the simplex method examines each of the edges of the feasible region that emanates from this CPF solution. Each of these edges leads to an adjacent solution at the other end, but the simplex method doesn’t even take the time to solve for adjacent CPF solutions, instead, it simply identifies the rate of improvement in Z that would be obtained by moving along the edge. and then chooses to move along the one with largest positive rate of improvement. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 40 / 107
  • 41. A positive rate of improvement in Z implies that the adjacent CPF solution is better than the current one, whereas a negative rate of improvement in Z implies that the adjacent CPF solution is worse. Therefore, the optimality test consists simply checking whether any of the edges give a positive rate of improvement in Z. if none do, then the current CPF solution is optimal. The Simplex Method in Tabular Form Steps Initialization: 1 Convert (Transform) all the constraints to equality by introducing slack, surplus, and artificial variables as follows Constraint type Variable to be added ≤ +slack(s) ≥ -surplus(s)+artificial(A) = + artificial(A) (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 41 / 107
  • 42. The artificial variable refers to the kind of variable which is introduced in the linear program model to obtain the initial basic feasible solution. It is utilized for the equality constraints and for the greater than or equal inequality constraints. 1 Construct the initial simplex tableau Cj c1 . . . cn 0 . . . 0 0 . . . 0 CB BV X1 . . . Xn S1 . . . Sn A1 . . . An RHS 0 S b1 . . . . . . . . . 0 A bm Z Z val Zj − Cj 2 Test for optimality : Case 1: In maximization problem the current bfs is optimal if every element in the last row of the simplex tableau is nonnegative Case 2 : In minimization problem the current bfs is optimal if every element in the last row of the simplex tableau is non positive (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 42 / 107
  • 43. Iteration : Step 1 : Determine the entering basic variable by selecting the variable (automatically a non basic variable ) with the most negative value (in case of maximization)or the most positive(in case of minimization) in the last row (Z-row). Put a box around the column below this variable, and call it the ” pivot column”. Step 2 : Determine the leaving basic variable by applying the minimum ratio test as following: 1 Pick out each coefficients in the pivot column that is strictly positive 2 Divide each of these coefficients into the right hand side entry for the same row 3 Identify the row that has the smallest of these ratios 4 The basic variable for that row is the leaving variable, so replace that variable by the entering variable in the basic variable column of the next simplex tableau. Put a box around this row and call it the ”pivot row”. Step 3: Solve for the new basic feasible solution by using elementary row operation(multiply or divide a row by a nonzero constant, add or subtract a multiple of one row another row)to construct a new simplex tableau, and then return the optimality test. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 43 / 107
  • 44. Example 1. Solve the following problem using the simplex method Maximize z = 3x1 + 5x2 Subject to x1 ≤ 4 2x2 ≤ 12 3x1 + 2x2 ≤ 18 x1, x2 ≥ 0 Solution: 1. Write the standard form of the above linear programming problem. Maximize z = 3x1 + 5x2 Subject to x1 + s1 = 4 2x2 + s2 = 12 3x1 + 2x2 + s3 = 18 x1, x2, s1, s2, s3 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 44 / 107
  • 45. 2. Initial tableau C 3 5 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s1 1 0 1 0 0 4 0 s2 0 2 0 1 0 12 0 s3 3 2 0 0 1 18 Zj − Cj -3 -5 0 0 0 The basic feasible solution at the initial tableau is (0,0,4,12,18) where: x1 = 0, x2 = 0, s1 = 4, s2 = 12, s3 = 18 and z = 0. Where s1, s2, s3 are basic variable and x1 and x2 are non-basic variables. The solution at the initial tableau is associated to the origin point at which all the decision variables are zero. Optimality test: By investigating the last row of the initial tableau, we find that there are some negative numbers. Therefore, the current solution is not optimal. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 45 / 107
  • 46. Iteration Step 1: Determine the entering variable by selecting the variable with the most negative in the last row. From the initial tableau, in the last row ((Zj − Cj ) row), the coefficient of X1 is -3 and the coefficient of X2 is -5; therefore, the most negative is -5. consequently, X2 is the entering variable. X2 is surrounded by a box and it is called the pivot column. Step 2: Determining the leaving variable by using the minimum ratio test as follows: C 3 5 0 0 0 CB XB x1 x2(Entering) s1 s2 s3 b(RHS) 0 s1 1 0 1 0 0 4 0 s2(Leaving) 0 2 0 1 0 12 0 s3 3 2 0 0 1 18 Zj − Cj -3 -5 0 0 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 46 / 107
  • 47. Step 3: Solve for the new BF solution by using elementary row operations as follows: C 3 5 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s1 1 0 1 0 0 4 5 x2 0 1 0 1/2 0 6 0 s3 3 0 0 -1 1 6 Zj − Cj -3 0 0 5/2 0 This solution is not optimal, since there is a negative number in the last row. Applying the same rules, we obtain the following solution: C 3 5 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s1 0 0 1 1/3 -1/3 2 5 x2 0 1 0 1/2 0 6 3 x1 1 0 0 -1/3 1/3 2 Zj − Cj 0 0 0 3/2 1 36 This solution is optimal, since there is no negative entry in the last row: the basic variables are x1 = 2, x2 = 6 and s1 = 2; the nonbasic variables are s2 = s3 = 0; and the optimal value is Z = 36. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 47 / 107
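The result of Example 1 can be cross-checked with an off-the-shelf solver; a brief sketch using scipy.optimize.linprog (which minimizes, so the objective is negated) with the data above:

```python
from scipy.optimize import linprog

# Example 1: maximize 3x1 + 5x2  ->  minimize -3x1 - 5x2
c = [-3, -5]
A_ub = [[1, 0],     #  x1        <= 4
        [0, 2],     #      2x2   <= 12
        [3, 2]]     # 3x1 + 2x2  <= 18
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # expected: x1 = 2, x2 = 6, Z = 36
```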
  • 48. Example 2. Minimize x1 + x2 − 4x3 subject to x1 + x2 + 2x3 ≤ 9 x1 + x2 − x3 ≤ 2 −x1 + x2 + x3 ≤ 4 x1, x2, x3 ≥ 0 Solution : Introducing the non- negative slack variables s1, s2 and s3. The problem becomes the following : Minimize x1 + x2 − 4x3 subject to x1 + x2 + 2x3 + s1 = 9 x1 + x2 − x3 + s2 = 2 −x1 + x2 + x3 + s3 = 4 x1, x2, x3, s1, s2, s3 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 48 / 107
  • 49. Since b ≥ 0, then we can choose our initial basis as B = [a4, a5, a6] = I3, and we indeed have B−1b = b ≥ 0. This gives the following initial tableau: Iteration 1 C 1 1 -4 0 0 0 0 CB XB x1 x2 x3 s1 s2 s3 b(RHS) 0 s1 1 1 2 1 0 0 9 0 s2 1 1 -1 0 1 0 2 0 s3 -1 1 1 0 0 1 4 Zj − Cj -1 -1 4 0 0 0 The initial basic feasible solution in the above table is x1 = 0, x2 = 0, x3 = 0, but it is not optimal, since there is positive element the last row of the simplex table (Zj − Cj ). Identify the entering and leaving variable (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 49 / 107
  • 50. Iteration 2 C 1 1 -4 0 0 0 0 CB XB x1 x2 x3 s1 s2 s3 b(RHS) 0 s1 1 1 2 1 0 0 9 0 s2 1 1 -1 0 1 0 2 0 s3 -1 1 1 0 0 1 4 Zj − Cj -1 -1 4 0 0 0 From the above simplex table, s3 is the leaving variable and x3 is the entering variable. C 1 1 -4 0 0 0 0 CB XB x1 x2 x3 s1 s2 s3 b(RHS) 0 s1 3 -1 0 1 0 -2 1 0 s2 0 2 0 0 1 1 6 -4 x3 -1 1 1 0 0 1 4 Zj − Cj 3 -5 0 0 0 -4 The basic feasible solution in the above table is x1 = 0, x2 = 0, x3 = 4, but it is not optimal, since there is a positive element in the last row of the simplex table (Zj − Cj ). (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 50 / 107
  • 51. Iteration 3 C 1 1 -4 0 0 0 0 CB XB x1 x2 x3 s1 s2 s3 b(RHS) 0 s1 3 -1 0 1 0 -2 1 0 s2 0 2 0 0 1 1 6 -4 x3 -1 1 1 0 0 1 4 Zj − Cj 3 -5 0 0 0 -4 From the above simplex table s1 is the leaving variable and x1 is the entering variable. C 1 1 -4 0 0 0 CB XB x1 x2 x3 s1 s2 s3 b(RHS) 1 x1 1 -1/3 0 1/3 0 -2/3 1/3 0 s2 0 2 0 0 1 1 6 -4 x3 0 2/3 1 1/3 0 1/3 13/3 Zj − Cj 0 -4 0 -1 0 -2 This is an optimal tableau since Zj − Cj ≤ 0 in the last row. The optimal solution is given by x1 = 1/3, x2 = 0, x3 = 13/3, with z = -17. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 51 / 107
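As with Example 1, this minimization problem can be checked against a solver; a short sketch with scipy.optimize.linprog, used directly in minimization form:

```python
from scipy.optimize import linprog

# Example 2: minimize x1 + x2 - 4x3
c = [1, 1, -4]
A_ub = [[1, 1, 2],      #  x1 + x2 + 2x3 <= 9
        [1, 1, -1],     #  x1 + x2 -  x3 <= 2
        [-1, 1, 1]]     # -x1 + x2 +  x3 <= 4
b_ub = [9, 2, 4]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)   # expected: x = (1/3, 0, 13/3), objective = -17
```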
  • 52. Degeneracy and Finiteness of Simplex Algorithm Degeneracy: a basic feasible solution is degenerate when one or more of its basic variables take the value zero, i.e. 1. exactly one component of the basic feasible solution corresponding to a basic variable is zero, or 2. at least two components of the basic feasible solution corresponding to the basic variables are zero. Example 1: Maximize 5x1 + 3x2 Subject to x1 + x2 ≤ 2 5x1 + 2x2 ≤ 10 3x1 + 8x2 ≤ 12 x1, x2 ≥ 0 Solution: The standard linear programming problem for the above problem is Maximize 5x1 + 3x2 Subject to x1 + x2 + s1 = 2 5x1 + 2x2 + s2 = 10 3x1 + 8x2 + s3 = 12 x1, x2, s1, s2, s3 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 52 / 107
  • 53. Now we can construct the simplex tableau C 5 3 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s1 1 1 1 0 0 2 ← 0 s2 5 2 0 1 0 10 0 s3 3 8 0 0 1 12 Zj − Cj ↑-5 -3 0 0 0 The initial basic feasible solution of the LP problem is x1 = 0, x2 = 0, but it can not be an optimal solution, since there is a negative element in the last row of the simplex table. from the above table we can identify the entering and leaving variables, thus x1 is the entering variable and s1 is the leaving variable. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 53 / 107
  • 54. By applying elementary row operations we can get the following simplex tableau C 5 3 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 5 x1 1 1 1 0 0 2 0 s2 0 -3 -5 1 0 0 0 s3 0 5 -3 0 1 6 Zj − Cj 0 2 5 0 0 Thus the optimal solution is (x1, x2, s1, s2, s3) = (2, 0, 0, 0, 6) and the optimal value is 10. Since the basic variable s2 takes the value zero, this is a degenerate case of the simplex algorithm. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 54 / 107
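Degeneracy is easy to detect mechanically: at least one basic variable equals zero in the RHS column of the tableau. A small sketch (the tolerance is an assumption for floating-point work):

```python
import numpy as np

def is_degenerate(basic_values, tol=1e-9):
    """A basic feasible solution is degenerate if any basic variable is (near) zero."""
    return bool(np.any(np.abs(np.asarray(basic_values, dtype=float)) < tol))

# Final tableau of the example above: basic variables x1 = 2, s2 = 0, s3 = 6
print(is_degenerate([2, 0, 6]))   # True -> degenerate optimum
```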
  • 55. Infinite number of solutions The alternative optimal solution can be obtained by considering the zj − cj row of the simplex table. That is, zj − cj = 0 for some non-basic variable columns in the optimal simplex table. Example Maximize 10x1 + 20x2 subject to x1 ≤ 10 x2 ≤ 6 2x1 + 4x2 ≤ 36 x1, x2 ≥ 0 Solution: The standard linear programming problem of the above problem is Maximize 10x1 + 20x2 subject to x1 + s1 = 10 x2 + s2 = 6 2x1 + 4x2 + s3 = 36 x1, x2, s1, s2, s3 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 55 / 107
  • 56. Now construct the simplex tableau. C 10 20 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s1 1 0 1 0 0 10 0 s2 0 1 0 1 0 6 ← 0 s3 2 4 0 0 1 36 Zj − Cj -10 ↑ -20 0 0 0 C 10 20 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s1 1 0 1 0 0 10 20 x2 0 1 0 1 0 6 0 s3 2 0 0 -4 1 12← Zj − Cj ↑ -10 0 0 20 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 56 / 107
  • 57. C 10 20 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s1 0 0 1 2 -1/2 4 20 x2 0 1 0 1 0 6 10 x1 1 0 0 -2 1/2 6 Zj − Cj 0 0 0 0 5 180 We have basic feasible solution (x1, x2) = (6, 6) with optimal value z = 180. We apply a new simplex step with Zj − Cj = 0 for a non- basic variable. C 10 20 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s1 0 0 1 2 -1/2 4 20 x2 0 1 0 1 0 6 10 x1 1 0 0 -2 1/2 6 Zj − Cj 0 0 0 0 5 180 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 57 / 107
  • 58. C 10 20 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 0 s2 0 0 1/2 1 -1/4 2 20 x2 0 1 -1/2 0 1/4 4 10 x1 1 0 1 0 0 10 Zj − Cj 0 0 0 0 5 180 Now we have another basic solution (x1, x2) = (10, 4), but the optimal value remains 180. In our case, since (x1, x2) = (6, 6) and (x1, x2) = (10, 4) are both optimal solutions, all points of the segment (x1, x2) = λ(6, 6) + (1 − λ)(10, 4), λ ∈ [0, 1], are also optimal solutions. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 58 / 107
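The claim that every point of this segment is optimal can be spot-checked numerically; a brief sketch:

```python
import numpy as np

c = np.array([10, 20])                         # objective coefficients
v1, v2 = np.array([6, 6]), np.array([10, 4])   # the two optimal vertices found above

for lam in np.linspace(0, 1, 5):
    x = lam * v1 + (1 - lam) * v2              # convex combination of the two vertices
    print(x, c @ x)                            # objective stays at 180 along the segment
```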
  • 59. Finding a Starting Basic Feasible Solution In certain cases, it is difficult to obtain an initial basic feasible solution of the given LP problem. Such cases arise 1 when the constraints are of the ≤ type, ∑_{j=1}^{n} aij xj ≤ bi , xj ≥ 0, and the value of one or more right-hand-side constants is negative [i.e. bi < 0]. After adding the non-negative slack variable si (i = 1, 2, . . ., m), the initial solution so obtained will be si = −bi for a particular resource i. This solution is not feasible because it does not satisfy the non-negativity conditions of the slack variables (i.e. si ≥ 0). 2 when the constraints are of the ≥ type, ∑_{j=1}^{n} aij xj ≥ bi , xj ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 59 / 107
  • 60. After adding the surplus (negative slack) variable si , ∑_{j=1}^{n} aij xj − si = bi , xj ≥ 0, si ≥ 0, the initial solution so obtained will be −si = bi , or si = −bi . This solution is not feasible because it does not satisfy the non-negativity conditions of the surplus variables (i.e. si ≥ 0). In such a case, artificial variables Ai (i = 1, 2, . . ., m) are added to get an initial basic feasible solution. The resulting system of equations then becomes: ∑_{j=1}^{n} aij xj − si + Ai = bi , xj , si , Ai ≥ 0, i = 1, 2, ..., m (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 60 / 107
  • 61. These are m simultaneous equations with (n + m + m) variables (n decision variables, m artificial variables and m surplus variables). An initial basic feasible solution of LP problem with such constraints can be obtained by equating (n + 2m - m) = (n + m) variables equal to zero. Thus the new solution to the given LP problem is: Ai = bi (i = 1, 2, . . . , m), which is not the solution to the original system of equations because the two systems of equations are not equivalent. Thus, to get back to the original problem, artificial variables must be removed from the optimal solution. There are two methods for removing artificial variables from the solution. Two-Phase Method Big-M Method or Method of Penalties Remark Before the optimal solution is reached, all artificial variables must be dropped out from the solution mix. This is done by assigning appropriate coefficients to these variables in the objective function. These variables are added to those constraints with equality (=) and greater than or equal to (≥) sign. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 61 / 107
  • 62. Two -phase method In the first phase of this method, the sum of the artificial variables is minimized subject to the given constraints in order to get a basic feasible solution of the LP problem. The second phase minimizes the original objective function starting with the basic feasible solution obtained at the end of the first phase. Since the solution of the LP problem is completed in two phases, this method is called the two-phase method. Advantages of the method No assumptions on the original system of constraints are made, i.e. the system may be redundant, inconsistent or not solvable in non-negative numbers. It is easy to obtain an initial basic feasible solution for Phase I. The basic feasible solution (if it exists) obtained at the end of phase I is used as initial solution for Phase II (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 62 / 107
  • 63. Steps of the Algorithm: Phase I Step 1: If all the constraints in the given LP problem are of the less-than-or-equal-to (≤) type, then Phase II can be used directly to solve the problem. Otherwise, the necessary number of surplus and artificial variables are added to convert the constraints into equality constraints. Step 2: Assign a zero coefficient to each of the decision variables (xj ) and to the surplus variables, and assign a −1 coefficient to each of the artificial variables. This yields the following auxiliary LP problem. Maximize Z∗ = ∑_{i=1}^{m} (−1)Ai subject to the constraints ∑_{j=1}^{n} aij xj + Ai = bi , i = 1, 2, ..., m and xj , Ai ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 63 / 107
  • 64. Step 3: Apply the simplex algorithm to solve this auxiliary LP problem. The following three cases may arise at optimality. (i) Max Z∗ < 0 and at least one artificial variable is present in the basis with a positive value. This means that no feasible solution exists for the original LP problem. (ii) Max Z∗ = 0 and no artificial variable is present in the basis. This means that only decision variables (xj ’s) are present in the basis; hence proceed to Phase II to obtain an optimal basic feasible solution of the original LP problem. (iii) Max Z∗ = 0 and at least one artificial variable is present in the basis at zero value. This means that a feasible solution to the auxiliary LP problem is also a feasible solution to the original LP problem. In order to arrive at the basic feasible solution, proceed directly to Phase II or else eliminate the artificial basic variable and then proceed to Phase II. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 64 / 107
  • 65. Remark Once an artificial variable has left the basis, it has served its purpose and can, therefore, be removed from the simplex table. An artificial variable is never considered for re-entry into the basis. Phase II Assign actual coefficients to the variables in the objective function and zero coefficient to the artificial variables which appear at zero value in the basis at the end of Phase I. The last simplex table of Phase I can be used as the initial simplex table for Phase II. Then apply the usual simplex algorithm to the modified simplex table in order to get the optimal solution to the original problem. Artificial variables that do not appear in the basis may be removed. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 65 / 107
  • 66. Remark 1 No artificial vector is present in the basis, which indicates that all artificial variables are at zero level at the optimal stage; thus the solution obtained is a basic feasible solution. 2 Some artificial vectors are present in the basis at a positive level at the optimal stage. In that case there is no feasible solution to the problem. 3 All artificial variables are at zero level but at least one artificial vector is present in the basis at the optimal stage. Here the solution under test is an optimal solution; the converted equations are consistent, but some of the constraints may be redundant. Redundancy means that the system has more constraints than necessary. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 66 / 107
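The two phases can also be illustrated with a small computational sketch. The helper below is an illustration of the idea only: it assumes the problem is already in the form min c·x subject to A x = b with b ≥ 0, and it calls scipy.optimize.linprog for each phase instead of carrying the Phase I basis into Phase II as the hand method does.

```python
import numpy as np
from scipy.optimize import linprog

def two_phase(c, A, b):
    """Illustrative two-phase solve of  min c.x  s.t.  A x = b, x >= 0  (b >= 0 assumed)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape

    # Phase I: minimize the sum of artificial variables over [x, A_art] with A x + I A_art = b.
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    A1 = np.hstack([A, np.eye(m)])
    phase1 = linprog(c1, A_eq=A1, b_eq=b)
    if phase1.status != 0:
        raise RuntimeError("Phase I did not solve cleanly: " + phase1.message)
    if phase1.fun > 1e-8:
        raise ValueError("Phase I optimum is positive: the original LP has no feasible solution")

    # Phase II: re-optimize the original objective over the original constraints.
    return linprog(c, A_eq=A, b_eq=b)
```

For the example that follows (converted to equality form, with the objective negated because it is a maximization), Phase I drives A1 to zero and Phase II then yields Max Z = 6 at x1 = 2, x2 = 0, matching the tableaus below.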
  • 67. Example 1. Solve the following linear programming problem by using the two-phase method. Maximize Z = 3x1 − x2 subject to 2x1 + x2 ≥ 2 x1 + 3x2 ≤ 2 x2 ≤ 4 x1, x2 ≥ 0 Solution: Phase I. Convert the above problem to the standard form of an LPP. Maximize Z = 3x1 − x2 (Phase I objective: Maximize Z∗ = −A1) subject to 2x1 + x2 − s1 + A1 = 2 x1 + 3x2 + s2 = 2 x2 + s3 = 4 x1, x2, s1, s2, s3, A1 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 67 / 107
  • 68. Now, we maximize Z∗ = −A1 until it reaches zero in order to remove the artificial variable A1 from the basis. Maximize Z∗ = 0x1 + 0x2 + 0s1 + 0s2 + 0s3 − 1A1 subject to 2x1 + x2 − s1 + A1 = 2 x1 + 3x2 + s2 = 2 x2 + s3 = 4 x1, x2, s1, s2, s3, A1 ≥ 0 Cj 0 0 0 0 0 -1 CB XB x1 x2 s1 s2 s3 A1 b(RHS) Min ratio bi /yik -1 A1 2 1 -1 0 0 1 2 1 ← 0 s2 1 3 0 1 0 0 2 2 0 s3 0 1 0 0 1 0 4 - Zj − Cj ↑-2 -1 1 0 0 0 Z∗ = −2 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 68 / 107
  • 69. Cj 0 0 0 0 0 -1 CB XB x1 x2 s1 s2 s3 A1 b(RHS) 0 x1 1 1/2 -1/2 0 0 × 1 0 s2 0 5/2 1/2 1 0 × 1 0 s3 0 1 0 0 1 × 4 Zj − Cj 0 0 0 0 0 × Z∗ = 0 Since all Zj − Cj = 0 and no artificial vector appears in the the basis, we proceed to phase II. Phase II Cj 3 -1 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 3 x1 1 1/2 -1/2 0 0 1 0 s2 0 5/2 1/2 1 0 1 ← 0 s3 0 1 0 0 1 4 Zj − Cj 0 5/2 ↑ -3/2 0 0 Z = 3 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 69 / 107
  • 70. Cj 3 -1 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 3 x1 1 3 0 1 0 2 0 s1 0 5 1 2 0 2 0 s3 0 1 0 0 1 4 Zj − Cj 0 10 0 3 0 Z = 6 Since all Zj − Cj ≥ 0, optimal basic feasible solution is obtained, therefore the solution is Max Z = 6, x1 = 2, x2 = 0. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 70 / 107
  • 71. Example 2. Maximize Z = 5x1 + 8x2 subject to 3x1 + 2x2 ≥ 3 x1 + 4x2 ≥ 4 x1 + x2 ≤ 5 x1, x2 ≥ 0 Solution: The standard linear programming problem Maximize Z = 5x1 + 8x2 subject to 3x1 + 2x2 − s1 + A1 = 3 x1 + 4x2 − s2 + A2 = 4 x1 + x2 + s3 = 5 x1, x2, s1, s2, s3, A1, A2 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 71 / 107
  • 72. Auxiliary LPP Maximize Z = 0x1 + 0x2 + 0s1 + 0s2 + 0s3 − A1 − A2 subject to 3x1 + 2x2 − s1 + A1 = 3 x1 + 4x2 − s2 + A2 = 4 x1 + x2 + s3 = 5 x1, x2, s1, s2, s3, A1, A2 ≥ 0 Phase I Cj 0 0 0 0 0 -1 -1 CB XB x1 x2 s1 s2 s3 A1 A2 b(RHS) min ratio bi -1 A1 3 2 -1 0 0 1 0 3 3/2 -1 A2 1 4 0 -1 0 0 1 4 1 → 0 s3 1 1 0 0 1 0 0 5 5 Zj − Cj -4 ↑ −6 1 1 0 0 0 Z∗ = −7 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 72 / 107
  • 73. Cj 0 0 0 0 0 -1 -1 CB XB x1 x2 s1 s2 s3 A1 A2 b(RHS) min ratio -1 A1 5/2 0 -1 1/2 0 1 × 1 2/5 0 x2 1/4 1 0 -1/4 0 0 × 1 4 0 s3 3/4 0 0 1/4 1 0 × 4 16/3 Zj − Cj ↑ −5/2 0 1 -1/2 0 0 × Z∗ = −1 Cj 0 0 0 0 0 -1 -1 CB XB x1 x2 s1 s2 s3 A1 A2 b(RHS) 0 x1 1 0 -2/5 1/5 0 × × 2/5 0 x2 0 1 1/10 -3/10 0 × × 9/10 0 s3 0 0 3/10 1/10 1 × × 37/10 Zj − Cj 0 0 0 0 0 × × Z∗ = 0 Since all Zj − Cj ≥ 0, Max Z∗ = 0, and no artificial vector appears in the basis, we proceed to Phase II. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 73 / 107
  • 74. Phase II Cj 5 8 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) min ratio bi /yik 5 x1 1 0 -2/5 1/5 0 2/5 2 → 8 x2 0 1 1/10 -3/10 0 9/10 1 0 s3 0 0 3/10 1/10 1 37/10 37 Zj − Cj 0 0 -6/5 ↑ −7/5 0 Z = 46/5 Cj 5 8 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) min ratio bi /yik 0 s2 5 0 -2 1 0 2 - 8 x2 3/2 1 -1/2 0 0 3/2 - 0 s3 -1/2 0 1/2 0 1 7/2 7→ Zj − Cj 7 0 ↑ −4 0 0 Z = 12 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 74 / 107
  • 75. Cj 5 8 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) min ratio bi /yik 0 s2 3 0 0 1 2 16 8 x2 1 1 0 0 1/2 5 0 s1 -1 0 1 0 2 7 Zj − Cj 3 0 0 0 4 Z = 40 Since all Zj − Cj ≥ 0, optimal basic feasible solution is obtained. Therefore the solution is Max Z = 40, x1 = 0, x2 = 5. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 75 / 107
  • 76. Big - M method The Big-M method is another method of removing artificial variables from the basis. In this method, the artificial variables are assigned large, undesirable (penalty) coefficients in the objective function. If the objective function Z is to be minimized, then a very large positive price (called a penalty) is assigned to each artificial variable. Similarly, if Z is to be maximized, then a very large negative price (also called a penalty) is assigned to each of these variables. The penalty is designated by −M for a maximization problem and +M for a minimization problem, where M > 0. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 76 / 107
  • 77. The Big-M method for solving an LPP can be summarized in the following steps: Step 1: Express the LP problem in the standard form by adding slack variables, surplus variables and/or artificial variables. Assign a zero coefficient to both slack and surplus variables. Then assign a very large coefficient + M (minimization case) and - M (maximization case) to artificial variable in the objective function. Step 2: The initial basic feasible solution is obtained by assigning zero value to decision variables, x1, x2, ..., etc. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 77 / 107
  • 78. Step 3: Calculate the values of zj − cj in last row of the simplex table and examine these values. 1 If all zj − cj ≤ 0, then the current basic feasible solution is optimal. 2 If for a column, k, zk − ck is most positive and all entries in this column are negative, then the problem has an unbounded optimal solution. 3 If one or more zj − cj > 0 (minimization case), then select the variable to enter into the basis (solution mix) with the largest positive zj − cj value (largest per unit increase in the objective function value). This value also represents the opportunity cost of not having one unit of the variable in the solution. That is, zk − ck = Max{zj − cj : zj − cj > 0} Step 4: Determine the key row and key element in the same manner as discussed in the simplex algorithm. Step 5: Continue with the procedure to update solution at each iteration till optimal solution is obtained. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 78 / 107
  • 79. Remarks At any iteration of the simplex algorithm one of the following cases may arise: 1 If at least one artificial variable is a basic variable (i.e., a variable that is present in the basis) with zero value and the coefficient of M in each zj − cj (j = 1, 2, . . ., n) value is non-positive, then the given LP problem has no solution. That is, the current basic feasible solution is degenerate. 2 If at least one artificial variable is present in the basis with a positive value and the coefficient of M in each zj − cj (j = 1, 2, . . ., n) value is non-positive, then the given LP problem has no optimum basic feasible solution. In this case, the given LP problem has a pseudo-optimum basic feasible solution. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 79 / 107
  • 80. Example 1. Solve the following linear programming problem by using Big-M method. Maximize Z = −2x1 − x2 subject to 3x1 + x2 = 3 4x1 + 3x2 ≥ 6 x1 + 2x2 ≤ 4 x1, x2 ≥ 0 Solution : The standard linear programming problem of the above LP Maximize Z = −2x1 − x2 + 0s1 + 0s2 − MA1 − MA2 subject to 3x1 + x2 + A1 = 3 4x1 + 3x2 − s1 + A2 = 6 x1 + 2x2 + s2 = 4 x1, x2, s1, s2, A1, A2 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 80 / 107
  • 81. Cj -2 -1 0 0 -M -M CB XB x1 x2 s1 s2 A1 A2 b(RHS) min ratio -M A1 3 1 0 0 1 0 3 3/3 = 1 → -M A2 4 3 -1 0 0 1 6 6/4 = 3/2 0 s2 1 2 0 1 0 0 4 4/1 = 4 Zj − Cj ↑ 2 − 7M 1-4M M 0 0 0 Z = -9M Cj -2 -1 0 0 -M -M CB XB x1 x2 s1 s2 A1 A2 b(RHS) min ratio -2 x1 1 1/3 0 0 × 0 1 1/1/3 = 3 -M A2 0 5/3 -1 0 × 1 2 2/5/3 = 6/5 → 0 s2 0 5/3 0 1 × 0 3 3/5/3 = 9/5 Zj − Cj 0 ↑ −5M+1 3 M 0 × 0 Z = -2-2m (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 81 / 107
  • 82. Cj -2 -1 0 0 -M -M CB XB x1 x2 s1 s2 A1 A2 b(RHS) min ratio -2 x1 1 0 1/5 0 × × 3/5 -1 x2 0 1 -3/5 0 × × 6/5 0 s2 0 0 1 1 × × 1 Zj − Cj 0 0 1/5 0 × × Z = -12/5 Since all Zj − Cj ≥ 0, an optimal basic feasible solution is obtained. Therefore the solution is Max Z = −12/5, x1 = 3/5, x2 = 6/5. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 82 / 107
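The same penalty formulation can be handed directly to a solver with an explicit numerical M; the sketch below (the value of M is an illustrative choice) reproduces the result of Example 1:

```python
from scipy.optimize import linprog

# Big-M illustration of Example 1 above:
#   maximize  -2x1 - x2
#   s.t.  3x1 + x2 = 3,   4x1 + 3x2 >= 6,   x1 + 2x2 <= 4
M = 1e6   # large penalty (illustrative choice)

# Variables: [x1, x2, s1, s2, A1, A2]  (surplus s1, slack s2, artificials A1, A2)
c = [2, 1, 0, 0, M, M]          # minimize 2x1 + x2 + M(A1 + A2)
A_eq = [[3, 1,  0, 0, 1, 0],    # 3x1 +  x2            + A1      = 3
        [4, 3, -1, 0, 0, 1],    # 4x1 + 3x2 - s1            + A2 = 6
        [1, 2,  0, 1, 0, 0]]    #  x1 + 2x2      + s2            = 4
b_eq = [3, 6, 4]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)
x1, x2 = res.x[:2]
print(x1, x2, -(2 * x1 + x2))   # expected: x1 = 3/5, x2 = 6/5, Z = -12/5
```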
  • 83. Example 2. Maximize Z = 3x1 − x2 Subject to 2x1 + x2 ≥ 2 x1 + 3x2 ≤ 3 x2 ≤ 4 x1, x2 ≥ 0 Solution: The standard linear programming problem Maximize Z = 3x1 − x2 + 0s1 + 0s2 + 0s3 − MA1 Subject to 2x1 + x2 − s1 + A1 = 2 x1 + 3x2 + s2 = 3 x2 + s3 = 4 x1, x2, s1, s2, s3, A1 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 83 / 107
  • 84. Cj 3 -1 0 0 0 -M CB XB x1 x2 s1 s2 s3 A1 b(RHS) min ratio -M A1 2 1 -1 0 0 1 2 2/2 = 1 → 0 s2 1 3 0 1 0 0 3 3/1 = 3 0 s3 0 1 0 0 1 0 4 - Zj − Cj ↑ −2M − 3 -M + 1 M 0 0 0 Z = -2M Cj 3 -1 0 0 0 -M CB XB x1 x2 s1 s2 s3 A1 b(RHS) min ratio bi /yik 3 x1 1 1/2 -1/2 0 0 × 1 - 0 s2 0 5/2 1/2 1 0 × 2 2/1/2 = 4 → 0 s3 0 1 0 0 1 × 4 - Zj − Cj 0 5/2 ↑ −3/2 0 0 × Z = 3 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 84 / 107
  • 85. Cj 3 -1 0 0 0 -M CB XB x1 x2 s1 s2 s3 A1 b(RHS) min ratio bi /yik 3 x1 1 3 0 1 0 × 3 0 s1 0 5 1 2 0 × 4 0 s3 0 1 0 0 1 × 4 Zj − Cj 0 10 0 3 0 × Z = 9 Since all Zj − Cj ≥ 0, an optimal basic feasible solution is obtained. Therefore, the solution is Max Z = 9, x1 = 3, x2 = 0. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 85 / 107
  • 86. Example 3 Maximize Z = 3x1 + 2x2 + x3 subject to 2x1 + x2 + x3 = 12 3x1 + 4x2 = 11 and x1 is unrestricted, x2, x3 ≥ 0 Solution: Writing x1 = x1′ − x1″ with x1′, x1″ ≥ 0, the standard linear programming problem is Maximize Z = 3(x1′ − x1″) + 2x2 + x3 − MA1 − MA2 subject to 2(x1′ − x1″) + x2 + x3 + A1 = 12 3(x1′ − x1″) + 4x2 + A2 = 11 x1′, x1″, x2, x3, A1, A2 ≥ 0, that is, Maximize Z = 3x1′ − 3x1″ + 2x2 + x3 − MA1 − MA2 subject to 2x1′ − 2x1″ + x2 + x3 + A1 = 12 3x1′ − 3x1″ + 4x2 + A2 = 11 x1′, x1″, x2, x3, A1, A2 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 86 / 107
  • 87. Cj 3 -3 2 1 -M -M CB XB x1′ x1″ x2 x3 A1 A2 b(RHS) -M A1 2 -2 1 1 1 0 12 -M A2 3 -3 4 0 0 1 11 Zj − Cj ↑ −5M − 3 5M + 3 −5M − 2 −M − 1 0 0 Z = −23M Cj 3 -3 2 1 -M -M CB XB x1′ x1″ x2 x3 A1 A2 b(RHS) min ratio -M A1 0 0 -5/3 1 1 × 14/3 14/3 3 x1′ 1 -1 4/3 0 0 × 11/3 - Zj − Cj 0 0 (5/3)M + 2 ↑ −M − 1 0 × Z = −14M/3 + 11 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 87 / 107
  • 88. Cj → 3 -3 2 1 -M -M CB ↓ XB ↓ x1′ x1″ x2 x3 A1 A2 b(RHS) min ratio 1 x3 0 0 -5/3 1 × × 14/3 3 x1′ 1 -1 4/3 0 × × 11/3 - Zj − Cj → 0 0 1/3 0 × × Z = 47/3 Since all Zj − Cj ≥ 0, an optimal basic feasible solution is obtained with x1′ = 11/3 and x1″ = 0, so x1 = x1′ − x1″ = 11/3 − 0 = 11/3. Therefore the solution is Max Z = 47/3, x1 = 11/3, x2 = 0, x3 = 14/3. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 88 / 107
  • 89. Some Complications and Their Resolution Unrestricted Variables Let variable xr be unrestricted in sign. We define two new variables, say xr′ and xr″, such that xr = xr′ − xr″, where xr′, xr″ ≥ 0. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 89 / 107
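This substitution can be sketched in code for Example 3 above; the splitting of x1 into x1′ − x1″ is written out explicitly (the variable ordering is an assumption of the sketch), and the unrestricted value is recovered at the end. Note that scipy.optimize.linprog's default bounds are (0, None), which is exactly the non-negativity required of the split variables.

```python
from scipy.optimize import linprog

# Example 3 above:  maximize 3x1 + 2x2 + x3,  x1 unrestricted, x2, x3 >= 0
#   2x1 + x2 + x3 = 12,   3x1 + 4x2 = 11
# Split x1 = x1p - x1n with x1p, x1n >= 0; variables are [x1p, x1n, x2, x3].
c = [-3, 3, -2, -1]                      # negate to maximize via minimization
A_eq = [[2, -2, 1, 1],
        [3, -3, 4, 0]]
b_eq = [12, 11]

res = linprog(c, A_eq=A_eq, b_eq=b_eq)
x1 = res.x[0] - res.x[1]                 # recover the original unrestricted x1
print(x1, res.x[2], res.x[3], -res.fun)  # expected: 11/3, 0, 14/3, Z = 47/3
```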
  • 90. Tie for Entering Basic Variable (Key Column) While solving an LP problem using simplex method two or more columns of simplex table may have same zj − cj value (positive or negative depending upon the type of LP problem). In order to break this tie, the selection for key column (entering variable) can be made arbitrary. However, the number of iterations required to arrive at the optimal solution can be minimized by adopting the following rules. (i) If there is a tie between two decision variables, then the selection can be made arbitrarily. (ii) If there is a tie between a decision variable and a slack (or surplus) variable, then select the decision variable to enter into basis. (iii) If there is a tie between two slack (or surplus) variables, then the selection can be made arbitrarily (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 90 / 107
  • 91. Tie for Leaving Basic Variable (Key Row) Degeneracy While solving an LP problem a situation may arise where either the minimum ratio to identify the basic variable to leave the basis is not unique or value of one or more basic variables in the xB becomes zero.This causes the problem of degeneracy. In order to break the tie in the minimum ratios, the selection can be made arbitrarily. However, the number of iterations required to arrive at the optimal solution can be minimized by adopting the following rules. (i) Divide the coefficients of slack variables in the simplex table where degeneracy is seen by the corresponding positive numbers of the key column in the row, starting from left to right. (ii) Compare the ratios in step (i) from left to right column wise, select the row that contains the smallest ratio. Remark When there is a tie between a slack and artificial variable to leave the basis, preference should be given to the artificial variable for leaving the basis. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 91 / 107
  • 92. Example Solve the following LP problem Maximize Z = 3x1 + 9x2 subject to the constraints x1 + 4x2 ≤ 8, x1 + 2x2 ≤ 4 and x1, x2 ≥ 0 Solution Adding slack variables s1 and s2 to the constraints, the problem can be expressed as Maximize Z = 3x1 + 9x2 + 0s1 + 0s2 subject to the constraints x1 + 4x2 + s1 = 8, x1 + 2x2 + s2 = 4 and x1, x2, s1, s2 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 92 / 107
  • 93. The initial basic feasible solution is given in the table below. Both variables s1 and s2 are eligible to leave the basis as the minimum ratio is the same, i.e. 2, so there is a tie between the ratios in rows s1 and s2. This causes the problem of degeneracy. To obtain the key row for resolving degeneracy, apply the following procedure: cj 3 9 0 0 CB XB x1 x2 s1 s2 b(RHS) min. ratio 0 s1 1 4 1 0 8 8/4=2 0 s2 1 2 0 1 4 4/2=2 Zj − Cj -3 ↑-9 0 0 0 Divide the coefficients of the slack variables s1 and s2 by the corresponding elements in the key column, as shown below:
Row   x2 column (key column)   s1          s2
s1    4                        1/4 = 1/4   0/4 = 0
s2    2                        0/2 = 0     1/2 = 1/2
(Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 93 / 107
  • 94. Comparing the ratios of Step (ii) from left to right column-wise, the minimum ratio (i.e., 0/2 = 0) occurs in the s2-row. Thus, variable s2 is selected to leave the basis. The new solution is shown in the table below. cj 3 9 0 0 CB XB x1 x2 s1 s2 b(RHS) 0 s1 -1 0 1 -2 0 9 x2 1/2 1 0 1/2 2 Zj − Cj 3/2 0 0 9/2 18 In the table above, all Zj − Cj ≥ 0. Hence, an optimal solution has been reached. The optimal basic feasible solution is: x1 = 0, x2 = 2 and Max Z = 18. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 94 / 107
  • 95. Types of Linear Programming Solutions While solving any LP problem using simplex method, at the stage of optimal solution, the following three types of solutions may exist: Alternative (Multiple) Optimal Solutions Unbounded Solution Infeasible Solutions (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 95 / 107
  • 96. Alternative (Multiple) Optimal Solutions The zj − cj values in the simplex table indicate the contribution to the objective function value of each unit of a variable chosen to enter into the basis. Also, an optimal solution to a maximization (minimization) LP problem is reached when all zj − cj ≥ 0 (zj − cj ≤ 0). But if zj − cj = 0 for a non-basic variable column in the optimal simplex table, and such a non-basic variable is chosen to enter into the basis, then another optimal solution so obtained will show no improvement in the value of the objective function. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 96 / 107
  • 97. Example Solve the following LP problem. Maximize Z = 6x1 + 4x2 subject to the constraints 2x1 + 3x2 ≤ 30, 3x1 + 2x2 ≤ 24 , x1 + x2 ≥ 3 and x1, x2 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 97 / 107
  • 98. Solution Adding slack variables s1, s2, surplus variable s3 and artificial variable A1 in the constraint set,the LP problem becomes Maximize Z = 6x1 + 4x2 + 0s1 + 0s2 + 0s3 − MA1 subject to the constraints 2x1 + 3x2 + s1 = 30, 3x1 + 2x2 + s2 = 24 , x1 + x2 − s3 + A1 = 3 and x1, x2, s1, s2, A1 ≥ 0 The optimal solution: x1 = 8, x2 = 0 and Max Z = 48 for this LP problem is shown in Table below. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 98 / 107
  • 99. Cj 6 4 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) Ratio 0 s1 0 5/3 1 -2/3 0 14 42/5 ← 0 s3 0 -1/3 0 1/3 1 5 6 x1 1 2/3 0 1/3 0 8 12 zj − cj 0 ↑ 0 0 2 0 48 In Table above , z2 − c2 = 0 corresponds to a non-basic variable, x2. Thus, an alternative optimal solution can also be obtained by entering variable x2 into the basis and removing basic variable, s1 from the basis. The new solution is shown in Table below. Cj 6 4 0 0 0 CB XB x1 x2 s1 s2 s3 b(RHS) 4 x2 0 1 3/5 -2/5 0 42/5 0 s3 0 0 1/5 1/5 1 39/5 6 x1 1 0 -2/5 3/5 0 12/5 zj − cj 0 0 0 2 0 48 The optimal solution shown in Table above is: x1 = 12/5, x2 = 42/5 and Max Z = 48. Since this optimal solution shows no change in the value of objective function, it is an alternative solution. Once again, z3 − c3 = 0 corresponds to nonbasic variable, s1. This indicates that an alternative optimal solution exists. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 99 / 107
  • 100. Unbounded Solution In a maximization LP problem, if zj − cj < 0 (zj − cj > 0 for a minimization case) corresponds to a non-basic variable column in the simplex table, and all aij values in this column are negative or zero, then the minimum ratio to decide which basic variable leaves the basis cannot be calculated. This is because a negative value in the denominator would indicate the entry of a non-basic variable into the basis with a negative value (an infeasible solution). Also, a zero value in the denominator would result in a ratio having an infinite value and would indicate that the value of the non-basic variable could be increased infinitely without any of the current basic variables being removed from the basis. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 100 / 107
  • 101. Example Solve the following LP problem. Maximize Z = 3x1 + 5x2 subject to the constraints x1 − 2x2 ≤ 6, 3x1 ≤ 10 , x2 ≥ 1 and x1, x2 ≥ 0 Solution:Adding slack variables s1, s2 , surplus variable s3 and artificial variable A1 in the constraint set. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 101 / 107
  • 102. Then the standard form of LP problem becomes Maximize Z = 3x1 + 5x2 + 0s1 + 0s2 + 0s3 − MA1 subject to the constraints x1 − 2x2 + s1 = 6, 3x1 + s2 = 10 , x2 − s3 + A1 = 1 and x1, x2, s1, s2, s3, A1 ≥ 0 The initial solution to this LP problem is shown in Table below. Cj 3 5 0 0 0 -M CB xB x1 x2 s1 s2 s3 A1 b(RHS) min. ratio 0 s1 1 -2 1 0 0 0 6 - 0 s2 1 0 0 1 0 0 10 - -M A1 0 1 0 0 -1 1 1 1 ← zj − cj -3 ↑-5-M 0 0 M 0 M (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 102 / 107
  • 103. Iteration 1: Since z2 − c2 = −5 − M is the most negative value, non-basic variable x2 is chosen to enter into the basis in place of basic variable A1. The new solution is shown in the table below. Cj 3 5 0 0 0 -M CB xB x1 x2 s1 s2 s3 A1 b(RHS) 0 s1 1 0 1 0 -2 2 8 0 s2 1 0 0 1 0 0 10 5 x2 0 1 0 0 -1 1 1 zj − cj -3 0 0 0 -5 M+5 5 In the table above, z5 − c5 = −5 is the largest negative value, so variable s3 should enter into the basis. But the coefficients in the s3 column are all negative or zero. This indicates that s3 cannot be entered into the basis. However, the value of s3 can be increased infinitely without removing any one of the basic variables. Further, since s3 is associated with x2 in the third constraint, x2 will also be increased infinitely because it can be expressed as x2 = 1 + s3 − A1. Hence, the solution to the given problem is unbounded. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 103 / 107
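Solvers report unboundedness through a status flag rather than by failing the ratio test; a brief sketch with scipy.optimize.linprog (status code 3 signals an unbounded problem):

```python
from scipy.optimize import linprog

# The LP above: maximize 3x1 + 5x2  ->  minimize -3x1 - 5x2
c = [-3, -5]
A_ub = [[1, -2],    #  x1 - 2x2 <= 6
        [3,  0],    # 3x1       <= 10
        [0, -1]]    # -x2 <= -1   (i.e. x2 >= 1)
b_ub = [6, 10, -1]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.status, res.message)   # status 3: the problem is unbounded
```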
  • 104. Infeasible Solution LP models with inconsistent constraints have no feasible solution. This situation can never occur if all the constraints are of the ≤ type with nonnegative right-hand sides, because the slacks provide a feasible solution. For other types of constraints we use artificial variables. Although the artificial variables are penalized in the objective function to force them to zero at the optimum, this can occur only if the model has a feasible space. Otherwise, at least one artificial variable will be positive in the optimum iteration. If an LP problem solution does not satisfy all of the constraints, then such a solution is called an infeasible solution. An infeasible solution also occurs when all zj − cj values satisfy the optimality condition but at least one of the artificial variables appears in the basis with a positive value. This situation may occur when an LP model is either improperly formulated or when two or more of the constraints are incompatible. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 104 / 107
  • 105. Example Solve the following LP problem. Maximize Z = 6x1 + 4x2 subject to the constraints x1 + x2 ≤ 5, x2 ≥ 8 and x1, x2 ≥ 0 Solution:Adding slack, surplus and artificial variables, the standard form of LP problem becomes Maximize Z = 6x1 + 4x2 + 0s1 + 0s2 − MA1 subject to the constraints x1 + x2 + s1 = 5, x2 − s2 + A1 = 8 and x1, x2, s1, s2, A1 ≥ 0 (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 105 / 107
  • 106. The initial solution to this LP problem is shown in Table below. Cj 6 4 0 0 -M CB XB x1 x2 s1 s2 A1 b(RHS) min. ratio 6 x1 1 1 1 0 0 5 5/1← -M A1 0 1 0 -1 1 8 8/1 zj − cj 0 ↑2-M 6 M 0 30-8M Iteration 1: Since z2 − c2 = 2 − M(≤ 0), non-basic variable x2 is chosen to enter into the basis to replace basic variable x1. The new solution is shown in Table below. Cj 6 4 0 0 -M CB XB x1 x2 s1 s2 A1 b(RHS) 4 x2 1 1 1 0 0 5 -M A1 -1 0 -1 -1 1 3 zj − cj M-2 0 4+M M 0 20-3M In table above, since all zj − cj ≥ 0, the current solution is optimal. But this solution is not feasible for the given LP problem because values of decision variables are: x1 = 0 and x2 = 5 violates second constraint,x2 ≥ 8. The presence of artificial variable A1 = 3 in the solution also indicates that the optimal solution violates the second constraint (x2 ≥ 8) by 3 units. (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 106 / 107
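An infeasible model is likewise reported through the solver's status flag (status code 2 in scipy.optimize.linprog); a brief sketch for the LP above:

```python
from scipy.optimize import linprog

# The LP above: maximize 6x1 + 4x2  s.t.  x1 + x2 <= 5,  x2 >= 8
c = [-6, -4]
A_ub = [[1, 1],     #  x1 + x2 <= 5
        [0, -1]]    # -x2 <= -8   (i.e. x2 >= 8)
b_ub = [5, -8]

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.status, res.message)   # status 2: the problem is infeasible
```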
  • 107. I THANK YOU!! (Department of Mathematics Debark University )Linear Optimization(MATH 2062) October 7, 2020 107 / 107