DYNAMIC PROGRAMMING
• Problems like the knapsack problem and the
shortest-path problem can be solved by the greedy
method, in which optimal decisions are made one at
a time.
• For many problems it is not possible to make
stepwise decisions in such a manner that the
sequence of decisions made is optimal.
DYNAMIC PROGRAMMING (Contd..)
Example:
• Suppose a shortest path from vertex i to vertex j is
to be found.
• Let Ai be the set of vertices adjacent to vertex i.
Which of the vertices in Ai should be the second
vertex on the path?
• One way to solve the problem is to enumerate all
decision sequences and pick out the best.
• In dynamic programming the principle of
optimality is used to reduce the number of decision
sequences that must be examined.
DYNAMIC PROGRAMMING (Contd..)
Principle of optimality:
• An optimal sequence of decisions has the property
that whatever the initial state and decisions are, the
remaining decisions must constitute an optimal
decision sequence with regard to the state
resulting from the first decision.
• In the greedy method only one decision sequence
is generated.
• In dynamic programming many decision
sequences may be generated.
3
0/1 knapsack problem
Example:
• [0/1 knapsack problem]: the xi's in the 0/1 knapsack
problem are restricted to either 0 or 1.
• Using KNAP(l,j,y) to represent the problem:
Maximize ∑ pixi (l ≤ i ≤ j)
subject to ∑ wixi ≤ y (l ≤ i ≤ j) ………………..(1)
xi = 0 or 1, l ≤ i ≤ j
The 0/1 knapsack problem is KNAP(1,n,M).
0/1 knapsack problem: principle of optimality
• Let y1, y2, …, yn be an optimal sequence of 0/1 values for
x1, x2, …, xn respectively.
• If y1 = 0 then y2, y3, …, yn must constitute an optimal
sequence for KNAP(2,n,M).
• If it does not, then y1, y2, …, yn is not an optimal
sequence for KNAP(1,n,M).
• If y1 = 1, then y2, …, yn must be an optimal sequence for
KNAP(2,n,M-w1).
• If it is not, there is another 0/1 sequence z1, z2, …, zn such that
∑ pizi has greater value.
• Thus the principle of optimality holds.
0/1 knapsack problem: principle of optimality
(contd..)
• Let gj(y) be the value of an optimal solution to
KNAP(j+1, n, y).
• Then g0(M) is the value of an optimal solution to
KNAP(1,n,M).
• From the principle of optimality:
g0(M) = max{g1(M), g1(M-w1) + p1}
• g0(M) is the maximum profit, which is the value of
the optimal solution.
6
0/1 knapsack problem: principle of optimality
(contd..)
• The principle of optimality can be applied to
intermediate states and decisions just as it has been
applied to initial states and decisions.
• Let y1, y2, …, yn be an optimal solution to
KNAP(1,n,M).
• Then for each j, 1 ≤ j ≤ n, y1, …, yj and yj+1, …, yn must
be optimal solutions to the problems
KNAP(1, j, ∑ wiyi) and KNAP(j+1, n, M - ∑ wiyi)
respectively, where both sums are over 1 ≤ i ≤ j.
Recurrence Relation
• Then gi(y) = max{gi+1(y), gi+1(y-wi+1) + pi+1} …(1)
(the first term is the xi+1 = 0 case, the second the xi+1 = 1 case)
• Equation (1) is known as a recurrence relation.
• Dynamic programming algorithms solve the
recurrence relation to obtain a solution to the given
problem.
• To solve the 0/1 knapsack problem, we use the
knowledge gn(y) = 0 for all y, because gn(y) is the value
of an optimal solution (profit) to the problem
KNAP(n+1, n, y), which is obviously zero for any y.
8
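To make the recurrence concrete, here is a minimal Python sketch of (1) with memoization; the function name and list-based inputs are our own illustration, not part of the original procedure:

    def knapsack_g(p, w, M):
        # g(i, y): best profit using objects i+1..n (0-indexed p[i], w[i]),
        # following gi(y) = max{gi+1(y), gi+1(y - wi+1) + pi+1}
        n = len(p)
        memo = {}
        def g(i, y):
            if y < 0:
                return float('-inf')    # infeasible: no capacity left
            if i == n:
                return 0                # gn(y) = 0 for all y >= 0
            if (i, y) not in memo:
                memo[(i, y)] = max(g(i + 1, y),                # xi+1 = 0
                                   g(i + 1, y - w[i]) + p[i])  # xi+1 = 1
            return memo[(i, y)]
        return g(0, M)                  # g0(M) is the optimal profit

For example, knapsack_g([1, 2, 5], [2, 3, 4], 6) returns 6, matching the worked knapsack example later in these slides.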
Solving Recurrence Relation
• Substituting i = n-1 and using gn(y) = 0 in
relation (1), we obtain gn-1(y).
• Then using gn-1(y) we obtain gn-2(y), and so
on until we get g0(M) (with i = 0), which is the
solution of the knapsack problem.
9
Forward and backward approaches
• In the forward approach, decision xi is made
in terms of optimal decision sequences for
xi+1, …, xn (i.e., we look ahead).
• In the backward approach, decision xi is made
in terms of optimal decision sequences for
x1, …, xi-1 (i.e., we look backward).
10
Forward and backward approaches: 0/1 knapsack
problem
• For the 0/1 knapsack problem
gi(y) = max{gi+1(y), gi+1(y-wi+1) + pi+1} ………(1)
• (1) is the forward approach, as gi(y) is obtained using
gi+1(y) (ultimately gn-1(y) is obtained from gn(y)).
• fj(y) = max{fj-1(y), fj-1(y-wj) + pj} …………..(2)
• (2) is the backward approach, as fj(y) is obtained using
fj-1(y).
fj(y) is the value of an optimal solution to KNAP(1,j,y).
(2) may be solved by beginning with
f0(y) = 0 for all y ≥ 0 (no-element case)
and fj(y) = -∞ for y < 0 (no-capacity case)
11
FEATURES OF DYNAMIC
PROGRAMMING SOLUTIONS
• It is easier to obtain the recurrence relation using
backward approach.
• Dynamic programming algorithms often have
polynomial complexity.
• Optimal solutions to subproblems are retained so
as to avoid recomputing their values.
12
OPTIMAL BINARY SEARCH
TREES
Definition (binary search tree, BST): A binary
search tree is a binary tree; either it is empty, or
each node contains an identifier and
(i) all identifiers in the left subtree of T are less
than the identifier in the root node of T;
(ii) all identifiers in the right subtree are greater
than the identifier in the root node of T;
(iii) the right and left subtrees are also BSTs.
13
ALGORITHM TO SEARCH FOR AN
IDENTIFIER IN THE TREE ‘T’.
Procedure SEARCH(T, X, i)
// Search T for X; each node has fields
LCHILD, IDENT, RCHILD //
// Return address i pointing to the identifier X //
// Initially T points to the tree; on exit
IDENT(i) = X or i = 0 //
i ← T
14
Algorithm to search for an identifier in the tree
‘T’(Contd..)
while i ≠ 0 do
case : X < IDENT(i) : i ← LCHILD(i)
: X = IDENT(i) : return i
: X > IDENT(i) : i ← RCHILD(i)
endcase
repeat
end SEARCH
15
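A direct Python transliteration of SEARCH may be helpful; the Node class and its field names below are illustrative stand-ins for LCHILD, IDENT and RCHILD:

    class Node:
        def __init__(self, ident, lchild=None, rchild=None):
            self.ident = ident        # IDENT field
            self.lchild = lchild      # LCHILD field
            self.rchild = rchild      # RCHILD field

    def search(t, x):
        # Iterative BST search: return the node holding x, or None (the i = 0 case)
        i = t
        while i is not None:
            if x < i.ident:
                i = i.lchild
            elif x == i.ident:
                return i
            else:
                i = i.rchild
        return None

For example, search(Node('if', Node('do'), Node('stop')), 'stop') returns the node containing 'stop'.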
Optimal Binary Search trees Example
(Figure: a binary search tree containing the identifiers
{for, if, loop, repeat, while}.)
If each identifier is searched for with equal probability, the
average number of comparisons for the above tree is
(1+2+2+3+4)/5 = 12/5.
16
OPTIMAL BINARY SEARCH
TREES (Contd..)
• Let us assume that the given set of
identifiers is {a1, a2, …, an} with
a1 < a2 < … < an.
• Let P(i) be the probability with which we
search for ai.
• Let Q(i) be the probability that the identifier x
being searched for satisfies ai < x < ai+1,
0 ≤ i ≤ n, where a0 = -∞ and an+1 = +∞.
17
OPTIMAL BINARY SEARCH
TREES (Contd..)
• Then ∑ Q(i) (0 ≤ i ≤ n) is the probability of an unsuccessful search.
• ∑ P(i) (1 ≤ i ≤ n) + ∑ Q(i) (0 ≤ i ≤ n) = 1. Given this data,
let us construct an optimal binary search tree for
(a1, …, an).
• In place of each empty subtree, we add an external node,
denoted by a square.
• Internal nodes are denoted as circles.
18
OPTIMAL BINARY SEARCH
TREES (Contd..)
(Figure: the same binary search tree for {for, if, loop, repeat,
while}, redrawn with square external nodes in place of the
empty subtrees.)
19
Construction of optimal binary search
trees
• A BST with n identifiers will have n internal
nodes and n+1 external nodes.
• A successful search terminates at an internal node;
an unsuccessful search terminates at an external node.
• If a successful search terminates at an internal
node at level L, then L iterations of the loop in the
algorithm are needed.
• Hence the expected cost contribution from the
internal node for ai is P(i) * level(ai).
20
OPTIMAL BINARY SEARCH
TREES (Contd..)
• An unsuccessful search terminates at an external node.
• The identifiers not in the binary search tree may
be partitioned into n+1 equivalence classes
Ei, 0 ≤ i ≤ n:
E0 contains all X such that X < a1
Ei contains all X such that ai < X < ai+1, 1 ≤ i < n
En contains all X such that X > an
21
OPTIMAL BINARY SEARCH
TREES (Contd..)
• For identifiers in the same class Ei, the
search terminates at the same external node.
• If the failure node for Ei is at level L, then
only L-1 iterations of the while loop are
made.
• The cost contribution of the failure node
for Ei is Q(i) * (level(Ei) - 1).
22
OPTIMAL BINARY SEARCH
TREES (Contd..)
• Thus the expected cost of a binary search tree is:
∑ P(i) * level(ai) (1 ≤ i ≤ n) + ∑ Q(i) * (level(Ei) - 1) (0 ≤ i ≤ n) …(2)
• An optimal binary search tree for {a1, …, an} is a
BST for which (2) is minimum.
• Example: Let {a1, a2, a3} = {do, if, stop}
23
OPTIMAL BINARY SEARCH
TREES (Contd..)
(Figure: three of the possible binary search trees, (a), (b) and (c),
for {a1, a2, a3} = {do, if, stop}, drawn over levels 1-3 with the
external nodes Q(0)-Q(3) shown as squares.)
24
OPTIMAL BINARY SEARCH
TREES (Contd..)
(Figure: the remaining binary search trees, (d) and (e), for
{do, if, stop}.)
25
OPTIMAL BINARY SEARCH
TREES (Contd..)
• With equal probabilities P(i) = Q(i) = 1/7.
• Let us find an OBST among these.
• Cost(tree a) = ∑ P(i)*level(ai) (1 ≤ i ≤ n) + ∑ Q(i)*(level(Ei) - 1) (0 ≤ i ≤ n)
= 1/7 [1+2+3 + (2-1) + (3-1) + (4-1) + (4-1)]
= 1/7 [1+2+3 + 1 + 2 + 3 + 3] = 15/7
• Cost(tree b) = 1/7 [1+2+2+2+2+2+2] = 13/7
• Cost(tree c) = Cost(tree d) = Cost(tree e) = 15/7
Tree b is optimal.
26
OPTIMAL BINARY SEARCH
TREES (Contd..)
• If P(1) = 0.5, P(2) = 0.1, P(3) = 0.05, Q(0) = 0.15,
Q(1) = 0.1, Q(2) = 0.05 and Q(3) = 0.05,
find the OBST.
• Cost(tree a) = .5 x 3 + .1 x 2 + .05 x 1
+ .15 x 3 + .1 x 3 + .05 x 2 + .05 x 1 = 2.65
• Cost(tree b) = 1.9, Cost(tree c) = 1.5, Cost(tree d) = 2.05,
• Cost(tree e) = 1.6. Hence tree c is optimal.
27
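For n this small, the numbers can be checked by brute force: enumerate every BST shape and evaluate (2) directly. The following Python sketch is our own illustration, not part of the slides; it returns 1.5 for the probabilities above.

    def best_bst_cost(P, Q):
        # P[1..n]: success probabilities (P[0] unused); Q[0..n]: failure probabilities
        # cost = sum P(i)*level(ai) + sum Q(i)*(level(Ei) - 1), root at level 1
        n = len(P) - 1
        def trees(lo, hi):
            # yield all BST shapes over keys lo..hi as (root, left, right); None if empty
            if lo > hi:
                yield None
                return
            for k in range(lo, hi + 1):
                for left in trees(lo, k - 1):
                    for right in trees(k + 1, hi):
                        yield (k, left, right)
        def cost(t, lo, hi, level):
            # t covers keys lo..hi and failure classes E(lo-1)..E(hi)
            if t is None:                      # the single external node E(lo-1)
                return Q[lo - 1] * (level - 1)
            k, left, right = t
            return (P[k] * level
                    + cost(left, lo, k - 1, level + 1)
                    + cost(right, k + 1, hi, level + 1))
        return min(cost(t, 1, n, 1) for t in trees(1, n))

    print(best_bst_cost([None, .5, .1, .05], [.15, .1, .05, .05]))   # 1.5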
OPTIMAL BINARY SEARCH
TREES (Contd..)
• To obtain an OBST using dynamic programming
we need to take a sequence of decisions regarding
the construction of the tree.
• The first decision is which ai is to be the root.
• Let us choose ak as the root. Then the internal
nodes for a1, …, ak-1 and the external nodes for the
classes E0, E1, …, Ek-1 will lie in the left subtree L
of the root.
• The remaining nodes will be in the right subtree R.
28
OPTIMAL BINARY SEARCH
TREES (Contd..)
Define
Cost(L) = ∑ P(i)*level(ai) (1 ≤ i < k) + ∑ Q(i)*(level(Ei) - 1) (0 ≤ i < k)
Cost(R) = ∑ P(i)*level(ai) (k < i ≤ n) + ∑ Q(i)*(level(Ei) - 1) (k ≤ i ≤ n)
where levels are measured within the respective subtrees.
• Let Tij be the tree with nodes ai+1, …, aj and the nodes
corresponding to Ei, Ei+1, …, Ej.
• Let W(i,j) represent the weight of tree Tij.
29
OPTIMAL BINARY SEARCH
TREES (Contd..)
W(i,j) = P(i+1) + … + P(j) + Q(i) + Q(i+1) + … + Q(j)
= Q(i) + ∑ [Q(l) + P(l)] (l = i+1 to j)
• The expected cost of the search tree in (a) below (let us call it T) is
P(k) + Cost(L) + Cost(R) + W(0,k-1) + W(k,n)
W(0,k-1) is the sum of the probabilities corresponding to the nodes
and the equivalence classes to the left of ak.
W(k,n) is the sum of the probabilities corresponding to those
on the right of ak.
(Figure (a): an OBST with root ak, left subtree L and right subtree R.)
30
OPTIMAL BINARY SEARCH
TREES (Contd..)
• The expected cost of the tree is
P(k) + Cost(L) + Cost(R) + W(0,k-1) + W(k,n) …….....(3)
• If T is optimal then (3) must be minimum. Hence Cost(L)
must be minimum over all binary search trees containing
a1, …, ak-1 and E0, E1, …, Ek-1.
• Similarly Cost(R) must be minimum.
• Let C(i,j) represent the cost of an optimal tree Tij
(containing nodes ai+1, …, aj and Ei, …, Ej).
• Then Cost(L) = C(0,k-1) and Cost(R) = C(k,n).
• Substituting for Cost(L) and Cost(R) in (3):
31
OPTIMAL BINARY SEARCH TREES (Contd..)
• The expected cost of the search tree is
P(k) + C(0,k-1) + C(k,n) + W(0,k-1) + W(k,n) …..(4)
• (4) is minimum if k is chosen such that (4) is the minimum
cost of a BST with nodes a1, a2, …, an and E0, …, En:
C(0,n) = min{C(0,k-1) + C(k,n) + P(k) + W(0,k-1) + W(k,n)} (1 ≤ k ≤ n) …..(5)
• Substituting i for 0 and j for n in (5):
C(i,j) = min{C(i,k-1) + C(k,j) + P(k) + W(i,k-1) + W(k,j)}
= min{C(i,k-1) + C(k,j)} + W(i,j) (i < k ≤ j) …………..(6)
since P(k) + W(i,k-1) + W(k,j)
= P(k) + [P(i+1)+…+P(k-1) + Q(i)+…+Q(k-1)] + [P(k+1)+…+P(j) + Q(k)+…+Q(j)]
= W(i,j)
32
OPTIMAL BINARY SEARCH
TREES (Contd..)
• Equation (6) may be solved for C(0,n) by
first computing all C(i,j) with j-i = 1
(note C(i,i) = 0 and W(i,i) = Q(i), 0 ≤ i ≤ n).
• We can then compute all C(i,j) with j-i = 2,
then all C(i,j) with j-i = 3, and so on.
• If during these computations the root R(i,j)
of each tree Tij is recorded, then the OBST may
be constructed.
33
Optimal Binary Search Trees –
Example
Example: Let n = 4 and (a1, a2, a3, a4) = (do, if, read, while).
Let P(1:4) = (3, 3, 1, 1) and Q(0:4) = (2, 3, 1, 1, 1).
• The P's and Q's have been multiplied by 16 for convenience.
• Initially W(i,i) = Q(i), C(i,i) = 0, R(i,i) = 0, 0 ≤ i ≤ 4.
W(i,j) = Q(i) + ∑ [Q(l) + P(l)] (l = i+1 to j)
= P(j) + Q(j) + Q(i) + ∑ [Q(l) + P(l)] (l = i+1 to j-1)
= P(j) + Q(j) + W(i,j-1) ….…(7)
34
Optimal Binary Search Trees –
Example (contd..)
C(i,j) = min{C(i,k-1) + C(k,j)} + W(i,j) (i < k ≤ j) ……….…(8)

Using (7), W(0,1) = P(1) + Q(1) + W(0,0)
= 3 + 3 + Q(0) = 3 + 3 + 2 = 8
Using (8), C(0,1) = W(0,1) + min{C(0,0) + C(1,1)}
= 8 + min{0+0} = 8
R(0,1) = 1, because R(i,j) is the value of k that
minimizes (8)
35
Optimal Binary Search Trees –
Example (contd..)
W(1,2) = P(2) + Q(2) + W(1,1) = P(2) + Q(2) + Q(1) = 3+1+3 = 7
C(1,2) = W(1,2) + min{C(1,1) + C(2,2)} = 7 + min{0+0} = 7
R(1,2) = 2
W(2,3) = P(3) + Q(3) + W(2,2) = 1+1+1 = 3
C(2,3) = W(2,3) + min{C(2,2) + C(3,3)} = 3 (k = 3)
R(2,3) = 3
W(3,4) = P(4) + Q(4) + W(3,3) = 1+1+Q(3) = 3
C(3,4) = W(3,4) + min{C(3,3) + C(4,4)} = 3 (k = 4)
R(3,4) = 4
36
Optimal Binary Search Trees –
Example (contd..)
• Knowing W(i,i+1) and C(i,i+1), 0 ≤ i < 4, we
compute W(i,i+2), C(i,i+2) and R(i,i+2), 0 ≤ i < 3.
• This process is repeated until W(0,4), C(0,4)
and R(0,4) are obtained.
• Using R(i,j), let us see how to construct an OBST.
• C(0,4) = 32 is the minimum cost of a BST for
{a1, a2, a3, a4} = (do, if, read, while).
37
Optimal Binary Search Trees –
Example (contd..)
• The root of tree T04 is a2, since R(0,4) = 2.
• The left subtree of T04 is T01 and the right subtree is T24.
• R(0,1) = 1, so a1 is the root of T01.
• R(2,4) = 3, so a3 is the root of T24.
• T01 has T00 and T11 as subtrees; T24 has T22 and T34.
• T00, T11 and T22 are empty (root 0); for T34, R(3,4) = 4,
so a4 is its root.
38
Optimal Binary Search Trees –
Example (contd..)
(Figure: the resulting OBST. Root if (= a2); left child do;
right child read; read's right child while.)
39
Optimal Binary Search Trees –
Example (contd..)
We can verify the cost of the OBST:
cost = ∑ P(i)·level(ai) (1 ≤ i ≤ n) + ∑ Q(i)·(level(Ei) - 1) (0 ≤ i ≤ n)
= 3x2 + 3x1 + 1x2 + 1x3 + 2x2 + 3x2 + 1x2 + 1x3 + 1x3
= 6+3+2+3+4+6+2+3+3 = 32
40
COMPLEXITY OF THE PROCEDURE
FOR CONSTRUCTING OBST
• C(i,j) = min{C(i,k-1) + C(k,j)} + W(i,j) (i < k ≤ j)
• The C(i,j)'s are computed for j-i = 1, 2, …, n. If j-i = m,
there are n-m+1 C(i,j)'s to compute
(for n = 4 and j-i = 1 they are C(0,1), C(1,2), C(2,3), C(3,4)).
• Each C(i,j) requires the computation of m quantities,
hence each C(i,j) can be computed in O(m) time.
• The total time for all C(i,j)'s with j-i = m is therefore
(n-m+1)·O(m) = O(nm - m²).
41
COMPLEXITY OF THE PROCEDURE
FOR CONSTRUCTING OBST (Contd..)
• The total time to evaluate all the C(i,j)'s and
R(i,j)'s is
∑ (nm - m²) (1 ≤ m ≤ n) = n∑m - ∑m²
= n·n(n+1)/2 - n(n+1)(2n+1)/6 = O(n³)
42
COMPLEXITY OF THE PROCEDURE
FOR CONSTRUCTING OBST (Contd..)
• The complexity can be reduced to O(n²)
using D. E. Knuth's result, which limits the
search for the minimizing k in
min{C(i,k-1) + C(k,j)} + W(i,j)
to the range R(i,j-1) ≤ k ≤ R(i+1,j)
instead of i < k ≤ j.
• Using the values of R(i,j), the tree T0n can
be constructed in O(n) time.
43
ALGORITHM FOR OBST
Procedure OBST(P, Q, n)
// Given n distinct identifiers a1 < a2 < … < an and
probabilities P(i), 1 ≤ i ≤ n, and Q(i), 0 ≤ i ≤ n, the cost
C(i,j) of each optimal tree Tij is computed //
// R(i,j) is the root of tree Tij //
// W(i,j) is the weight of tree Tij //
Real P(1:n), Q(0:n), C(0:n, 0:n),
W(0:n, 0:n); integer R(0:n, 0:n)
44
ALGORITHM FOR OBST (Contd..)
for i ← 0 to n-1 do
(W(i,i), R(i,i), C(i,i)) ← (Q(i), 0, 0)
(W(i,i+1), R(i,i+1), C(i,i+1)) ←
(Q(i)+Q(i+1)+P(i+1), i+1, Q(i)+Q(i+1)+P(i+1))
// optimal trees with one node //
repeat
(W(n,n), R(n,n), C(n,n)) ← (Q(n), 0, 0)
for m ← 2 to n do // find optimal trees //
for i ← 0 to n-m do // with m nodes //
j ← i+m
45
ALGORITHM FOR OBST (Contd..)
W(i,j) ← W(i,j-1) + P(j) + Q(j)
k ← a value l in the range
R(i,j-1) ≤ l ≤ R(i+1,j) that
minimizes {C(i,l-1) + C(l,j)}
C(i,j) ← W(i,j) + C(i,k-1) + C(k,j)
R(i,j) ← k
repeat
repeat
end OBST
46
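The procedure translates almost line for line into Python. The sketch below is our own rendering (0-based tables, using Knuth's range restriction from the complexity discussion); on the running example it reproduces C(0,4) = 32 with root a2.

    def obst(P, Q):
        # P[1..n]: success weights (P[0] is an unused placeholder); Q[0..n]: failure weights
        n = len(P) - 1
        W = [[0] * (n + 1) for _ in range(n + 1)]
        C = [[0] * (n + 1) for _ in range(n + 1)]
        R = [[0] * (n + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            W[i][i] = Q[i]                    # W(i,i) = Q(i); C(i,i) = R(i,i) = 0
        for i in range(n):                    # optimal trees with one node
            W[i][i + 1] = Q[i] + Q[i + 1] + P[i + 1]
            C[i][i + 1] = W[i][i + 1]
            R[i][i + 1] = i + 1
        for m in range(2, n + 1):             # optimal trees with m nodes
            for i in range(n - m + 1):
                j = i + m
                W[i][j] = W[i][j - 1] + P[j] + Q[j]
                # Knuth's restriction: search k only in R(i,j-1)..R(i+1,j)
                k = min(range(R[i][j - 1], R[i + 1][j] + 1),
                        key=lambda l: C[i][l - 1] + C[l][j])
                C[i][j] = W[i][j] + C[i][k - 1] + C[k][j]
                R[i][j] = k
        return C, R

    C, R = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
    print(C[0][4], R[0][4])                   # 32 2  (cost 32, root a2 = 'if')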
0/1 KNAPSACK PROBLEM
• The solution of the 0/1 knapsack problem is a sequence
of decisions on x1, …, xn together with the total profit;
each xi may be 0 or 1.
• Let fj(X) be the value of an optimal solution to
KNAP(1, j, X).
• Since the principle of optimality holds,
fn(M) = max{fn-1(M) (corresponding to xn = 0),
fn-1(M-wn) + pn (corresponding to xn = 1)}
47
0/1 KNAPSACK PROBLEM (Contd..)
• In general,
fi(X) = max{fi-1(X), fi-1(X-wi) + pi} ….(1)
• (1) may be solved by beginning with f0(X) = 0 for
all X (no-element case) and
fi(X) = -∞ for X < 0 (no-capacity case);
f1, …, fn may then be solved for successively.
EXAMPLE: Let n = 3, (w1, w2, w3) = (2, 3, 4),
(p1, p2, p3) = (1, 2, 5), M = 6.
48
0/1 KNAPSACK PROBLEM (Contd..)
SOLUTION:
f0(X) = 0 for all X; fi(X) = -∞ if X < 0
f0(6) = 0
f1(6) = max{f0(6), f0(6-2)+1} = max{0, 1} = 1
f2(6) = max{f1(6), f1(6-3)+2} = max{1, f1(3)+2}
f1(3) = max{f0(3), f0(3-2)+1} = max{0, 1} = 1
f2(6) = max{1, 3} = 3
f3(6) = max{f2(6), f2(6-4)+5} = max{3, f2(2)+5}
f2(2) = max{f1(2), f1(2-3)+2} = max{1, f1(-1)+2} = 1
(as f1(-1) = -∞)
f3(6) = max{3, f2(2)+5} = max{3, 6} = 6
49
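The same computation can be tabulated in Python; this minimal sketch (the names are our own) builds the rows f0, f1, …, fn iteratively and returns f3(6) = 6 for the instance above.

    def knapsack_f(p, w, M):
        # fj(X) = max{ fj-1(X), fj-1(X - wj) + pj }, tabulated for 0 <= X <= M
        n = len(p)
        f = [0] * (M + 1)                     # row f0: all zeros
        for j in range(1, n + 1):
            nxt = f[:]                        # nxt will become row fj
            for X in range(M + 1):
                if X - w[j - 1] >= 0:         # otherwise fj-1(X - wj) = -infinity
                    nxt[X] = max(f[X], f[X - w[j - 1]] + p[j - 1])
            f = nxt
        return f[M]

    print(knapsack_f([1, 2, 5], [2, 3, 4], 6))   # 6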
0/1 KNAPSACK PROBLEM (Contd..)
• Similarly we can compute f1(1) = 0, f2(1) = 0,
f1(2) = 1 and f2(3) = 2.
• Each fi is completely specified by the pairs
(Pj, Wj), where Wj is a value of X at which fi
takes a jump and Pj = fi(Wj); the pair
(P0, W0) = (0, 0) is introduced by convention.
• As we always choose the maximum profit, if
Wj < Wj+1, 0 ≤ j < r, then Pj < Pj+1.
50
0/1 KNAPSACK PROBLEM (Contd..)
• Also fi(X) = fi(Wj) for all X such that Wj ≤ X < Wj+1, 0 ≤ j < r
(the profit does not increase between jumps), and
fi(X) = fi(Wr) for all X ≥ Wr.
• (Recall fj(X) is the value of an optimal solution to
KNAP(1, j, X).)
• (For X values such as 2.5 or 3.5 we have no stored profit, so
we use the profit at the previous jump.)
51
0/1 KNAPSACK PROBLEM (Contd..)
Let us define the sets Si-1, Si1 and Si as follows:
• Si-1 is the set of all pairs (P, W) for fi-1, including (0, 0).
• Si1 = {(P, W) | (P-pi, W-wi) ∈ Si-1}
(this is the set of pairs for fi(X) = fi-1(X-wi) + pi).
• Si is obtained by merging Si-1 and Si1 together (this
corresponds to taking the maximum of fi-1(X) and
fi-1(X-wi) + pi).
52
0/1 KNAPSACK PROBLEM (Contd..)
• If one of Si-1 and Si1 has a pair (pj, wj) and the other has a pair
(pk, wk) with pj ≤ pk and wj ≥ wk, then the pair (pj, wj) is
discarded.
• This is known as the dominance rule or purging rule:
(pk, wk) dominates (pj, wj).

For example:
S0 contains the tuple for x1 = 0: S0 = {(0, 0)}.
S11 contains the tuple for x1 = 1: since (p1, w1) = (1, 2), S11 = {(1, 2)}.
S1 covers both x1 = 0 and x1 = 1: S1 = {(0, 0), (1, 2)}.
53
0/1 knapsack problem example
n = 3, (w1, w2, w3) = (2, 3, 4), (p1, p2, p3) = (1, 2, 5) and M = 6.
• S0 = {(0, 0)}; S11 is obtained by adding (p1, w1) = (1, 2) to (0, 0),
thus S11 = {(1, 2)}; S1 is obtained by combining S0 and S11,
hence S1 = {(0, 0), (1, 2)}; S21 = {(2, 3), (3, 5)}, obtained by
adding (2, 3) to the elements of S1.
• S2 = {(0,0), (1,2), (2,3), (3,5)}; S31 = {(5,4), (6,6), (7,7), (8,9)}
• S3 = {(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)}
• The pair (3, 5) is eliminated from S3 because (5, 4) dominates
it: 3 < 5 while 5 > 4 (purging rule).
54
0/1 knapsack problem example (contd..)
• The computation of the xi's is as follows:
• Consider the last value in Sn; let it be (P, W).
• We need to determine x1, x2, …, xn such that ∑ pixi = P and
∑ wixi = W (sums over 1 ≤ i ≤ n).
• Check whether (P, W) is also in Sn-1.
If (P, W) ∈ Sn-1 then set xn = 0; otherwise xn = 1, and (P, W)
has come from (P-pn, W-wn).
• In the latter case, check whether (P-pn, W-wn) also belongs to Sn-2.
• If it does, xn-1 = 0; otherwise xn-1 = 1.
• The process is repeated.
55
0/1 knapsack problem example (contd..)
• Let us apply this procedure to the example with M = 6.
• The last tuple in S3 is (6, 6); (6, 6) ∉ S2, so x3 = 1.
• (6, 6) came from (6-p3, 6-w3) = (1, 2); (1, 2) ∈ S2 and
(1, 2) is also in S1, so set x2 = 0.
• (1, 2) ∉ S0, so x1 = 1.
• Hence an optimal solution is (x1, x2, x3) = (1, 0, 1).
56
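Putting the pieces together, here is a compact Python sketch of this method with merge-purge and traceback (our own illustration; the sort key orders pairs by weight, breaking ties by larger profit, so a single pass keeps only undominated pairs):

    def dkp(p, w, M):
        # Build S0..Sn as lists of (profit, weight) pairs, then trace back the xi's
        n = len(p)
        S = [[(0, 0)]]
        for i in range(n):
            Si1 = [(P + p[i], W + w[i]) for (P, W) in S[i] if W + w[i] <= M]
            merged = sorted(S[i] + Si1, key=lambda t: (t[1], -t[0]))
            Si = []
            for (P, W) in merged:             # purging rule: drop dominated pairs
                if not Si or P > Si[-1][0]:
                    Si.append((P, W))
            S.append(Si)
        P, W = S[n][-1]                       # last tuple in Sn: the optimal (P, W)
        x = [0] * n
        for i in range(n, 0, -1):
            if (P, W) not in S[i - 1]:        # the pair was created by taking object i
                x[i - 1] = 1
                P, W = P - p[i - 1], W - w[i - 1]
        return x, S[n][-1]

    print(dkp([1, 2, 5], [2, 3, 4], 6))       # ([1, 0, 1], (6, 6))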
Informal Knapsack Algorithm
Procedure DKP(P, W, n, M)
1 S0 ← {(0, 0)}
2 for i ← 1 to n do
3 Si1 ← {(P1, W1) | (P1-pi, W1-wi) ∈ Si-1 and W1 ≤ M}
// Si1 is constructed by adding (pi, wi) to
tuples in Si-1 //
4 Si ← MERGE-PURGE(Si-1, Si1)
5 repeat
6 (PX, WX) ← last tuple in Sn-1
57
Informal Knapsack Algorithm (Contd..)
7 (PY, WY) ← (P1+pn, W1+wn), where W1 is the
largest weight in any tuple in Sn-1 such that
W1 + wn ≤ M
// compute xn: as (PX, WX) is in Sn-1 and (PY, WY) is in Sn,
PX > PY means we need not choose object n, so xn ← 0 //
8 if PX > PY then xn ← 0 else xn ← 1
9 endif
// compute xn-1, …, x1 similarly //
end DKP
58
Detailed knapsack algorithm
• In the implementation, P and W are treated as
one-dimensional arrays.
• Pointers F(i), i = 0, 1, …, n, are used, with F(i)
pointing to the first element in Si, 0 ≤ i ≤ n;
thus F(n) points one past the last element of Sn-1.
59
Detailed knapsack algorithm
Procedure DKNAP(p, w, n, M, m)
Real p(n), w(n), P(m), W(m), pp, ww, M
// p(n), w(n) are the given arrays; P(m), W(m) hold the computed pairs //
Integer F(0:n), l, h, u, i, j, k, next
1 F(0) ← 1; P(1) ← W(1) ← 0 // S0 //
2 l ← h ← 1 // start and end of S0 //
3 F(1) ← next ← 2 // next free spot in P and W //
4 for i ← 1 to n do
60
Detailed knapsack algorithm (Contd..)
5 k ← l // k points to the next tuple in Si-1 that has to be
merged into Si //
6 u ← largest k such that l ≤ k ≤ h and W(k) + wi ≤ M
// pairs for which W(k) + wi > M are not generated; this
removes tuples like (7,7) and (8,9) in S3 in the example //
7 for j ← l to u do // generate Si1 and merge //
61
Detailed knapsack algorithm (Contd..)
8 (pp, ww) ← (P(j)+pi, W(j)+wi)
// next element of Si1 //
9 while k ≤ h and W(k) < ww do // merge smaller-weight tuples from Si-1 //
10 P(next) ← P(k); W(next) ← W(k)
11 next ← next+1; k ← k+1
12 repeat
13 if k ≤ h and W(k) = ww then pp ← max(pp, P(k));
14 k ← k+1
15 endif
62
Detailed knapsack algorithm (Contd..)
16 if pp > P(next-1) then
(P(next), W(next)) ← (pp, ww);
17 next ← next+1
18 endif
19 while k ≤ h and P(k) ≤ P(next-1) do
// use purging rule //
20 k ← k+1
21 repeat
22 repeat
63
Detailed knapsack algorithm (Contd..)
// merge the remaining tuples from Si-1 //
23 while k ≤ h do
24 (P(next), W(next)) ← (P(k), W(k))
25 next ← next+1; k ← k+1
26 repeat
// update l, h and F for Si //
27 l ← h+1; h ← next-1; F(i+1) ← next
28 repeat
29 call PARTS // PARTS implements lines 8-9 of DKP,
i.e., the computation of the xi's //
30 end DKNAP
64
Detailed knapsack algorithm (Contd..)
Example:
• n = 3, M = 6, (w1, w2, w3) = (2, 3, 4), (p1, p2, p3) = (1, 2, 5)
• S0 = {(0,0)}; S11 = {(1,2)}; S1 = {(0,0), (1,2)};
S21 = {(2,3), (3,5)}
• S2 = {(0,0), (1,2), (2,3), (3,5)};
S31 = {(5,4), (6,6), (7,7), (8,9)}
• S3 = {(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)}
65
Detailed knapsack algorithm (Contd..)
• Let us see how S3 is generated using the procedure DKNAP
The P and W arrays during the computation:

index:  1   2   3   4   5   6   7   8   9  10  11  12
P:      0   0   1   0   1   2   3   0   1   2   5   6
W:      0   0   2   0   2   3   5   0   2   3   4   6

F(0) = 1, F(1) = 2, F(2) = 4; at the end of iteration i = 2
(line 27), F(2+1) = F(3) ← next = 8.
At the start of iteration i = 3: l ← 4, h ← 7, k ← l = 4;
positions 8-12 are then filled with S3.
66
Detailed knapsack algorithm (Contd..)
• u ← largest k such that 4 ≤ k ≤ 7 and W(k) + w3 ≤ 6.
• Such k is 5, because w3 = 4 and W(5) + w3 = 2+4 = 6,
while W(6) + w3 = 3+4 = 7 > 6 and W(7) + w3 = 5+4 = 9 > 6.
• Now S31 is generated and merged with S2 in lines 7-22.
• The starting address of each set Si is stored in the
array F: F(0) = 1 means F(0) points to the first pair,
i.e., the address of the first pair is 1.
67
Detailed knapsack algorithm (Contd..)
for j ← 4 to 5 do (line 7); S2 occupies positions 4-7:
(P, W) = (0,0), (1,2), (2,3), (3,5); (p3, w3) = (5, 4).
With j = 4: (pp, ww) ← (P(4)+p3, W(4)+w3) = (5, 4).
In lines 9-12, with k = 4, 5, 6, the tuples (P(8), W(8)) = (0,0),
(P(9), W(9)) = (1,2) and (P(10), W(10)) = (2,3) are generated;
the while loop is exited with k = 7, as W(7) = 5 > ww = 4,
and next = 11.
Detailed knapsack algorithm (Contd..)
• Line 13 is not true for any k, as W(k) ≠ ww = 4.
• In lines 16-18, (pp, ww) = (5, 4) is included in S3,
since pp = 5 > P(next-1) = P(10) = 2, the best profit
gained so far:
(P(11), W(11)) = (5, 4) and next ← 12.
• In lines 19-21, P(7) = 3 ≤ P(11) = 5, so (3, 5) is not
included (it is purged).
• k is then incremented to 8. // the W(k) ≥ ww but
P(k) ≤ pp case //
69
Detailed knapsack algorithm (Contd..)
• Similarly, with j ← 5:
(pp, ww) ← (P(5)+p3, W(5)+w3) = (6, 6),
and in lines 16-18 (P(12), W(12)) = (6, 6) is
included in S3.
• Lines 23-25 copy any tuples remaining in Si-1
(the W(k) ≥ ww and P(k) > pp case); in this example
they are not entered, since k = 8 > h = 7 after j = 5.
70
Detailed knapsack algorithm (Contd..)
• Thus S3 is {(0,0), (1,2), (2,3), (5,4), (6,6)},
which is
{(P(8), W(8)), (P(9), W(9)), (P(10), W(10)),
(P(11), W(11)), (P(12), W(12))}.
• F(3) = 8 points to the start of this list,
i.e., to (P(8), W(8)).
71
THE TRAVELING SALESPERSON
PROBLEM
• The 0/1 knapsack problem is a subset-selection
problem: given n objects there are 2^n
different subsets of objects.
• The solution is one of these 2^n subsets.
• In permutation problems like the Traveling
Salesperson Problem (TSP) there are n!
different permutations of n objects (n! > 2^n).
72
THE TRAVELING SALESPERSON
PROBLEM (Contd..)
The TSP is as follows:
• Let G = (V, E) be a directed graph with edge costs
Cij, where Cij > 0 for all i and j, and Cij = +∞ if <i,j> ∉ E.
• Let |V| = n and assume n > 1.
• A tour of G is a directed cycle that includes every
vertex in V.
• The cost of a tour is the sum of the costs of the
edges on the tour.
• The TSP is to find a tour of minimum
cost.
73
THE TRAVELING SALESPERSON
PROBLEM (Contd..)
Applications of TSP:
1. A postal van is to pick up mail from mailboxes
located at n different sites. The route taken by the
postal van is a tour, and the problem is to find a
tour of minimum length.
2. A minimum cost tour of a robot arm used to
tighten the nuts on some piece of machinery on an
assembly line will minimize the time needed for
the arm to complete its task.
74
TSP AND THE PRINCIPLE
OF OPTIMALITY
• Assume a tour to be a simple path that starts and
ends at vertex 1.
• Every tour consists of an edge <1,k> for some k ∈ V-{1}
and a path from vertex k to vertex 1.
• The path from vertex k to vertex 1 goes through
each vertex in V-{1,k} exactly once.
• If the tour is optimal, then the path from k to 1
must be a shortest k-to-1 path going through all
vertices in V-{1,k}.
• Hence the principle of optimality holds.
75
TSP AND THE PRINCIPLE OF
OPTIMALITY (Contd..)
• Let g(i,S) be the length of a shortest path starting
at vertex i, going through all vertices in S, and
terminating at vertex 1.
• S is the set of vertices the path must pass through.
• g(1, V-{1}) is the length of an optimal salesperson
tour.
• From the principle of optimality, it follows that
g(1, V-{1}) = min{C1k + g(k, V-{1,k})} (2 ≤ k ≤ n) ……(1)
(forward approach)
76
TSP AND THE PRINCIPLE OF
OPTIMALITY (Contd..)
• In general, for i ∉ S (and 1 ∉ S),
g(i,S) = min{Cij + g(j, S-{j})} (j ∈ S) ………..(2)
• (1) may be solved for g(1, V-{1}) if we know g(k,
V-{1,k}) for all choices of k.
• These values are obtained from (2), starting from
g(i,Ø) = Ci1, 1 ≤ i ≤ n.
• g(i,Ø) is the length of the shortest path starting at i,
going through no other vertex, and ending at 1.
77
Solution of TSP using Dynamic programming
• Using equation (2), we obtain g(i,S) for all S of size 1, then
for all S of size 2, and so on.
• For all |S| < n-1, g(i,S) is computed with i ≠ 1, 1 ∉ S, i ∉ S.
Example:
(a) Directed graph on vertices {1, 2, 3, 4} (figure omitted).
(b) Edge length matrix C:

        1    2    3    4
1       0   10   15   20
2       5    0    9   10
3       6   13    0   12
4       8    8    9    0
78
Solution of TSP using Dynamic
programming contd..
• Let us solve the problem using the dynamic programming
approach.
As g(i,Ø) = Ci1, 1 ≤ i ≤ n:
g(2,Ø) = C21 = 5; g(3,Ø) = C31 = 6; g(4,Ø) = C41 = 8
• We know that
g(i,S) = min{Cij + g(j, S-{j})} (j ∈ S)
• For |S| = 1:
g(2,{3}) = C23 + g(3,Ø) = 9+6 = 15
g(2,{4}) = C24 + g(4,Ø) = 10+8 = 18
g(3,{2}) = C32 + g(2,Ø) = 13+5 = 18
g(3,{4}) = C34 + g(4,Ø) = 12+8 = 20
g(4,{2}) = C42 + g(2,Ø) = 8+5 = 13
g(4,{3}) = C43 + g(3,Ø) = 9+6 = 15
79
Solution of TSP using Dynamic
programming contd..
• Next, we compute g(i,S) with |S| = 2, i ≠ 1, 1 ∉ S, i ∉ S:
g(2, {3,4}) = min{C23 + g(3,{4}), C24 + g(4,{3})}
= min{9+20, 10+15} = min{29, 25} = 25
g(3, {2,4}) = min{C32 + g(2,{4}), C34 + g(4,{2})}
= min{13+18, 12+13} = min{31, 25} = 25
g(4, {2,3}) = min{C42 + g(2,{3}), C43 + g(3,{2})}
= min{8+15, 9+18} = min{23, 27} = 23
80
Solution of TSP using Dynamic
programming contd..
We compute g(i,S) with |S| = 3:
g(1, {2,3,4}) = min{C12 + g(2,{3,4}), C13 + g(3,{2,4}),
C14 + g(4,{2,3})}
= min{10+25, 15+25, 20+23}
= min{35, 40, 43} = 35
• Thus an optimal tour of the graph has length 35.
81
Solution of TSP using Dynamic
programming contd..
• A tour of this length may be constructed if we
retain, for each g(i,S) with |S| = 3, 2 and 1, the
value of j that minimizes it.
• Let J(i,S) be this minimizing value of j.
• J(1, {2,3,4}) = 2, as C12 + g(2,{3,4}) = 35 is the
minimum.
82
Solution of TSP using Dynamic
programming contd..
• The remaining tour may be obtained from
g(2, {3,4}).
Observe that J(2, {3,4}) = 4 and
J(4, {3}) = 3.
The optimal tour starts at 1, goes through the
vertices 2, 4 and 3 in that order, and ends at 1,
i.e., 1, 2, 4, 3, 1 are the vertices on the optimal tour.
83
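A compact Python sketch of this computation (our own illustration, representing the set S as a frozenset so the values can be memoized) reproduces the length 35 and the tour 1, 2, 4, 3, 1:

    from functools import lru_cache

    C = {(i, j): c
         for i, row in enumerate([[0, 10, 15, 20],
                                   [5,  0,  9, 10],
                                   [6, 13,  0, 12],
                                   [8,  8,  9,  0]], start=1)
         for j, c in enumerate(row, start=1)}
    n = 4

    @lru_cache(maxsize=None)
    def g(i, S):
        # g(i, S): shortest path from i through every vertex of S, ending at vertex 1
        if not S:
            return C[i, 1]                    # g(i, empty set) = Ci1
        return min(C[i, j] + g(j, S - {j}) for j in S)

    def tour():
        S = frozenset(range(2, n + 1))
        length, path, i = g(1, S), [1], 1
        while S:                              # follow the minimizing j: the J(i, S) values
            j = min(S, key=lambda j: C[i, j] + g(j, S - {j}))
            path.append(j)
            i, S = j, S - {j}
        return length, path + [1]

    print(tour())                             # (35, [1, 2, 4, 3, 1])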
Complexity of TSP
Time complexity of TSP = Θ(n² · 2^n)
Space needed = Θ(n · 2^n)
The complexity involves the computation of
g(1, V-{1}), i.e., of the g(i,S) values.
Let N be the number of g(i,S) values to be computed.
For each value of |S|, there are n-1 choices of i.
The number of distinct sets S of size k not including
1 and i is C(n-2, k).
Hence N = ∑ (n-1) · C(n-2, k), k = 0, 1, …, n-2.
84
Complexity of TSP contd..
N = (n-1) ∑ C(n-2, k), k = 0, 1, …, n-2, so N = (n-1) · 2^(n-2)
(applying the binomial theorem below with a = 1 and
x = 1).
The binomial theorem is as follows:
(a+x)^n = C(n,0)·a^n + C(n,1)·a^(n-1)·x + C(n,2)·a^(n-2)·x² + … + C(n,n)·x^n
As computing each g(i,S) involves a minimization over at most
n choices of j, the total complexity is n · Θ(n·2^n) = Θ(n²·2^n).
As the g(i,S) values are to be stored for further computations,
the storage required is Θ(n·2^n).
85
Weitere ähnliche Inhalte

Was ist angesagt?

Bruteforce algorithm
Bruteforce algorithmBruteforce algorithm
Bruteforce algorithmRezwan Siam
 
Artificial Intelligence Searching Techniques
Artificial Intelligence Searching TechniquesArtificial Intelligence Searching Techniques
Artificial Intelligence Searching TechniquesDr. C.V. Suresh Babu
 
03 algorithm properties
03 algorithm properties03 algorithm properties
03 algorithm propertiesLincoln School
 
Pattern matching
Pattern matchingPattern matching
Pattern matchingshravs_188
 
I. Hill climbing algorithm II. Steepest hill climbing algorithm
I. Hill climbing algorithm II. Steepest hill climbing algorithmI. Hill climbing algorithm II. Steepest hill climbing algorithm
I. Hill climbing algorithm II. Steepest hill climbing algorithmvikas dhakane
 
daa-unit-3-greedy method
daa-unit-3-greedy methoddaa-unit-3-greedy method
daa-unit-3-greedy methodhodcsencet
 
RABIN KARP ALGORITHM STRING MATCHING
RABIN KARP ALGORITHM STRING MATCHINGRABIN KARP ALGORITHM STRING MATCHING
RABIN KARP ALGORITHM STRING MATCHINGAbhishek Singh
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programmingGopi Saiteja
 
Divide and Conquer - Part 1
Divide and Conquer - Part 1Divide and Conquer - Part 1
Divide and Conquer - Part 1Amrinder Arora
 
Introduction to Dynamic Programming, Principle of Optimality
Introduction to Dynamic Programming, Principle of OptimalityIntroduction to Dynamic Programming, Principle of Optimality
Introduction to Dynamic Programming, Principle of OptimalityBhavin Darji
 
Propositional logic
Propositional logicPropositional logic
Propositional logicRushdi Shams
 
TOC 1 | Introduction to Theory of Computation
TOC 1 | Introduction to Theory of ComputationTOC 1 | Introduction to Theory of Computation
TOC 1 | Introduction to Theory of ComputationMohammad Imam Hossain
 

Was ist angesagt? (20)

Bruteforce algorithm
Bruteforce algorithmBruteforce algorithm
Bruteforce algorithm
 
Artificial Intelligence Searching Techniques
Artificial Intelligence Searching TechniquesArtificial Intelligence Searching Techniques
Artificial Intelligence Searching Techniques
 
03 algorithm properties
03 algorithm properties03 algorithm properties
03 algorithm properties
 
Pattern matching
Pattern matchingPattern matching
Pattern matching
 
Data Structure and Algorithm - Divide and Conquer
Data Structure and Algorithm - Divide and ConquerData Structure and Algorithm - Divide and Conquer
Data Structure and Algorithm - Divide and Conquer
 
I. Hill climbing algorithm II. Steepest hill climbing algorithm
I. Hill climbing algorithm II. Steepest hill climbing algorithmI. Hill climbing algorithm II. Steepest hill climbing algorithm
I. Hill climbing algorithm II. Steepest hill climbing algorithm
 
Greedy algorithms
Greedy algorithmsGreedy algorithms
Greedy algorithms
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programming
 
String matching, naive,
String matching, naive,String matching, naive,
String matching, naive,
 
daa-unit-3-greedy method
daa-unit-3-greedy methoddaa-unit-3-greedy method
daa-unit-3-greedy method
 
RABIN KARP ALGORITHM STRING MATCHING
RABIN KARP ALGORITHM STRING MATCHINGRABIN KARP ALGORITHM STRING MATCHING
RABIN KARP ALGORITHM STRING MATCHING
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programming
 
Divide and Conquer - Part 1
Divide and Conquer - Part 1Divide and Conquer - Part 1
Divide and Conquer - Part 1
 
Greedy Algorihm
Greedy AlgorihmGreedy Algorihm
Greedy Algorihm
 
Np complete
Np completeNp complete
Np complete
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programming
 
Introduction to Dynamic Programming, Principle of Optimality
Introduction to Dynamic Programming, Principle of OptimalityIntroduction to Dynamic Programming, Principle of Optimality
Introduction to Dynamic Programming, Principle of Optimality
 
Travelling salesman problem
Travelling salesman problemTravelling salesman problem
Travelling salesman problem
 
Propositional logic
Propositional logicPropositional logic
Propositional logic
 
TOC 1 | Introduction to Theory of Computation
TOC 1 | Introduction to Theory of ComputationTOC 1 | Introduction to Theory of Computation
TOC 1 | Introduction to Theory of Computation
 

Andere mochten auch

backtracking algorithms of ada
backtracking algorithms of adabacktracking algorithms of ada
backtracking algorithms of adaSahil Kumar
 
Transportation Problem In Linear Programming
Transportation Problem In Linear ProgrammingTransportation Problem In Linear Programming
Transportation Problem In Linear ProgrammingMirza Tanzida
 
5.3 dynamic programming
5.3 dynamic programming5.3 dynamic programming
5.3 dynamic programmingKrish_ver2
 
Lecture 8 dynamic programming
Lecture 8 dynamic programmingLecture 8 dynamic programming
Lecture 8 dynamic programmingOye Tu
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programmingShakil Ahmed
 

Andere mochten auch (6)

backtracking algorithms of ada
backtracking algorithms of adabacktracking algorithms of ada
backtracking algorithms of ada
 
Transportation Problem In Linear Programming
Transportation Problem In Linear ProgrammingTransportation Problem In Linear Programming
Transportation Problem In Linear Programming
 
5.3 dynamic programming
5.3 dynamic programming5.3 dynamic programming
5.3 dynamic programming
 
Lecture 8 dynamic programming
Lecture 8 dynamic programmingLecture 8 dynamic programming
Lecture 8 dynamic programming
 
Dynamic programming
Dynamic programmingDynamic programming
Dynamic programming
 
Linear programing
Linear programingLinear programing
Linear programing
 

Ähnlich wie Dynamic Programming

unit-4-dynamic programming
unit-4-dynamic programmingunit-4-dynamic programming
unit-4-dynamic programminghodcsencet
 
AAC ch 3 Advance strategies (Dynamic Programming).pptx
AAC ch 3 Advance strategies (Dynamic Programming).pptxAAC ch 3 Advance strategies (Dynamic Programming).pptx
AAC ch 3 Advance strategies (Dynamic Programming).pptxHarshitSingh334328
 
super vector machines algorithms using deep
super vector machines algorithms using deepsuper vector machines algorithms using deep
super vector machines algorithms using deepKNaveenKumarECE
 
An Algorithm to Find the Largest Circle inside a Polygon
An Algorithm to Find the Largest Circle inside a PolygonAn Algorithm to Find the Largest Circle inside a Polygon
An Algorithm to Find the Largest Circle inside a PolygonKasun Ranga Wijeweera
 
Chapter 5.pptx
Chapter 5.pptxChapter 5.pptx
Chapter 5.pptxTekle12
 
Computer algorithm(Dynamic Programming).pdf
Computer algorithm(Dynamic Programming).pdfComputer algorithm(Dynamic Programming).pdf
Computer algorithm(Dynamic Programming).pdfjannatulferdousmaish
 
Accurate, Simple and Efficient Triangulation of a Polygon by Ear Removal with...
Accurate, Simple and Efficient Triangulation of a Polygon by Ear Removal with...Accurate, Simple and Efficient Triangulation of a Polygon by Ear Removal with...
Accurate, Simple and Efficient Triangulation of a Polygon by Ear Removal with...Kasun Ranga Wijeweera
 
dynamic programming Rod cutting class
dynamic programming Rod cutting classdynamic programming Rod cutting class
dynamic programming Rod cutting classgiridaroori
 
DynamicProgramming.pdf
DynamicProgramming.pdfDynamicProgramming.pdf
DynamicProgramming.pdfssuser3a8f33
 
ICPC 2015, Tsukuba : Unofficial Commentary
ICPC 2015, Tsukuba: Unofficial CommentaryICPC 2015, Tsukuba: Unofficial Commentary
ICPC 2015, Tsukuba : Unofficial Commentaryirrrrr
 
1 chapter1 introduction
1 chapter1 introduction1 chapter1 introduction
1 chapter1 introductionSSE_AndyLi
 
Ch3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhh
Ch3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhhCh3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhh
Ch3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhhdanielgetachew0922
 
DynamicProgramming.ppt
DynamicProgramming.pptDynamicProgramming.ppt
DynamicProgramming.pptDavidMaina47
 
An Algorithm to Find the Visible Region of a Polygon
An Algorithm to Find the Visible Region of a PolygonAn Algorithm to Find the Visible Region of a Polygon
An Algorithm to Find the Visible Region of a PolygonKasun Ranga Wijeweera
 
Convex Partitioning of a Polygon into Smaller Number of Pieces with Lowest Me...
Convex Partitioning of a Polygon into Smaller Number of Pieces with Lowest Me...Convex Partitioning of a Polygon into Smaller Number of Pieces with Lowest Me...
Convex Partitioning of a Polygon into Smaller Number of Pieces with Lowest Me...Kasun Ranga Wijeweera
 

Ähnlich wie Dynamic Programming (20)

unit-4-dynamic programming
unit-4-dynamic programmingunit-4-dynamic programming
unit-4-dynamic programming
 
AAC ch 3 Advance strategies (Dynamic Programming).pptx
AAC ch 3 Advance strategies (Dynamic Programming).pptxAAC ch 3 Advance strategies (Dynamic Programming).pptx
AAC ch 3 Advance strategies (Dynamic Programming).pptx
 
Randomized algorithms ver 1.0
Randomized algorithms ver 1.0Randomized algorithms ver 1.0
Randomized algorithms ver 1.0
 
super vector machines algorithms using deep
super vector machines algorithms using deepsuper vector machines algorithms using deep
super vector machines algorithms using deep
 
An Algorithm to Find the Largest Circle inside a Polygon
An Algorithm to Find the Largest Circle inside a PolygonAn Algorithm to Find the Largest Circle inside a Polygon
An Algorithm to Find the Largest Circle inside a Polygon
 
Chapter 5.pptx
Chapter 5.pptxChapter 5.pptx
Chapter 5.pptx
 
Unit 3
Unit 3Unit 3
Unit 3
 
Unit 3
Unit 3Unit 3
Unit 3
 
Computer algorithm(Dynamic Programming).pdf
Computer algorithm(Dynamic Programming).pdfComputer algorithm(Dynamic Programming).pdf
Computer algorithm(Dynamic Programming).pdf
 
algorithm Unit 2
algorithm Unit 2 algorithm Unit 2
algorithm Unit 2
 
Unit 2 in daa
Unit 2 in daaUnit 2 in daa
Unit 2 in daa
 
Accurate, Simple and Efficient Triangulation of a Polygon by Ear Removal with...
Accurate, Simple and Efficient Triangulation of a Polygon by Ear Removal with...Accurate, Simple and Efficient Triangulation of a Polygon by Ear Removal with...
Accurate, Simple and Efficient Triangulation of a Polygon by Ear Removal with...
 
dynamic programming Rod cutting class
dynamic programming Rod cutting classdynamic programming Rod cutting class
dynamic programming Rod cutting class
 
DynamicProgramming.pdf
DynamicProgramming.pdfDynamicProgramming.pdf
DynamicProgramming.pdf
 
ICPC 2015, Tsukuba : Unofficial Commentary
ICPC 2015, Tsukuba: Unofficial CommentaryICPC 2015, Tsukuba: Unofficial Commentary
ICPC 2015, Tsukuba : Unofficial Commentary
 
1 chapter1 introduction
1 chapter1 introduction1 chapter1 introduction
1 chapter1 introduction
 
Ch3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhh
Ch3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhhCh3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhh
Ch3(1).pptxbbbbbbbbbbbbbbbbbbbhhhhhhhhhh
 
DynamicProgramming.ppt
DynamicProgramming.pptDynamicProgramming.ppt
DynamicProgramming.ppt
 
An Algorithm to Find the Visible Region of a Polygon
An Algorithm to Find the Visible Region of a PolygonAn Algorithm to Find the Visible Region of a Polygon
An Algorithm to Find the Visible Region of a Polygon
 
Convex Partitioning of a Polygon into Smaller Number of Pieces with Lowest Me...
Convex Partitioning of a Polygon into Smaller Number of Pieces with Lowest Me...Convex Partitioning of a Polygon into Smaller Number of Pieces with Lowest Me...
Convex Partitioning of a Polygon into Smaller Number of Pieces with Lowest Me...
 

Kürzlich hochgeladen

1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
Magic bus Group work1and 2 (Team 3).pptx
Magic bus Group work1and 2 (Team 3).pptxMagic bus Group work1and 2 (Team 3).pptx
Magic bus Group work1and 2 (Team 3).pptxdhanalakshmis0310
 
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfUGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfNirmal Dwivedi
 
Dyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxDyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxcallscotland1987
 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Association for Project Management
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsMebane Rash
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.pptRamjanShidvankar
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docxPoojaSen20
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptxMaritesTamaniVerdade
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docxPoojaSen20
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhikauryashika82
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibitjbellavia9
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...Poonam Aher Patil
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin ClassesCeline George
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...pradhanghanshyam7136
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentationcamerronhm
 

Kürzlich hochgeladen (20)

1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Magic bus Group work1and 2 (Team 3).pptx
Magic bus Group work1and 2 (Team 3).pptxMagic bus Group work1and 2 (Team 3).pptx
Magic bus Group work1and 2 (Team 3).pptx
 
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfUGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
 
Dyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptxDyslexia AI Workshop for Slideshare.pptx
Dyslexia AI Workshop for Slideshare.pptx
 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
Asian American Pacific Islander Month DDSD 2024.pptx
Asian American Pacific Islander Month DDSD 2024.pptxAsian American Pacific Islander Month DDSD 2024.pptx
Asian American Pacific Islander Month DDSD 2024.pptx
 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
 
psychiatric nursing HISTORY COLLECTION .docx
psychiatric  nursing HISTORY  COLLECTION  .docxpsychiatric  nursing HISTORY  COLLECTION  .docx
psychiatric nursing HISTORY COLLECTION .docx
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in DelhiRussian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
Russian Escort Service in Delhi 11k Hotel Foreigner Russian Call Girls in Delhi
 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
 

Dynamic Programming

  • 1. DYNAMIC PROGRAMMING • Problems like knapsack problem, shortest path can be solved by greedy method in which optimal decisions can be made one at a time. • For many problems, it is not possible to make stepwise decision in such a manner that the sequence of decisions made is optimal.
  • 2. DYNAMIC PROGRAMMING (Contd..) Example: • Suppose a shortest path from vertex i to vertex j is to be found. • Let Ai be vertices adjacent from vertex i. which of the vertices of Ai should be the second vertex on the path? • One way to solve the problem is to enumerate all decision sequences and pick out the best • In dynamic programming the principle of optimality is used to reduce the decision sequences.
  • 3. DYNAMIC PROGRAMMING (Contd..) Principle of optimality: • An optimal sequence of decisions has the property that whatever the initial state and decisions are, the remaining decisions must constitute an optimal decision sequence with regard to the state resulting from the first decision. • In the greedy method only one decision sequence is generated. • In dynamic programming many decision sequences may be generated. 3
  • 4. 0/1 knapsack problem Example: • [0/1 knapsack problem]:the xi’s in 0/1 knapsack problem is restricted to either 0 or 1. • Using KNAP(l,j,y) to represent the problem Maximize ∑pixi l≤i≤j subject to ∑wixi ≤ y, ………………..(1) l≤i≤j xi = 0 or 1 l≤i≤j The 0/1 knapsack problem is KNAP(l,n,M).
  • 5. 0/1 knapsack problem: principal of optimality • Let y1 ,y2 , …… ,yn be an optimal sequence of 0/l values for x1 ,x2 …..,xn respectively. • If y1 = 0 then y2,y3,……,yn must constitute an optimal sequence for KNAP (2,n,M). • If it does not, then y1 ,y2 , …… ,yn is not an optimal sequence for KNAP (1, n, M) • If y1=1, then y1 ,y2 , …… ,yn is an optimal sequence for KNAP(2,n,M-w1 ). • If it is not there is another 0/l sequence z1,z2,….,zn such that ∑pizi has greater value. • Thus the principal of optimality holds.
  • 6. 0/1 knapsack problem: principal of optimality (contd..) • Let gj(y) be the value of an optimal solution to KNAP (j+1, n, y). • Then g0(M) is the value of an optimal solution to KNAP(l,n,M) • From the principal of optimality g0(M) = max{g1(M), g1(M-W1) + P1)} • g0(M) is the maximum profit which is the value of the optimal solution 6
  • 7. 0/1 knapsack problem: principal of optimality (contd..) • The principal of optimality can be equally applied to intermediate states and decisions as has been applied to initial states and decisions • Let y1,y2,…..yn be an optimal solution to KNAP(l,n,M). • Then for each j l≤j≤n , y1,..,yj and yj+1,….,yn must be optimal solutions to the problems KNAP(1,j,∑wiyi) l≤i≤j and KNAP(j+1,n,M-∑wiyi) respectively l≤i≤j
  • 8. Recurrence Relation • Then gi(y) = max{gi+1(y), (xi +1= 0 case) gi+1(y-wi +1) + pi+1}…(1) (xi +1= 1case) • Equation (1) is known as recurrence relation. • Dynamic programming algorithms solve the recurrence relation to obtain a solution to the given problem • To solve 0/1 knapsack problem , we use the knowledge gn(y) = 0 for all y, because gn(y) is an optimal solution (profit) to the problem KNAP(n+1, n, y) which is obviously zero for any y 8
  • 9. Solving Recurrence Relation • Substituting i = n -1 and using gn(y)= 0 in the above relation(1), we obtain gn-1(y) • Then using gn-1(y), we obtain gn-2(y) and so on till we get g0(M) (with i=0) which is the solution of the knapsack problem 9
  • 10. Forward and backward approaches • In the forward approach ,decision xi is made in terms of optimal decision sequences for xi+1……..xn (i.e we look ahead). • In the backward approach, decision xi is in terms of optimal decision sequences for x1……xi-1(i.e we look backward) 10
  • 11. Forward and backward approaches: 0/1 knapsack problem • For the 0/l knapsack problem gi(y)=max{gi+1(y), gi+1(y-wi+1)+Pi+1}………(1) • (1) is the forward approach as gn-1(y) is obtained using gn(y) • fj(y) = max{fj-1(y), fj-1(y-wi) + pj} …………..(2) • (2) is the backward approach, as fj(y) is obtained using fj-1(y) fj(y) is the value of optimal solution to Knap(i,j,y) (2) may be solved by beginning with f0(y) = 0 for all y ≥ 0 (no element case) and fi(y) = -infinity for y < 0 (no capacity) 11
  • 12. FEATURES OF DYNAMIC PROGRAMMING SOLUTIONS • It is easier to obtain the recurrence relation using backward approach. • Dynamic programming algorithms often have polynomial complexity. • Optimal solution to sub problems are retained so as to avoid recomputing their values. 12
  • 13. OPTIMAL BINARY SEARCH TREES • Definition: binary search tree (BST) A binary search tree is a binary tree; either it is empty or each node contains an identifier and (i) all identifiers in the left sub tree of T are less than the identifiers in the root node T. (ii) all the identifiers the right sub tree are greater than the identifier in the root node T. (iii) the right and left sub tree are also BSTs. 13
  • 14. ALGORITHM TO SEARCH FOR AN IDENTIFIER IN THE TREE ‘T’. Procedure SEARCH (T X I) // Search T for X, each node had fields LCHILD, IDENT, RCHILD// // Return address i pointing to the identifier X// //Initially T is pointing to tree. //ident(i)=X or i=0 // iT 14
  • 15. Algorithm to search for an identifier in the tree ‘T’(Contd..) While i 0 do case : X < Ident(i) : i LCHILD(i) : X = IDENT(i) : RETURN i : X > IDENT(i) : I  RCHILD(i) endcase repeat end SEARCH 15
  • 16. Optimal Binary Search trees Example If For while repeat loop if each identifier is searched with equal probability the average number of comparisons for the above tree are 1+2+2+3+4 = 12/5. 5 16
  • 17. OPTIMAL BINARY SEARCH TREES (Contd..) • Let us assume that the given set of identifiers are {a1,a2,……..an} with a1<a2<…….<an. • Let Pi be the probability with which we are searching for ai. • Let Qi be the probability that identifier x being searched for is such that ai<x<ai+1 0≤i≤n, and a0=-∞ and an+1=+∞. 17
  • 18. OPTIMAL BINARY SEARCH TREES (Contd..) • Then ∑Qi is the probability of an unsuccessful search. 0≤i≤ n ∑P(i) + ∑Q(i) = 1. Given the data, 1≤i≤n 0≤i≤n let us construct one optimal binary search tree for (a1……….an). • In place of empty sub tree, we add external nodes denoted with squares. • Internet nodes are denoted as circles. 18
  • 19. OPTIMAL BINARY SEARCH TREES (Contd..) If For while repeat loop 19
  • 20. Construction of optimal binary search trees • A BST with n identifiers will have n internal nodes and n+1 external nodes. • Successful search terminates at internal nodes unsuccessful search terminates at external nodes. • If a successful search terminates at an internal node at level L, then L iterations of the loop in the algorithm are needed. • Hence the expected cost contribution from the internal nodes for ai is P(i) * level(ai). 20
  • 21. OPTIMAL BINARY SEARCH TREES (Contd..) • Unsuccessful searche terminates at external nodes i.e. at i = 0. • The identifiers not in the binary search tree may be partitioned into n+1 equivalent classes Ei 0≤i≤n. Eo contains all X such that X≤ai Ei contains all X such that ai<X<=ai+1 1≤i≤n En contains all X such that X > an 21
  • 22. OPTIMAL BINARY SEARCH TREES (Contd..) • For identifiers in the same class Ei, the search terminate at the same external node. • If the failure node for Ei is at level L, then only L-1 iterations of the while loop are made The cost contribution of the failure node for Ei is Q(i) * (level (Ei ) -1) 22
  • 23. OPTIMAL BINARY SEARCH TREES (Contd..) • Thus the expected cost of a binary search tree is: ∑P(i) * level (ai) + ∑Q(i) * (level(Ei) – 1) …(2) 1≤i≤n 0≤i≤n • An optimal binary search tree for {a1…….,an} is a BST for which (2) is minimum . • Example: Let {a1,a2, a3}={do, if, stop} 23
  • 24. OPTIMAL BINARY SEARCH TREES (Contd..) Level 1 Level 2 stop if if Q(3) do do stop if Q(2) Level 3 do Q(0) stop Q(1) (a) (b) (c) {a1,a2,a3}={do,if,stop} 24
  • 25. OPTIMAL BINARY SEARCH TREES (Contd..) stop do do stop if if (d) (c) 25
  • 26. OPTIMAL BINARY SEARCH TREES (Contd..) • With equal probability P(i)=Q(i)=1/7. • Let us find an OBST out of these. • Cost(tree a)=∑P(i)*level a(i) +∑Q(i)*level (Ei) -1 1≤i≤n 0≤i≤n (2-1) (3-1) (4-1) (4-1) =1/7[1+2+3 + 1 + 2 + 3 + 3 ] = 15/7 • Cost (tree b) =17[1+2+2+2+2+2+2] =13/7 • Cost (tree c) =cost (tree d) =cost (tree e) =15/7 tree b is optimal. 26
  • 27. OPTIMAL BINARY SEARCH TREES (Contd..) • If P(1) =0.5 ,P(2) =0.1, P(3) =0.005 , Q(0) =.15 , Q(1) =.1, Q(2) =.05 and Q(3) =.05 find the OBST. • Cost (tree a) = .5 x 3 +.1 x 2 +.05 x 3 +.15x3 +.1x3 +.05x2 +.05x1 = 2.65 • Cost (tree b) =1.9 , Cost (tree c) =1.5 ,Cost (tree d) =2.05 , • Cost (tree e) =1.6 Hence tree C is optimal. 27
• 28. OPTIMAL BINARY SEARCH TREES (Contd..) • To obtain an OBST using dynamic programming, we need to make a sequence of decisions regarding the construction of the tree. • The first decision is which ai is to be the root. • Suppose we choose ak as the root. Then the internal nodes for a1,…….,ak-1 and the external nodes for the classes E0,E1,……,Ek-1 will lie in the left subtree L of the root. • The remaining nodes will be in the right subtree R. 28
• 29. OPTIMAL BINARY SEARCH TREES (Contd..) Define Cost(L) = ∑P(i)*level(ai) + ∑Q(i)*(level(Ei)-1) 1≤i<k 0≤i<k Cost(R) = ∑P(i)*level(ai) + ∑Q(i)*(level(Ei)-1) k<i≤n k≤i≤n • Let Tij be the tree with internal nodes ai+1,…..,aj and external nodes for Ei,Ei+1,…..,Ej. • Let W(i,j) denote the weight of tree Tij. 29
• 30. OPTIMAL BINARY SEARCH TREES (Contd..) W(i,j) = P(i+1)+…+P(j) + Q(i)+Q(i+1)+…+Q(j) = Q(i) + ∑[Q(l)+P(l)] (sum over l = i+1 to j) • The expected cost of the search tree in (a) below (let us call it T) is P(k)+Cost(L)+Cost(R)+W(0,k-1)+W(k,n). • W(0,k-1) is the sum of the probabilities of the nodes and classes to the left of ak; W(k,n) is the sum of the probabilities of those to the right of ak. (Figure (a): OBST with root ak and subtrees L and R.) 30
• 31. OPTIMAL BINARY SEARCH TREES (Contd..) • The expected cost of the tree is P(k)+Cost(L)+Cost(R)+W(0,k-1)+W(k,n) ……....(3) • If T is optimal then (3) must be minimum; hence Cost(L) must be minimum over all binary search trees containing a1,…,ak-1 and E0,E1,…..,Ek-1. • Similarly Cost(R) must be minimum. • Let C(i,j) denote the cost of an optimal tree Tij (containing nodes ai+1,……,aj and Ei,…..,Ej). • Then Cost(L) = C(0,k-1) and Cost(R) = C(k,n). Substituting for Cost(L) and Cost(R) in (3): 31
• 32. OPTIMAL BINARY SEARCH TREES (Contd..) • The expected cost of the search tree is P(k)+C(0,k-1)+C(k,n)+W(0,k-1)+W(k,n) …..(4) which is minimum when k is chosen so that (4) is minimum; the cost of the OBST with nodes a1,a2,…..,an and E0,….,En is therefore C(0,n) = min {C(0,k-1)+C(k,n)+P(k)+W(0,k-1)+W(k,n)} …..(5) 1≤k≤n • Substituting i for 0 and j for n in (5): C(i,j) = min {C(i,k-1)+C(k,j)+P(k)+W(i,k-1)+W(k,j)} = min {C(i,k-1)+C(k,j)} + W(i,j) ………..(6) i<k≤j since P(k)+W(i,k-1)+W(k,j) = P(k) + [Q(i) + ∑(Q(l)+P(l)), l = i+1 to k-1] + [Q(k) + ∑(Q(l)+P(l)), l = k+1 to j] = Q(i) + ∑(Q(l)+P(l)), l = i+1 to j = W(i,j) 32
• 33. OPTIMAL BINARY SEARCH TREES (Contd..) • Equation (6) may be solved for C(0,n) by first computing all C(i,j) with j-i=1 (using C(i,i)=0 and W(i,i)=Q(i), 0≤i≤n). • We can then compute all C(i,j) with j-i=2, then all C(i,j) with j-i=3, and so on. • If during these computations the root R(i,j) of each tree Tij is recorded, then the OBST may be constructed. 33
• 34. Optimal Binary Search Trees – Example Example: Let n = 4 and (a1,a2,a3,a4) = (do, if, read, while). Let P(1:4) = (3,3,1,1) and Q(0:4) = (2,3,1,1,1). • The P’s and Q’s have been multiplied by 16 for convenience. • Initially W(i,i) = Q(i), C(i,i) = 0, R(i,i) = 0, 0≤i≤4 W(i,j) = Q(i) + ∑[Q(l)+P(l)] (l = i+1 to j) = P(j)+Q(j) + Q(i) + ∑[Q(l)+P(l)] (l = i+1 to j-1) = P(j)+Q(j)+W(i,j-1) ….…(7) 34
• 35. Optimal Binary Search Trees – Example (contd..) C(i,j) = min {C(i,k-1)+C(k,j)} + W(i,j) ……….…(8) i<k≤j Using (7), W(0,1) = P(1)+Q(1)+W(0,0) = 3+3+Q(0) = 3+3+2 = 8 Using (8), C(0,1) = W(0,1) + min{C(0,0)+C(1,1)} = 8 + 0 = 8 R(0,1) = 1, because R(i,j) is the value of k that minimizes (8) 35
• 36. Optimal Binary Search Trees – Example (contd..) W(1,2) = P(2)+Q(2)+W(1,1) = P(2)+Q(2)+Q(1) = 3+1+3 = 7 C(1,2) = W(1,2) + min{C(1,1)+C(2,2)} = 7 + 0 = 7 R(1,2) = 2 W(2,3) = P(3)+Q(3)+W(2,2) = 1+1+1 = 3 C(2,3) = W(2,3) + min{C(2,2)+C(3,3)} = 3 R(2,3) = 3 W(3,4) = P(4)+Q(4)+W(3,3) = 1+1+Q(3) = 3 C(3,4) = W(3,4) + min{C(3,3)+C(4,4)} = 3 R(3,4) = 4 36
• 37. Optimal Binary Search Trees – Example (contd..) • Knowing W(i,i+1) and C(i,i+1), 0≤i<4, we compute W(i,i+2), C(i,i+2) and R(i,i+2), 0≤i<3. • This process is repeated until W(0,4), C(0,4) and R(0,4) are obtained. • Using R(i,j), let us see how to construct the OBST. • C(0,4) = 32 is the minimum cost of a BST for {a1,a2,a3,a4} = {do, if, read, while}. 37
• 38. Optimal Binary Search Trees – Example (contd..) • The root of tree T04 is a2, since R(0,4) = 2. • The left subtree of T04 is T01 and the right subtree is T24. • R(0,1) = 1, so a1 is the root of T01. • R(2,4) = 3, so a3 is the root of T24. • T01 has T00 and T11 as subtrees; T24 has T22 and T34. • For T00, T11 and T22 the recorded root is 0 (empty trees); for T34 it is 4. 38
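• This trace-back of the recorded roots can be sketched in Python (the R values below are exactly those computed in the example; the build function and the tuple encoding of trees are our own):

    names = {1: "do", 2: "if", 3: "read", 4: "while"}
    R = {(0, 4): 2, (0, 1): 1, (2, 4): 3, (3, 4): 4}   # roots of the nonempty Tij

    def build(i, j):
        """Return Tij as (identifier, left, right); None for an empty tree."""
        if i == j:
            return None                  # Tii is just an external node
        k = R[(i, j)]
        return (names[k], build(i, k - 1), build(k, j))

    print(build(0, 4))
    # ('if', ('do', None, None), ('read', None, ('while', None, None)))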
• 39. Optimal Binary Search Trees – Example (contd..) (Figure: the OBST, with root a2 = if, left child do, right child read, and while as the right child of read.) 39
• 40. Optimal Binary Search Trees – Example (contd..) We can verify the cost of the OBST: ∑P(i)*level(ai) + ∑Q(i)*(level(Ei)-1) 1≤i≤n 0≤i≤n = 3x2 + 3x1 + 1x2 + 1x3 + 2x2 + 3x2 + 1x2 + 1x3 + 1x3 = 6+3+2+3+4+6+2+3+3 = 32 40
• 41. COMPLEXITY OF THE PROCEDURE FOR CONSTRUCTING OBST • C(i,j) = min {C(i,k-1)+C(k,j)} + W(i,j), i<k≤j • The C(i,j)s are computed for j-i = 1,2,…,n. • If j-i = m, there are n-m+1 C(i,j)s to compute (for n=4 and j-i=1 they are C(0,1), C(1,2), C(2,3), C(3,4)). • Each such C(i,j) requires the minimum of m quantities, so each can be computed in O(m) time. • The total time for all C(i,j)s with j-i = m is therefore O((n-m+1)m) = O(nm-m²). 41
• 42. COMPLEXITY OF THE PROCEDURE FOR CONSTRUCTING OBST (Contd..) • The total time to evaluate all the C(i,j)s and R(i,j)s is ∑(nm-m²) = n∑m - ∑m² = n·n(n+1)/2 - n(n+1)(2n+1)/6 = O(n³) 1≤m≤n 42
• 43. COMPLEXITY OF THE PROCEDURE FOR CONSTRUCTING OBST (Contd..) • The complexity can be reduced to O(n²) using D. E. Knuth’s result, which limits the search for the minimizing k in min{C(i,k-1)+C(k,j)} to the range R(i,j-1)≤k≤R(i+1,j) instead of i<k≤j. • Using the values of R(i,j), the tree T0n can be constructed in O(n) time. 43
• 44. ALGORITHM FOR OBST Procedure OBST(P,Q,n) // Given n distinct identifiers a1<a2<….<an and probabilities P(i), 1≤i≤n, and Q(i), 0≤i≤n, the cost C(i,j) of each optimal tree Tij is computed // // R(i,j) is the root of tree Tij // // W(i,j) is the weight of Tij // real P(1:n), Q(0:n), C(0:n,0:n), W(0:n,0:n); integer R(0:n,0:n) 44
• 45. ALGORITHM FOR OBST (Contd..) for i←0 to n-1 do (W(i,i), R(i,i), C(i,i)) ← (Q(i), 0, 0) (W(i,i+1), R(i,i+1), C(i,i+1)) ← (Q(i)+Q(i+1)+P(i+1), i+1, Q(i)+Q(i+1)+P(i+1)) // optimal trees with one node // repeat (W(n,n), R(n,n), C(n,n)) ← (Q(n), 0, 0) for m←2 to n do // find optimal trees with m nodes // for i←0 to n-m do j←i+m 45
• 46. ALGORITHM FOR OBST (Contd..) W(i,j)←W(i,j-1)+P(j)+Q(j) k← a value l in the range R(i,j-1)≤l≤R(i+1,j) that minimizes {C(i,l-1)+C(l,j)} C(i,j)←W(i,j)+C(i,k-1)+C(k,j) R(i,j)←k repeat repeat end OBST 46
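• A runnable sketch of the whole procedure in Python, with Knuth's range restriction included (the array handling and all names beyond P, Q, W, C, R are our own choices; on the data of slide 34 it reproduces C(0,4) = 32 and R(0,4) = 2):

    def obst(p, q):
        """p[1..n], q[0..n] (p[0] is padding); returns cost table C and root table R."""
        n = len(p) - 1
        W = [[0] * (n + 1) for _ in range(n + 1)]
        C = [[0] * (n + 1) for _ in range(n + 1)]
        R = [[0] * (n + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            W[i][i] = q[i]                   # W(i,i) = Q(i); C(i,i) = R(i,i) = 0
        for m in range(1, n + 1):            # optimal trees with m nodes
            for i in range(n - m + 1):
                j = i + m
                W[i][j] = W[i][j - 1] + p[j] + q[j]   # recurrence (7)
                lo = max(R[i][j - 1], i + 1)          # Knuth's range:
                hi = R[i + 1][j] or j                 # R(i,j-1) <= k <= R(i+1,j)
                k = min(range(lo, hi + 1),
                        key=lambda l: C[i][l - 1] + C[l][j])
                C[i][j] = W[i][j] + C[i][k - 1] + C[k][j]
                R[i][j] = k
        return C, R

    p = [0, 3, 3, 1, 1]      # P(1:4), scaled by 16 as in the example
    q = [2, 3, 1, 1, 1]      # Q(0:4)
    C, R = obst(p, q)
    print(C[0][4], R[0][4])  # 32 2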
• 47. 0/1 KNAPSACK PROBLEM • The solution of the 0/1 knapsack problem is a sequence of decisions on x1,…,xn and the total profit; each xi may be 0 or 1. • Let fj(X) be the value of an optimal solution to KNAP(1, j, X). • Since the principle of optimality holds, fn(M) = max {fn-1(M) (corresponding to xn=0), fn-1(M-wn)+pn (corresponding to xn=1)} 47
• 48. 0/1 KNAPSACK PROBLEM (Contd..) • In general fi(X) = max {fi-1(X), fi-1(X-wi)+pi} ….(1) • (1) may be solved by beginning with f0(X) = 0 for all X (the no-objects case) and fi(X) = -∞ for X<0 (the infeasible case); f1,…,fn may then be computed successively. EXAMPLE: • Let n=3, (w1,w2,w3) = (2,3,4), (p1,p2,p3) = (1,2,5), M = 6 48
• 49. 0/1 KNAPSACK PROBLEM (Contd..) SOLUTION: f0(X) = 0 for all X; fi(X) = -∞ if X<0 f0(6) = 0 f1(6) = max{f0(6), f0(6-2)+1} = max{0,1} = 1 f2(6) = max{f1(6), f1(6-3)+2} = max{1, f1(3)+2} f1(3) = max{f0(3), f0(3-2)+1} = max{0,1} = 1 f2(6) = max{1, 3} = 3 f3(6) = max{f2(6), f2(6-4)+5} = max{3, f2(2)+5} f2(2) = max{f1(2), f1(2-3)+2} = max{1, f1(-1)+2} = 1 (as f1(-1) = -∞) f3(6) = max{3, f2(2)+5} = max{3, 6} = 6 49
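• The same computation as a memoized Python sketch (function and variable names are our own):

    from functools import lru_cache

    w = [2, 3, 4]            # w1..w3
    p = [1, 2, 5]            # p1..p3

    @lru_cache(maxsize=None)
    def f(i, x):
        """Value of an optimal solution to KNAP(1, i, x)."""
        if x < 0:
            return float("-inf")      # infeasible: fi(X) = -inf for X < 0
        if i == 0:
            return 0                  # no objects considered: f0(X) = 0
        return max(f(i - 1, x),                        # xi = 0
                   f(i - 1, x - w[i - 1]) + p[i - 1])  # xi = 1

    print(f(3, 6))   # 6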
• 50. 0/1 KNAPSACK PROBLEM (Contd..) • Similarly we can compute f1(1) = 0, f2(1) = 0, f1(2) = 1 and f2(3) = 2. • Each fi is completely specified by the pairs (Pj,Wj), where Wj is a value of X at which fi takes a jump and Pj = fi(Wj); the pair (P0,W0) = (0,0) is always included. • As we always choose the maximum profit, if Wj<Wj+1, 0≤j<r, then Pj<Pj+1. 50
• 51. 0/1 KNAPSACK PROBLEM (Contd..) • Also fi(X) = fi(Wj) for all X such that Wj≤X<Wj+1, 0≤j<r (there is no increase in profit until the next jump point), and fi(X) = fi(Wr) for all X ≥ Wr (Wr being the largest jump point). • (fj(X) is the value of an optimal solution to KNAP(1,j,X).) • (For values of X such as 2.5 or 3.5 no pair is stored; we simply use the profit of the previous jump point.) 51
• 52. 0/1 KNAPSACK PROBLEM (Contd..) • Let us define the sets Si-1, Si1 and Si as follows: • Si-1 is the set of all pairs (P,W) for fi-1, including (0,0). • Si1 = {(P,W) | (P-pi, W-wi) ∈ Si-1} (this is the set of pairs for fi(X) = fi-1(X-wi)+pi). • Si is obtained by merging Si-1 and Si1 (this corresponds to taking the maximum of fi-1(X) and fi-1(X-wi)+pi). 52
• 53. 0/1 KNAPSACK PROBLEM (Contd..) • If one of Si-1 and Si1 has a pair (pj,wj) and the other has a pair (pk,wk) with pj≤pk and wj≥wk, then the pair (pj,wj) is discarded. • This is known as the dominance rule or purging rule: (pk,wk) dominates (pj,wj). • For example, S0 contains the tuple for x1 = 0, with accumulated profit and weight both zero: S0 = {(0,0)} • S11 contains the tuple for x1 = 1, i.e. (p1,w1) = (1,2): S11 = {(1,2)} • S1 contains the tuples for both x1 = 0 and x1 = 1: S1 = {(0,0), (1,2)} 53
• 54. 0/1 knapsack problem example n = 3, (w1,w2,w3) = (2,3,4), (p1,p2,p3) = (1,2,5) and M = 6. • S0 = {(0,0)}; S11 is obtained by adding (p1,w1) = (1,2) to (0,0), so S11 = {(1,2)}; S1 is obtained by merging S0 and S11, hence S1 = {(0,0), (1,2)}; S21 = {(2,3), (3,5)}, obtained by adding (2,3) to the elements of S1 • S2 = {(0,0), (1,2), (2,3), (3,5)}; S31 = {(5,4), (6,6), (7,7), (8,9)} • S3 = {(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)} • The pair (3,5) is eliminated from S3 because it is dominated by (5,4): 3<5 while 5>4, so the purging rule discards (3,5). 54
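• A sketch of this set construction with the purging rule in Python (names are our own; the W ≤ M restriction used by the algorithm on the next slides is applied while generating Si1, so the pairs (7,7) and (8,9) are never produced here):

    def merge_purge(pairs):
        """Keep only undominated (profit, weight) pairs, ordered by weight."""
        pairs = sorted(pairs, key=lambda pw: (pw[1], -pw[0]))
        kept = []
        for pr, wt in pairs:
            if not kept or pr > kept[-1][0]:   # strictly more profit than before
                kept.append((pr, wt))
        return kept

    def knapsack_sets(p, w, m):
        s = [[(0, 0)]]                                     # S0
        for pi, wi in zip(p, w):
            shifted = [(pr + pi, wt + wi)                  # Si1
                       for pr, wt in s[-1] if wt + wi <= m]
            s.append(merge_purge(s[-1] + shifted))         # Si
        return s

    S = knapsack_sets([1, 2, 5], [2, 3, 4], 6)
    print(S[3])   # [(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)]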
• 55. 0/1 knapsack problem example (contd..) • The computation of the xi’s is as follows: • Consider the last pair in Sn; let it be (P,W). • We need to determine x1,x2,…,xn such that ∑pixi = P and ∑wixi = W, 1≤i≤n. • Check whether (P,W) is also in Sn-1: if (P,W) ∈ Sn-1 then set xn = 0; otherwise xn = 1, and (P,W) came from (P-pn, W-wn). • Next check whether (P-pn, W-wn) belongs to Sn-2 or not; if it does, xn-1 = 0, otherwise xn-1 = 1. • The process is repeated. 55
• 56. 0/1 knapsack problem example (contd..) • Let us apply this procedure to the example with M = 6. • The last tuple in S3 is (6,6); (6,6) ∉ S2, so x3 = 1. • (6,6) came from (6-p3, 6-w3) = (1,2); (1,2) ∈ S2 and (1,2) is also in S1, so set x2 = 0. • (1,2) ∉ S0, so x1 = 1. • Hence an optimal solution is (x1,x2,x3) = (1,0,1). 56
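• The trace-back as a Python sketch (the sets are written out as literals exactly as computed above; everything else is our own naming):

    S = [
        [(0, 0)],                                   # S0
        [(0, 0), (1, 2)],                           # S1
        [(0, 0), (1, 2), (2, 3), (3, 5)],           # S2
        [(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)],   # S3, after purging
    ]
    p, w = [1, 2, 5], [2, 3, 4]

    pr, wt = S[3][-1]            # last pair of S3: optimal (profit, weight) = (6, 6)
    x = [0] * 3
    for i in range(3, 0, -1):
        if (pr, wt) in S[i - 1]:
            x[i - 1] = 0         # the pair is achievable without object i
        else:
            x[i - 1] = 1         # it came from (pr - pi, wt - wi)
            pr, wt = pr - p[i - 1], wt - w[i - 1]
    print(x)                     # [1, 0, 1]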
• 57. Informal Knapsack Algorithm Procedure DKP(P, W, n, M) S0 ← {(0,0)} for i←1 to n do Si1 ← {(P′,W′) | (P′-pi, W′-wi) ∈ Si-1 and W′≤M} // Si1 is constructed by adding (pi,wi) to tuples in Si-1 // Si ← MERGE-PURGE(Si-1, Si1) repeat (PX, WX) ← last tuple in Sn-1 57
• 58. Informal Knapsack Algorithm (Contd..) (PY, WY) ← (P′+pn, W′+wn), where W′ is the largest weight W in any tuple in Sn-1 such that W+wn≤M // compute xn: as (PX,WX) is in Sn-1 and (PY,WY) is in Sn, PX>PY means we need not take object n // if PX>PY then xn←0 else xn←1 endif // compute xn-1,…,x1 similarly // end DKP 58
• 59. Detailed knapsack algorithm • In the implementation, P and W are treated as one-dimensional arrays holding the pairs of all the Si end to end. • Pointers F(i), i = 0,1,…,n, are used, with F(i) pointing to the first element of Si, 0≤i<n, and F(n) pointing to one more than the last element of Sn-1. 59
• 60. Detailed knapsack algorithm Procedure DKNAP(p, w, n, M, m) // p(n), w(n) are the given arrays; P(m), W(m) hold the computed pairs // real p(n), w(n), P(m), W(m), pp, ww, M integer F(0:n), l, h, u, i, j, k, next F(0)←1; P(1)←W(1)←0 // S0 // l←h←1 // start and end of S0 // F(1)←next←2 // next is the next free spot in P and W // for i←1 to n do 60
• 61. Detailed knapsack algorithm (Contd..) k←l // k indexes the next tuple in Si-1 that has to be merged into Si // u← largest k such that l≤k≤h and W(k)+wi≤M // pairs for which W(k)+wi>M are not generated; this is what keeps tuples like (7,7) and (8,9) out of S3 in the example // for j←l to u do // generate Si1 and merge // 61
• 62. Detailed knapsack algorithm (Contd..) 8 (pp, ww)←(P(j)+pi, W(j)+wi) // next element in Si1 // 9 while k≤h and W(k)<ww do // copy over the tuples of Si-1 lighter than ww // P(next)←P(k); W(next)←W(k) next←next+1; k←k+1 12 repeat 13 if k≤h and W(k)=ww then pp←max(pp, P(k)) 14 k←k+1 15 endif 62
• 63. Detailed knapsack algorithm (Contd..) 16 if pp>P(next-1) then (P(next),W(next))←(pp, ww) 17 next←next+1 18 endif 19 while k≤h and P(k)≤P(next-1) do // purging rule: skip dominated tuples of Si-1 // 20 k←k+1 21 repeat 22 repeat 63
• 64. Detailed knapsack algorithm (Contd..) // merge the remaining tuples from Si-1 // 23 while k≤h do 24 (P(next), W(next))←(P(k), W(k)) 25 next←next+1; k←k+1 26 repeat // initialize for the next set // 27 l←h+1; h←next-1; F(i+1)←next 28 repeat 29 call PARTS // PARTS carries out the trace-back of DKP, i.e. the computation of the xi’s // 30 end DKNAP 64
• 65. Detailed knapsack algorithm (Contd..) Example: • n=3, M=6, (w1,w2,w3) = (2,3,4), (p1,p2,p3) = (1,2,5) • S0 = {(0,0)}; S11 = {(1,2)}; S1 = {(0,0), (1,2)}; S21 = {(2,3), (3,5)} • S2 = {(0,0), (1,2), (2,3), (3,5)}; S31 = {(5,4), (6,6), (7,7), (8,9)} • S3 = {(0,0), (1,2), (2,3), (5,4), (6,6), (7,7), (8,9)} 65
• 66. Detailed knapsack algorithm (Contd..) • Let us see how S3 is generated using the procedure DKNAP. After S0, S1 and S2 have been built, the arrays P and W hold:

index: 1  2  3  4  5  6  7
P:     0  0  1  0  1  2  3
W:     0  0  2  0  2  3  5
       F(0)=1, F(1)=2, F(2)=4

(positions 8 to 12 will receive S3) l←4, h←7, k←l = 4; F(2+1) = F(3)←next = 8 (line 27) 66
• 67. Detailed knapsack algorithm (Contd..) • u← largest k such that 4≤k≤7 and W(k)+w3≤6 • Such k is 5, because w3 = 4 and W(5)+w3 = 2+4 = 6, while W(6)+w3 = 3+4 = 7 > 6 and W(7)+w3 = 5+4 = 9 > 6. • Now S31 is generated and merged with S2 in lines 7-22. • The starting addresses of the sets Si are stored in the array F: F(0) = 1 means F(0) points to the first pair, i.e. the address of the first pair is 1. 67
• 68. Detailed knapsack algorithm (Contd..) for j←4 to 5 do (line 7) With j = 4: (pp,ww)←(P(4)+p3, W(4)+w3) = (0+5, 0+4) = (5,4), since (P(4),W(4)) = (0,0) and (p3,w3) = (5,4). In lines 9-12, with k = 4,5,6 the pairs (P(8),W(8)) = (0,0), (P(9),W(9)) = (1,2) and (P(10),W(10)) = (2,3) are generated; the while loop is exited with k = 7, as W(7) = 5 > ww = 4, and next←11. 68
• 69. Detailed knapsack algorithm (Contd..) • Line 13 is not true for any k, as W(k) ≠ ww = 4. • In lines 16-18, (pp,ww) = (5,4) is included in S3, since pp = 5 > P(next-1) = P(10) = 2, the largest profit recorded so far: (P(11),W(11)) = (5,4) and next←12. • In lines 19-21, P(7) = 3 ≤ P(11) = 5, so (3,5) is not included (the W(k)≥ww but P(k)≤pp case); hence k is incremented to 8. 69
• 70. Detailed knapsack algorithm (Contd..) • Similarly, with j←5, (pp,ww)←(P(5)+p3, W(5)+w3) = (6,6), • and in lines 16-18 (P(12),W(12)) = (6,6) is included in S3. • Lines 23-25 handle the remaining tuples of Si-1 (the W(k)>ww and P(k)>pp case); in this example they are not entered, as k is already 8 (past h = 7): k was advanced past position 7 when j was 4, because P(7)≤pp. 70
• 71. Detailed knapsack algorithm (Contd..) • Thus S3 is {(0,0), (1,2), (2,3), (5,4), (6,6)}, • which is {(P(8),W(8)), (P(9),W(9)), (P(10),W(10)), (P(11),W(11)), (P(12),W(12))}. • F(3) = 8 points to the start of this set, i.e. to (P(8),W(8)). 71
• 72. THE TRAVELING SALESPERSON PROBLEM • The 0/1 knapsack problem is a subset selection problem: given n objects there are 2^n different subsets of objects, and the solution is one of these 2^n subsets. • In permutation problems like the Traveling Salesperson Problem (TSP) there are n! different permutations of n objects (n! > 2^n). 72
• 73. THE TRAVELING SALESPERSON PROBLEM (Contd..) The TSP problem is as follows: • Let G = (V,E) be a directed graph with edge costs Cij, where Cij>0 for all i and j and Cij = +∞ if <i,j> ∉ E. • Let |V| = n and assume n>1. • A tour of G is a directed cycle that includes every vertex in V. • The cost of a tour is the sum of the costs of the edges on the tour. • The TSP problem is to find a tour of minimum cost. 73
• 74. THE TRAVELING SALESPERSON PROBLEM (Contd..) Applications of TSP: 1. A postal van is to pick up mail from mail boxes located at n different sites. The route taken by the postal van is a tour, and the problem is to find a tour of minimum length. 2. A minimum cost tour of a robot arm used to tighten the nuts on some piece of machinery on an assembly line will minimize the time needed for the arm to complete its task. 74
• 75. TSP AND THE PRINCIPLE OF OPTIMALITY • Assume a tour to be a simple path that starts and ends at vertex 1. • Every tour consists of an edge <1,k> for some k ∈ V-{1} and a path from vertex k to vertex 1. • The path from vertex k to vertex 1 goes through each vertex in V-{1,k} exactly once. • If the tour is optimal, then the path from k to 1 must be a shortest k-to-1 path going through all vertices in V-{1,k}. • Hence the principle of optimality holds. 75
• 76. TSP AND THE PRINCIPLE OF OPTIMALITY (Contd..) • Let g(i,S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. • S is the set of vertices through which the path passes. • g(1, V-{1}) is the length of an optimal salesperson tour. • From the principle of optimality it follows that g(1, V-{1}) = min {C1k + g(k, V-{1,k})} ……(1) 2≤k≤n (forward approach) 76
• 77. TSP AND THE PRINCIPLE OF OPTIMALITY (Contd..) • In general, for i ∉ S, g(i,S) = min {Cij + g(j, S-{j})} ………..(2) j ∈ S • (1) may be solved for g(1, V-{1}) if we know g(k, V-{1,k}) for all choices of k; these values are obtained from (2). • g(i,Ø) = Ci1, 1≤i≤n: g(i,Ø) is the length of the shortest path starting at i, going through no vertex, and ending at 1. 77
• 78. Solution of TSP using Dynamic programming • Using equation (2), we obtain g(i,S) for all S of size 1, then for all S of size 2, and so on. • For all |S|<n-1, g(i,S) is computed with i≠1, 1 ∉ S, i ∉ S. Example: (a) a directed graph on vertices 1-4, with (b) its edge length matrix:

      1   2   3   4
1:    0  10  15  20
2:    5   0   9  10
3:    6  13   0  12
4:    8   8   9   0
78
• 79. Solution of TSP using Dynamic programming contd.. • Let us solve the problem using the dynamic programming approach. As g(i,Ø) = Ci1, 1≤i≤n: g(2,Ø) = C21 = 5; g(3,Ø) = C31 = 6; g(4,Ø) = C41 = 8 • We know that g(i,S) = min {Cij + g(j, S-{j})}, j ∈ S • For |S| = 1: g(2,{3}) = C23+g(3,Ø) = 9+6 = 15 g(2,{4}) = C24+g(4,Ø) = 10+8 = 18 g(3,{2}) = C32+g(2,Ø) = 13+5 = 18 g(3,{4}) = C34+g(4,Ø) = 12+8 = 20 g(4,{2}) = C42+g(2,Ø) = 8+5 = 13 g(4,{3}) = C43+g(3,Ø) = 9+6 = 15 79
• 80. Solution of TSP using Dynamic programming contd.. • Next we compute g(i,S) with |S| = 2, i≠1, 1 ∉ S, i ∉ S: g(2,{3,4}) = min {C23+g(3,{4}), C24+g(4,{3})} = min {9+20, 10+15} = min {29, 25} = 25 g(3,{2,4}) = min {C32+g(2,{4}), C34+g(4,{2})} = min {13+18, 12+13} = min {31, 25} = 25 g(4,{2,3}) = min {C42+g(2,{3}), C43+g(3,{2})} = min {8+15, 9+18} = min {23, 27} = 23 80
• 81. Solution of TSP using Dynamic programming contd.. Finally we compute g(i,S) with |S| = 3: g(1,{2,3,4}) = min {C12+g(2,{3,4}), C13+g(3,{2,4}), C14+g(4,{2,3})} = min {10+25, 15+25, 20+23} = min {35, 40, 43} = 35 • Thus an optimal tour of the graph has length 35. 81
• 82. Solution of TSP using Dynamic programming contd.. • The tour of this length may be constructed if we retain, for each g(i,S) with |S| = 3, 2 and 1, the value of j that minimizes it. • Let J(i,S) be this minimizing value of j. • J(1,{2,3,4}) = 2, as C12+g(2,{3,4}) = 35 is the minimum. 82
• 83. Solution of TSP using Dynamic programming contd.. • The rest of the tour may be obtained from g(2,{3,4}): observe that J(2,{3,4}) = 4 and J(4,{3}) = 3. • The optimal tour starts at 1, goes through the vertices 2, 4, 3 in that order, and ends at 1, i.e. 1,2,4,3,1 is a minimum-cost tour. 83
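• The whole computation, including recovery of the tour via the minimizing j (the J(i,S) of slide 82), as a memoized Python sketch (the frozenset state, the 0-based vertex numbering and all names are our own):

    from functools import lru_cache

    c = [[0, 10, 15, 20],
         [5,  0,  9, 10],
         [6, 13,  0, 12],
         [8,  8,  9,  0]]        # edge matrix; vertex v stands for vertex v+1
    n = 4

    @lru_cache(maxsize=None)
    def g(i, s):
        """Length of a shortest path from i through all vertices of s to vertex 0."""
        if not s:
            return c[i][0]                                # g(i, empty) = Ci1
        return min(c[i][j] + g(j, s - {j}) for j in s)

    def best_j(i, s):
        """The J(i, S) of the slides: the j attaining the minimum in g."""
        return min(s, key=lambda j: c[i][j] + g(j, s - {j}))

    full = frozenset(range(1, n))
    print(min(c[0][k] + g(k, full - {k}) for k in full))  # 35

    tour, s = [0], full
    while s:
        j = best_j(tour[-1], s)
        tour.append(j)
        s = s - {j}
    print([v + 1 for v in tour] + [1])                    # [1, 2, 4, 3, 1]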
• 84. Complexity of TSP Complexity of TSP = Θ(n²2^n); space needed = Θ(n2^n). The complexity involves the computation of g(1, V-{1}), i.e. of all the g(i,S). Let N be the number of g(i,S) to be computed. For each value of |S| there are n-1 choices of i, and the number of distinct sets S of size k not including 1 and i is (n-2)Ck. Hence N = ∑ (n-1)·(n-2)Ck, k = 0,1,…,n-2 84
• 85. Complexity of TSP contd.. N = (n-1) ∑ (n-2)Ck, k = 0,1,…,n-2 = (n-1)·2^(n-2) (applying the binomial theorem below with a = 1 and x = 1) The binomial theorem is as follows: (a+x)^n = nC0·a^n + nC1·a^(n-1)·x + nC2·a^(n-2)·x² + … + nCn·x^n As each g(i,S) involves a minimization over at most n terms, each is computed in O(n) time, so the total time is O(n·(n-1)·2^(n-2)) = Θ(n²2^n). As the g(i,S) are to be stored for further computations, the storage required is Θ(n2^n). 85
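• A quick brute-force check of this count for small n (Python; plain enumeration, nothing assumed beyond the definition of N):

    from itertools import combinations

    def count_states(n):
        """Number of g(i, S) values with i != 1, 1 not in S, i not in S."""
        total = 0
        for i in range(2, n + 1):
            others = [v for v in range(2, n + 1) if v != i]
            for k in range(len(others) + 1):
                total += sum(1 for _ in combinations(others, k))
        return total

    for n in (3, 4, 5):
        print(n, count_states(n), (n - 1) * 2 ** (n - 2))   # the two counts agree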