2. Dynamic Programming (DP)

Dynamic programming is a general algorithm design technique for solving problems defined by recurrences with overlapping subproblems.
• Invented by the American mathematician Richard Bellman in the 1950s to solve optimization problems; later adopted by computer science.
• "Programming" here means "planning".
• Main idea:
  - Set up a recurrence relating a solution of a larger instance to solutions of smaller instances.
  - Solve each smaller instance once.
  - Record the solutions in a table.
  - Extract the solution to the initial instance from that table.
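As a minimal sketch of these four steps (Fibonacci numbers, an example not taken from the slides):

```python
def fib(n):
    """Compute F(n) bottom-up, illustrating the four DP steps."""
    # 1. Recurrence: F(n) = F(n-1) + F(n-2), with F(0) = 0, F(1) = 1.
    table = [0, 1] + [0] * (n - 1)
    # 2./3. Solve each smaller instance once and record it in the table.
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    # 4. Extract the solution to the initial instance from the table.
    return table[n]

print(fib(10))  # 55
```

The naive recursive version recomputes the same subproblems exponentially often; the table guarantees each is solved exactly once.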
3. Two key ingredients for an optimization problem to be suitable for a dynamic-programming solution:
  1. Optimal substructure: each substructure is optimal (principle of optimality).
  2. Overlapping subproblems: subproblems are dependent (otherwise, a divide-and-conquer approach is the better choice).
4. The development of a dynamic-programming algorithm has three basic components:
  – The recurrence relation (for defining the value of an optimal solution);
  – The tabular computation (for computing the value of an optimal solution);
  – The traceback (for delivering an optimal solution).
15. Problem: example for LCS (longest common subsequence)

X = {A, B, C, B, D, A, B}
Y = {B, D, C, A, B, A}
16. SOLUTION

The recurrence for the table of LCS lengths:

          | 0                          if i = 0 or j = 0
c[i, j] = | c[i-1, j-1] + 1            if xi = yj
          | max(c[i, j-1], c[i-1, j])  if xi ≠ yj

The arrow table b records which case produced each value:

  If xi = yj:                  b[i, j] = "↖"
  Else if c[i-1, j] ≥ c[i, j-1]: b[i, j] = "↑"
  Else:                        b[i, j] = "←"

Filled table for X = {A, B, C, B, D, A, B}, Y = {B, D, C, A, B, A}:

      j   0   1   2   3   4   5   6
  i  yj       B   D   C   A   B   A
  0       0   0   0   0   0   0   0
  1  A    0   0   0   0   1   1   1
  2  B    0   1   1   1   1   2   2
  3  C    0   1   1   2   2   2   2
  4  B    0   1   1   2   2   3   3
  5  D    0   1   2   2   2   3   3
  6  A    0   1   2   2   3   3   4
  7  B    0   1   2   2   3   4   4
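The tabular computation above can be sketched directly from the recurrence (a minimal version that fills only the c table):

```python
def lcs_length(X, Y):
    """Fill the DP table: c[i][j] = length of an LCS of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    # Row 0 and column 0 stay 0 (the i = 0 or j = 0 case).
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:              # xi = yj: the "diagonal" case
                c[i][j] = c[i - 1][j - 1] + 1
            else:                                  # xi != yj: take the larger neighbor
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    return c

c = lcs_length("ABCBDAB", "BDCABA")
print(c[7][6])  # 4
```

Each of the (m+1)(n+1) cells is computed once from already-filled neighbors, so the running time is O(mn).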
17. Traceback

• Start at b[m, n] and follow the arrows.
• Whenever we encounter a "↖" in b[i, j], xi = yj is an element of the LCS.

      j   0   1   2   3   4   5   6
  i  yj       B   D   C   A   B   A
  0       0   0   0   0   0   0   0
  1  A    0   0   0   0   1   1   1
  2  B    0   1   1   1   1   2   2
  3  C    0   1   1   2   2   2   2
  4  B    0   1   1   2   2   3   3
  5  D    0   1   2   2   2   3   3
  6  A    0   1   2   2   3   3   4
  7  B    0   1   2   2   3   4   4

The traceback visits the "↖" cells in the order A, B, C, B; reversing gives the longest common subsequence {B, C, B, A}, of length c[7, 6] = 4.
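The traceback can be sketched as follows (a common variation that re-derives each arrow from the c table instead of storing a separate b table):

```python
def lcs(X, Y):
    """Return an LCS of X and Y: fill the table, then trace back from (m, n)."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])

    # Traceback: start at (m, n) and follow the arrows.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:            # "diagonal" arrow: xi = yj is in the LCS
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif c[i - 1][j] >= c[i][j - 1]:    # "up" arrow
            i -= 1
        else:                               # "left" arrow
            j -= 1
    return "".join(reversed(out))           # elements were collected in reverse order

print(lcs("ABCBDAB", "BDCABA"))  # BCBA
```

The traceback itself takes only O(m + n) steps, since each move decreases i, j, or both.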