5. There are two aspects of algorithmic
performance:
Time
Space
6. First, we count the number of basic
operations in a particular solution to assess its
efficiency.
Then, we will express the efficiency of algorithms
using growth functions.
7. We measure an algorithm’s time requirement
as a function of the problem size.
The most important thing to learn is how
quickly the algorithm’s time requirement
grows as a function of the problem size.
An algorithm’s proportional time requirement
is known as growth rate.
We can compare the efficiency of two
algorithms by comparing their growth rates.
8. Each operation in an algorithm (or a program) has a
cost.
Each operation takes a certain amount of time.
count = count + 1; takes a certain amount of time, but that
time is constant
A sequence of operations:
count = count + 1; Cost: c1
sum = sum + count; Cost: c2
Total Cost = c1 + c2
10. Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
The time required for this algorithm is proportional
to n
11. Example: Nested Loop
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 +
n*(n+1)*c5+n*n*c6+n*n*c7+n*c8
The time required for this algorithm is proportional to n²
13. Informal definitions:
◦ Given a complexity function f(n),
◦ O(f(n)) is the set of complexity functions that are
upper bounds on f(n)
◦ Ω(f(n)) is the set of complexity functions that are
lower bounds on f(n)
◦ Θ(f(n)) is the set of complexity functions that,
given the correct constants, correctly describe f(n)
Example: If f(n) = 17n³ + 4n − 12, then
◦ O(f(n)) contains n³, n⁴, n⁵, 2ⁿ, etc.
◦ Ω(f(n)) contains 1, n, n², n³, log n, n log n, etc.
◦ Θ(f(n)) contains n³
15. Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
The time required for this algorithm is proportional
to n
O(n)
16. Example: Nested Loop
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}
Total Cost = c1 + c2 + (n+1)*c3 + n*c4 +
n*(n+1)*c5+n*n*c6+n*n*c7+n*c8
The time required for this algorithm is proportional to n²
O(n²)
17. Function    Growth Rate Name
c               Constant
log N           Logarithmic
log² N          Log-squared
N               Linear
N log N         Linearithmic
N²              Quadratic
N³              Cubic
2^N             Exponential
20. Input:
◦ A sequence of n numbers a₁, a₂, . . . , aₙ
Output:
◦ A permutation (reordering) a₁′, a₂′, . . . , aₙ′ of the
input sequence such that a₁′ ≤ a₂′ ≤ · · · ≤ aₙ′
21. In-Place Sort
◦ The amount of extra space required to sort the data
is constant, regardless of the input size.
22. Sorted on first key:
Sort file on second key:
Records with key value
3 are not in order on
first key!!
Stable sort
◦ preserves relative order of records with equal keys
23. Idea: like sorting a hand of playing cards
◦ Start with an empty left hand and the cards facing
down on the table.
◦ Remove one card at a time from the table, and
insert it into the correct position in the left hand
◦ The cards held in the left hand are sorted
24. To insert 12, we need to
make room for it by moving
first 36 and then 24.
28. insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int key = a[i];            // the next "card" to insert
        int pos = i;
        while (pos > 0 && a[pos - 1] > key) {
            a[pos] = a[pos - 1];   // shift larger elements right
            pos--;
        }
        a[pos] = key;              // drop the key into its sorted position
    }
}
29. O(n²), stable, in-place
O(1) space
Great for a small number of elements
30. Algorithm (selection sort):
◦ Find the minimum value
◦ Swap it with the value in the 1st position
◦ Repeat from the 2nd position onward
O(n²), in-place; note that the usual swap-based version is not stable
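The steps above can be sketched in Java (a minimal version; the class and method names are illustrative):

```java
import java.util.Arrays;

// Selection sort: repeatedly find the minimum of the unsorted
// suffix and swap it into the front position.
public class SelectionSort {
    public static void sort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[min]) min = j;          // find minimum of a[i..]
            }
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp; // swap into position i
        }
    }
    public static void main(String[] args) {
        int[] a = {30, 10, 20};
        sort(a);
        System.out.println(Arrays.toString(a)); // [10, 20, 30]
    }
}
```

The swap is what breaks stability: swapping the minimum forward can jump an equal-keyed element past its twin.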
31. Algorithm
◦ Traverse the collection
◦ “Bubble” the largest value to the end using pairwise
comparisons and swapping
O(n²), stable, in-place
Totally useless?
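Not totally: with an early exit when a pass makes no swap, bubble sort finishes in O(n) on already-sorted input. A minimal sketch (names are illustrative):

```java
import java.util.Arrays;

// Bubble sort: each pass "bubbles" the largest remaining value to
// the end via pairwise swaps; stop early when a pass makes no swap.
public class BubbleSort {
    public static void sort(int[] a) {
        for (int end = a.length - 1; end > 0; end--) {
            boolean swapped = false;
            for (int j = 0; j < end; j++) {
                if (a[j] > a[j + 1]) {               // out of order: swap the pair
                    int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
                    swapped = true;
                }
            }
            if (!swapped) break;                     // already sorted: done
        }
    }
    public static void main(String[] args) {
        int[] a = {3, 2, 1};
        sort(a);
        System.out.println(Arrays.toString(a)); // [1, 2, 3]
    }
}
```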
32. 1. Divide: split the array in two
halves
2. Conquer: Sort recursively both
subarrays
3. Combine: merge the two sorted
subarrays into a sorted array
34. The key to Merge Sort is merging two sorted
lists into one: if you have two sorted lists
X = (x₁ x₂ … xₘ) and Y = (y₁ y₂ … yₙ), the
resulting list is Z = (z₁ z₂ … zₘ₊ₙ)
Example:
L1 = { 3 8 9 }  L2 = { 1 5 7 }
merge(L1, L2) = { 1 3 5 7 8 9 }
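The merge step above can be sketched as follows (a minimal version; method names are illustrative):

```java
import java.util.Arrays;

// Merge two sorted arrays: repeatedly take the smaller front
// element of X and Y, then copy whichever array has leftovers.
public class Merge {
    public static int[] merge(int[] x, int[] y) {
        int[] z = new int[x.length + y.length];
        int i = 0, j = 0, k = 0;
        while (i < x.length && j < y.length) {
            z[k++] = (x[i] <= y[j]) ? x[i++] : y[j++]; // <= keeps the merge stable
        }
        while (i < x.length) z[k++] = x[i++];          // leftover of X
        while (j < y.length) z[k++] = y[j++];          // leftover of Y
        return z;
    }
    public static void main(String[] args) {
        int[] z = merge(new int[]{3, 8, 9}, new int[]{1, 5, 7});
        System.out.println(Arrays.toString(z)); // [1, 3, 5, 7, 8, 9]
    }
}
```

Preferring X on ties (`<=`) is what makes merge sort stable.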
54. Merge Sort runs in O(N log N) in all cases, because of
its divide-and-conquer approach:
T(N) = 2T(N/2) + N = O(N log N)
55. 1. Select: pick an element x
2. Divide: rearrange elements so
that x goes to its final position
• L elements less than x
• G elements greater than or equal
to x
3. Conquer: sort recursively L and G
58. Use the first element as pivot
◦ if the input is random, this works fine
◦ if the input is presorted, the partitions are maximally
unbalanced; shuffle in advance
Choose the pivot randomly
◦ generally safe
◦ random number generation can be expensive
59. Use the median of the array
◦ Partitioning always cuts the array in half
◦ An optimal quicksort (O(n log n))
◦ but it is hard to find the exact median (chicken-and-egg?)
◦ so use an approximation to the exact median:
Median of three
◦ Compare just three elements: the leftmost, the
rightmost, and the center
◦ Use the median of the three as pivot
60. Given a pivot, partition the elements of the
array such that the resulting array consists of:
◦ One subarray that contains elements < pivot
◦ One subarray that contains elements >= pivot
The subarrays are stored in the original array
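This partition scheme, matching the too_big_index / too_small_index walkthrough on the following slides, can be sketched in Java (a minimal version; names are illustrative):

```java
import java.util.Arrays;

// Partition a[lo..hi] around the pivot a[lo]: scan inward from both
// ends, swap out-of-place pairs, then drop the pivot between the
// two halves. Returns the pivot's final index.
public class Partition {
    public static int partition(int[] a, int lo, int hi) {
        int pivot = a[lo];
        int big = lo + 1, small = hi;
        while (true) {
            while (big <= hi && a[big] <= pivot) big++;      // find element > pivot
            while (a[small] > pivot) small--;                // find element <= pivot
            if (big >= small) break;                         // indices crossed
            int t = a[big]; a[big] = a[small]; a[small] = t; // swap the pair
        }
        a[lo] = a[small]; a[small] = pivot;                  // pivot to its final spot
        return small;
    }
    public static void main(String[] args) {
        int[] a = {40, 20, 10, 30, 60, 50, 7, 80, 100};      // the slides' example
        int p = partition(a, 0, a.length - 1);
        System.out.println(p + " " + Arrays.toString(a));
        // 4 [7, 20, 10, 30, 40, 50, 60, 80, 100]
    }
}
```

Quicksort then recurses on `a[lo..p-1]` and `a[p+1..hi]`.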
69. Array: 40 20 10 30 60 50 7 80 100    pivot_index = 0
    Index: [0] [1] [2] [3] [4] [5] [6] [7] [8]
(too_big_index scans right from the left end; too_small_index scans left from the right end)
1. while a[too_big_index] <= a[pivot_index]: ++too_big_index
2. while a[too_small_index] > a[pivot_index]: --too_small_index
3. if too_big_index < too_small_index: swap a[too_big_index] and a[too_small_index]
4. while too_small_index > too_big_index, go to 1.
75. After swapping 60 and 7:
    Array: 40 20 10 30 7 50 60 80 100    pivot_index = 0
    Index: [0] [1] [2] [3] [4] [5] [6] [7] [8]
1. while a[too_big_index] <= a[pivot_index]: ++too_big_index
2. while a[too_small_index] > a[pivot_index]: --too_small_index
3. if too_big_index < too_small_index: swap a[too_big_index] and a[too_small_index]
4. while too_small_index > too_big_index, go to 1.
84. When the indices cross, place the pivot:
1. while a[too_big_index] <= a[pivot_index]: ++too_big_index
2. while a[too_small_index] > a[pivot_index]: --too_small_index
3. if too_big_index < too_small_index: swap a[too_big_index] and a[too_small_index]
4. while too_small_index > too_big_index, go to 1.
5. swap a[too_small_index] and a[pivot_index]
    Array: 40 20 10 30 7 50 60 80 100    pivot_index = 0
    Index: [0] [1] [2] [3] [4] [5] [6] [7] [8]
85. Result:
    Array: 7 20 10 30 40 50 60 80 100    pivot_index = 4
    Index: [0] [1] [2] [3] [4] [5] [6] [7] [8]
The pivot (40) is now in its final position: every element to its left
is smaller, and every element to its right is greater or equal.
86. Running time
◦ pivot selection: constant time, i.e. O(1)
◦ partitioning: linear time, i.e. O(N)
◦ running time of the two recursive calls
T(N)=T(i)+T(N-i-1)+cN where c is a
constant
◦ i: number of elements in L
87. What will be the worst case?
◦ The pivot is the smallest element, all the time
◦ Partition is always unbalanced: T(N) = T(N−1) + cN = O(N²)
88. What will be the best case?
◦ Partition is perfectly balanced: T(N) = 2T(N/2) + cN = O(N log N)
◦ Pivot is always in the middle (median of the array)
89. Java API provides a class Arrays with several
overloaded sort methods for different array
types
Class Collections provides similar sorting
methods
90. Arrays methods:
public static void sort(int[] a)
public static void sort(Object[] a)
// elements must implement Comparable
public static <T> void sort(T[] a, Comparator<? super T> comp)
// uses the given Comparator
91. Collections methods:
public static <T extends Comparable<? super T>> void sort(List<T> list)
public static <T> void sort(List<T> list, Comparator<? super T> comp)
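A short usage sketch of the library methods above (the example data is illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Sorting with the Arrays and Collections methods above.
public class SortDemo {
    public static void main(String[] args) {
        int[] nums = {5, 1, 4};
        Arrays.sort(nums);                            // natural order
        System.out.println(Arrays.toString(nums));    // [1, 4, 5]

        String[] words = {"pear", "fig", "apple"};
        Arrays.sort(words, Comparator.comparingInt(String::length)); // by length
        System.out.println(Arrays.toString(words));   // [fig, pear, apple]

        List<String> list = new ArrayList<>(List.of("b", "a", "c"));
        Collections.sort(list);                       // Comparable order
        System.out.println(list);                     // [a, b, c]
    }
}
```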
92.
93. Given the collection and an element to
find…
Determine whether the “target”
element was found in the collection
◦ Print a message
◦ Return a value
(an index or pointer, etc.)
Don’t modify the collection in the
search!
94. A search traverses the collection until
◦ the desired element is found
◦ or the collection is exhausted
95. linearSearch(int[] a, int key) {
    for (int i = 0; i < a.length; i++) {
        if (a[i] == key) return i;   // found: return its index
    }
    return -1;                       // collection exhausted: not found
}
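When the array is already sorted, the same search can be done in O(log n) by halving the range each step (this is what Arrays.binarySearch does; the sketch below is a hand-rolled version with illustrative names):

```java
// Binary search on a sorted array: compare the key with the middle
// element and discard the half that cannot contain it.
public class BinarySearch {
    public static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;       // unsigned shift avoids overflow
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1;  // discard the left half
            else hi = mid - 1;               // discard the right half
        }
        return -1;                           // not found
    }
    public static void main(String[] args) {
        int[] a = {1, 3, 5, 7, 9};
        System.out.println(binarySearch(a, 7));  // 3
        System.out.println(binarySearch(a, 4));  // -1
    }
}
```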
127. Set
◦ The familiar set abstraction.
◦ No duplicates; May or may not be ordered.
List
◦ Ordered collection, also known as a sequence.
◦ Duplicates permitted; Allows positional access
Map
◦ A mapping from keys to values.
◦ Each key can map to at most one value (function).
128. Set: HashSet, LinkedHashSet, TreeSet
List: ArrayList, LinkedList, Vector
Map: HashMap, LinkedHashMap, Hashtable, TreeMap
129. Ordered
◦ Elements are stored and accessed in a specific
order
Sorted
◦ Elements are stored and accessed in a sorted
order
Indexed
◦ Elements can be accessed using an index
Unique
◦ Collection does not allow duplicates
130. A linked list is a series of connected nodes
Each node contains at least
◦ A piece of data (any type)
◦ Pointer to the next node in the list
Head: pointer to the first node
The last node points to NULL
(diagram: Head → [A|•] → [B|•] → [C|null]; each node holds its data and a pointer to the next node)
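The node structure above can be sketched in Java (a minimal singly linked list; class and method names are illustrative):

```java
// A node holds a piece of data and a reference to the next node;
// head refers to the first node, and the last node's next is null.
public class LinkedListDemo {
    static class Node {
        String data;
        Node next;
        Node(String data, Node next) { this.data = data; this.next = next; }
    }

    Node head;  // null when the list is empty

    // O(1): the new node simply becomes the first node
    public void insertAtBeginning(String value) {
        head = new Node(value, head);
    }

    // Walk the chain from head until we hit null
    public String traverse() {
        StringBuilder sb = new StringBuilder();
        for (Node n = head; n != null; n = n.next) sb.append(n.data);
        return sb.toString();
    }

    public static void main(String[] args) {
        LinkedListDemo list = new LinkedListDemo();
        list.insertAtBeginning("C");
        list.insertAtBeginning("B");
        list.insertAtBeginning("A");
        System.out.println(list.traverse()); // ABC
    }
}
```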
135. Linked list operations (assuming a doubly linked list with head
and tail references, like java.util.LinkedList; deleting at the end
of a singly linked list would be O(n)):
Operation                  Complexity
insert at beginning        O(1)
insert at end              O(1)
insert at index            O(n)
delete at beginning        O(1)
delete at end              O(1)
delete at index            O(n)
find element               O(n)
access element by index    O(n)
139. Array list operations (e.g. java.util.ArrayList):
Operation                  Complexity
insert at beginning        O(n)
insert at end              O(1) amortized
insert at index            O(n)
delete at beginning        O(n)
delete at end              O(1)
delete at index            O(n)
find element               O(n)
access element by index    O(1)
140. Some collections are constrained so clients
can only use optimized operations
◦ stack: retrieves elements in reverse order as added
◦ queue: retrieves elements in same order as added
(diagram: a stack with 3 on top and 1 at the bottom, where push, pop, and
peek act on the top; a queue with 1 at the front and 3 at the back, where
add acts on the back and remove and peek act on the front)
141. stack: A collection based on the principle of
adding elements and retrieving them in the
opposite order.
basic stack operations:
◦ push: Add an element to the top.
◦ pop: Remove the top element.
◦ peek: Examine the top element.
(diagram: a stack with 3 on top and 1 at the bottom; push, pop, and peek act on the top)
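The basic operations above, using java.util.ArrayDeque (preferred over the legacy java.util.Stack):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Stack: elements come back out in the opposite order they went in.
public class StackDemo {
    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);                    // bottom
        stack.push(2);
        stack.push(3);                    // top
        System.out.println(stack.peek()); // 3: examine the top
        System.out.println(stack.pop());  // 3: remove the top
        System.out.println(stack.pop());  // 2: opposite order of insertion
    }
}
```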
142. Programming languages and compilers:
◦ method call stack
Matching up related pairs of things:
◦ check correctness of brackets (){}[]
Sophisticated algorithms:
◦ undo stack
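The bracket-matching use above can be sketched as follows (a minimal checker; names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Check bracket correctness: push openers, and on each closer pop
// and verify it matches the most recent unmatched opener.
public class BracketChecker {
    public static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '(': case '{': case '[': stack.push(c); break;
                case ')': if (stack.isEmpty() || stack.pop() != '(') return false; break;
                case '}': if (stack.isEmpty() || stack.pop() != '{') return false; break;
                case ']': if (stack.isEmpty() || stack.pop() != '[') return false; break;
            }
        }
        return stack.isEmpty();  // leftover openers mean unbalanced
    }
    public static void main(String[] args) {
        System.out.println(isBalanced("a[b{c}](d)")); // true
        System.out.println(isBalanced("([)]"));       // false
    }
}
```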
143. queue: Retrieves elements in the order they
were added.
basic queue operations:
◦ add (enqueue): Add an element to the back.
◦ remove (dequeue): Remove the front element.
◦ peek: Examine the front element.
(diagram: a queue with 1 at the front and 3 at the back; add acts on the back, remove and peek act on the front)
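The basic queue operations above, using java.util.ArrayDeque behind the Queue interface:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Queue: elements come back out in the same order they went in.
public class QueueDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>();
        q.add(1);                        // enqueue at the back
        q.add(2);
        q.add(3);
        System.out.println(q.peek());    // 1: examine the front
        System.out.println(q.remove());  // 1: dequeue from the front
        System.out.println(q.remove());  // 2: same order as added
    }
}
```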
144. Operating systems:
◦ queue of print jobs to send to the printer
Programming:
◦ modeling a line of customers or clients
Real world examples:
◦ people on an escalator or waiting in a line
◦ cars at a gas station
145. A data structure optimized for a very
specific kind of search / access
In a map we access by asking "give me the
value associated with this key."
Tuning parameters: capacity, load factor
Example: key "A" -> value 65
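The key-to-value access above, using java.util.HashMap (the capacity and load-factor arguments shown are HashMap's defaults):

```java
import java.util.HashMap;
import java.util.Map;

// A map: "give me the value associated with this key."
public class MapDemo {
    public static void main(String[] args) {
        Map<String, Integer> codes = new HashMap<>(16, 0.75f); // capacity, load factor
        codes.put("A", 65);                 // key "A" maps to value 65
        codes.put("B", 66);
        System.out.println(codes.get("A")); // 65
        System.out.println(codes.get("Z")); // null: no mapping for this key
    }
}
```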
149. What do we do when inserting an element and the slot is
already occupied (a collision)?
150. Could search forward or backward for an open space
Linear probing
◦ move forward 1 spot; occupied? try 2 spots, 3 spots, ...
Quadratic probing
◦ 1 spot, 4 spots, 9 spots, 16 spots (i² on the i-th probe)
Resize when the load factor reaches some limit
151. Chaining: each element of the hash table can be another
data structure
◦ LinkedList
◦ Balanced binary tree
Resize at a given load factor, or when any chain
reaches some limit
152. TreeMap: implements Map
Sorted
Easy access to the smallest and biggest keys
Logarithmic put and get
Keys must be Comparable, or a Comparator must be supplied
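A short sketch of the sorted-map behavior above with java.util.TreeMap (example data is illustrative):

```java
import java.util.TreeMap;

// TreeMap keeps its keys in sorted order (Comparable here),
// with O(log n) put and get.
public class TreeMapDemo {
    public static void main(String[] args) {
        TreeMap<String, Integer> m = new TreeMap<>();
        m.put("banana", 2);
        m.put("apple", 5);
        m.put("cherry", 1);
        System.out.println(m.firstKey()); // apple: smallest key
        System.out.println(m.lastKey());  // cherry: easy access to the biggest
        System.out.println(m.keySet());   // [apple, banana, cherry]
    }
}
```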
153. 0, 1, or 2 children per node
Binary Search Tree
◦ node.left < node.value
◦ node.right >= node.value
154. A priority queue stores a collection of entries
Main methods of the Priority Queue ADT
◦ insert(k, x)
inserts an entry with key k and value x
◦ removeMin()
removes and returns the entry with smallest key
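The ADT above maps onto java.util.PriorityQueue, a min-heap (note that Java's class orders the values themselves rather than separate (key, value) entries; poll() plays the role of removeMin()):

```java
import java.util.PriorityQueue;

// PriorityQueue: poll() always removes and returns the smallest element.
public class PriorityQueueDemo {
    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        pq.add(15);
        pq.add(4);
        pq.add(9);
        System.out.println(pq.poll()); // 4: smallest key comes out first
        System.out.println(pq.poll()); // 9
        System.out.println(pq.poll()); // 15
    }
}
```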
155. A heap can be seen as a complete binary tree:
16
14 10
8 7 9 3
2 4 1
157. In practice, heaps are usually implemented as
arrays:
(tree, level by level: 16 / 14 10 / 8 7 9 3 / 2 4 1)
A = [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
158. To represent a complete binary tree as an
array:
◦ The root node is A[1]
◦ Node i is A[i]
◦ The parent of node i is A[i/2] (note: integer divide)
◦ The left child of node i is A[2i]
◦ The right child of node i is A[2i + 1]
(tree, level by level: 16 / 14 10 / 8 7 9 3 / 2 4 1)
A = [16, 14, 10, 8, 7, 9, 3, 2, 4, 1]
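The 1-based index arithmetic above can be sketched directly (index 0 is left unused so that the root is A[1]; names are illustrative):

```java
// Navigating a complete binary tree stored in an array, 1-based.
public class HeapIndex {
    public static int parent(int i) { return i / 2; }    // integer divide
    public static int left(int i)   { return 2 * i; }
    public static int right(int i)  { return 2 * i + 1; }

    public static void main(String[] args) {
        int[] a = {0, 16, 14, 10, 8, 7, 9, 3, 2, 4, 1};  // a[0] unused
        // Children of node 2 (value 14):
        System.out.println(a[left(2)] + " " + a[right(2)]); // 8 7
        // Parent of node 10 (value 1):
        System.out.println(a[parent(10)]);                  // 7
    }
}
```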
168. java.util.Collections
java.util.Arrays exports similar basic operations for an array.
binarySearch(list, key): Finds key in a sorted list using binary search.
sort(list): Sorts a list into ascending order.
min(list): Returns the smallest value in a list.
max(list): Returns the largest value in a list.
reverse(list): Reverses the order of elements in a list.
shuffle(list): Randomly rearranges the elements in a list.
swap(list, p1, p2): Exchanges the elements at index positions p1 and p2.
replaceAll(list, x1, x2): Replaces all elements matching x1 with x2.