5. What is a graph?
⚫A data structure that consists of a set of nodes (vertices) and a set of edges that relate the nodes to each other
⚫The set of edges describes relationships among the vertices
6. Formal definition of graphs
⚫A graph G is defined as follows:
G = (V, E)
V(G): a finite, nonempty set of vertices
E(G): a set of edges (pairs of vertices)
7. Directed vs. undirected graphs
⚫When the edges in a graph have no direction, the graph is called undirected
8. Directed vs. undirected graphs
⚫When the edges in a graph have a direction, the graph is called directed (or a digraph)
E(Graph2) = {(1,3), (3,1), (5,9), (9,11), (5,7)}
Warning: if the graph is directed, the order of the vertices in each edge is important!
11. Complete Graph
● A complete graph is a graph that has the
maximum number of edges
– for undirected graph with n vertices, the maximum
number of edges is n(n-1)/2
– for directed graph with n vertices, the maximum
number of edges is n(n-1)
– example: G1 is a complete graph
12. Adjacent and Incident
● If (v0, v1) is an edge in an undirected graph,
– v0 and v1 are adjacent
– The edge (v0, v1) is incident on vertices v0 and v1
● If <v0, v1> is an edge in a directed graph
– v0 is adjacent to v1, and v1 is adjacent from v0
– The edge <v0, v1> is incident on v0 and v1
14. Subgraph and Path
● A subgraph of G is a graph G’ such that V(G’)
is a subset of V(G) and E(G’) is a subset of
E(G)
● A path from vertex vp to vertex vq in a graph G is a sequence of vertices vp, vi1, vi2, ..., vin, vq such that (vp, vi1), (vi1, vi2), ..., (vin, vq) are edges in the graph
● The length of a path is the number of edges on it
15. Figure 6.4: subgraphs of G1 and G3 (p.261)
[Figure: (a) some of the subgraphs of G1 (the complete graph on vertices 0-3); (b) some of the subgraphs of the directed graph G3 (vertices 0-2). Each part shows four subgraphs, labeled (i)-(iv).]
16. Simple Path and Cycle
● A simple path is a path in which all vertices,
except possibly the first and the last, are distinct
● A cycle is a simple path in which the first and
the last vertices are the same
● In an undirected graph G, two vertices, v0 and v1,
are connected if there is a path in G from v0 to v1
● An undirected graph is connected if, for every
pair of distinct vertices vi, vj, there is a path
from vi to vj
18. Connected Component
● A connected component of an undirected graph
is a maximal connected subgraph.
● A tree is a graph that is connected and acyclic.
● A directed graph is strongly connected if, for every pair of distinct vertices vi and vj, there is a directed path from vi to vj and also from vj to vi.
● A strongly connected component is a maximal
subgraph that is strongly connected.
19. A graph with two connected components
[Figure: H1, a connected component on vertices 0-3, and H2, a connected component on vertices 4-7. Each is a connected component (maximal connected subgraph).]
21. Degree
● The degree of a vertex is the number of edges
incident to that vertex
● For directed graph,
– the in-degree of a vertex v is the number of edges
that have v as the head
– the out-degree of a vertex v is the number of edges
that have v as the tail
– if di is the degree of vertex i in a graph G with n vertices and e edges, then the number of edges is
e = (d0 + d1 + ... + dn-1) / 2
23. 23
⚫Adjacent nodes: two nodes are adjacent if they are
connected by an edge
⚫Path: a sequence of vertices that connect two
nodes in a graph
⚫Complete graph: a graph in which every vertex is
directly connected to every other vertex
Graph terminology
5 is adjacent to 7
7 is adjacent from 5
24. Graph terminology (cont.)
⚫What is the number of edges in a complete directed graph with N vertices?
N * (N - 1)
25. Graph terminology (cont.)
⚫What is the number of edges in a complete undirected graph with N vertices?
N * (N - 1) / 2
26. Graph terminology (cont.)
⚫Weighted graph: a graph in which each edge carries a value (weight)
28. Adjacency Matrix
● Let G=(V,E) be a graph with n vertices.
● The adjacency matrix of G is a two-dimensional
n by n array, say adj_mat
● If the edge (vi, vj) is in E(G), adj_mat[i][j]=1
● If there is no such edge in E(G), adj_mat[i][j]=0
● The adjacency matrix for an undirected graph is
symmetric; the adjacency matrix for a digraph
need not be symmetric
30. Merits of an Adjacency Matrix
● From the adjacency matrix, determining whether two vertices are connected is easy
● The degree of vertex i is the sum of row i:
degree(i) = adj_mat[i][0] + adj_mat[i][1] + ... + adj_mat[i][n-1]
● For a digraph, the row sum is the out-degree, while the column sum is the in-degree
31. Data Structures for Adjacency Lists
#define MAX_VERTICES 50
typedef struct node *node_pointer;
struct node {
    int vertex;
    struct node *link;
};
node_pointer graph[MAX_VERTICES];
int n = 0; /* vertices currently in use */
Each row of the adjacency matrix is represented as an adjacency list.
32. [Figure: adjacency-list representations of G1, G3, and G4. For G1 (the complete graph on vertices 0-3): the list of 0 is 1→2→3, of 1 is 0→2→3, of 2 is 0→1→3, of 3 is 0→1→2. For the directed graph G3 (vertices 0-2): the list of 0 is 1, of 1 is 0→2, of 2 is empty. G4 (undirected, vertices 0-7, two components) is represented the same way.]
An undirected graph with n vertices and e edges ==> n head nodes and 2e list nodes
33. Interesting Operations
● degree of a vertex in an undirected graph
– # of nodes in its adjacency list
● # of edges in a graph
– determined in O(n+e)
● out-degree of a vertex in a directed graph
– # of nodes in its adjacency list
● in-degree of a vertex in a directed graph
– requires traversing the whole data structure
35. Figure 6.13: Alternate order adjacency list for G1 (p.268)
[Figure: the four head nodes of G1, each pointing to a chain of list nodes with a vertex field and a link field. The lists contain the same vertices as before but stored in a different order; order within a list is of no significance.]
36. Adjacency Multilists
Node layout: marked | vertex1 | vertex2 | path1 | path2
●In the adjacency-list representation, each edge of an undirected graph is represented by two list nodes.
●Adjacency multilists
–lists in which nodes may be shared among several lists
(each edge node is shared by the two lists of its endpoint vertices)
38. Adjacency Multilists
typedef struct edge *edge_pointer;
struct edge {
    short int marked;
    int vertex1, vertex2;
    edge_pointer path1, path2;
};
edge_pointer graph[MAX_VERTICES];
Node layout: marked | vertex1 | vertex2 | path1 | path2
39. Graph implementation
⚫Array-based implementation
⚫A 1D array is used to represent the vertices
⚫A 2D array (adjacency matrix) is used to represent the edges
41. Graph implementation (cont.)
⚫Linked-list implementation
⚫A 1D array is used to represent the vertices
⚫A list is kept for each vertex v, containing the vertices that are adjacent from v (adjacency list)
43. Adjacency matrix vs. adjacency list representation
⚫Adjacency matrix
⚫Good for dense graphs: |E| ~ O(|V|^2)
⚫Memory requirements: O(|V| + |E|) = O(|V|^2)
⚫Connectivity between two vertices can be tested quickly
⚫Adjacency list
⚫Good for sparse graphs: |E| ~ O(|V|)
⚫Memory requirements: O(|V| + |E|) = O(|V|)
⚫Vertices adjacent to another vertex can be found quickly
44. Graph searching
⚫Problem: find a path between two nodes of the graph (e.g., Austin and Washington)
⚫Methods: Depth-First Search (DFS) or Breadth-First Search (BFS)
45. Depth-First Search (DFS)
⚫What is the idea behind DFS?
⚫Travel as far as you can down a path
⚫Back up as little as possible when you reach a "dead end" (i.e., the next vertex has been "marked" or there is no next vertex)
⚫DFS can be implemented efficiently using a stack
46. Depth-First Search (DFS) (cont.)
Set found to false
stack.Push(startVertex)
DO
    stack.Pop(vertex)
    IF vertex == endVertex
        Set found to true
    ELSE
        Push all unmarked adjacent vertices onto the stack, marking each as it is pushed
WHILE !stack.IsEmpty() AND !found
IF (!found)
    Write "Path does not exist"
50. Depth-First Search
⚫ DFS follows these rules:
1. Select an unvisited node x, visit it, and treat it as the current node
2. Find an unvisited neighbor of the current node, visit it, and make it the new current node
3. If the current node has no unvisited neighbors, backtrack to its parent and make that parent the new current node
4. Repeat steps 2 and 3 until no more nodes can be visited
5. If there are still unvisited nodes, repeat from step 1
70. Breadth-First Search (BFS)
⚫What is the idea behind BFS?
⚫Look at all possible paths at the same depth before you go to a deeper level
⚫Back up as far as possible when you reach a "dead end" (i.e., the next vertex has been "marked" or there is no next vertex)
⚫BFS can be implemented efficiently using a queue
71. Breadth-First Search (BFS)
⚫Search for all vertices that are directly reachable from the root (called level 1 vertices)
⚫After marking all these vertices, visit all vertices that are directly reachable from any level 1 vertex (called level 2 vertices), and so on
⚫In general, level k vertices are directly reachable from level k - 1 vertices
72. Breadth-First Search (BFS) (cont.)
⚫BFS can be implemented efficiently using a queue
Set found to false
queue.Enqueue(startVertex)
DO
    queue.Dequeue(vertex)
    IF vertex == endVertex
        Set found to true
    ELSE
        Enqueue all unmarked adjacent vertices, marking each as it is enqueued
WHILE !queue.IsEmpty() AND !found
IF (!found)
    Write "Path does not exist"
78. BFS: the Color Scheme
⚫White vertices have not been discovered
⚫All vertices start out white
⚫Gray vertices are discovered but not fully explored
⚫They may be adjacent to white vertices
⚫Black vertices are discovered and fully explored
⚫They are adjacent only to black and gray vertices
⚫Explore vertices by scanning the adjacency lists of gray vertices
88. Shortest-path problem
⚫There are multiple paths from a source vertex to a destination vertex
⚫Shortest path: the path whose total weight (i.e., sum of edge weights) is minimum
⚫Examples:
⚫Austin->Houston->Atlanta->Washington: 1560 miles
⚫Austin->Dallas->Denver->Atlanta->Washington: 2980 miles
90. Shortest-path problem (cont.)
⚫Common algorithms: Dijkstra's algorithm, Bellman-Ford algorithm
⚫BFS can be used to solve the shortest-path problem when the graph is unweighted or all the weights are the same
(mark vertices before Enqueue)
91. Dijkstra's algorithm
⚫ Dijkstra's algorithm is a solution to the single-source shortest-path problem in graph theory
⚫ Works on both directed and undirected graphs. However, all edges must have nonnegative weights
⚫ Approach: greedy
⚫ Input: weighted graph G = {E,V} and source vertex v∈V, such that all edge weights are nonnegative
⚫ Output: lengths of the shortest paths (or the shortest paths themselves) from the given source vertex v∈V to all other vertices
92. Dijkstra's algorithm - Pseudocode
dist[s] ← 0                      (distance to source vertex is zero)
for all v ∈ V − {s}
    do dist[v] ← ∞               (set all other distances to infinity)
S ← ∅                            (S, the set of visited vertices, is initially empty)
Q ← V                            (Q, the queue, initially contains all vertices)
while Q ≠ ∅                      (while the queue is not empty)
    do u ← mindistance(Q, dist)  (select the element of Q with the min. distance)
       Q ← Q − {u}               (remove u from the queue)
       S ← S ∪ {u}               (add u to the list of visited vertices)
       for all v ∈ neighbors[u]
           do if dist[v] > dist[u] + w(u, v)      (if a new shortest path is found)
                then dist[v] ← dist[u] + w(u, v)  (set the new value of the shortest path)
                (if desired, add traceback code)
return dist
103. Implementations and Running Times
⚫The simplest implementation is to store the vertices in an array or linked list. This gives a running time of
⚫O(|V|^2 + |E|)
⚫For sparse graphs (graphs with very few edges and many nodes), the algorithm can be implemented more efficiently by storing the graph in an adjacency list and selecting the minimum with a binary heap or priority queue. This gives a running time of
⚫O((|E| + |V|) log |V|)
104. Dijkstra's Algorithm - Why It Works
⚫ As with all greedy algorithms, we need to make sure that it is a correct algorithm (i.e., it always returns the right solution when given correct input)
⚫ A formal proof would take longer than this presentation, but we can understand how the argument works intuitively
⚫ If you can't sleep unless you see a proof, see the second reference or ask us where you can find it
105. Dijkstra's Algorithm - Why Use It?
⚫As mentioned, Dijkstra's algorithm calculates the shortest path to every vertex
⚫However, it is about as computationally expensive to calculate the shortest path from vertex u to every vertex using Dijkstra's as it is to calculate the shortest path to some particular vertex v
⚫Therefore, anytime we want to know the optimal path to some other vertex from a determined origin, we can use Dijkstra's algorithm
106. Applications of Dijkstra's Algorithm
⚫Traffic information systems are the most prominent use
⚫Mapping (MapQuest, Google Maps)
⚫Routing systems
107. Applications of Dijkstra's Algorithm
⚫ One application particularly relevant this week: epidemiology
⚫ Prof. Lauren Meyers (Biology Dept.)
uses networks to model the spread of
infectious diseases and design
prevention and response strategies.
⚫ Vertices represent individuals, and
edges their possible contacts. It is
useful to calculate how a particular
individual is connected to others.
⚫ Knowing the shortest path lengths to
other individuals can be a relevant
indicator of the potential of a particular
individual to infect others.
108. Spanning Trees: Definition
⚫A Minimum Spanning Tree (MST) is a subgraph of an
undirected graph such that the subgraph spans
(includes) all nodes, is connected, is acyclic, and has
minimum total edge weight
109. Spanning trees
⚫Suppose you have a connected undirected graph
⚫Connected: every node is reachable from every other node
⚫Undirected: edges do not have an associated direction
⚫...then a spanning tree of the graph is a connected
subgraph in which there are no cycles
[Figure: a connected, undirected graph, and four of the spanning trees of the graph]
110. Spanning Trees
● When graph G is connected, a depth first or
breadth first search starting at any vertex will
visit all vertices in G
● A spanning tree is any tree that consists solely
of edges in G and that includes all the vertices
● E(G): T (tree edges) + N (nontree edges)
where T: set of edges used during search
N: set of remaining edges
112. Minimizing costs
⚫Suppose you want to supply a set of houses (say, in a
new subdivision) with:
⚫electric power
⚫water
⚫sewage lines
⚫telephone lines
⚫To keep costs down, you could connect these houses
with a spanning tree (of, for example, power lines)
⚫However, the houses are not all equal distances apart
⚫To reduce costs even further, you could connect the
houses with a minimum-cost spanning tree
113. Finding Minimum Cost Spanning Trees
⚫There are two basic algorithms for finding minimum-cost
spanning trees, and both are greedy algorithms
⚫Prim’s algorithm: Start with any one node in the spanning
tree, and repeatedly add the cheapest edge, and the node it
leads to, for which the node is not already in the spanning
tree.
⚫ Here, we consider the spanning tree to consist of both nodes and
edges
⚫Kruskal’s algorithm: Start with no nodes or edges in the
spanning tree, and repeatedly add the cheapest edge that
does not create a cycle
⚫ Here, we consider the spanning tree to consist of edges only
114. Greedy Strategy
● An optimal solution is constructed in stages
● At each stage, the best decision is made at this
time
● Since this decision cannot be changed later,
we make sure that the decision will result in a
feasible solution
● Typically, the selection of an item at each
stage is based on a least cost or a highest profit
criterion
115. Algorithm Characteristics
⚫Both Prim’s and Kruskal’s Algorithms work with
undirected graphs
⚫Both work with weighted and unweighted graphs but
are more interesting when edges are weighted
⚫Both are greedy algorithms that produce optimal
solutions
116. Prim’s algorithm
T = a spanning tree containing a single node s;
E = set of edges adjacent to s;
while T does not contain all the nodes {
remove an edge (v, w) of lowest cost from E
if w is already in T then discard edge (v, w)
else {
add edge (v, w) and node w to T
add to E the edges adjacent to w
}
}
⚫An edge of lowest cost can be found with a priority queue
⚫Testing for a cycle is automatic
⚫ Hence, Prim’s algorithm is far simpler to implement than Kruskal’s
algorithm
117. Prim’s Algorithm
⚫Similar to Dijkstra’s Algorithm
⚫The difference is that the "distance" used in Prim’s algorithm is the weight of a single connecting edge, not the length of a whole path
⚫Deals with the nodes
118. Walk-Through: Initialize the array
vertex  K  dv  pv
A       F  ∞   −
B       F  ∞   −
C       F  ∞   −
D       F  ∞   −
E       F  ∞   −
F       F  ∞   −
G       F  ∞   −
H       F  ∞   −
[Figure: the weighted example graph on vertices A–H used for the walk-through]
Prim’s algorithm
135. Kruskal’s algorithm
T = empty spanning tree;
E = set of edges;
N = number of nodes in graph;
while T has fewer than N - 1 edges {
remove an edge (v, w) of lowest cost from E
if adding (v, w) to T would create a cycle
then discard (v, w)
else add (v, w) to T
}
⚫Finding an edge of lowest cost can be done just by
sorting the edges
⚫Efficient testing for a cycle requires a fairly complex
algorithm (UNION-FIND) which we don’t cover in this
course
136. Kruskal’s Algorithm
Works with edges, rather than nodes
Two steps:
– Sort edges by increasing edge weight
– Select the first |V| – 1 edges that do not generate a cycle
138. Sort the edges by increasing edge weight
edge   dv
(D,E)  1
(D,G)  2
(E,G)  3
(C,D)  3
(G,H)  3
(C,F)  3
(B,C)  4
(B,E)  4
(B,F)  4
(B,H)  4
(A,H)  5
(D,F)  6
(A,B)  8
(A,F)  10
[Figure: the weighted example graph on vertices A–H]
Kruskal’s algorithm
139. Select first |V|–1 edges which do not generate a cycle
Accept (D,E), weight 1. Accepted so far: (D,E).
140. Select first |V|–1 edges which do not generate a cycle
Accept (D,G), weight 2. Accepted so far: (D,E), (D,G).
141. Select first |V|–1 edges which do not generate a cycle
Reject (E,G), weight 3: accepting edge (E,G) would create a cycle. Accepted so far: (D,E), (D,G).
142. Select first |V|–1 edges which do not generate a cycle
Accept (C,D), weight 3. Accepted so far: (D,E), (D,G), (C,D).
143. Select first |V|–1 edges which do not generate a cycle
Accept (G,H), weight 3. Accepted so far: (D,E), (D,G), (C,D), (G,H).
144. Select first |V|–1 edges which do not generate a cycle
Accept (C,F), weight 3. Accepted so far: (D,E), (D,G), (C,D), (G,H), (C,F).
145. Select first |V|–1 edges which do not generate a cycle
Accept (B,C), weight 4. Accepted so far: (D,E), (D,G), (C,D), (G,H), (C,F), (B,C).
146. Select first |V|–1 edges which do not generate a cycle
Reject (B,E), weight 4: it would create a cycle.
147. Select first |V|–1 edges which do not generate a cycle
Reject (B,F), weight 4: it would create a cycle.
148. Select first |V|–1 edges which do not generate a cycle
Reject (B,H), weight 4: it would create a cycle.
149. Select first |V|–1 edges which do not generate a cycle
Accept (A,H), weight 5. Accepted so far: (D,E), (D,G), (C,D), (G,H), (C,F), (B,C), (A,H).
150. Select first |V|–1 edges which do not generate a cycle
Done: seven edges (|V|–1) have been accepted.
Spanning tree: (D,E) 1, (D,G) 2, (C,D) 3, (G,H) 3, (C,F) 3, (B,C) 4, (A,H) 5
Total cost = Σ dv = 21
Edges (D,F), (A,B), (A,F) are not considered.
151. Differences
Prim’s Algorithm vs. Kruskal’s Algorithm:
– The tree that Prim’s algorithm grows always remains connected; the structure that Kruskal’s algorithm grows usually remains a disconnected forest until the end.
– Prim’s Algorithm grows a solution from a random vertex by adding the next cheapest vertex (via its cheapest connecting edge) to the existing tree; Kruskal’s Algorithm grows a solution from the cheapest edge by adding the next cheapest edge to the existing tree/forest.
– Prim’s Algorithm is faster for dense graphs; Kruskal’s Algorithm is faster for sparse graphs.