Aim – Write a program in C/C++ to implement Kruskal's algorithm and Prim's algorithm.
Objective – To understand the role of Kruskal's and Prim's algorithms in finding the minimum cost spanning tree.
Theory:
Kruskal's algorithm is another greedy algorithm for the minimum spanning tree problem that always yields an optimal solution. It is named after Joseph Kruskal [Kru56], who discovered it as a second-year graduate student. Kruskal's algorithm views the minimum spanning tree of a weighted connected graph G = (V, E) as an acyclic subgraph with |V| - 1 edges for which the sum of the edge weights is the smallest.
The algorithm begins by sorting the graph's edges in nondecreasing order of their weights. Then, starting with the empty subgraph, it scans this sorted list, adding the next edge on the list to the current subgraph if the inclusion does not create a cycle, and simply skipping the edge otherwise.
Data Structures used:
- The sets are represented using an array, where each entry is either a link to the node's parent or, for a root node, the total number of nodes in its tree with a negative sign.
- A min-heap is used to store the edge information, keyed on the edge weights (a sketch of this layout is given below).
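A minimal sketch of these declarations in C is given below; the array bounds, type names, and the example contents shown in the last comment are assumptions made for illustration only, not something fixed by the algorithm.

/* Illustrative data layout for the two structures described above.        */
/* MAXV, MAXE and the field names are assumptions for this sketch.         */
#define MAXV 100
#define MAXE 1000

struct Edge { int u, v, w; };     /* edge endpoints and weight              */

int parent[MAXV];                 /* parent[x] >= 0 : link to x's parent    */
                                  /* parent[x] <  0 : x is a root and       */
                                  /*   -parent[x] is the size of its set    */
struct Edge heap[MAXE];           /* binary min-heap of edges keyed on w    */

/* Example: the sets {0,1,2} (root 0) and {3,4} (root 3) would be stored    */
/* as parent[] = { -3, 0, 0, -2, 3 }.                                       */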
Pseudocode for Kruskal:
// cost – two-dimensional matrix; cost[u][v] is the cost of edge (u, v).
// n – number of vertices.
// t – set of edges in the minimum spanning tree; a 2D array t[1..n-1][1..2].
// Initially each vertex is in a different set.
- Construct a min-heap out of the edge costs using Heapify.
- For (i = 1 to n) do parent[i] = -1;
// each vertex is in its own set.
- i = 0; mincost = 0;
- While ((i < n - 1) and (heap is not empty)) do
- {
- Delete the minimum-cost edge (u, v) from the heap.
- Reheapify using Adjust.
- j = Find(u); k = Find(v);
- If (j != k) then
- {
- i = i + 1; t[i][1] = u; t[i][2] = v;
- Union(j, k); mincost = mincost + cost[u][v];
- }
- }
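A compilable sketch of the pseudocode above is given below. For brevity it sorts the edge list with qsort() instead of maintaining an explicit min-heap, uses a simple (non-collapsing) find, and the sample graph in main() is an assumption chosen purely for illustration.

/* Kruskal's algorithm: minimal sketch in C (edge list sorted with qsort    */
/* rather than kept in a heap; the sample graph is illustrative only).      */
#include <stdio.h>
#include <stdlib.h>

struct Edge { int u, v, w; };            /* endpoints and weight            */

static int parent[100];                  /* parent links; roots hold -size  */

static int find_root(int x)              /* simple find, no collapsing      */
{
    while (parent[x] >= 0)
        x = parent[x];
    return x;
}

static void weighted_union(int i, int j) /* i and j must be roots           */
{
    int total = parent[i] + parent[j];   /* negative of the combined size   */
    if (parent[i] > parent[j]) {         /* tree i has fewer nodes          */
        parent[i] = j;
        parent[j] = total;
    } else {                             /* tree j has fewer nodes          */
        parent[j] = i;
        parent[i] = total;
    }
}

static int cmp_edge(const void *a, const void *b)
{
    return ((const struct Edge *)a)->w - ((const struct Edge *)b)->w;
}

int main(void)
{
    /* Example graph: 4 vertices, 5 weighted edges (illustrative data).     */
    struct Edge e[] = { {0,1,10}, {0,2,6}, {0,3,5}, {1,3,15}, {2,3,4} };
    int n = 4, m = 5, i, count = 0, mincost = 0;

    for (i = 0; i < n; i++)
        parent[i] = -1;                  /* each vertex in its own set      */

    qsort(e, m, sizeof e[0], cmp_edge);  /* nondecreasing order of weight   */

    for (i = 0; i < m && count < n - 1; i++) {
        int ru = find_root(e[i].u), rv = find_root(e[i].v);
        if (ru != rv) {                  /* edge does not create a cycle    */
            weighted_union(ru, rv);
            mincost += e[i].w;
            count++;
            printf("edge (%d,%d) weight %d\n", e[i].u, e[i].v, e[i].w);
        }
    }
    printf("minimum cost = %d\n", mincost);
    return 0;
}

For the example graph above the program selects the edges (2,3), (0,3) and (0,1), giving a minimum cost of 19.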
Algorithm SimpleUnion(i, j) // i and j are roots of sets
{
p[i] = j;
}
Algorithm SimpleFind(i) // returns the root of i
{
while (p[i] >=0) do i=p[i];
return i;
}
If we perform n-1 unions, they can be processed in linear time, O(n).
The time required to process a find for an element at level i of a tree is O(i), so the total time needed to process n finds can be O(n^2) in the worst case. For example, the sequence Union(1,2), Union(2,3), ..., Union(n-1,n) performed with SimpleUnion produces a chain of height n-1, on which a find for the deepest element costs O(n).
One can improve the performance of union and find by avoiding the creation of such degenerate trees.
We apply the weighting rule for union, which says that if the number of nodes in the tree with root i is less than the number of nodes in the tree with root j, then make j the parent of i; otherwise make i the parent of j. In this way we avoid degenerate/skewed trees.
Similarly, we use collapsing find to improve the performance of find. In CollapsingFind we first go up to the root node and then reset the links along the path. The first find on a node still passes through all the intermediate nodes on its way to the root, but every later find on that node needs only one link to reach the root, which reduces the cost of subsequent finds. The collapsing rule says that if j is a node on the path from i to its root and p[i] is not root(i), then set p[j] to root(i).
Algorithm WeightedUnion(i, j) // i and j are roots; a root stores -(number of nodes)
{
temp = p[i] + p[j];
if (p[i] > p[j]) // i has fewer nodes
{
p[i] = j; p[j] = temp;
}
else // j has fewer nodes
{
p[j] = i;
p[i] = temp;
}
}
Algorithm CollapsingFind(i) // returns the root of i and collapses the path to it
{
r = i;
while (p[r] >= 0) do r = p[r]; // find the root
while (i != r) do // reset the links on the path from i to r
{
s = p[i];
p[i] = r;
i = s;
}
return r;
}
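The two routines above can be combined into a small self-contained C program, as sketched below; the array size N and the sequence of unions in main() are assumptions chosen only to demonstrate the weighting and collapsing rules.

/* Weighted union and collapsing find: minimal self-contained sketch.       */
#include <stdio.h>

#define N 8
static int p[N];                         /* p[v] >= 0: parent link;         */
                                         /* root: negative of tree size     */

static void weighted_union(int i, int j) /* i and j must be roots           */
{
    int temp = p[i] + p[j];              /* negative of the combined size   */
    if (p[i] > p[j]) {                   /* tree i has fewer nodes          */
        p[i] = j;
        p[j] = temp;
    } else {                             /* tree j has fewer nodes          */
        p[j] = i;
        p[i] = temp;
    }
}

static int collapsing_find(int i)        /* return root of i, reset links   */
{
    int r = i, s;
    while (p[r] >= 0)                    /* first pass: climb to the root   */
        r = p[r];
    while (i != r) {                     /* second pass: collapse the path  */
        s = p[i];
        p[i] = r;
        i = s;
    }
    return r;
}

int main(void)
{
    int v;
    for (v = 0; v < N; v++)
        p[v] = -1;                       /* each element in its own set     */

    weighted_union(0, 1);                                   /* {0,1}        */
    weighted_union(2, 3);                                   /* {2,3}        */
    weighted_union(collapsing_find(1), collapsing_find(3)); /* {0,1,2,3}    */

    for (v = 0; v < 4; v++)
        printf("root of %d is %d\n", v, collapsing_find(v));
    return 0;
}

After the first CollapsingFind on an element, every later find on the elements along that path follows only a single link to the root.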
Analysis of Kruskal's algorithm:
Constructing the initial heap of E edges with heapify takes O(E) time.
Deleting the minimum-cost edge from the heap takes O(log E) time, and this deletion happens inside the while loop, which runs at most E times; hence the time complexity of the algorithm is O(E log E). The union and find operations do not change this bound, because with the weighted union and collapsing find rules each of them takes almost O(1) time.