DSA Theory

Bridges in a graph- Data Structure and Algorithm

The task is to find all the bridges in an undirected graph. In a connected undirected graph, an edge is a bridge if removing it disconnects the graph. For a disconnected undirected graph, the definition is analogous: a bridge is an edge whose removal increases the number of connected components. Like articulation points, bridges expose vulnerabilities in a connected network and are useful when designing reliable networks.

Examples (the input graphs appeared as figures): Output: (0, 3) and (3, 4). Output: (1, 6). Output: (0, 1), (1, 2), and (2, 3).

A naive approach: remove the edges one by one and check whether removing an edge leaves the graph disconnected.

Efficient approach: find the bridges using Tarjan's Algorithm. Before turning to the approach, consider which edge qualifies as a bridge. Suppose there is an edge u -> v; if v cannot be reached by any path that avoids this edge, then u -> v is a bridge. The method is built on this intuition, so take some time to grasp it.

Algorithm: we need the following data structures to apply this method:
- visited[]: tracks the vertices already visited during the DFS.
- disc[]: records the time at which each vertex is first discovered.
- low[]: records the earliest discovery time reachable from a vertex "other than through the parent", i.e., whether the vertex can still be reached if the parent edge is deleted.

We traverse the graph using DFS, with one change: while traversing, we keep track of the parent through which each node was reached, because we update low[node] = min(low of all its adjacent nodes except the parent), and so we need to know the parent. While visiting a neighbour v of a given node u, three cases arise:
1. v is the parent of u: skip that iteration.
2. v is already visited: update low[u] = min(low[u], disc[v]). This happens when a node can be reached from more than one node; since low[] tracks the earliest feasible discovery time, we update it. (We know v cannot be the parent, since that case was handled first.)
3. v is not visited: call DFS to traverse onward, then update low[u] = min(low[u], low[v]). If low[v] > disc[u], the edge u -> v is a bridge: the earliest time at which v can be reached is later than disc[u], so v cannot be reached without going through u.

Time Complexity: O(V + E). Auxiliary Space: O(V), used for the visited, disc and low arrays.
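The three DFS cases above can be sketched in Python (function and variable names are my own, not from the article; the adjacency-list format is an assumption):

```python
# A sketch of Tarjan's bridge-finding algorithm for an undirected graph
# with vertices 0..n-1 and edges given as (u, v) pairs.
from collections import defaultdict

def find_bridges(n, edges):
    """Return all bridges of an undirected graph as (u, v) pairs."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    visited = [False] * n
    disc = [0] * n        # discovery time of each vertex
    low = [0] * n         # earliest discovery time reachable without the parent edge
    bridges = []
    timer = [0]

    def dfs(u, parent):
        visited[u] = True
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:            # case 1: skip the edge back to the parent
                continue
            if visited[v]:             # case 2: back edge, update low[u]
                low[u] = min(low[u], disc[v])
            else:                      # case 3: tree edge, recurse then update
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:   # v is unreachable without edge u-v
                    bridges.append((u, v))

    for s in range(n):                 # handles disconnected graphs too
        if not visited[s]:
            dfs(s, -1)
    return bridges

# Triangle 0-1-2 with a tail 0-3-4: the bridges are (3, 4) and (0, 3)
print(find_bridges(5, [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4)]))
```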


Kahn’s algorithm for Topological Sorting

Your goal is to find any topological order of a directed acyclic graph with V vertices and E edges. A topological order is a linear ordering of the vertices such that for every directed edge u -> v, vertex u comes before v in the ordering.

Example (the first input graph appeared as a figure): Output: 4 5 2 0 3 1. Explanation: in this output, each dependent vertex is printed after the vertices it depends upon. Input: V = 5, E = {{0,1}, {1,2}, {3,2}, {3,4}}. Output: 0 3 4 1 2. Again, every dependent vertex in the output is printed after the vertices it depends on.

Kahn's Algorithm for Topological Sorting is a technique for linearly ordering the vertices of a directed graph such that A comes before B in the sequence for every directed edge from vertex A to vertex B. The method repeatedly finds vertices with no incoming edges, removes them from the graph, and updates the in-degrees of the remaining vertices. This process continues until every vertex has been placed in the order. The steps: compute the in-degree of every node; add all nodes with in-degree 0 to a queue; while the queue is not empty, dequeue a node, output it, and decrement the in-degree of each of its neighbours, enqueuing any neighbour whose in-degree drops to 0.

How can one determine the in-degree of every node? By initially counting the incoming edges of every node: go over every edge in the graph, incrementing the in-degree of the destination node. This gives every node's in-degree before the sorting procedure begins.

Complexity analysis: Time Complexity: O(V + E). The inner loop runs E times in total; the outer loop runs V times. Auxiliary Space: O(V). The queue may have to hold every vertex of the graph, so the needed space is O(V). Applications of Topological Sort Using Kahn's Algorithm
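The steps above can be sketched as follows (function names and the edge-list input format are my own choices):

```python
# A sketch of Kahn's algorithm: repeatedly emit vertices of in-degree 0.
from collections import deque

def kahn_topological_sort(V, edges):
    """Return one topological order of a DAG, or [] if the graph has a cycle."""
    adj = [[] for _ in range(V)]
    indegree = [0] * V
    for u, v in edges:
        adj[u].append(v)
        indegree[v] += 1          # count incoming edges of the destination

    queue = deque(u for u in range(V) if indegree[u] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indegree[v] -= 1      # "remove" the edge u -> v
            if indegree[v] == 0:
                queue.append(v)
    # If some vertex was never freed, the graph contains a cycle.
    return order if len(order) == V else []

print(kahn_topological_sort(5, [(0, 1), (1, 2), (3, 2), (3, 4)]))
```

Note that a DAG can have several valid topological orders; this sketch returns whichever one the queue discipline produces, which need not match the order quoted in the example.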


Topological Sorting- Data Structure and Algorithm

Topological sorting for a directed acyclic graph (DAG) is a linear ordering of vertices such that vertex u comes before vertex v in the ordering for every directed edge u -> v. Note: if a graph is not a DAG, topological sorting of that graph is not possible.

Depth First Traversal (DFS) prints a vertex and then recursively calls DFS for its adjacent vertices. In topological sorting, by contrast, we must print a vertex before its adjacent vertices. For instance, in the graph shown, vertex "5" should be printed before vertex "0", but unlike in DFS, vertex "4" should also be printed before vertex "0". Topological sorting thus differs from DFS. For instance, "5 2 3 1 0 4" is a DFS of the depicted graph, but it is not a topological sorting.

Topological sorting of Directed Acyclic Graphs (DAGs): before we explain why a topological sort exists only for DAGs, let us first address two questions. DAGs are a special type of graph in which every edge is directed and no cycle exists in the graph.

Why can graphs with undirected edges not have a topological sort? Because an undirected edge between two vertices u and v denotes that an edge exists from u to v as well as from v to u. This means that u and v depend on each other, and neither can appear before the other in the topological ordering without generating a contradiction.

Why can a topological sort not be achieved for graphs with cycles? Consider a graph with three vertices and edges {1 -> 2, 2 -> 3, 3 -> 1}, forming a cycle. Trying to topologically order this graph, beginning from any vertex, will always contradict our definition. Topological sorting collapses when all the vertices in a cycle depend on each other, directly or indirectly.

A topological order need not be unique. Topological sorting models a dependency problem in which the completion of one activity depends on the completion of multiple other tasks, whose order can vary.
Let us grasp this idea by means of an illustration: our assignment is to get to school, so first we must get dressed. The dependency graph (shown as a figure in the original) relates the garments to be worn; for instance, you cannot wear shoes before donning socks. From the figure you would have already understood that there are several ways to get dressed; a second figure illustrates some of those ways.

Algorithm for DFS-based topological sorting: perform a DFS, and push each vertex onto a stack only after all its descendants have been visited; once every vertex has been processed, popping the stack yields a topological order.

Time complexity: O(V + E). The above is just DFS with an additional stack, so the time complexity is the same as DFS. Auxiliary space: O(V); the stack needs the extra room. Kahn's Algorithm is the BFS-based method for topological sorting; it has the same time complexity as the DFS-based method covered above. Applications of Topological Sort
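The DFS-plus-stack idea can be sketched as follows (the function name and the 6-vertex example graph with edges 5->2, 5->0, 4->0, 4->1, 2->3, 3->1 are taken from the standard illustration the article alludes to):

```python
# DFS-based topological sort: push a vertex only after all its descendants.
def topological_sort_dfs(V, edges):
    """Return a topological order of a DAG with vertices 0..V-1."""
    adj = [[] for _ in range(V)]
    for u, v in edges:
        adj[u].append(v)

    visited = [False] * V
    stack = []                      # vertices pushed in order of DFS completion

    def dfs(u):
        visited[u] = True
        for v in adj[u]:
            if not visited[v]:
                dfs(v)
        stack.append(u)             # all of u's descendants are done

    for u in range(V):              # covers disconnected components too
        if not visited[u]:
            dfs(u)
    return stack[::-1]              # reverse finish order = topological order

print(topological_sort_dfs(6, [(5, 2), (5, 0), (4, 0), (4, 1), (2, 3), (3, 1)]))
# → [5, 4, 2, 3, 1, 0]
```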


Difference between Prim’s and Kruskal’s algorithm for MST

Minimum spanning trees (MSTs) are fundamental in graph theory and find use in network design, clustering, and optimization problems. Prim's and Kruskal's algorithms are among the two most often used for determining the MST of a graph. Though they accomplish the same objective, the two techniques do it in rather distinct ways. This article discusses the variations between them, to enable the selection of the appropriate method for particular kinds of graphs and uses.

Prim's Algorithm is a greedy method that generates the MST gradually. Starting with a single vertex, it grows the MST one edge at a time, always selecting the smallest-weight edge that links a vertex in the MST to a vertex outside the MST.

Kruskal's Algorithm is likewise a greedy one, though it proceeds differently. Starting with all the vertices and no edges, it adds edges one by one in increasing order of weight, ensuring no cycles develop, until the MST is complete.

The main variations between Prim's and Kruskal's methods for minimum spanning tree (MST) discovery are compiled here:

Feature | Prim's Algorithm | Kruskal's Algorithm
Approach | Vertex-based; grows the MST one vertex at a time | Edge-based; adds edges in increasing order of weight
Data structure | Priority queue (min-heap) | Union-Find data structure
Graph representation | Adjacency matrix or adjacency list | Edge list
Initialization | Starts from an arbitrary vertex | Starts with all vertices as separate trees (a forest)
Edge selection | Chooses the minimum-weight edge from the connected vertices | Chooses the minimum-weight edge from all edges
Cycle management | Not explicitly managed; grows a connected component | Uses Union-Find to avoid cycles
Complexity | O(V^2) for adjacency matrix, O((E + V) log V) with a priority queue | O(E log E) or O(E log V), due to edge sorting
Suitable for | Dense graphs | Sparse graphs
Implementation complexity | Relatively simpler for dense graphs | More complex due to cycle management
Parallelism | Difficult to parallelize | Easier to parallelize edge sorting and union operations
Memory usage | More memory for the priority queue | Less memory if edges can be sorted externally
Example use cases | Network design, clustering with dense connections | Road networks, telecommunications with sparse connections
Starting point | Requires a starting vertex | No specific starting point; operates on the global edge list
Optimal for | Dense graphs where an adjacency list is used | Sparse graphs where an edge list is efficient

Conclusion: Prim's and Kruskal's algorithms are both powerful tools for finding the MST of a graph, each with its unique advantages. Prim's algorithm is typically preferred for dense graphs, leveraging its efficient priority-queue-based approach, while Kruskal's algorithm excels in handling sparse graphs with its edge-sorting and union-find techniques. Understanding the structural differences and appropriate use cases for each algorithm ensures optimal performance in various graph-related problems.


Kruskal’s Minimum Spanning Tree (MST) Algorithm

A minimum spanning tree (MST), sometimes known as a least-weight spanning tree, of a weighted, connected, undirected graph is a spanning tree with a weight less than or equal to that of every other spanning tree. See this page for more about minimum spanning trees.

Kruskal's Algorithm: We shall now go over Kruskal's method to determine the MST of a given weighted graph. Kruskal's method sorts all of the supplied graph's edges in increasing order of weight. It then keeps adding fresh edges and nodes to the MST, as long as the newly added edge does not create a cycle. It selects the smallest-weight edge first and the largest-weight edge last; at every phase, then, we can say that it makes a locally optimal decision in search of the best answer. It is thus a greedy algorithm.

Kruskal's method for finding the MST:
1. Sort the edges in non-decreasing order of weight.
2. Choose the smallest remaining edge. Verify whether it creates a cycle with the spanning tree created so far. Add the edge if no cycle is created; otherwise, discard it.
3. Repeat step 2 until the spanning tree has (V - 1) edges.

Kruskal's minimum-cost spanning tree method uses the greedy approach: the greedy choice is the smallest-weight edge that does not create a cycle in the MST built so far. Let us clarify it using a case study.

Illustration: the graph (shown as a figure) has nine vertices and fourteen edges. The minimum spanning tree so produced will have (9 - 1) = 8 edges. After sorting:

Weight | Source | Destination
1 | 7 | 6
2 | 8 | 2
2 | 6 | 5
4 | 0 | 1
4 | 2 | 5
6 | 8 | 6
7 | 2 | 3
7 | 7 | 8
8 | 0 | 7
8 | 1 | 2
9 | 3 | 4
10 | 5 | 4
11 | 1 | 7
14 | 3 | 5

The original article follows this with C++, C, Python3 and Java implementations. Output (the reference implementation appears to run on a smaller 4-vertex example):

Following are the edges in the constructed MST
2 -- 3 == 4
0 -- 3 == 5
0 -- 1 == 10
Minimum Cost Spanning Tree: 19

Time complexity: O(E * log E) or O(E * log V).
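A sketch of Kruskal's method with a Union-Find structure (union by rank plus path compression), run on a 4-vertex example graph consistent with the quoted output; the function names and the (weight, u, v) edge format are my own choices:

```python
# Kruskal's algorithm: sort edges, accept an edge only if it joins two
# different Union-Find components (i.e., creates no cycle).
def kruskal_mst(V, edges):
    """edges: list of (weight, u, v). Returns (mst_edges, total_weight)."""
    parent = list(range(V))
    rank = [0] * V

    def find(x):                      # find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):                  # union by rank; False if already joined
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        if rank[ra] < rank[rb]:
            ra, rb = rb, ra
        parent[rb] = ra
        if rank[ra] == rank[rb]:
            rank[ra] += 1
        return True

    mst, total = [], 0
    for w, u, v in sorted(edges):     # smallest-weight edges first
        if union(u, v):               # accept only cycle-free edges
            mst.append((u, v, w))
            total += w
            if len(mst) == V - 1:     # spanning tree is complete
                break
    return mst, total

edges = [(10, 0, 1), (6, 0, 2), (5, 0, 3), (15, 1, 3), (4, 2, 3)]
mst, total = kruskal_mst(4, edges)
print("Following are the edges in the constructed MST")
for u, v, w in mst:
    print(f"{u} -- {v} == {w}")
print("Minimum Cost Spanning Tree:", total)
```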


Prim’s Algorithm for Minimum Spanning Tree (MST)

We have covered Kruskal's method for minimum spanning trees in the introduction. Prim's method is a greedy algorithm, like Kruskal's. Starting with a single node, this algorithm explores the connected edges, moving through adjacent nodes as it goes. Starting from an empty spanning tree, the method maintains two sets of vertices: the first set contains the vertices already included in the MST, the other set contains the vertices not yet included. At each turn, it chooses the minimum-weight edge from all the edges that link the two sets. Following edge selection, it moves the edge's other endpoint into the set containing the MST.

In graph theory, a cut is the collection of edges linking two sets of vertices in a graph. Thus, at each stage of Prim's algorithm, identify a cut, choose the minimum-weight edge from the cut, and add the corresponding vertex to the MST set (the set comprising the already included vertices).

How does Prim's Algorithm work? The operation of Prim's method may be characterized by the following steps. Note: to detect a cycle, we can split the vertices into two sets [one set comprises the vertices included in the MST and the other comprises the fringe vertices].

Prim's Algorithm Illustration: Let us use the following graph (shown as a figure) as an illustration for which we must determine the Minimum Spanning Tree (MST).

Step 1: First we choose an arbitrary vertex to serve as the Minimum Spanning Tree's starting vertex. Our starting vertex here is vertex 0.
Step 2: The edges {0, 1} and {0, 7} are all the edges linking the incomplete MST to the other vertices. Between these two, the edge with least weight is {0, 1}. Thus, include edge {0, 1} and vertex 1 in the MST.
Step 3: The edges joining the incomplete MST to other vertices are {0, 7}, {1, 7} and {1, 2}. Of these, two edges have the minimum weight of 8: {0, 7} and {1, 2}. Let us consider edge {0, 7} and add vertex 7 to the MST. [We might equally have included vertex 2 and edge {1, 2}.]
Step 4: The edges {1, 2}, {7, 6} and {7, 8} link the incomplete MST with the fringe vertices. As edge {7, 6} has the least weight (i.e., 1), add vertex 6 and edge {7, 6} to the MST.
Step 5: The connecting edges now are {7, 8}, {1, 2}, {6, 8} and {6, 5}. Include edge {6, 5} and vertex 5 in the MST, as that edge has the minimum weight (i.e., 2) among them.
Step 6: Among the current connecting edges, {5, 2} has the minimum weight. Thus, incorporate that edge and vertex 2 in the MST.
Step 7: The connecting edges between the incomplete MST and the other vertices are {2, 8}, {2, 3}, {5, 3} and {5, 4}. Edge {2, 8}, with weight 2, is the least-weight edge. Thus, incorporate this edge and vertex 8 in the MST.
Step 8: Observe that the edges {7, 8} and {2, 3} have the same minimum weight. Nevertheless, vertex 7 is already included in the MST, so we consider edge {2, 3} and incorporate vertex 3 in the MST.
Step 9: Only vertex 4 remains to be added. The minimum-weight edge from the incomplete MST to vertex 4 is {3, 4}.

The final MST (shown as a figure) has edge weights summing to (4 + 8 + 1 + 2 + 4 + 2 + 7 + 9) = 37.

Prim's Algorithm implementation: follow the steps of Prim's Algorithm described above to determine the MST of a graph.

Complexity analysis of Prim's Algorithm: Time Complexity: O(V^2). If the input graph is represented using an adjacency list, the time complexity of Prim's algorithm can be lowered to O(E * log V) by means of a binary heap. In that implementation, we always consider the spanning tree to start from the graph's root. Auxiliary Space: O(V).

Adjacency List Representation (of Graph) with Priority Queue: Intuition; Optimized Implementation. Disadvantages of Prim's method for minimum spanning tree (MST) computation:
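The adjacency-list-with-priority-queue variant mentioned above can be sketched as follows, using lazy deletion (stale heap entries are skipped when popped) instead of a decrease-key operation; function names are my own, and the example is the 9-vertex graph from the step-by-step walkthrough:

```python
# Prim's algorithm with a min-heap over an adjacency list.
import heapq

def prim_mst(V, edges, start=0):
    """edges: list of (u, v, w) for an undirected graph. Returns total MST weight."""
    adj = [[] for _ in range(V)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))

    in_mst = [False] * V
    heap = [(0, start)]              # (weight of edge into the tree, vertex)
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if in_mst[u]:                # stale entry: u joined via a cheaper edge
            continue
        in_mst[u] = True
        total += w
        for wv, v in adj[u]:
            if not in_mst[v]:        # edge crossing the cut
                heapq.heappush(heap, (wv, v))
    return total

# The 9-vertex example graph from the walkthrough above
edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
         (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]
print(prim_mst(9, edges))  # → 37
```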


Johnson’s algorithm for All-pairs shortest paths

The problem is to find the shortest paths between every pair of vertices in a given weighted directed graph whose edge weights may be negative. We have addressed the Floyd-Warshall Algorithm for this problem; its time complexity is Θ(V^3). Using Johnson's method, we can find all-pairs shortest paths in O(V^2 log V + VE) time. Johnson's method uses Dijkstra and Bellman-Ford as subroutines.

If we take every vertex as the source and apply Dijkstra's single-source shortest path algorithm from each one, we obtain all-pairs shortest paths in O(V * V log V) time. Although running Dijkstra from every vertex appears to be a superior alternative to the Floyd-Warshall Algorithm (https://www.geeksforgeeks.org/floyd-warshall-algorithm-dp-16/?ref=lbp), the problem with Dijkstra's approach is that it does not work for negative weight edges. The idea of Johnson's method is to re-weight every edge to make them all non-negative, then run Dijkstra's algorithm from every vertex.

How to convert a given graph into one with all non-negative edge weights? One might consider a basic method: locate the minimum-weight edge and add the absolute value of its weight to every edge. Sadly, this doesn't work, since different paths may have different numbers of edges (see this for an example). Should there be several routes from a vertex u to v, all paths must be adjusted by the same amount so that the shortest path stays the shortest in the modified graph.

Johnson's method assigns a weight to every vertex. Let h[u] be the weight assigned to vertex u. Edges are re-weighted using the vertex weights: for an edge (u, v), the new weight becomes w(u, v) + h[u] - h[v]. The big advantage of this re-weighting is that the weight of every path between a fixed pair of vertices changes by the same quantity, and all negative weights turn non-negative. The h[] values of intermediate vertices on a path from s to t cancel each other out, so the weight of every s-t path changes by exactly h[s] - h[t]. How are the h[] values computed?
For this purpose, the Bellman-Ford algorithm is applied. The whole process is as follows: a new vertex is added to the graph, linked by edges to every vertex already present. The h[] values are defined as the shortest distances from the new vertex to every existing vertex.

Method: Let G be the given graph. Create a new vertex s and add edges from s to every vertex of G; let G' be the modified graph.
1. Run the Bellman-Ford algorithm on G' with s as the source. Let the distances computed by Bellman-Ford be h[0], h[1], ..., h[V-1]. If a negative weight cycle is discovered, stop. Note that the new vertex s cannot itself produce a negative weight cycle, since s has no incoming edge: all of its edges point outward.
2. Re-weight the original graph's edges: for every edge (u, v), assign the new weight w(u, v) + h[u] - h[v].
3. Eliminate the extra vertex s and run Dijkstra's method from every other vertex.

How does the transformation ensure non-negative edge weights? Because the h[] values are shortest distances, the following property always holds: h[v] <= h[u] + w(u, v). The property indicates that the shortest distance from s to v must be smaller than or equal to the shortest distance from s to u plus the weight of the edge (u, v). The revised weight is w(u, v) + h[u] - h[v], and the inequality h[v] <= h[u] + w(u, v) makes every new weight greater than or equal to zero.

For instance, consider the graph below (shown as a figure). We add edges from a new source s to every vertex of the original graph; in the diagram, s is labelled 4. The Bellman-Ford technique gives us the shortest distances from 4 to all other vertices: the shortest distances from 4 to vertices 0, 1, 2 and 3 are 0, -5, -1 and 0 respectively, i.e., h[] = {0, -5, -1, 0}. After obtaining these distances, we eliminate the source vertex 4 and re-weight the edges using the formula w'(u, v) = w(u, v) + h[u] - h[v]. Now that all weights are non-negative, we may execute Dijkstra's shortest path algorithm starting from every vertex.
Time Complexity: The primary steps in the method are one run of the Bellman-Ford Algorithm and V runs of Dijkstra. Bellman-Ford takes O(VE) and each Dijkstra run takes O(V log V), so the overall time complexity is O(V^2 log V + VE). For dense graphs, where E approaches V^2, Johnson's algorithm's time complexity turns out to be the same as the Floyd-Warshall Algorithm's.
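The three-step method can be sketched as follows (function names and the (u, v, w) edge format are my own; the example is the 4-vertex graph with h[] = {0, -5, -1, 0} discussed above, reconstructed under the assumption of edges 0->1 of weight -5, 0->2 of 2, 0->3 of 3, 1->2 of 4, and 2->3 of 1):

```python
# A sketch of Johnson's algorithm: Bellman-Ford from a virtual source,
# re-weighting, then Dijkstra from every vertex.
import heapq

def johnson(V, edges):
    """Return a V x V matrix of shortest distances, or None on a negative cycle."""
    INF = float("inf")

    # Step 1: Bellman-Ford from a virtual source s = V with 0-weight edges to all.
    h = [0] * (V + 1)                       # distances from s start at 0
    aug = edges + [(V, v, 0) for v in range(V)]
    for _ in range(V):                      # V+1 vertices -> V relaxation rounds
        for u, v, w in aug:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    for u, v, w in aug:                     # one extra round detects negative cycles
        if h[u] + w < h[v]:
            return None

    # Step 2: re-weight: w'(u, v) = w(u, v) + h[u] - h[v] (all non-negative now).
    adj = [[] for _ in range(V)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))

    # Step 3: Dijkstra from every vertex, then undo the re-weighting.
    dist = [[INF] * V for _ in range(V)]
    for s in range(V):
        d = [INF] * V
        d[s] = 0
        pq = [(0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u]:                   # stale heap entry
                continue
            for v, w in adj[u]:
                if du + w < d[v]:
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))
        for t in range(V):
            if d[t] < INF:
                dist[s][t] = d[t] - h[s] + h[t]   # convert back to original weights
    return dist

dist = johnson(4, [(0, 1, -5), (0, 2, 2), (0, 3, 3), (1, 2, 4), (2, 3, 1)])
print(dist[0])  # → [0, -5, -1, 0]
```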


Floyd Warshall Algorithm- DSA

Renowned in computer science and graph theory, the Floyd-Warshall algorithm is named after its founders, Robert Floyd and Stephen Warshall, and is a basic tool of the field. It determines the shortest paths between all pairs of nodes in a weighted graph. Efficient and able to manage graphs with both positive and negative edge weights, this method is a flexible tool for addressing a broad spectrum of network and connectivity issues.

Floyd-Warshall Algorithm: Unlike single-source shortest path methods such as Dijkstra and Bellman-Ford, the Floyd-Warshall Algorithm is an all-pairs shortest path algorithm. Both directed and undirected weighted graphs are covered by this method. It does not apply, though, to graphs with negative cycles, that is, cycles in which the sum of the edge weights is negative. To find the shortest distance between every pair of nodes, it uses a dynamic programming approach, checking every conceivable path passing via every other node.

Pseudo-code of the Floyd-Warshall Algorithm: Time Complexity: O(V^3), where V is the number of vertices in the graph; we perform three nested loops, each of size V. Auxiliary Space: O(V^2), to generate a 2-D matrix that saves the shortest distance for every pair of nodes. Note: the program above merely shows the shortest distances. Storing the predecessor information in a separate 2-D matrix would allow us to print the shortest paths as well.

Why is the Floyd-Warshall Algorithm better for dense graphs than for sparse graphs? Dense graphs are those in which the number of edges is considerably greater than the number of vertices; a sparse graph is one with quite few edges. The Floyd-Warshall Algorithm runs in O(V^3) time regardless of the number of edges in the graph, so it is most appropriate for dense graphs. For sparse graphs, Johnson's Algorithm is more appropriate.
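The triple nested loop at the heart of the algorithm can be sketched as follows (the function name and the 4-vertex example matrix are my own choices):

```python
# Floyd-Warshall: for each candidate intermediate vertex k, relax every
# pair (i, j) through k.
INF = float("inf")

def floyd_warshall(dist):
    """dist[i][j] = weight of edge i->j (INF if absent, 0 on the diagonal).
    Updates dist in place into the all-pairs shortest-distance matrix."""
    V = len(dist)
    for k in range(V):               # allow k as an intermediate vertex
        for i in range(V):
            for j in range(V):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

graph = [[0,   5,   INF, 10],
         [INF, 0,   3,   INF],
         [INF, INF, 0,   1],
         [INF, INF, INF, 0]]
floyd_warshall(graph)
print(graph[0])  # → [0, 5, 8, 9]
```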


How to find Shortest Paths- using Dijkstra’s Algorithm

Given a weighted graph and a source vertex in the graph, find the shortest paths from the source to every other vertex. Note: the given graph has no negative-weight edges.

Dijkstra's Algorithm using an Adjacency Matrix: The goal is to generate a shortest path tree (SPT) with the given source as root. Maintain two sets: one set comprises the vertices covered by the shortest-path tree; the other set comprises the vertices not yet included in that tree. At every stage of the method, find a vertex in the second set (not yet included) with minimum distance from the source.

Notes: Though it computes the shortest distances, the code does not record the path information. To recover the paths, create a parent array, update it whenever a distance changes, and use it to display the shortest path from the source to each vertex. The time complexity of this implementation is O(V^2). If the input graph is represented using an adjacency list, a binary heap brings the complexity down to O(E * log V); for further information, consult Dijkstra's Algorithm for Adjacency List Representation. Graphs with negative-weight edges cannot benefit from Dijkstra's algorithm.

Why does Dijkstra's Algorithm fail for graphs with negative edges? Negative weights are problematic because Dijkstra's algorithm assumes that once a node is included in the set of visited nodes, its distance is final and will not change. Negative weights, however, can make this assumption produce erroneous results. In the graph above, A is the source node; between the edges A to B and A to C, A to B has the smaller weight, so Dijkstra fixes the shortest distance of B as 2. Yet, because of the negative edge from C to B, the actual shortest distance reduces to 1, which Dijkstra fails to discover.
Dijkstra's Algorithm using an Adjacency List in O(E log V): A heap (or priority queue) is always advisable for Dijkstra's method, since the necessary operations (extract-minimum and decrease-key) match the strengths of the heap. The issue, though, is that a standard priority_queue does not support decreasing a key. To work around this, instead of updating a key, insert one additional copy of it; we thus let the priority queue contain several instances of the same vertex. This method has the following main characteristics and does not call for a decrease-key operation:
- Whenever the distance of a vertex decreases, we add one more instance of that vertex to the priority_queue. Even when a vertex has several instances queued, we only process the instance with minimum distance and ignore the others.
- The time complexity stays O(E * log V): O(log E) is the same as O(log V), and at most O(E) entries will be in the priority queue.

Time complexity: O(E * log V), where E is the number of edges and V is the number of vertices. Auxiliary space: O(V).

Applications:
- Google Maps shows the shortest distance between a source and a destination by means of Dijkstra's Algorithm.
- Dijkstra's technique lays the foundation for several routing protocols in computer networking, such as OSPF (Open Shortest Path First) and IS-IS (Intermediate System to Intermediate System).
- Dijkstra's algorithm is applied in traffic management and transportation systems to maximize traffic flow, reduce congestion, and create the most effective vehicle routes.
- Dijkstra's method helps airlines create flight routes that cut travel time and fuel use.
- Electronic design automation uses Dijkstra's algorithm to route connections on very-large-scale integration (VLSI) chips and integrated circuits.
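The duplicate-entries-instead-of-decrease-key idea can be sketched as follows (function names are my own; the example is the classic 9-vertex graph commonly used to illustrate Dijkstra):

```python
# Dijkstra with a priority queue: push a fresh copy of a vertex whenever
# its distance improves, and skip stale copies when they are popped.
import heapq

def dijkstra(V, edges, src):
    """edges: list of (u, v, w) for an undirected graph with non-negative
    weights. Returns the list of shortest distances from src."""
    adj = [[] for _ in range(V)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    dist = [float("inf")] * V
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:              # stale duplicate entry: ignore it
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))   # push a fresh copy
    return dist

edges = [(0, 1, 4), (0, 7, 8), (1, 2, 8), (1, 7, 11), (2, 3, 7),
         (2, 8, 2), (2, 5, 4), (3, 4, 9), (3, 5, 14), (4, 5, 10),
         (5, 6, 2), (6, 7, 1), (6, 8, 6), (7, 8, 7)]
print(dijkstra(9, edges, 0))  # → [0, 4, 12, 19, 21, 11, 9, 8, 14]
```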


Depth First Search or DFS for a Graph

Depth First Search (or DFS) for a graph resembles depth-first traversal of a tree. As with trees, we move through the adjacent vertices one by one. When we descend into an adjacent vertex, we fully complete the traversal of all vertices reachable through it. After we have completed traversing one adjacent vertex and its reachable vertices, we go to the next adjacent vertex and resume the process. This is like a tree traversal in which we first go entirely over the left subtree and then proceed to the right subtree. Graphs differ from trees primarily in that they may contain cycles, so a node may be reachable more than once. We employ a boolean visited array to prevent repeatedly processing a node.

Example: Input: adj = [[1, 2], [0, 2], [0, 1, 3, 4], [2], [2]]

Starting from a given source, the algorithm visits all vertices reachable from the source in the given undirected graph. It resembles preorder tree traversal, in which we visit the root and then recur for its children. Because a graph may have loops, we ensure that no vertex is processed twice by means of the additional visited array.

Time complexity: O(V + E), where V is the graph's vertex count and E its edge count. Auxiliary space: O(V + E), since an additional visited array of size V is needed, along with the stack space for the recursive calls of the DFS function. Please find the details in Complexity Analysis of Depth First Search.

DFS for Complete Traversal of a Disconnected Undirected Graph: The above approach takes a source as input and prints just those vertices that are reachable from the source; for a disconnected graph it therefore depends on the source.
Let us now handle the case where the graph may be disconnected, and print all vertices without being given a source. The concept is straightforward: instead of calling DFS from a single vertex, we call the DFS designed above for every unvisited vertex.

Depth First Search or DFS on a Directed Graph: Depth-first search (DFS) is a fundamental method for investigating graph structures. In directed graphs, DFS can begin from a given point and visit all the connected nodes. It can also ensure that every element of the graph is visited, even when the graph has disconnected parts. Starting from a single point, this article describes DFS's operation and how one can traverse a complete graph including disconnected sections.

DFS from a Given Source of a Directed Graph: Starting at a given vertex, depth-first search (DFS) explores a directed graph by following each path as far down as it can go. When we come upon a vertex with no unvisited neighbours, we retreat to the previous vertex to investigate any other unexplored paths. Finding paths, verifying connectivity, and investigating all nodes reachable from a starting point are three of the tasks where this method is most helpful.

The mechanism:
- Maintain a boolean visited array to record which vertices have been seen already. When the graph has cycles, this prevents the traversal from running endlessly.
- Visit the source node's unvisited neighbours using recursion: first mark the current vertex as visited and handle it (by printing its value, for instance), then recursively visit every unvisited neighbour of the current vertex.
- Backtrack to the previous vertex to investigate other unvisited paths when a vertex lacks unvisited neighbours.

DFS for Complete Traversal of Disconnected Directed Graphs: Edge directions in a directed graph allow us to go from one vertex to another only in the direction the edge points. A disconnected graph is one in which not all vertices are reachable from a single vertex.
For a disconnected graph, the above approach prints just those vertices that are reachable from the source, and thereby depends on the source. To generate all vertices without any source, even when the graph may be disconnected, we must make sure the DFS algorithm begins from every unvisited vertex, therefore covering all components of the graph. Time Complexity: O(V + E); the complexity is identical, as we visit every vertex at most once and every edge is traversed twice in an undirected graph and at most once in a directed one. Auxiliary Space: O(V + E), as an additional visited array of size V is needed, along with the stack space for the recursive calls of the DFS function.
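The restart-from-every-unvisited-vertex idea can be sketched as follows (the function name is my own; the adjacency list is the example from the article):

```python
# Recursive DFS that covers every component of a possibly disconnected graph.
def dfs_full(adj):
    """adj: adjacency list for vertices 0..V-1. Returns vertices in visit order."""
    V = len(adj)
    visited = [False] * V
    order = []

    def dfs_rec(u):
        visited[u] = True        # mark before recursing, so cycles terminate
        order.append(u)
        for v in adj[u]:
            if not visited[v]:
                dfs_rec(v)

    for u in range(V):           # restart from every unvisited vertex
        if not visited[u]:
            dfs_rec(u)
    return order

# The example adjacency list from the article
print(dfs_full([[1, 2], [0, 2], [0, 1, 3, 4], [2], [2]]))  # → [0, 1, 2, 3, 4]
```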
