CXXGraph is a small, header-only library that manages graphs and their algorithms in C++. In other words, a "Comprehensive C++ Graph Library", and an alternative to the Boost Graph Library (BGL).
We are looking for:
- Site Developers for the development of the CXXGraph site (for the moment on GitHub Pages);
- Developers and Committers, even at their first experience; we will guide you step by step into the open-source world!
If you are interested, please contact us at zigrazor@gmail.com or contribute to this project. We are waiting for you!
Completed | Description | Date of Completion |
---|---|---|
✔️ | First Optimization | Apr 4, 2022 |
✔️ | Add Benchmark for all algorithms | Oct 5, 2022 |
✔️ | Code Optimization | Oct 5, 2022 |
✔️ | Release 0.4.0 | Oct 7, 2022 |
✔️ | "Const" Code Review #155 | Mar 23, 2023 |
✔️ | Release 0.5.0 | Mar 23, 2023 |
❕ | Test on Partition Algorithm #264 | Mar 21, 2023 |
✔️ | Bug Resolution #263 | Mar 21, 2023 |
✔️ | General Performance Optimization #262 #265 | Mar 28, 2023 |
✔️ | Reduction of Code Issue of Static Analysis | Mar 28, 2023 |
✔️ | First Stable Release 1.0.0 | Mar 28, 2023 |
✔️ | Remove External Dependency #209 #274 #275 | May 7, 2023 |
✔️ | Release 1.0.1 | May 7, 2023 |
✔️ | Make CXXGraph MSVC-Compatible #277 | May 8, 2023 |
✔️ | All namespaces should be titlecase #278 | May 8, 2023 |
✔️ | Release 1.1.0 | May 8, 2023 |
📝 | Switch to C++ 20 standard #266 | TBD |
📝 | Markov Chain Algorithm #107 | TBD |
📝 | Release 1.2.1 | TBD |
📝 | FP-GraphMiner algorithm #105 | TBD |
📝 | Release 1.2.2 | TBD |
📝 | Tarjan's algorithm #103 | TBD |
📝 | Release 1.2.3 | TBD |
✔️ | Graph Topological Sort #104 | Nov 3, 2022 |
📝 | Official Site Release | TBD |
📝 | Release 1.3.0 | TBD |
📝 | Custom export and import #19 | TBD |
📝 | Input & Output file format #172 | TBD |
📝 | Release 1.4.0 | TBD |
✔️ | Multi-Thread implementation of BFS #121 | Dec 6, 2022 |
📝 | Release 1.5.1 | TBD |
❕ | Thread Safe implementations of Boruvka, Prim & Kruskal algorithm #128 | Oct 5, 2022 |
📝 | Release 1.6.0 | TBD |
📝 | Edge-Cut Partition Algorithm #183 | TBD |
📝 | Release 1.6.1 | TBD |
✔️ | WB-Libra Partition Algorithm #178 | Nov 25, 2022 |
📝 | Release 1.7.0 | TBD |
📝 | Introduce Hypergraph #122 | TBD |
📝 | Stable Release 2.0.0 | TBD |
📝 | TBD | TBD |
- CXXGraph
- Introduction
- Hacktoberfest 2k22
- We are Looking for...
- Roadmap
- Table of Contents
- Install and Uninstall
- Classes Explanation
- Requirements
- How to use
- Example
- Unit-Test Execution
- Benchmark Execution
- Packaging
- Algorithm Explanation
- Partition Algorithm Explanation
- How to contribute
- Site
- Contact
- Support
- References
- Credits
- Contributors
- Cite Us
- Hacktoberfest 2k21
- Other Details
- Author
On Unix/Linux system you need to execute the following command to install:
$ sudo tar xjf CXXGraph-{version}.tar.bz2
to uninstall:
$ sudo rm -f /usr/include/Graph.hpp /usr/include/CXXGraph*
On Fedora/CentOS/RedHat system you need to execute the following command to install:
$ sudo rpm -ivh CXXGraph-{version}.noarch.rpm
to uninstall:
$ sudo rpm -e CXXGraph-{version}
On Debian/Ubuntu system you need to execute the following command to install:
$ sudo dpkg -i CXXGraph_{version}.deb
to uninstall:
$ sudo apt-get remove CXXGraph
You can install the library from source using CMake. After the compilation phase, you can use:
$ sudo make install
to install the library.
The Classes Explanation can be found in the Doxygen documentation, in the Classes section.
- The minimum C++ standard required is C++17
- A GCC compiler version greater than 7.3.0 OR
- A MSVC compiler that supports C++17
The library is very simple to use: just put the header file where you need it!
Work in Progress
The unit tests require CMake version 3.9 or greater and the Google Test library.
git clone https://github.com/google/googletest.git
cd googletest # Main directory of the cloned repository.
mkdir -p build # Create a directory to hold the build output.
cd build
cmake .. # Generate native build scripts for GoogleTest.
make # Compile
sudo make install # Install in /usr/local/ by default
From the base directory:
mkdir -p build # Create a directory to hold the build output.
cd build # Enter the build folder
cmake .. # Generate native build scripts.
make # Compile
After the compilation, you can run the executable named "test_exe" under the "build" directory with the simple command ./test_exe.
The benchmarks require CMake version 3.9 or greater, the Google Test library, and the Google Benchmark library.
# Check out the library.
$ git clone https://github.com/google/benchmark.git
# Benchmark requires Google Test as a dependency. Add the source tree as a subdirectory.
$ git clone https://github.com/google/googletest.git benchmark/googletest
# Go to the library root directory
$ cd benchmark
# Make a build directory to place the build output.
$ cmake -E make_directory "build"
# Generate build system files with cmake.
$ cmake -E chdir "build" cmake -DCMAKE_BUILD_TYPE=Release ../
# or, starting with CMake 3.13, use a simpler form:
# cmake -DCMAKE_BUILD_TYPE=Release -S . -B "build"
# Build the library.
$ cmake --build "build" --config Release
# install library
$ sudo cmake --build "build" --config Release --target install
From the base directory:
mkdir -p build # Create a directory to hold the build output.
cd build # Enter the build folder
cmake -DBENCHMARK=ON .. # Generate native build scripts with benchmarks enabled.
make # Compile
After the compilation, you can run the executable named "benchmark" under the "build" directory with the simple command ./benchmark.
You can check the benchmark results at this link.
To create the tarball package, follow these steps:
# Enter Packaging Directory
$ cd packaging
# execute the script to generate tarballs
$ ./tarballs.sh
To create the RPM package, follow these steps:
# Enter Packaging Directory
$ cd packaging/rpm
# execute the script to generate the RPM package
$ ./make_rpm.sh
To create the DEB package, follow these steps:
# Enter Packaging Directory
$ cd packaging/deb
# execute the script to generate the DEB package
$ ./make_deb.sh
Dijkstra's Shortest Path Algorithm [Dijkstra's Algorithm](https://www.interviewbit.com/blog/find-shortest-path-dijkstras-algorithm/) is used to find the shortest path from a source node to all other reachable nodes in the graph. The algorithm initially assumes all nodes are unreachable from the given source node, so it marks the distance of every node as infinity (INF, denoting "unable to reach"), and then progressively lowers those distances as shorter paths are discovered.
Dial's algorithm is a specialization of Dijkstra's algorithm.
When edge weights are small integers (bounded by a parameter C), specialized queues which take advantage of this fact can be used to speed up Dijkstra's algorithm. The first algorithm of this type was Dial's algorithm (Dial 1969) for graphs with positive integer edge weights, which uses a bucket queue to obtain a running time of O(|E| + |V|C). (Source: Wikipedia)
Below is the complete algorithm:
- Maintain buckets numbered 0, 1, 2, …, wV, where w is the maximum edge weight.
- Bucket k contains all temporarily labeled nodes with distance label equal to k.
- Nodes in each bucket are represented by a list of vertices.
- Buckets 0, 1, 2, …, wV are checked sequentially until the first non-empty bucket is found. Each node in the first non-empty bucket has, by definition, the minimum distance label.
- One by one, these nodes with minimum distance label are permanently labeled and deleted from the bucket during the scanning process.
- Thus, the operations performed on the buckets are:
- Checking if a bucket is empty
- Adding a vertex to a bucket
- Deleting a vertex from a bucket.
- The position of a temporarily labeled vertex in the buckets is updated accordingly when the distance label of a vertex changes.
- This process is repeated until all vertices are permanently labeled (i.e., the distances of all vertices are finalized).
At this link you can find step-by-step illustrations.
Prim's Algorithm is a greedy algorithm that finds a minimum spanning tree for a weighted undirected graph. This means it finds a subset of the edges that forms a tree including every vertex, where the total weight of all the edges in the tree is minimized. The algorithm operates by building this tree one vertex at a time, starting from an arbitrary vertex, at each step adding the cheapest possible connection from the tree to another vertex.
Steps:
- Initialize a tree with a single vertex, chosen arbitrarily from the graph.
- Grow the tree by one edge: of the edges that connect the tree to vertices not yet in the tree, find the minimum-weight edge, and transfer it to the tree.
- Repeat step 2 (until all vertices are in the tree).
Breadth First Search, also known as BFS, is a graph traversal algorithm. Its time complexity is O(|V| + |E|), where |V| is the number of vertices and |E| the number of edges in the graph. Applications of Breadth First Search are:
- Finding the shortest path between two vertices u and v, with path length measured by the number of edges (an advantage over the depth-first search algorithm).
- Ford-Fulkerson Method for computing the maximum flow in a flow network.
- Testing bipartiteness of a graph.
- Cheney's algorithm for copying garbage collection.
And there are many more...
Depth First Search, also known as DFS, is a graph traversal algorithm. Its time complexity is O(|V| + |E|), where |V| is the number of vertices and |E| the number of edges in the graph. Applications of Depth First Search are:
- Finding connected components
- Finding 2-(edge or vertex)-connected components.
- Finding 3-(edge or vertex)-connected components.
- Finding the bridges of a graph.
- Generating words in order to plot the limit set of a group.
- Finding strongly connected components.
And there are many more...
Best First Search is a class of search algorithms that traverse the graph by exploring the most promising node, chosen according to an evaluation function. The worst-case time complexity is O(n log n), where n is the number of nodes in the graph.
The existence of a cycle in directed and undirected graphs can be determined by whether depth-first search (DFS) finds an edge that points to an ancestor of the current vertex (it contains a back edge). All the back edges which DFS skips over are part of cycles. In an undirected graph, the edge to the parent of a node should not be counted as a back edge, but finding any other already visited vertex will indicate a back edge. In the case of undirected graphs, only O(n) time is required to find a cycle in an n-vertex graph, since at most n − 1 edges can be tree edges.
Many topological sorting algorithms will detect cycles too, since those are obstacles for topological order to exist. Also, if a directed graph has been divided into strongly connected components, cycles only exist within the components and not between them, since cycles are strongly connected.
For directed graphs, distributed message based algorithms can be used. These algorithms rely on the idea that a message sent by a vertex in a cycle will come back to itself. Distributed cycle detection algorithms are useful for processing large-scale graphs using a distributed graph processing system on a computer cluster (or supercomputer).
Applications of cycle detection include the use of wait-for graphs to detect deadlocks in concurrent systems.
The Bellman-Ford Algorithm can be used to find the shortest distance between a source and a target node. Its time complexity is O(|V| · |E|), where |V| is the number of vertices and |E| the number of edges in the graph, which is higher than that of Dijkstra's shortest path algorithm, O(|E| + |V| log |V|). The advantage of Bellman-Ford over Dijkstra is that it can handle graphs with negative edge weights. Furthermore, if the graph contains a negative-weight cycle, the algorithm can detect and report its presence.
This video gives a nice overview of the algorithm implementation. This MIT lecture gives a proof of Bellman-Ford's correctness & its ability to detect negative cycles. Applications:
- Distance‐vector routing protocol
- Routing Information Protocol (RIP)
- Interior Gateway Routing Protocol (IGRP)
The Floyd-Warshall Algorithm computes shortest paths between all pairs of vertices. As a first step, we initialize the solution matrix to be the same as the input graph matrix. Then we update the solution matrix by considering all vertices as intermediate vertices: we pick the vertices one by one and update all shortest paths that include the picked vertex as an intermediate vertex. When we pick vertex k as an intermediate vertex, we have already considered vertices {0, 1, 2, …, k-1} as intermediate vertices. For every pair (i, j) of source and destination vertices, there are two possible cases.
- k is not an intermediate vertex in shortest path from i to j. We keep the value of dist[i][j] as it is.
- k is an intermediate vertex in the shortest path from i to j. We update the value of dist[i][j] to dist[i][k] + dist[k][j] if dist[i][j] > dist[i][k] + dist[k][j].
Kruskal's Algorithm can be used to find the minimum spanning forest of an undirected edge-weighted graph. Its time complexity is O(E log E) = O(E log V), where V is the number of vertices and E the number of edges in the graph. The main speed limitation of this algorithm is sorting the edges.
For a quick understanding of the algorithm procedure, check this video. Some of the real life applications are:
- LAN/TV Network
- Tour Operations
- Water/gas pipe network
- Electric grid
Other algorithms that find the minimum spanning forest are Prim's algorithm and Borůvka's algorithm.
Borůvka's Algorithm is a greedy algorithm that can be used for finding a minimum spanning tree in a graph, or a minimum spanning forest in the case of a graph that is not connected.
The algorithm begins by finding the minimum-weight edge incident to each vertex of the graph, and adding all of those edges to the forest. Then, it repeats a similar process of finding the minimum-weight edge from each tree constructed so far to a different tree, and adding all of those edges to the forest. Each repetition of this process reduces the number of trees, within each connected component of the graph, to at most half of this former value, so after logarithmically many repetitions the process finishes. When it does, the set of edges it has added forms the minimum spanning forest.
Borůvka's algorithm can be shown to take O(log V) iterations of the outer loop until it terminates, and therefore to run in time O(E log V), where E is the number of edges, and V is the number of vertices in G (assuming E ≥ V).
Mathematical definition of the problem: Let G be the set of nodes in a graph and n be a given node in that set. Let C be the non-strict subset of G containing both n and all nodes reachable from n, and let C' be its complement. There's a third set M, which is the non-strict subset of C containing all nodes that are reachable from any node in C'. The problem consists of finding all nodes that belong to C but not to M.
Currently implemented Algorithm:
- Use DFS to find all nodes reachable from n. These are elements of set C.
- Initialize C' to be complement of C (i.e. all nodes - nodes that are in C)
- For all nodes in C', apply DFS and get the list of reachable nodes. This is set M.
- Finally, remove the nodes of C that belong to M. The remaining nodes are our solution.
Application:
This algorithm is used in garbage collection systems to decide which other objects need to be released, given that one object is about to be released.
Ford-Fulkerson Algorithm is a greedy algorithm for finding a maximum flow in a flow network. The idea behind the algorithm is as follows: as long as there is a path from the source (start node) to the sink (end node), with available capacity on all edges in the path, we send flow along one of the paths. Then we find another path, and so on. A path with available capacity is called an augmenting path.
Kosaraju's Algorithm is a linear-time algorithm to find the strongly connected components of a directed graph. It is based on the idea that vertices u and v are strongly connected, i.e. belong to the same strongly connected sub-graph, exactly when v is reachable from u and u is reachable from v. It works as follows:
1. Create an empty stack S and do a DFS traversal of the graph. In the DFS traversal, after calling recursive DFS for the adjacent vertices of a vertex, push the vertex onto the stack.
2. Reverse the directions of all arcs to obtain the transpose graph.
3. Pop vertices from S one by one while S is not empty. Let the popped vertex be v. Take v as a source and do a DFS (call DFSUtil(v)). The DFS starting from v prints the strongly connected component of v.
Kahn's Algorithm finds topological ordering by iteratively removing nodes in the graph which have no incoming edges. When a node is removed from the graph, it is added to the topological ordering and all its edges are removed allowing for the next set of nodes with no incoming edges to be selected.
A vertex-cut partitioning divides the edges of a graph into equal-size partitions. The vertices that hold the endpoints of an edge are placed in the same partition as the edge itself. However, vertices are not unique across partitions and might have to be replicated (cut), due to the distribution of their edges across different partitions.
The replication factor quantifies how many vertices are replicated across machines compared with the number of vertices of the original input graph.
This algorithm is a simple vertex-cut performed in round-robin fashion. It takes the original graph edges and assigns them to the partitions, dividing them into equal (or similar) sizes. This algorithm does not optimize vertex replication (the replication factor) but only balances the edges across the partitions.
Greedy partitioning algorithms use the entire history of the edge assignments to make the next decision. The algorithm stores the set of partitions A(v) to which each already observed vertex v has been assigned, together with the current partition sizes. When processing edge e ∈ E connecting vertices vi, vj ∈ V, the greedy algorithm follows this simple set of rules:
- Rule 1: If neither vi nor vj have been assigned to a partition, then e is placed in the partition with the smallest size in P.
- Rule 2: If only one of the two vertices has been already assigned (without loss of generality assume that vi is the assigned vertex) then e is placed in the partition with the smallest size in A(vi).
- Rule 3: If A(vi) ∩ A(vj) ≠ ∅, then edge e is placed in the partition with the smallest size in A(vi) ∩ A(vj).
- Rule 4: If A(vi) ≠ ∅, A(vj) ≠ ∅ and A(vi) ∩ A(vj) = ∅, then e is placed in the partition with the smallest size in A(vi) ∪ A(vj) and a new vertex replica is created accordingly.
High Degree Replicated First (HDRF) is a greedy vertex-cut algorithm, as described in this paper. This algorithm tries to optimize the replication factor by using the history of the edge assignments and the incremental vertex degrees. A function that takes these two factors into consideration computes the best partition to which to assign the analyzed edge. The replicas created are based on the degrees of the vertices, and the replicated vertices are most likely so-called "hub nodes", i.e. the vertices with the highest degrees.
Efficient and Balanced Vertex-cut (EBV) is an offline vertex-cut algorithm, as described in this paper. This algorithm tries to balance the partitions with respect to the number of edges and vertices of each partition and the replication factor. It applies a formula to evaluate the partition to which to assign each edge, which also takes into consideration the total number of edges and vertices of the graph. The evaluation formula is the following:
The partition with the lowest value is chosen as the partition ID.
If you want to give your support, you can create a pull request or report an issue. If you want to change the code, fix an issue, or implement a new feature, please read our CONTRIBUTING guide.
If you want to discuss a new feature, or you have any questions or suggestions about the library, please open a Discussion or simply chat on
E-Mail : zigrazor@gmail.com
To support me, just star the project or follow me.
To stay updated, watch the project.
We are referenced by:
Thanks to the community of TheAlgorithms for some algorithm inspiration.
Thanks to GeeksForGeeks for some algorithms inspiration.
Thank you to all the people who have already contributed to CXXGraph!
If you use this software, please follow the CITATION instructions. Thank you!
We participated in Hacktoberfest 2021; thank you to all the contributors!
We participated in Hacktoberfest 2022; thank you to all the contributors!
View the Estimated Value of the Project
@ZigRazor |
---|