Revisiting nxp algorithms #63

Merged: 9 commits, Jun 6, 2024
4 changes: 2 additions & 2 deletions .github/workflows/test.yml
@@ -28,8 +28,8 @@ jobs:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
-python -m pip install scipy pandas pytest-cov pytest-randomly
-# matplotlib lxml pygraphviz pydot sympy # Extra networkx deps we don't need yet
+python -m pip install scipy numpy pytest-randomly
+# pandas pytest-cov matplotlib lxml pygraphviz pydot sympy # Extra networkx deps we don't need yet
python -m pip install git+https://github.com/networkx/networkx.git@main
python -m pip install .
echo "Done with installing"
4 changes: 4 additions & 0 deletions .gitignore
@@ -109,6 +109,7 @@ venv/
ENV/
env.bak/
venv.bak/
+nxp-dev/

# Spyder project settings
.spyderproject
@@ -131,3 +132,6 @@ dmypy.json
# asv
results/
html/

+# get_info update script
+temp__init__.py
41 changes: 19 additions & 22 deletions CONTRIBUTING.md
@@ -82,52 +82,49 @@ To add any additional tests, **specific to nx_parallel**, you can follow the way

## Documentation syntax

-For displaying a small note about nx-parallel's implementation at the end of the main NetworkX documentation, we use the `backend_info` [entry_point](https://packaging.python.org/en/latest/specifications/entry-points/#entry-points) (in the `pyproject.toml` file). The [`get_info` function](https://github.com/networkx/nx-parallel/blob/main/_nx_parallel/__init__.py) is used to parse the docstrings of all the algorithms in nx-parallel and display the nx-parallel specific documentation on the NetworkX's main docs, in the "Additional Backend implementations" box, as shown in the screenshot below.
+For displaying a small note about nx-parallel's implementation at the end of the main NetworkX documentation, we use the `backend_info` [entry_point](https://packaging.python.org/en/latest/specifications/entry-points/#entry-points) (in the `pyproject.toml` file). The [`get_info` function](./_nx_parallel/__init__.py) is used to parse the docstrings of all the algorithms in nx-parallel and display the nx-parallel specific documentation on the NetworkX's main docs, in the "Additional Backend implementations" box, as shown in the screenshot below.

-![backend_box_ss](https://github.com/networkx/nx-parallel/blob/main/assets/images/backend_box_ss.png)
+![backend_box_ss](./assets/images/backend_box_ss.png)
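For orientation, the sketch below shows roughly how such an entry point gets discovered at runtime. It assumes the `networkx.backend_info` entry-point group name from the networkx backend spec; it is not code from this PR:

```py
# Sketch: discovering a "backend_info" entry point. nx-parallel registers
# its get_info function under this group in pyproject.toml (the group name
# "networkx.backend_info" is an assumption based on the networkx backend spec).
from importlib.metadata import entry_points

for ep in entry_points(group="networkx.backend_info"):
    get_info = ep.load()  # e.g. _nx_parallel.get_info
    info = get_info()     # a plain dict describing the backend's functions
    print(ep.name, type(info))
```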

-Here is how the docstring should be formatted in nx-parallel:
+nx-parallel follows [sphinx docstring guidelines](https://the-ultimate-sphinx-tutorial.readthedocs.io/en/latest/_guide/_styleguides/docstrings-guidelines.html) for writing docstrings. But, while extracting the docstring to display on the main networkx docs, only the first paragraph of the function's description and the first paragraph of each parameter's description is extracted and displayed. So, make sure to include all the necessary information in the first paragraphs themselves. And you only need to include the additional **backend** parameters in the `Parameters` section and not all the parameters. Also, it is recommended to include a link to the networkx function's documentation page in the docstring, at the end of the function description.

+Here is an example of how the docstrings should be formatted in nx-parallel:

```.py
-def betweenness_centrality(
-G, k=None, normalized=True, weight=None, endpoints=False, seed=None, get_chunks="chunks"
-):
-"""[FIRST PARA DISPLAYED ON MAIN NETWORKX DOCS AS FUNC DESC]
-The parallel computation is implemented by dividing the
-nodes into chunks and computing betweenness centrality for each chunk concurrently.
+def parallel_func(G, nx_arg, additional_backend_arg_1, additional_backend_arg_2=None):
+"""The parallel computation is implemented by dividing the
+nodes into chunks and ..... [ONLY THIS PARAGRAPH WILL BE DISPLAYED ON THE MAIN NETWORKX DOCS]

Some more additional information about the function.

+networkx.func : <link to the function's networkx docs page>

Parameters
------------- [EVERYTHING BELOW THIS LINE AND BEFORE THE NETWORKX LINK WILL BE DISPLAYED IN ADDITIONAL PARAMETER'S SECTION ON NETWORKX MAIN DOCS]
-get_chunks : function (default = "chunks")
-A function that takes in nodes as input and returns node_chunks...[YOU CAN MULTIPLE PARAGRAPHS FOR EACH PARAMETER, IF NEEDED, SEPARATED BY 1 BLANK LINE]
+----------
+additional_backend_arg_1 : int or float
+[YOU CAN HAVE MULTIPLE PARAGRAPHS BUT ONLY THE FIRST PARAGRAPH WILL BE DISPLAYED ON THE MAIN NETWORKX DOCS]

-[LEAVE 2 BLANK LINES BETWEEN EACH PARAMETER]
-parameter 2 : int
+additional_backend_arg_2 : None or str (default=None)
....
.
.
.
-[LEAVE 1 BLANK LINE BETWEEN THE PARAMETERS SECTION AND THE LINK]
-networkx.betweenness_centrality : https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.centrality.betweenness_centrality.html
"""
```

## Chunking

In parallel computing, "chunking" refers to dividing a large task into smaller, more manageable chunks that can be processed simultaneously by multiple computing units, such as CPU cores or distributed computing nodes. It's like breaking down a big task into smaller pieces so that multiple workers can work on different pieces at the same time, and in the case of nx-parallel, this usually speeds up the overall process.

-The default chunking in nx-parallel is done by first determining the number of available CPU cores and then allocating the nodes (or edges or any other iterator) per chunk by dividing the total number of nodes by the total CPU cores available. (ref. [chunk.py](https://github.com/networkx/nx-parallel/blob/main/nx_parallel/utils/chunk.py)). This default chunking can be overridden by the user by passing a custom `get_chunks` function to the algorithm as a kwarg. While adding a new algorithm, you can change this default chunking, if necessary (ref. [PR](https://github.com/networkx/nx-parallel/pull/33)). Also, when [the `config` PR](https://github.com/networkx/networkx/pull/7225) is merged in networkx, and the `config` will be added to nx-parallel, then the user would be able to control the number of CPU cores they would want to use and then the chunking would be done accordingly.
+The default chunking in nx-parallel is done by first determining the number of available CPU cores and then allocating the nodes (or edges or any other iterator) per chunk by dividing the total number of nodes by the total CPU cores available. (ref. [chunk.py](./nx_parallel/utils/chunk.py)). This default chunking can be overridden by the user by passing a custom `get_chunks` function to the algorithm as a kwarg. While adding a new algorithm, you can change this default chunking, if necessary (ref. [PR](https://github.com/networkx/nx-parallel/pull/33)). Also, when [the `config` PR](https://github.com/networkx/networkx/pull/7225) is merged in networkx, and the `config` will be added to nx-parallel, then the user would be able to control the number of CPU cores they would want to use and then the chunking would be done accordingly.
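As a rough illustration of the default strategy described above, a simplified sketch (not the actual `chunk.py` code; the `get_chunks` override shown in the comment is hypothetical):

```py
import os

def chunks(iterable, n):
    """Yield `n` roughly equal slices of `iterable` (sketch of the default)."""
    items = list(iterable)
    size, rem = divmod(len(items), n)
    start = 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)
        if start < end:
            yield items[start:end]
        start = end

# Default: one chunk of nodes per available CPU core.
node_chunks = list(chunks(range(103), os.cpu_count() or 1))

# A custom override would be passed as a kwarg, e.g.:
#   nxp.betweenness_centrality(G, get_chunks=lambda nodes: chunks(nodes, 4))
```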

## General guidelines on adding a new algorithm

- To get started with adding a new algorithm, you can refer to the existing implementations in nx-parallel and also to [joblib's documentation on embarrassingly parallel `for` loops](https://joblib.readthedocs.io/en/latest/parallel.html).
- The algorithm that you are considering adding to nx-parallel should be in the main networkx repository and it should have the `_dispatchable` decorator. If not, you can consider adding a sequential implementation in networkx first.
- Check-list for adding a new function (see the sketch after this list):
- [ ] Add the parallel implementation (make sure the API doesn't break); the file structure should be the same as that in networkx.
-- [ ] add the function to the `Dispatcher` class in [interface.py](https://github.com/networkx/nx-parallel/blob/main/nx_parallel/interface.py) (take care of the `name` parameter in `_dispatchable` (ref. [docs](https://networkx.org/documentation/latest/reference/backends.html)))
+- [ ] add the function to the `BackendInterface` class in [interface.py](./nx_parallel/interface.py) (take care of the `name` parameter in `_dispatchable` (ref. [docs](https://networkx.org/documentation/latest/reference/backends.html)))
- [ ] update the `__init__.py` files accordingly
- [ ] docstring following the above format
-- [ ] run the [timing script](https://github.com/networkx/nx-parallel/blob/main/timing/timing_individual_function.py) to get the performance heatmap
+- [ ] run the [timing script](./timing/timing_individual_function.py) to get the performance heatmap
- [ ] add additional tests (if any)
- [ ] add benchmark(s) for the new function (ref. the README in the benchmarks folder for more details)
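Putting the checklist together, a new algorithm might look like the minimal skeleton below. The function `parallel_degree_sum` and the helper `_default_chunks` are hypothetical, for illustration only; real implementations should use the shared helpers in `nx_parallel/utils/chunk.py`:

```py
import os

from joblib import Parallel, delayed
import networkx as nx

def _default_chunks(seq, n):
    """Split `seq` into `n` roughly equal chunks (sketch of the default)."""
    size, rem = divmod(len(seq), n)
    out, start = [], 0
    for i in range(n):
        end = start + size + (1 if i < rem else 0)
        out.append(seq[start:end])
        start = end
    return [c for c in out if c]

def _chunk_degree_sum(G, chunk):
    # Work done on one chunk, in one worker.
    return sum(d for _, d in G.degree(chunk))

def parallel_degree_sum(G, get_chunks="chunks"):
    """The parallel computation is implemented by dividing the nodes into
    chunks and summing the degrees of each chunk concurrently.

    Parameters
    ----------
    get_chunks : str, function (default = "chunks")
        A function that takes in a list of all the nodes as input and
        returns an iterable `node_chunks`.
    """
    if hasattr(G, "graph_object"):  # unwrap a ParallelGraph
        G = G.graph_object
    nodes = list(G.nodes)
    if get_chunks == "chunks":
        node_chunks = _default_chunks(nodes, os.cpu_count() or 1)
    else:
        node_chunks = get_chunks(nodes)
    sums = Parallel(n_jobs=-1)(
        delayed(_chunk_degree_sum)(G, chunk) for chunk in node_chunks
    )
    return sum(sums)

print(parallel_degree_sum(nx.path_graph(10)))  # 18
```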

2 changes: 1 addition & 1 deletion README.md
@@ -123,7 +123,7 @@ nxp.betweenness_centrality(H)

2. Right now there isn't much difference between `nx.Graph` and `nxp.ParallelGraph`, so `method 3` would work fine, but it is not recommended because in the future that might not be the case.
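For reference, the invocation styles being compared look roughly like this (a sketch; `backend="parallel"` is the backend name nx-parallel registers, and `method 3` refers to calling the `nxp` function directly):

```py
import networkx as nx
import nx_parallel as nxp

G = nx.path_graph(100)
H = nxp.ParallelGraph(G)

bc1 = nx.betweenness_centrality(G, backend="parallel")  # via the backend kwarg
bc2 = nx.betweenness_centrality(H)   # dispatch triggered by the graph type
bc3 = nxp.betweenness_centrality(H)  # calling the backend directly ("method 3")
```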

-Feel free to contribute to nx-parallel. You can find the contributing guidelines [here](https://github.com/networkx/nx-parallel/blob/main/CONTRIBUTING.md). If you'd like to implement a feature or fix a bug, we'd be happy to review a pull request. Please make sure to explain the changes you made in the pull request description. And feel free to open issues for any problems you face, or for new features you'd like to see implemented.
+Feel free to contribute to nx-parallel. You can find the contributing guidelines [here](./CONTRIBUTING.md). If you'd like to implement a feature or fix a bug, we'd be happy to review a pull request. Please make sure to explain the changes you made in the pull request description. And feel free to open issues for any problems you face, or for new features you'd like to see implemented.

This project is managed under the NetworkX organisation, so the [code of conduct of NetworkX](https://github.com/networkx/networkx/blob/main/CODE_OF_CONDUCT.rst) applies here as well.

26 changes: 18 additions & 8 deletions _nx_parallel/__init__.py
@@ -13,7 +13,9 @@ def get_info():
"number_of_isolates": {
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/isolate.py#L8",
"additional_docs": "The parallel computation is implemented by dividing the list of isolated nodes into chunks and then finding the length of each chunk in parallel and then adding all the lengths at the end.",
"additional_parameters": None,
"additional_parameters": {
'get_chunks : str, function (default = "chunks")': "A function that takes in a list of all the isolated nodes as input and returns an iterable `isolate_chunks`. The default chunking is done by slicing the `isolates` into `n` chunks, where `n` is the total number of CPU cores available."
},
},
"square_clustering": {
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/cluster.py#L10",
@@ -25,22 +27,30 @@
"local_efficiency": {
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/efficiency_measures.py#L9",
"additional_docs": "The parallel computation is implemented by dividing the nodes into chunks and then computing and adding global efficiencies of all node in all chunks, in parallel, and then adding all these sums and dividing by the total number of nodes at the end.",
"additional_parameters": None,
"additional_parameters": {
'get_chunks : str, function (default = "chunks")': "A function that takes in a list of all the nodes as input and returns an iterable `node_chunks`. The default chunking is done by slicing the `nodes` into `n` chunks, where `n` is the total number of CPU cores available."
},
},
"closeness_vitality": {
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/vitality.py#L9",
"additional_docs": "The parallel computation is implemented only when the node is not specified. The closeness vitality for each node is computed concurrently.",
"additional_parameters": None,
"additional_parameters": {
'get_chunks : str, function (default = "chunks")': "A function that takes in a list of all the nodes as input and returns an iterable `node_chunks`. The default chunking is done by slicing the `nodes` into `n` chunks, where `n` is the total number of CPU cores."
},
},
"is_reachable": {
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/tournament.py#L10",
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/tournament.py#L12",
"additional_docs": "The function parallelizes the calculation of two neighborhoods of vertices in `G` and checks closure conditions for each neighborhood subset in parallel.",
"additional_parameters": None,
"additional_parameters": {
'get_chunks : str, function (default = "chunks")': "A function that takes in a list of all the nodes as input and returns an iterable `node_chunks`. The default chunking is done by slicing the `nodes` into `n` chunks, where `n` is the total number of CPU cores available."
},
},
"tournament_is_strongly_connected": {
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/tournament.py#L54",
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/tournament.py#L59",
"additional_docs": "The parallel computation is implemented by dividing the nodes into chunks and then checking whether each node is reachable from each other node in parallel.",
"additional_parameters": None,
"additional_parameters": {
'get_chunks : str, function (default = "chunks")': "A function that takes in a list of all the nodes as input and returns an iterable `node_chunks`. The default chunking is done by slicing the `nodes` into `n` chunks, where `n` is the total number of CPU cores available."
},
},
"all_pairs_node_connectivity": {
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/connectivity/connectivity.py#L17",
@@ -127,7 +137,7 @@ def get_info():
},
},
"all_pairs_shortest_path": {
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/shortest_paths/unweighted.py#L62",
"url": "https://github.com/networkx/nx-parallel/blob/main/nx_parallel/algorithms/shortest_paths/unweighted.py#L63",
"additional_docs": "The parallel implementation first divides the nodes into chunks and then creates a generator to lazily compute shortest paths for each `node_chunk`, and then employs joblib's `Parallel` function to execute these computations in parallel across all available CPU cores.",
"additional_parameters": {
'get_chunks : str, function (default = "chunks")': "A function that takes in an iterable of all the nodes as input and returns an iterable `node_chunks`. The default chunking is done by slicing the `G.nodes` into `n` chunks, where `n` is the number of CPU cores."
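For a quick sanity check of what these entries look like at runtime, a sketch (the dict's top-level layout isn't shown in this hunk, so the lookup below hedges on a `"functions"` key):

```py
from _nx_parallel import get_info

info = get_info()
# Per-function entries may sit under a "functions" key; fall back to the
# top level if not (the surrounding structure isn't visible in this hunk).
funcs = info.get("functions", info)
entry = funcs["number_of_isolates"]
print(entry["url"])
print(entry["additional_docs"])
print(entry["additional_parameters"])  # now a populated dict, not None
```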
53 changes: 33 additions & 20 deletions _nx_parallel/update_get_info.py
@@ -1,7 +1,13 @@
import os
import ast

__all__ = ["get_funcs_info", "extract_docstrings_from_file", "extract_from_docs"]
__all__ = [
"get_funcs_info",
"extract_docstrings_from_file",
"extract_add_docs",
"extract_add_params",
"get_url",
]

# Helper functions for get_info

@@ -21,11 +27,10 @@ def get_funcs_info():
path = os.path.join(root, file)
d = extract_docstrings_from_file(path)
for func in d:
-par_docs, par_params = extract_from_docs(d[func])
funcs[func] = {
"url": get_url(path, func),
"additional_docs": par_docs,
"additional_parameters": par_params,
"additional_docs": extract_add_docs(d[func]),
"additional_parameters": extract_add_params(d[func]),
}
return funcs

@@ -60,8 +65,8 @@ def extract_docstrings_from_file(file_path):
return docstrings


-def extract_from_docs(docstring):
-"""Extract the parallel documentation and parallel parameter description from the given doctring."""
+def extract_add_docs(docstring):
+"""Extract the parallel documentation description from the given docstring."""
try:
# Extracting Parallel Computation description
# Assuming that the first para in docstring is the function's PC desc
@@ -76,30 +81,38 @@ def extract_from_docs(docstring):
except Exception as e:
print(e)
par_docs = None
+return par_docs
+
+
+def extract_add_params(docstring):
+"""Extract the parallel parameter description from the given docstring."""
try:
# Extracting extra parameters
# Assuming that the last para in docstring is the function's extra params
par_params = {}
-par_params_ = docstring.split("------------\n")[1]
-
-par_params_ = par_params_.split("\n\n\n")
-for i in par_params_:
-j = i.split("\n")
-par_params[j[0]] = "\n".join(
-[line.strip() for line in j[1:] if line.strip()]
-)
-if i == par_params_[-1]:
-par_params[j[0]] = " ".join(
-[line.strip() for line in j[1:-1] if line.strip()]
-)
-par_docs = par_docs.replace("\n", " ")
+par_params_ = docstring.split("----------\n")[1]
+par_params_ = par_params_.split("\n")

+i = 0
+while i < len(par_params_):
+line = par_params_[i]
+if " : " in line:
+key = line.strip()
+n = par_params_.index(key) + 1
+par_desc = ""
+while n < len(par_params_) and par_params_[n] != "":
+par_desc += par_params_[n].strip() + " "
+n += 1
+par_params[key] = par_desc.strip()
+i = n + 1
+else:
+i += 1
+except IndexError:
+par_params = None
except Exception as e:
print(e)
par_params = None
-return par_docs, par_params
+return par_params


def get_url(file_path, function_name):
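To sanity-check the two extractors above against a docstring written in the CONTRIBUTING.md format, a sketch (the module path is an assumption, and note the parser expects the dedented form that `ast.get_docstring` returns):

```py
from _nx_parallel.update_get_info import extract_add_docs, extract_add_params

doc = (
    "The parallel computation is implemented by dividing the\n"
    "nodes into chunks.\n"
    "\n"
    "Parameters\n"
    "----------\n"
    'get_chunks : str, function (default = "chunks")\n'
    "    A function that takes in a list of all the nodes as input and\n"
    "    returns an iterable `node_chunks`.\n"
)

print(extract_add_docs(doc))
# roughly: "The parallel computation is implemented by dividing the nodes into chunks."
print(extract_add_params(doc))
# roughly: {'get_chunks : str, function (default = "chunks")':
#           'A function that takes in a list ... returns an iterable `node_chunks`.'}
```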
Binary file modified assets/images/backend_box_ss.png
6 changes: 3 additions & 3 deletions nx_parallel/algorithms/approximation/connectivity.py
@@ -24,16 +24,16 @@ def approximate_all_pairs_node_connectivity(
will run the parallel implementation of `all_pairs_node_connectivity` present in the
`connectivity/connectivity`. Use `nxp.approximate_all_pairs_node_connectivity` instead.

+networkx.all_pairs_node_connectivity : https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.approximation.connectivity.all_pairs_node_connectivity.html

Parameters
-------------
+----------
get_chunks : str, function (default = "chunks")
A function that takes in `list(iter_func(nbunch, 2))` as input and returns
an iterable `pairs_chunks`, here `iter_func` is `permutations` in case of
directed graphs and `combinations` in case of undirected graphs. The default
is to create chunks by slicing the list into `n` chunks, where `n` is the
number of CPU cores, such that the size of each chunk is at most 10, and at least 1.

-networkx.all_pairs_node_connectivity : https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.approximation.connectivity.all_pairs_node_connectivity.html
"""

if hasattr(G, "graph_object"):
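As a usage sketch for the `get_chunks` parameter documented above (the chunk size of 10 mirrors the default described in the docstring; the override itself is hypothetical):

```py
import itertools

import networkx as nx
import nx_parallel as nxp

def pairs_chunks(pairs):
    # Hypothetical override: fixed chunks of at most 10 node pairs each.
    it = iter(pairs)
    while chunk := list(itertools.islice(it, 10)):
        yield chunk

G = nx.erdos_renyi_graph(20, 0.3, seed=42)
result = nxp.approximate_all_pairs_node_connectivity(G, get_chunks=pairs_chunks)
```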