Build benchmarking infrastructure to compare parallel speed ups #12

Open
MridulS opened this issue Oct 5, 2023 · 6 comments
Labels
Infrastructure Related to the general infrastructure and organisation of code in the repo

Comments

@MridulS (Member) commented Oct 5, 2023

We need to have a quick way of using either GitHub Actions or scripts to run some crude benchmarks while developing new algorithms.

@MridulS (Member, Author) commented Oct 11, 2023

The current benchmarking scripts are not very portable; they should run in some automated fashion.

@Schefflera-Arboricola (Member) commented Oct 14, 2023

@MridulS I'd like to take up this issue, but I don't have any experience building benchmarking infrastructure, so please guide me on this.

A possible benchmarking approach:

We could check whether the average of all the speedup values (in the heatmap, for graphs of different sizes and densities) is greater than 1, to verify that the parallel algorithms are more time-efficient (a sketch of this check is included after the pytest-benchmark example below), or we could use pytest-benchmark as shown below.

test_benchmarks.py

import networkx as nx
import nx_parallel
import pytest

num, p = 300, 0.5
G = nx.fast_gnp_random_graph(num, p, directed=False)
H = nx_parallel.ParallelGraph(G)

@pytest.mark.benchmark
def test_algorithm_performance_G(benchmark):
    # sequential run; replace betweenness_centrality with the newly added algorithm
    result_seq = benchmark(nx.betweenness_centrality, G)

@pytest.mark.benchmark
def test_algorithm_performance_H(benchmark):
    # parallel run on the nx_parallel ParallelGraph wrapper
    result_para = benchmark(nx.betweenness_centrality, H)

Test output (pytest test_benchmarks.py):

[Screenshot: pytest-benchmark output]
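For the first approach, a minimal sketch of the speedup-average check could look like the following; the timed helper, graph sizes, and densities are illustrative assumptions, not part of any existing script:

import time

import networkx as nx
import nx_parallel

def timed(func, *args):
    # return the wall-clock time of a single call
    start = time.perf_counter()
    func(*args)
    return time.perf_counter() - start

speedups = []
for n in (100, 300):          # graph sizes
    for p in (0.1, 0.5):      # edge densities
        G = nx.fast_gnp_random_graph(n, p, seed=42)
        H = nx_parallel.ParallelGraph(G)
        t_seq = timed(nx.betweenness_centrality, G)
        t_par = timed(nx.betweenness_centrality, H)
        speedups.append(t_seq / t_par)

# the parallel versions should be faster on average
assert sum(speedups) / len(speedups) > 1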

What are your thoughts on this? What should I keep in mind before structuring it?

Thank you :)

@MridulS (Member, Author) commented Oct 14, 2023

@Schefflera-Arboricola yes, pytest-benchmark could be one way of doing this. We use ASV for NetworkX benchmarking. But it's possible we would need to come up with a way that incorporates the benchmarks with NetworkX dispatching. Ideally this benchmark suite would be able to swap in any backend (graphblas, nx-parallel, cugraph) and run against all of them. We still need to think a bit more about how to approach this :)
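One possible shape for such a backend-swapping suite, sketched here with pytest-benchmark and the backend= keyword of the NetworkX dispatch machinery; the backend names and the choice of algorithm are illustrative assumptions, and only installed backends would actually run:

import networkx as nx
import pytest

# assumed backend registry names; restrict to the ones that are installed
BACKENDS = ["parallel", "graphblas", "cugraph"]

G = nx.fast_gnp_random_graph(300, 0.5, seed=42)

@pytest.mark.benchmark
@pytest.mark.parametrize("backend", BACKENDS)
def test_betweenness_centrality(benchmark, backend):
    # dispatch the same NetworkX call to each registered backend
    benchmark(nx.betweenness_centrality, G, backend=backend)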

@Schefflera-Arboricola (Member) commented

> We need to have a quick way of using either github actions....

by "github actions" did you mean something like this: networkx/networkx#6834 ? or something else?

@MridulS (Member, Author) commented Feb 2, 2024

Yes! I'll try to finish the one in the NetworkX main repo soon. I think it's already good to go.

@Schefflera-Arboricola added the Infrastructure label on Aug 20, 2024
@Schefflera-Arboricola (Member) commented Aug 20, 2024

Just adding this here for reference: https://conbench.github.io/conbench/

pytest-benchmark: the results cannot be hosted the way ASV results can. ASV benchmarks: a nice tool for comparing a library against its own past versions, but not the best option when we need to compare two libraries (i.e. networkx and nx-parallel here).
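That said, one possible workaround, sketched below, would be to keep ASV but parametrize the benchmarks over the dispatch backend rather than the library version; the class, backend names, parameter grid, and algorithm here are illustrative assumptions, not an existing suite, and "networkx" is assumed to select the default (sequential) implementation:

import networkx as nx

class BetweennessCentralityBenchmark:
    # ASV-style parametrized benchmark: every combination of size,
    # density, and backend is timed, so networkx and nx-parallel can be
    # compared side by side in one run
    params = [[50, 300], [0.3, 0.6], ["networkx", "parallel"]]
    param_names = ["num_nodes", "edge_prob", "backend"]

    def setup(self, num_nodes, edge_prob, backend):
        self.G = nx.fast_gnp_random_graph(num_nodes, edge_prob, seed=42)

    def time_betweenness_centrality(self, num_nodes, edge_prob, backend):
        nx.betweenness_centrality(self.G, backend=backend)

Running asv over such a suite would then produce per-backend timings that can be compared directly, while keeping ASV's hosting and history features.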
