
Enable to set custom number of steps in benchmarks #5898

Merged
9 commits merged into pyg-team:master from the num-steps branch on Nov 14, 2022

Conversation

@kgajdamo (Contributor) commented Nov 4, 2022

  • Added a --num-steps argument, which allows setting the number of steps (batches to iterate over) in the inference and training benchmarks (a rough sketch follows below).
  • Added a wrapper for NeighborLoader that handles a custom number of steps.
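
A minimal sketch of how such a flag might be wired up and used to cap the number of batches. The --num-steps name comes from this PR; the -1 default, the run helper, and the plain break-based loop are illustrative assumptions, since the PR itself handles this through a NeighborLoader wrapper:

import argparse

parser = argparse.ArgumentParser()
# --num-steps is the flag added by this PR; -1 (run the full epoch) is an
# assumed default used here for illustration only.
parser.add_argument('--num-steps', type=int, default=-1,
                    help='number of batches to iterate over per epoch '
                         '(-1 iterates over the whole loader)')
args = parser.parse_args()

def run(loader, num_steps):
    # Stop after `num_steps` batches instead of exhausting the loader.
    for i, batch in enumerate(loader):
        if num_steps >= 0 and i >= num_steps:
            break
        # forward pass / timing on `batch` would go here
        ...

A benchmark script would then call something like run(loader, args.num_steps).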

@codecov (bot) commented Nov 4, 2022

Codecov Report

Merging #5898 (d5813c8) into master (6eef677) will not change coverage.
The diff coverage is n/a.

@@           Coverage Diff           @@
##           master    #5898   +/-   ##
=======================================
  Coverage   84.49%   84.49%           
=======================================
  Files         361      361           
  Lines       19840    19840           
=======================================
  Hits        16763    16763           
  Misses       3077     3077           


@yanbing-j (Contributor)

Hi @kgajdamo, thanks for your hard work! For benchmarking purposes, the ability to set num_steps is very useful, as it greatly decreases the time wasted running the entire epoch. LGTM!

Review comments on benchmark/training/training_benchmark.py and benchmark/inference/inference_benchmark.py (outdated, resolved).
rusty1s enabled auto-merge (squash) on November 9, 2022, 13:50
@kgajdamo (Contributor, Author) commented Nov 9, 2022

@rusty1s please do not merge it yet. I noticed that for the same number of steps, the version with RandomSampler is 4x faster, while the time should be the same. I haven't had time to look into it yet.


rusty1s disabled auto-merge on November 9, 2022, 13:54
@rusty1s (Member) commented Nov 9, 2022

Ok :)

@kgajdamo (Contributor, Author) commented Nov 10, 2022

@rusty1s I checked it and found out what the reason for such timing was. I provided the wrong first argument to the RandomSampler:

sampler = torch.utils.data.RandomSampler(
    range(len(data)), num_samples=args.num_steps * batch_size)

But when I use range(len(data.train_mask)) instead of range(len(data)), I receive an index out of range error. What do you think about it?

@rusty1s (Member) commented Nov 10, 2022

I think you need to use range(int(data.train_mask.sum())).
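
A minimal sketch of the corrected sampler in context. It assumes a Planetoid-style data object with a boolean train_mask and that NeighborLoader forwards extra keyword arguments such as sampler to the underlying torch.utils.data.DataLoader; the dataset, num_neighbors, batch_size, and num_steps values are placeholders, not the benchmark's actual settings:

import torch
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader

data = Planetoid(root='/tmp/Cora', name='Cora')[0]  # placeholder dataset

batch_size = 32  # placeholder values for illustration
num_steps = 4

# Draw indices into the set of training seed nodes; num_samples caps the
# total number of drawn seeds so the loader yields num_steps batches.
sampler = torch.utils.data.RandomSampler(
    range(int(data.train_mask.sum())),
    num_samples=num_steps * batch_size)

loader = NeighborLoader(
    data,
    input_nodes=data.train_mask,
    num_neighbors=[10, 5],
    batch_size=batch_size,
    sampler=sampler)  # assumption: forwarded to the underlying DataLoader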

@kgajdamo (Contributor, Author):

> I think you need to use range(int(data.train_mask.sum())).

@rusty1s Thank you! Now it's working and you can merge.

rusty1s merged commit 9d56437 into pyg-team:master on Nov 14, 2022
JakubPietrakIntel pushed a commit to JakubPietrakIntel/pytorch_geometric that referenced this pull request Nov 25, 2022
- Added --num-steps argument, which allows setting the number of steps
(batches to iterate over) in inference and training benchmarks.
- Added a wrapper for NeighborLoader that handles a custom number of steps.

Co-authored-by: Matthias Fey <matthias.fey@tu-dortmund.de>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
kgajdamo deleted the num-steps branch on March 10, 2023, 13:08