[djl-bench] Run benchmark on multiple GPUs #1144
Conversation
Codecov Report

| Coverage Diff | master | #1144 | +/- |
|---------------|-------:|------:|----:|
| Coverage | 70.12% | 70.10% | -0.02% |
| Complexity | 5329 | 5329 | |
| Files | 513 | 513 | |
| Lines | 23696 | 23725 | +29 |
| Branches | 2545 | 2552 | +7 |
| Hits | 16617 | 16633 | +16 |
| Misses | 5722 | 5728 | +6 |
| Partials | 1357 | 1364 | +7 |

Continue to review full report at Codecov.
```java
List<PredictorCallable> callables = new ArrayList<>(numOfThreads);
for (int i = 0; i < numOfThreads; ++i) {
    callables.add(new PredictorCallable(model, metrics, counter, i, i == 0));
}
for (Device device : devices) {
```
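The diff excerpt above is truncated, so the exact loop nesting is not visible, but the review comment below implies one callable is created per (device, thread) pair. A minimal, self-contained sketch of that per-GPU interpretation; `Device` and `PredictorCallable` here are simplified stand-ins for the DJL classes, not the actual djl-bench implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class CallableSetup {

    // Simplified stand-ins for the DJL Device and PredictorCallable types.
    record Device(int id) {}

    record PredictorCallable(Device device, int threadId, boolean collectMemory) {}

    /** One callable per (device, thread) pair: numOfThreads is per GPU. */
    static List<PredictorCallable> buildCallables(List<Device> devices, int numOfThreads) {
        List<PredictorCallable> callables = new ArrayList<>(devices.size() * numOfThreads);
        for (Device device : devices) {
            for (int i = 0; i < numOfThreads; ++i) {
                // Only the first callable collects memory metrics,
                // mirroring the `i == 0` flag in the PR snippet.
                callables.add(new PredictorCallable(device, i, i == 0));
            }
        }
        return callables;
    }

    public static void main(String[] args) {
        List<Device> gpus = List.of(new Device(0), new Device(1));
        // 2 GPUs x 3 threads each -> 6 callables in total
        System.out.println(buildCallables(gpus, 3).size());
    }
}
```

Under this reading, a user passing `-t 3` on a 2-GPU machine gets 6 worker threads, which is exactly the surprise the reviewer flags below.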
This means that the number of threads is really the number of threads per GPU. Is this the behavior users would expect? Should we document this somewhere or change some names to make it clearer?
"Number of threads per GPU" is confusing, so I changed it to the total thread count and added error checking.
I also changed the number of iterations in the same way.
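The fix described above switches the option to a total thread count that is split across devices, with error checking. A sketch of that behavior; the `assignThreads` helper and the divisibility check are hypothetical illustrations of the semantics, not the actual djl-bench code:

```java
import java.util.ArrayList;
import java.util.List;

public class ThreadDistribution {

    /**
     * Distributes a total thread count evenly across devices and returns
     * the device id assigned to each worker thread. Rejects totals that
     * do not divide evenly, matching the error checking mentioned above.
     */
    static List<Integer> assignThreads(int totalThreads, int[] deviceIds) {
        if (totalThreads % deviceIds.length != 0) {
            throw new IllegalArgumentException(
                    "Total threads must be a multiple of the number of devices: "
                            + totalThreads + " % " + deviceIds.length + " != 0");
        }
        int threadsPerDevice = totalThreads / deviceIds.length;
        List<Integer> assignment = new ArrayList<>(totalThreads);
        for (int deviceId : deviceIds) {
            for (int i = 0; i < threadsPerDevice; ++i) {
                assignment.add(deviceId);
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 4 total threads over 2 GPUs -> 2 threads pinned to each device
        System.out.println(assignThreads(4, new int[] {0, 1}));
    }
}
```

With "total" semantics, `-t 4` on 2 GPUs means 4 threads overall (2 per device), rather than 8, which is easier for users to predict.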
Change-Id: Ie27e090699695526df42fefd01db2aba78dc3f73
Co-authored-by: KexinFeng <fenkexin@amazon.com>
Description
Enables djl-bench to run benchmarks across multiple GPUs: worker threads are distributed over the available devices, the thread-count option now specifies the total number of threads with error checking, and the iteration count is handled the same way.