Accessing performance of model in progress bar #4326
I believe these outputs are tracked only if you implement
After |
@rohitgr7 Looking at that code you linked, I saw |
@awaelchli Also, |
Yes, it's a misleading name but I also think the behaviour should be different. |
Agreed 👍, it should be allowed irrespective of whether training_epoch_end is implemented or not. |
For |
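The gating behaviour discussed above (batch outputs only being accumulated when `training_epoch_end` is overridden) can be sketched in plain Python. The class names and the override check below are illustrative only, not Lightning's actual implementation:

```python
class Base:
    """Illustrative stand-in for a LightningModule-style base class."""

    def run_epoch(self, batches):
        outputs = []
        # Only track per-batch outputs if the subclass overrides
        # training_epoch_end (mirrors the behaviour described in the thread).
        track = type(self).training_epoch_end is not Base.training_epoch_end
        for batch in batches:
            out = self.training_step(batch)
            if track:
                outputs.append(out)
        if track:
            self.training_epoch_end(outputs)
        return outputs

    def training_step(self, batch):
        return {"loss": batch}  # dummy per-batch output

    def training_epoch_end(self, outputs):
        pass  # default: not implemented, so outputs are not tracked


class WithEpochEnd(Base):
    def training_epoch_end(self, outputs):
        self.seen = outputs  # receives the full list of batch outputs
```

With this sketch, `Base().run_epoch([...])` yields an empty list of tracked outputs, while `WithEpochEnd` receives one output per batch, which matches the empty-`outputs` symptom reported above.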
Thanks @rohitgr7 @awaelchli - that explains empty outputs. Should I send a PR?
Where is progress bar iterations/sec coming from and when is it triggered? If I want to measure training performance, is measuring the time between on_train_batch_start and on_train_batch_end accurate? |
yes please go ahead.
it uses tqdm.
yes almost, but if you want to check a bit more precisely then I suggest disabling logging. |
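Timing the span between those two hooks could be sketched as a standalone callback-style class. This is shown here without importing pytorch_lightning; only the hook names (`on_train_batch_start` / `on_train_batch_end`) are taken from Lightning's Callback API, and the extra hook arguments are absorbed generically:

```python
import time


class BatchTimer:
    """Hypothetical sketch: measures seconds per batch between the
    batch-start and batch-end hooks, then reports iterations/sec."""

    def __init__(self):
        self._start = None
        self.batch_times = []

    def on_train_batch_start(self, *args, **kwargs):
        # Record a monotonic timestamp when the batch begins.
        self._start = time.perf_counter()

    def on_train_batch_end(self, *args, **kwargs):
        # Elapsed wall time for this batch.
        self.batch_times.append(time.perf_counter() - self._start)

    def iterations_per_sec(self):
        avg = sum(self.batch_times) / len(self.batch_times)
        return 1.0 / avg
```

Note that this measures only the training-step span; tqdm's rate also includes data loading, logging, and callback overhead between batches, which is one plausible reason the printed rate and the progress-bar rate can disagree.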
When I run training, I see progress bar indicate iterations/sec. How can I access it? I wrote a simple hook:
However, my prints indicate ~1.7 it/s but the progress bar shows 6.07 s/it.
In my training_step, I also return some stats (batch size, number of tokens) in logs by returning the following:
for more relevant perf metrics. However, in my callback, print(outputs) shows an empty list. Anything I'm missing?
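The original snippet was not captured in this thread, but returning per-batch stats from a training step might look like the following standalone sketch. The `log` method here is a stand-in for Lightning's `self.log`, and all names are illustrative:

```python
class StatsLoggingModule:
    """Hypothetical stand-in for a LightningModule that records
    per-batch stats (batch size, number of tokens) alongside the loss."""

    def __init__(self):
        self.logged = {}

    def log(self, name, value):
        # Stand-in for LightningModule.self.log(name, value).
        self.logged[name] = value

    def training_step(self, batch):
        # batch: a list of token sequences (illustrative shape).
        num_tokens = sum(len(seq) for seq in batch)
        self.log("batch_size", len(batch))
        self.log("num_tokens", num_tokens)
        return {"loss": 0.0}  # placeholder loss value
```

Whether these returned dicts reach an epoch-end callback depends on the tracking behaviour discussed earlier in the thread, which is why `print(outputs)` can show an empty list.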