FIX: Update optimization_tutorial.py to expose batch_size in train_loop() #2945

Conversation

loganthomas
Contributor

Description

batch_size is not currently exposed as a train_loop() parameter.
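
For illustration only, a minimal sketch of the kind of signature change this describes, assuming the tutorial's train_loop roughly follows this shape (the surrounding training code and the reporting interval are assumptions, not the actual diff):

    def train_loop(dataloader, model, loss_fn, optimizer, batch_size):
        # batch_size becomes an explicit parameter instead of a module-level
        # variable defined elsewhere in the tutorial.
        size = len(dataloader.dataset)
        model.train()
        for batch, (X, y) in enumerate(dataloader):
            pred = model(X)
            loss = loss_fn(pred, y)

            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

            if batch % 100 == 0:
                # batch_size is only needed here, to report how many samples
                # have been processed so far.
                loss, current = loss.item(), batch * batch_size + len(X)
                print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")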

Checklist

  • The issue that is being fixed is referenced in the description (see above "Fixes #ISSUE_NUMBER")
  • Only one issue is addressed in this pull request
  • Labels from the issue that this PR is fixing are added to this pull request
  • No unnecessary issues are included in this pull request

cc @subramen @albanD @sekyondaMeta @svekars @kit1980 @brycebortree


pytorch-bot bot commented Jun 19, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/2945

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit 73bfa34 with merge base 0740801:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@subramen
Contributor

Hi, thanks for the PR, but the proposed change is not correct. The dataloader takes in the batch size arg, as seen here:

train_dataloader = DataLoader(training_data, batch_size=64)

@loganthomas
Contributor Author

The dataloader takes in the batch size arg (as seen here)

Yes, but part of the train_loop uses the batch_size for reporting:

loss, current = loss.item(), batch * batch_size + len(X)
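
As a side note not raised in this thread: DataLoader also stores the value it was constructed with on a batch_size attribute, so the reporting line could in principle read it from the loader instead of taking a new parameter. A minimal sketch of that alternative:

    # dataloader.batch_size holds the value passed at construction,
    # e.g. DataLoader(training_data, batch_size=64).batch_size == 64
    loss, current = loss.item(), batch * dataloader.batch_size + len(X)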


Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the stale Stale PRs label Sep 27, 2024
@github-actions github-actions bot closed this Oct 27, 2024