
Avoid torch type-error under torch.compile #1922

Merged: 3 commits into DLR-RM:master on May 13, 2024

Conversation

@amjames (Contributor) commented May 8, 2024

Description

In RolloutBuffer.compute_returns_and_advantage, a NumPy array with dtype bool is used as an operand in a subtraction with a Python scalar. This relies on implicit casting rules (in particular, promoting bool arrays to int/float) which PyTorch does not support. When this region is hit by torch.compile, a runtime error is raised. Casting the bool array to an integer dtype resolves the issue.

After the changes in pytorch/pytorch#124481 land, the example presented in this issue will compile successfully with this small change.
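For context, here is a minimal sketch of the problematic pattern and the explicit cast that avoids relying on bool promotion (the variable names and the chosen dtype are illustrative, not the exact buffer code):

    import numpy as np

    # Terminal flags are stored as a bool array in the rollout buffer.
    dones = np.array([False, True, False])

    # Plain NumPy (and eager PyTorch) silently promotes bool to float here,
    # but torch.compile's handling of NumPy operations rejects this implicit
    # bool promotion and raises a runtime error when tracing the region.
    next_non_terminal = 1.0 - dones

    # Casting the bool array to an integer dtype first, as described above,
    # yields the same values without depending on bool promotion, so compiled
    # and eager execution agree.
    next_non_terminal = 1.0 - dones.astype(np.int64)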

Motivation and Context

This change has no observable impact on the behavior of programs executed normally. It only prevents an error when a user's program is run under torch.compile.

If this change is not desired, the closed PR will instead document a workaround for anyone who hits this issue.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)

Probably the closest option, but I wouldn't classify this as a bug. The issue is only present when a program is manipulated by a 3rd party library.

  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (update in the documentation)

Checklist

  • I've read the CONTRIBUTION guide (required)
  • I have updated the changelog accordingly (required).
  • My change requires a change to the documentation.
  • I have updated the tests accordingly (required for a bug fix or a new feature).
  • I have updated the documentation accordingly.
  • I have opened an associated PR on the SB3-Contrib repository (if necessary)
  • I have opened an associated PR on the RL-Zoo3 repository (if necessary)
  • I have reformatted the code using make format (required)
  • I have checked the codestyle using make check-codestyle and make lint (required)
  • I have ensured make pytest and make type both pass. (required)
  • I have checked that the documentation builds using make doc (required)

Note: You can run most of the checks using make commit-checks.

Note: we are using a maximum length of 127 characters per line

@araffin (Member) left a comment

Thanks for the PR =) Just a minor comment about consistency.
Looking forward to using torch.compile on the gradient update.

@araffin (Member) left a comment

LGTM, thanks =)

@araffin araffin merged commit 766b9e9 into DLR-RM:master May 13, 2024
4 checks passed
friedeggs pushed a commit to friedeggs/stable-baselines3 that referenced this pull request Jul 22, 2024
* Avoid torch type-error under torch.compile

* Update changelog and version

* Update stable_baselines3/common/buffers.py

Co-authored-by: Antonin RAFFIN <antonin.raffin@ensta.org>

---------

Co-authored-by: Antonin Raffin <antonin.raffin@ensta.org>