
Reland: Fix CUDA device guard usage when first arg of kernel is scalar #39956

Conversation

kurtamohler
Collaborator

Reland PR #39870

Closes #38889
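
For context, the linked issue reports that Normal.icdf can return different results on different CUDA devices. Below is a minimal repro sketch of that symptom rather than code from this PR; it assumes a machine with at least two CUDA devices, and the icdf_on helper is made up for illustration. The idea is that a kernel whose first argument is a wrapped scalar could previously run under the wrong device guard, so the same computation could disagree across devices; with the fix, the two outputs should match.

import torch
from torch.distributions import Normal

def icdf_on(device):
    # Evaluate the standard normal inverse CDF on the given device.
    # With float parameters, loc and scale are stored as CPU scalar
    # tensors, so the op mixes a scalar with a CUDA tensor, which is
    # the scalar-first-argument case this PR targets.
    probs = torch.linspace(0.01, 0.99, 99, device=device)
    return Normal(0.0, 1.0).icdf(probs).cpu()

if torch.cuda.device_count() >= 2:
    out0 = icdf_on("cuda:0")
    out1 = icdf_on("cuda:1")
    # With the device guard fix in place, these should agree.
    assert torch.allclose(out0, out1)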

@kurtamohler requested a review from ngimel on June 12, 2020 at 19:56
@dr-ci

dr-ci bot commented Jun 12, 2020

💊 CI failures summary and remediations

As of commit 928ac64 (more details on the Dr. CI page):



🚧 2 fixed upstream failures:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

Since your merge base is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

Check out the recency history of this "viable master" tracking branch.


ci.pytorch.org: 1 failed



@kurtamohler
Collaborator Author

It turns out I'm getting failures in TestTorchDeviceTypeCUDA.test_serialization_cuda only when I run all of test_torch.py; if I run just that failing test on its own, it doesn't fail.

@ngimel
Collaborator

ngimel commented Jun 12, 2020

Yeah, that happens because you are changing the device in your test and don't set it back. You can either explicitly set the device back to the original or, better, use the with torch.cuda.device context manager (see the sketch below).
So there are two problems here:

  1. The failing test is poorly written; it should not rely on global state.
  2. Your test should not change global state.

It's OK to fix only 2).
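
A minimal sketch of the pattern suggested above, assuming a CUDA build with at least two devices; the test name, shapes, and assertions are made up for illustration:

import torch

def test_uses_second_device():
    # Hypothetical test body: everything that needs a non-default device
    # runs inside the context manager, so the process-wide current device
    # is restored when the block exits and later tests see unchanged state.
    before = torch.cuda.current_device()
    with torch.cuda.device(1):
        x = torch.randn(10, device='cuda')  # allocated on cuda:1
        assert (x * 2).device == torch.device('cuda', 1)
    assert torch.cuda.current_device() == before  # global state untouched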

@kurtamohler
Collaborator Author

Oh right, that makes sense.

Review comment on test/test_torch.py (outdated, resolved)
Contributor

@facebook-github-bot left a comment


@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@ngimel merged this pull request in db2b273.

xwang233 pushed a commit to xwang233/pytorch that referenced this pull request Jun 20, 2020
Reland: Fix CUDA device guard usage when first arg of kernel is scalar (pytorch#39956)

Summary:
Reland PR pytorch#39870

Closes pytorch#38889
Pull Request resolved: pytorch#39956

Differential Revision: D22027956

Pulled By: ngimel

fbshipit-source-id: e6029f450e2da3782b2d05bcc2012c19b82291da
Development

Successfully merging this pull request may close these issues.

Normal.icdf differs on different cuda devices
4 participants