
fix out_of_range bug of multinomial op's cuda kernel #36511

Merged 2 commits into PaddlePaddle:develop on Oct 27, 2021

Conversation

pangyoki (Contributor) commented Oct 18, 2021

PR types

Bug fixes

PR changes

OPs

Describe

A bug in the sample method (multinomial op) of the Categorical API was reported in issue #36401.

issue

  • test case
import paddle
with paddle.no_grad():
    actor=paddle.nn.Sequential(paddle.nn.Linear(20, 2800))
    logits=actor(paddle.rand([20]))
    cat=paddle.distribution.Categorical(logits.exp())
    print(cat.sample([1]))
  • error message
Error: /paddle/paddle/fluid/operators/multinomial_op.cu:42 Assertion `in_data[id] >= 0.0` failed. The input of multinomial distribution should be >= 0, but got -0.038249.
(similar errors omitted) Error: /paddle/paddle/fluid/operators/multinomial_op.cu:42 Assertion `in_data[id] >= 0.0` failed. The input of multinomial distribution should be >= 0, but got -0.044257.
Traceback (most recent call last):
  File "bug.py", line 6, in <module>
    print(cat.sample([1]))
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/distribution.py", line 771, in sample
    sample_index = multinomial(logits, num_samples, True)
  File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/tensor/random.py", line 133, in multinomial
    'replacement', replacement)
SystemError: (Fatal) Operator multinomial raises an thrust::system::system_error exception.
The exception content is
:transform: failed to synchronize: cudaErrorLaunchFailure: unspecified launch failure. (at /paddle/paddle/fluid/imperative/tracer.cc:192)

Reason for error

In the CUDA kernel implementation, more threads than the size of the input array are launched: the block size is fixed, so the grid size is rounded up, and the extra threads in the last block have no valid element to process.

However, the kernel does not bound the array index, so these extra threads access memory beyond the end of the input array, which causes the error.
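
For illustration, here is a minimal CUDA sketch of that launch pattern; the kernel name CheckInData, the helper LaunchCheck, and the block size of 512 are assumptions for the example, not the actual Paddle code:

#include <assert.h>

// Hypothetical sketch, not the real multinomial kernel: each thread is meant
// to validate one input value, but the computed index is never bounded.
__global__ void CheckInData(const float* in_data, int total_size) {
  int id = blockIdx.x * blockDim.x + threadIdx.x;
  // For the rounded-up threads, id >= total_size, so in_data[id] reads past
  // the end of the array and the assertion can fire on garbage values.
  assert(in_data[id] >= 0.0f);
}

void LaunchCheck(const float* in_data, int total_size) {
  const int block_size = 512;  // fixed block size (assumed for the example)
  // The grid size is rounded up, so grid_size * block_size >= total_size and
  // the last block contains threads with no valid element to process.
  const int grid_size = (total_size + block_size - 1) / block_size;
  CheckInData<<<grid_size, block_size>>>(in_data, total_size);
}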

Bug fix

Bound the index used to access the array so that elements past the end of the input are never touched.

[screenshot of the fix]
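
A minimal sketch of the kind of index guard described above, using the same hypothetical names as the earlier example (the actual change is in the screenshot):

__global__ void CheckInData(const float* in_data, int total_size) {
  int id = blockIdx.x * blockDim.x + threadIdx.x;
  // Only threads whose index falls inside the array do any work; the extra
  // threads in the rounded-up last block simply skip the access.
  if (id < total_size) {
    assert(in_data[id] >= 0.0f);
  }
}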

paddle-bot-old bot commented Oct 18, 2021

✅ This PR's description meets the template requirements!
Please wait for other CI results.

paddle-bot-old bot commented
Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

zhiqiu (Contributor) left a comment

LGTM

paddle-bot-old bot commented

Sorry to inform you that 0641bbc's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

chenwhql (Contributor) left a comment

LGTM for PADDLE_ENFORCE

Avin0323 (Contributor) left a comment

LGTM for PR-CI-OP-benchmark

@pangyoki pangyoki merged commit 51a3396 into PaddlePaddle:develop Oct 27, 2021
pangyoki added a commit to pangyoki/Paddle that referenced this pull request Oct 27, 2021
lanxianghit pushed a commit that referenced this pull request Oct 28, 2021
ghost pushed a commit to piotrekobi/Paddle that referenced this pull request Nov 3, 2021