add type promotion for complex and real number. #63842
Conversation
Your PR has been submitted successfully. Thank you for your contribution to the open-source project!
LGTM. All comments in PR #61163 have been addressed; this PR handles conflicts with other PRs.
Sorry to inform you that 46238aa's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
LGTM
* add type promotion for complex and real number.
* fix
* reduce api support
* add more api support
* fix
* fix
* remove matmul
* add T+S logic.
* fix bug
* fix unittest
* fix
* fix
* fix unittest
* fix gumbel
* rm print
* fix more unittests.
* fix test_llama_group_log_softmax.py
* fix bug, and add 0-d + 0-d logic.
* rm print
* fix behavior of bool and int
* add unittest for all type promotion.
* rm unintest which is unsupport dtype
* fix
* fix
* add error unittest
* fix increase unittest
* bug fix
* fixed by comment
* remove useless code.
* fix
* fix
* fix TypePromotionForZeroDimTensor
* add inplace API support, add special case can skip type promotion (add x=float32,y=float16/bfloat16).
* add broatcast support for MultiPrecisionAddKernelImpl.
PR Category
Others
PR Types
New features
Description
card-78750
There was unreasonable type promotion in Paddle: the previous logic simply aligned the result dtype to the left-hand tensor, for example:
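A minimal sketch of the old, left-aligned behavior (illustrative only; exact outputs depend on the Paddle version before this change):

```python
import paddle

x = paddle.ones([2], dtype='float32')
y = paddle.ones([2], dtype='float64')

# Old behavior: the result dtype simply followed the left-hand tensor,
# so the same operation gave different dtypes depending on operand order.
print((x + y).dtype)  # paddle.float32
print((y + x).dtype)  # paddle.float64
```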
This behavior will be fixed to be more in line with mathematical logic, for example:
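A sketch of the intended behavior after this change, assuming NumPy-like promotion rules (the wider floating-point dtype wins, and real + complex promotes to a complex dtype); the exact promoted dtypes shown in the comments are assumptions:

```python
import paddle

x = paddle.ones([2], dtype='float32')
y = paddle.ones([2], dtype='float64')
c = paddle.ones([2], dtype='complex64')

# New behavior: the result dtype follows the mathematical promotion rule,
# independent of operand order.
print((x + y).dtype)  # paddle.float64
print((y + x).dtype)  # paddle.float64

# Real + complex promotes to a complex dtype.
print((x + c).dtype)  # paddle.complex64
print((y + c).dtype)  # paddle.complex128
```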
Furthermore, after discussion, automatic type promotion will be limited to floating-point dtypes, and to promotion between real and complex numbers, for Tensor-and-Tensor operations. Tensor-and-Scalar operations will still support all dtypes.
PRs #60638 and #59518 fixed this behavior for a few APIs, for floating-point Tensor-and-Tensor operations.
This PR supports all binary operation APIs and adds type promotion between real and complex numbers for Tensor-and-Tensor operations. The behavior between Tensor and Scalar is also corrected.
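A hedged sketch of the Tensor-and-Scalar case. The dtypes in the comments are assumptions based on the description above (a Python scalar should not force the tensor up to a wider dtype, while a float scalar combined with an integer tensor should give a floating-point result), not a definitive statement of the implemented rules:

```python
import paddle

t = paddle.ones([2], dtype='float16')

# A Python float scalar does not widen the tensor's dtype;
# the scalar adopts the tensor's floating-point dtype.
print((t + 1.5).dtype)  # expected: paddle.float16

i = paddle.ones([2], dtype='int32')

# A float scalar combined with an integer tensor promotes to a
# floating-point result (assumed here to be the default float dtype).
print((i + 1.5).dtype)  # expected: paddle.float32 (assumption)
```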