Arithmetic between real arrays and Python complex scalars (revisited) #841
I still agree that this should be supported. We should be careful not to confuse downcasting a complex array (or scalar) to a real float, which is indeed not supported and for very good reasons, with upcasting a real float to complex. In fact, real and complex floating-point arrays are already supported to promote together:

```python
>>> import array_api_strict as xp
>>> xp.asarray([0., 1.]) + xp.asarray([0j, 1j])
Array([0.+0.j, 1.+1.j], dtype=array_api_strict.complex128)
```

So it seems odd that this doesn't work for Python scalars. The piece of text that @mdhaber quoted at the top of the issue seems to imply that there is ambiguity about whether the resulting array should be complex64 or complex128. But I think this is already pretty well spelled out for the Python scalar case.

I.e., I think the real question is: what is the idea behind the rule "convert the scalar to a zero-dimensional array with the same data type as that of the array"? If the idea is that "when adding a Python scalar to an array, the dtype doesn't change", then indeed `real_array + complex_scalar` shouldn't be supported. But if it's rather that "the precision doesn't change", then it should be supported.
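To make the two readings of the rule concrete, here is a small illustration with NumPy standing in for the array namespace (NumPy's behavior for the complex case goes beyond what the standard currently defines):

```python
import numpy as np

x32 = np.asarray([1.0], dtype=np.float32)

# For a Python float scalar the two readings agree: neither the
# dtype nor the precision changes.
assert (x32 + 2.0).dtype == np.float32

# For a Python complex scalar the readings diverge: the dtype must
# change, but the precision need not. NumPy keeps the precision:
assert (x32 + 1j).dtype == np.complex64
```

Only the complex-scalar case distinguishes the "dtype doesn't change" reading from the "precision doesn't change" reading.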
This is quite clumsy indeed. Independent of the decision on allowing the scalar promotion, it would help to support `xp.astype(x, 'complex')` (or maybe `'complex floating'`, if we'd use the `isdtype` kind specifiers). Doing that would also address the problem for unsigned -> signed integers, which cannot be done with a Python scalar equivalent.
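A hypothetical helper sketching what the complex case of such an `astype`-by-kind could do (the name `promoted_complex_dtype` and the mapping are assumptions for illustration, not an existing API):

```python
import numpy as np

def promoted_complex_dtype(xp, x):
    """Pick the complex dtype that preserves the precision of x.

    Hypothetical sketch of the effect of the proposed
    xp.astype(x, 'complex'); only floating-point inputs are handled.
    """
    if x.dtype == xp.complex64 or x.dtype == xp.complex128:
        return x.dtype  # already complex: nothing to do
    # real float32 keeps its precision; everything else widens to
    # the default complex dtype
    return xp.complex64 if x.dtype == xp.float32 else xp.complex128

x = np.asarray([1.0, 2.0], dtype=np.float32)
assert promoted_complex_dtype(np, x) == np.complex64
```

The cast itself would then be `xp.astype(x, promoted_complex_dtype(xp, x))`, with no Python-scalar arithmetic involved.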
One rather annoying example I tripped over in scipy is
An "obvious" workaround,
is actually a footgun because if
which looks like something
The array API standard is now clear (in Type Promotion Rules):
This means that the following does not have defined behavior:
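A representative instance of the undefined operation (an illustrative assumption based on the issue title, with NumPy standing in for the array namespace):

```python
import numpy as np

x = np.asarray([0.5, 1.5])  # real float64 array

# Per the standard, `x + 1j` (real array + Python complex scalar)
# has no defined behavior; array_api_strict rejects it.
# NumPy defines it anyway:
y = x + 1j
assert y.dtype == np.complex128
```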
However, the behavior is defined consistently in NumPy, CuPy, PyTorch, jax.numpy[^1], Dask array, and Tensorflow[^2] in the intuitive way: the real and imaginary component dtypes of the complex output array match the dtype of the input array.
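For example, with NumPy (the same holds for the other listed libraries, per the linked discussion):

```python
import numpy as np

# The component dtype of the complex result matches the dtype of
# the real input array, i.e. precision is preserved:
assert (np.asarray([1.0], dtype=np.float32) + 1j).dtype == np.complex64
assert (np.asarray([1.0], dtype=np.float64) + 1j).dtype == np.complex128
```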
This issue was discussed explicitly in gh-478 beginning with #478 (comment). I see several comments that seem supportive of allowing this operation, e.g.
The comment I see against it (#478 (comment)) is
It looked like the summary comments suggested that this would be allowed, but it was not specifically addressed in the PR that closed the issue. (See postscript for more information.)
I thought it might help to add some perspective on how this impacts a developer translating code to the standard: when tests including an operation like the one above begin to fail with `array_api_strict` only, the developer is faced with the choice of skipping the `array_api_strict` tests or figuring out what is wrong and changing the code to a standard-compliant equivalent. If they also want it to be able to preserve the lower precision types supported by some libraries (e.g. for NumPy), there would be additional hurdles. I tested a few libraries, and the array API standard compatible version of the code also increases the execution time notably for small arrays. Individually, the inconvenience, complexity, and overhead are small, but they add up. I am excited about adding array API support to SciPy, but not everyone is supportive, and performance regressions and complicated-looking diffs can make it difficult to garner support.
I would suggest that the operation be defined, even if it means adding an exception to the simply stated rules for array/Python arithmetic.
The more specific language about this not being defined was added in gh-513, which provided the justification:
I think that is referring to the language:
That guidance was added in gh-74, but I can't trace the origin further. Perhaps there can be exceptions to that rule?
Footnotes

[^1]: When higher precision support is enabled; see the changelog.

[^2]: With NumPy experimental behavior (https://github.com/data-apis/array-api/issues/478#issuecomment-1272631595). Vanilla Tensorflow doesn't seem to support the other array-scalar operations that are defined, but that was not deemed sufficient reason to prevent their inclusion in the standard (https://github.com/data-apis/array-api/issues/478#issuecomment-1270409660).