More fixes for torch linalg extension #35

Merged

merged 6 commits on Mar 31, 2023
2 changes: 1 addition & 1 deletion .github/workflows/array-api-tests-numpy-1-21.yml
@@ -3,7 +3,7 @@ name: Array API Tests (NumPy 1.21)
on: [push, pull_request]

jobs:
-  array-api-tests-numpy:
+  array-api-tests-numpy-1-21:
    uses: ./.github/workflows/array-api-tests.yml
    with:
      package-name: numpy
2 changes: 1 addition & 1 deletion .github/workflows/array-api-tests-numpy.yml
@@ -3,7 +3,7 @@ name: Array API Tests (NumPy Latest)
on: [push, pull_request]

jobs:
-  array-api-tests-numpy-1-21:
+  array-api-tests-numpy-latest:
    uses: ./.github/workflows/array-api-tests.yml
    with:
      package-name: numpy
30 changes: 29 additions & 1 deletion array_api_compat/torch/linalg.py
@@ -22,6 +22,34 @@ def cross(x1: array, x2: array, /, *, axis: int = -1) -> array:
    x1, x2 = _fix_promotion(x1, x2, only_scalar=False)
    return torch_linalg.cross(x1, x2, dim=axis)

-__all__ = linalg_all + ['outer', 'trace', 'matrix_transpose', 'tensordot']
+def vecdot(x1: array, x2: array, /, *, axis: int = -1, **kwargs) -> array:
+    from ._aliases import isdtype
+
+    x1, x2 = _fix_promotion(x1, x2, only_scalar=False)
+
+    # torch.linalg.vecdot doesn't support integer dtypes
+    if isdtype(x1.dtype, 'integral') or isdtype(x2.dtype, 'integral'):
+        if kwargs:
+            raise RuntimeError("vecdot kwargs not supported for integral dtypes")
+        ndim = max(x1.ndim, x2.ndim)
+        x1_shape = (1,)*(ndim - x1.ndim) + tuple(x1.shape)
+        x2_shape = (1,)*(ndim - x2.ndim) + tuple(x2.shape)
+        if x1_shape[axis] != x2_shape[axis]:
+            raise ValueError("x1 and x2 must have the same size along the given axis")
Comment on lines +35 to +38

These semantics allow for a weird construction like vecdot(xp.randn(1,4,5,6), xp.randn(6), axis=0), which would be equivalent to xp.randn(1,4,5,6) * xp.randn(6). This may be a discussion for the general broadcasting rules for the API, but perhaps you want this to assert that all(x.ndim - 1 >= axis for x in input_tensors) (perhaps with some special treatment for 0-dim tensors).

Member Author
@asmeurer asmeurer Mar 29, 2023

Now that I think about it, I wonder if this sort of thing should already be disallowed by the spec. https://data-apis.org/array-api/latest/API_specification/generated/array_api.vecdot.html#array_api.vecdot

I've been working on the assumption that axis refers to the dimension after broadcasting ("Must be an integer on the interval [-N, N), where N is the rank (number of dimensions) of the shape determined according to Broadcasting."). But it also says "The contracted axis (dimension) must not be broadcasted."

I had been interpreting that as meaning you shouldn't allow something like vecdot(empty((3, 3)), empty((1, 3)), axis=0). But I suppose it could also be taken to mean that broadcasting shouldn't "broadcast up" to the contracted dimension either. Something more along the lines of

if axis >= 0:
    ndim = max(x.ndim for x in inputs)
    if any(axis < ndim - x.ndim for x in inputs):
        raise ValueError("Contracted axis cannot be broadcasted")

(e.g., vecdot(empty((1, 2, 3, 4, 5)), empty((3, 4, 5)), axis=2) is fine but vecdot(empty((1, 2, 3, 4, 5)), empty((3, 4, 5)), axis=0) is not)
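A minimal self-contained sketch of that check applied to the two example shapes (the helper name below is made up for illustration and is not part of this PR):

# Hypothetical helper, not part of the PR: reject a non-negative axis that
# some input only gains through broadcasting.
def check_no_broadcast_into_axis(shapes, axis):
    if axis >= 0:
        ndim = max(len(shape) for shape in shapes)
        if any(axis < ndim - len(shape) for shape in shapes):
            raise ValueError("Contracted axis cannot be broadcasted")

# Fine: axis=2 exists in both operands before broadcasting.
check_no_broadcast_into_axis([(1, 2, 3, 4, 5), (3, 4, 5)], axis=2)

# Rejected: axis=0 is a dimension the (3, 4, 5) operand only gets via broadcasting.
try:
    check_no_broadcast_into_axis([(1, 2, 3, 4, 5), (3, 4, 5)], axis=0)
except ValueError as exc:
    print(exc)  # Contracted axis cannot be broadcasted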

I think disallowing both (i.e., not broadcasting along the reduced dimension, and requiring the axis to be well defined in the sense you just described) would be the safer thing to do, as otherwise you end up with these funny constructions, which is not great.

Member Author

This is an issue in all the other implementations too, and arguably the spec as well. I'm going to deal with it in a separate PR.


+        x1_, x2_ = torch.broadcast_tensors(x1, x2)
+        x1_ = torch.moveaxis(x1_, axis, -1)
+        x2_ = torch.moveaxis(x2_, axis, -1)
+
+        res = x1_[..., None, :] @ x2_[..., None]
+        return res[..., 0, 0]
+    return torch.linalg.vecdot(x1, x2, dim=axis, **kwargs)
+
+def solve(x1: array, x2: array, /, **kwargs) -> array:
+    x1, x2 = _fix_promotion(x1, x2, only_scalar=False)
+    return torch.linalg.solve(x1, x2, **kwargs)
+
+__all__ = linalg_all + ['outer', 'trace', 'matrix_transpose', 'tensordot',
+                        'vecdot', 'solve']

del linalg_all
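As a rough standalone illustration of what the integer-dtype fallback above computes (this snippet is not part of the PR; shapes and variable names are arbitrary), one can mimic it with plain torch and compare against an explicit multiply-and-sum:

import torch

# Illustrative only: mimic the matmul-based fallback from vecdot() above for
# an integer case and compare it against an explicit sum-of-products reference.
x1 = torch.arange(2 * 3 * 4, dtype=torch.int64).reshape(2, 3, 4)
x2 = torch.arange(4, dtype=torch.int64)
axis = -1

x1_, x2_ = torch.broadcast_tensors(x1, x2)   # x2 broadcasts to (2, 3, 4)
x1_ = torch.moveaxis(x1_, axis, -1)          # contracted axis goes last
x2_ = torch.moveaxis(x2_, axis, -1)
res = (x1_[..., None, :] @ x2_[..., None])[..., 0, 0]

expected = (x1_ * x2_).sum(dim=-1)           # reference contraction
assert torch.equal(res, expected)
print(res.shape)  # torch.Size([2, 3])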
1 change: 1 addition & 0 deletions numpy-1-21-xfails.txt
@@ -118,6 +118,7 @@ array_api_tests/test_operators_and_elementwise_functions.py::test_floor_divide[_
array_api_tests/test_operators_and_elementwise_functions.py::test_floor_divide[floor_divide(x1, x2)]
array_api_tests/test_operators_and_elementwise_functions.py::test_greater[greater(x1, x2)]
array_api_tests/test_operators_and_elementwise_functions.py::test_less[__lt__(x1, x2)]
array_api_tests/test_operators_and_elementwise_functions.py::test_less_equal[less_equal(x1, x2)]
array_api_tests/test_operators_and_elementwise_functions.py::test_logaddexp
array_api_tests/test_operators_and_elementwise_functions.py::test_multiply[__imul__(x, s)]
array_api_tests/test_operators_and_elementwise_functions.py::test_multiply[__mul__(x, s)]
4 changes: 2 additions & 2 deletions torch-xfails.txt
@@ -57,8 +57,8 @@ array_api_tests/test_operators_and_elementwise_functions.py::test_remainder[__im
array_api_tests/test_operators_and_elementwise_functions.py::test_subtract[__sub__(x1, x2)]


-# Mac-only bug (overflow near float max)
-# array_api_tests/test_operators_and_elementwise_functions.py::test_log1p
+# overflow near float max
+array_api_tests/test_operators_and_elementwise_functions.py::test_log1p

# torch doesn't handle shifting by more than the bit size correctly
# https://github.com/pytorch/pytorch/issues/70904