Add complex tensor with subclassing #48

Merged
albanD merged 1 commit into albanD:main from pierreguilmin:complex-tensor on Oct 20, 2023

Conversation

pierreguilmin (Contributor)

Pair-programming with @ezyang at the PyTorch Conference 2023 on a WIP implementation of complex tensors that works with torch.compile.

The implementation is inspired by https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/two_tensor.py.

A few todos left, notably a custom autograd for the constructor.

This was tested with the nightly build 2.2.0.dev20231016.
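A minimal sketch of the kind of wrapper subclass described above, loosely modeled on two_tensor.py (the class name matches the PR, but the body is illustrative rather than the PR's exact code):

import torch

class ComplexTensor(torch.Tensor):
    @staticmethod
    def __new__(cls, re, im):
        # The wrapper advertises a complex dtype; the actual data lives in the
        # two real-valued tensors stored as self.re / self.im.
        return torch.Tensor._make_wrapper_subclass(
            cls,
            re.size(),
            strides=re.stride(),    # todo (from the PR): contiguous only
            storage_offset=0,
            dtype=torch.complex64,  # todo (from the PR): real to complex dtype
            layout=re.layout,
            device=re.device,
            requires_grad=False,    # todo (from the PR): autograd support
        )

    def __init__(self, re, im):
        self.re = re
        self.im = im

    def __repr__(self):
        return f"ComplexTensor(real={self.re}, imag={self.im})"

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.ops.aten.mm.default:
            assert not kwargs
            x, y = args
            # (A + Bi) @ (C + Di) = (AC - BD) + (AD + BC)i
            re = x.re @ y.re - x.im @ y.im
            im = x.re @ y.im + x.im @ y.re
            return ComplexTensor(re, im)
        raise NotImplementedError(f"ComplexTensor: unsupported op {func}")

Making a wrapper subclass like this traceable by torch.compile additionally requires the __tensor_flatten__ / __tensor_unflatten__ hooks, as in two_tensor.py.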

> TORCH_LOGS=+aot python complex_tensor.py
FakeTensor(..., size=(1, 1), dtype=torch.int64) FakeTensor(..., size=(1, 1), dtype=torch.int64)
FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(1, 1), dtype=torch.int64))) FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(1, 1), dtype=torch.int64)))
FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(1, 1), dtype=torch.int64))) FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(1, 1), dtype=torch.int64)))
FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(1, 1), dtype=torch.int64))) FunctionalTensor(_to_functional_tensor(FakeTensor(..., size=(1, 1), dtype=torch.int64)))
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO] TRACED GRAPH
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]  ===== Forward graph 0 =====
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]  <eval_with_key>.4 from /Users/pierreguilmin/.pyenv/versions/live-compile-complex/lib/python3.9/site-packages/torch/fx/experimental/proxy_tensor.py:506 in wrapped class <lambda>(torch.nn.Module):
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]     def forward(self, arg0_1: i64[1, 1], arg1_1: i64[1, 1], arg2_1: i64[1, 1], arg3_1: i64[1, 1]):
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]         # File: /Users/pierreguilmin/Desktop/live-compile-complex/foo.py:53, code: return x @ y
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]         mul: i64[1, 1] = torch.ops.aten.mul.Tensor(arg0_1, arg2_1)
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]         mul_1: i64[1, 1] = torch.ops.aten.mul.Tensor(arg1_1, arg3_1)
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]         sub: i64[1, 1] = torch.ops.aten.sub.Tensor(mul, mul_1);  mul = mul_1 = None
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]         mul_2: i64[1, 1] = torch.ops.aten.mul.Tensor(arg0_1, arg3_1);  arg0_1 = arg3_1 = None
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]         mul_3: i64[1, 1] = torch.ops.aten.mul.Tensor(arg1_1, arg2_1);  arg1_1 = arg2_1 = None
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]         add: i64[1, 1] = torch.ops.aten.add.Tensor(mul_2, mul_3);  mul_2 = mul_3 = None
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]         return [sub, add]
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]
[2023-10-17 11:32:12,381] [0/0] torch._functorch.aot_autograd.__aot_graphs: [INFO]
ComplexTensor(real=tensor([[-5]]), imag=tensor([[10]]))
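A hypothetical driver in the spirit of complex_tensor.py (not the PR's actual script) that would produce the printed result, assuming the sketch above plus the torch.compile hooks; the operands are chosen so that (1 + 2i)(3 + 4i) = -5 + 10i:

import torch

@torch.compile(backend="aot_eager")
def f(x, y):
    return x @ y

x = ComplexTensor(torch.tensor([[1]]), torch.tensor([[2]]))
y = ComplexTensor(torch.tensor([[3]]), torch.tensor([[4]]))
print(f(x, y))  # ComplexTensor(real=tensor([[-5]]), imag=tensor([[10]]))

Note that the traced forward graph above takes four real-valued arguments (arg0_1 through arg3_1): the real and imaginary parts of the two inputs, which is how the subclass is desugared for AOTAutograd.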

albanD (Owner) left a comment:

Looks really cool!
Do you want to fix the autograd support before I merge this?

if func is torch.ops.aten.mm.default:
    assert not kwargs
    x, y = args
    re = x.re * y.re - x.im * y.im
albanD (Owner):

These should be @, right?

pierreguilmin (Contributor Author):

Yes! 🙇🏼‍♂️

dtype=torch.complex64, # todo: real to complex dtype
layout=re.layout,
device=re.device,
requires_grad=False, # todo: autograd support
albanD (Owner):

The best way to add autograd support here is to draw a parallel with Tensor (which is never differentiable).
So I would recommend that ComplexTensor(...) is never differentiable, and that you have a create_complex_tensor(...) which is differentiable and built with a custom autograd Function (where you create a ComplexTensor during the forward).
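A sketch of this suggestion (the helper name is illustrative): ComplexTensor itself stays non-differentiable, and a create_complex_tensor() helper built on a custom autograd Function provides the differentiable entry point by constructing the wrapper in its forward:

import torch

class CreateComplexTensor(torch.autograd.Function):
    @staticmethod
    def forward(ctx, re, im):
        # Build the (non-differentiable) wrapper during the forward pass.
        return ComplexTensor(re, im)

    @staticmethod
    def backward(ctx, grad_output):
        # Assuming the incoming gradient is itself a ComplexTensor, route its
        # real and imaginary parts back to the corresponding inputs.
        return grad_output.re, grad_output.im

def create_complex_tensor(re, im):
    return CreateComplexTensor.apply(re, im)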

size=re.size(),
strides=re.stride(), # todo: contiguous only
storage_offset=0,
dtype=torch.complex64, # todo: real to complex dtype


Suggested change:
- dtype=torch.complex64, # todo: real to complex dtype
+ dtype=re.dtype.to_complex(),

since v2.1
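A quick illustration of the suggested API (torch.dtype.to_complex(), available since PyTorch 2.1), which maps a floating-point dtype to its complex counterpart and so keeps the wrapper's dtype in sync with the dtype of the real/imaginary parts:

import torch

assert torch.float32.to_complex() == torch.complex64
assert torch.float64.to_complex() == torch.complex128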

ezyang (Collaborator) commented Oct 20, 2023:

@albanD let's just merge and fix it up on main

albanD merged commit 276d2f0 into albanD:main on Oct 20, 2023
1 of 2 checks passed
pierreguilmin (Contributor Author):

Sorry I had a very busy week, I'll let you take it from here. 😉 (except if you need anything from our side)

Btw, @gautierronan is one of my colleagues, he works with me on the dynamiqs library to simulate quantum systems with PyTorch.

ezyang (Collaborator) commented Oct 21, 2023:

@pierreguilmin / @gautierronan if the two of you are interested in pushing this subclass forward, I'd recommend opening up a little repo with just this class and starting to chuck stuff into it. The subclass zoo here is just to show "it's possible"; it's not a good permanent home for a feature that people want to use.

pierreguilmin (Contributor Author):

Thanks for your advice @ezyang. Could you also advise on the next steps to make progress on this? What do you mean by "starting to chuck stuff into it", just implement more operators?

pierreguilmin deleted the complex-tensor branch on October 22, 2023 at 16:25
ezyang (Collaborator) commented Oct 23, 2023:

Yup!
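"Implement more operators" here means adding further branches to __torch_dispatch__; for example, elementwise addition might look like this (a sketch following the mm branch above, assuming both operands are ComplexTensors and the default alpha):

if func is torch.ops.aten.add.Tensor:
    x, y = args
    return ComplexTensor(x.re + y.re, x.im + y.im)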
