
Full codegen for logdet #3576

Merged: 9 commits, May 21, 2022
Conversation

@miladm (Collaborator) commented May 17, 2022

Full codegen for logdet, slogdet


Generated LazyIr.h:

class Logdet : public XlaNode {
 public:
  static torch::lazy::OpKind ClassOpKind() {
    return torch::lazy::OpKind(at::aten::logdet);
  }

  Logdet(const torch_xla::XlaValue& self,
         std::vector<torch::lazy::Shape>&& shapes)
      : XlaNode(torch::lazy::OpKind(at::aten::logdet), {self},
                std::move(shapes),
                [&]() { return LogdetOutputShape(self); },
                /* num_outputs */ 1, torch::lazy::MHash()) {}

  std::string ToString() const override {
    std::stringstream ss;
    ss << XlaNode::ToString();
    return ss.str();
  }

  bool CanBeReused(const torch_xla::XlaValue& self) const { return false; }

  torch_xla::XlaOpVector Lower(LoweringContext* loctx) const override;
};

Generated XLANativeFunctions.cpp:

    at::Tensor XLANativeFunctions::logdet(const at::Tensor& self) {
      XLA_FN_COUNTER("xla::");
      auto common_device = torch_xla::bridge::GetXlaDevice(self);
      TORCH_INTERNAL_ASSERT(common_device);

      torch_xla::XLATensorPtr lazy_self =
          torch_xla::bridge::GetXlaTensorOrCreateForWrappedNumber(self, *common_device);
      torch::lazy::NodePtr node =
          torch::lazy::ReuseNode<Logdet>(lazy_self->GetIrValue());
      if (!node) {
        auto shapes = torch::lazy::compute_shape_logdet(self);
        TORCH_INTERNAL_ASSERT(shapes.size() == 1);
        if (torch::lazy::symbolicShapeEnabled()) {
          std::vector<torch::jit::IValue> inputs = {self};
          const char* schema_str = "aten::logdet(Tensor self) -> Tensor";
          applySymbolicShapesOnLT(schema_str, inputs, shapes);
        }
        node = torch::lazy::MakeNode<Logdet>(lazy_self->GetIrValue(),
                                             std::move(shapes));
        CacheNode(node);
      }

      auto result = torch_xla::bridge::AtenFromXlaTensor(
          torch_xla::XLATensor::Create(std::move(node), *common_device));
      return result;
    }
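For a quick sanity check of the semantics this lowering must preserve, here is a minimal NumPy sketch; NumPy is used purely as an illustrative stand-in, with `torch.logdet` and `torch.slogdet` behaving analogously on dense inputs:

```python
import numpy as np

# logdet(A) = log(det(A)); well-defined only when det(A) > 0.
a = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # det = 4*3 - 1*2 = 10
logdet = np.log(np.linalg.det(a))

# slogdet returns (sign, log|det|) and is the numerically stabler form.
sign, logabsdet = np.linalg.slogdet(a)
assert sign == 1.0
assert np.isclose(logdet, logabsdet)
print(f"{logabsdet:.4f}")  # 2.3026, i.e. log(10)
```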

@miladm miladm requested review from wonjoolee95 and JackCaoG May 17, 2022 09:38
@miladm miladm self-assigned this May 17, 2022
@miladm miladm linked an issue May 17, 2022 that may be closed by this pull request
facebook-github-bot pushed a commit to pytorch/pytorch that referenced this pull request May 20, 2022
…77904)

Summary:
Fixes pytorch/xla#3576

Added support for `slogdet` in LazyTensor shape inference

Pull Request resolved: #77904
Approved by: https://github.com/wconstab, https://github.com/JackCaoG

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/e67284d9ee1f9c8dbb14169c69c71d035014e38b

Reviewed By: seemethere

Differential Revision: D36537815

Pulled By: seemethere

fbshipit-source-id: a594bcabdedbdd6e077fbd17c9c6049c4b12b82d
@miladm miladm reopened this May 20, 2022
@miladm (Collaborator, Author) commented May 21, 2022

As discussed in #3596, slogdet causes codegen blockers. Commenting out the slogdet codegen to unblock logdet; I will address slogdet in a future PR once unblocked.

Comment on lines 1851 to 1853
std::tuple<at::Tensor, at::Tensor> XLANativeFunctions::slogdet(
    const at::Tensor& self) {
  XLA_FN_COUNTER("xla::");
  XLATensor self_tensor = bridge::GetXlaTensor(self);
  auto outputs = XLATensor::slogdet(self_tensor);
  return std::make_tuple(bridge::AtenFromXlaTensor(std::get<0>(outputs)),
                         bridge::AtenFromXlaTensor(std::get<1>(outputs)));
}
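The hand-written lowering above returns a two-element tuple (sign, log-absolute-determinant). A minimal NumPy sketch of why that pair is the preferred interface (illustrative only; `torch.slogdet` follows the same contract):

```python
import numpy as np

# For a matrix with a negative determinant, plain logdet is undefined (NaN),
# while slogdet still yields a usable (sign, log|det|) pair.
b = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # det = -1
sign, logabsdet = np.linalg.slogdet(b)
assert sign == -1.0
assert np.isclose(logabsdet, 0.0)  # log|det| = log(1) = 0
```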

Collaborator

nit: maybe we should move this back to its original place so this PR doesn't show slogdet as being modified?

@wonjoolee95 (Collaborator) commented May 21, 2022

@miladm Thanks! One thing: this PR shows a change to the tensorflow version. Do you know why?

nit: can we also change the PR title to remove slogdet to prevent possible confusion?

@miladm miladm changed the title Full codegen for logdet, slogdet Full codegen for logdet May 21, 2022
@miladm (Collaborator, Author) commented May 21, 2022

This PR should be in good shape now. The build passed and the first test passed; I will merge after the second test passes.

@miladm miladm merged commit 3c0d68d into master May 21, 2022
@miladm miladm deleted the ltc_logdet branch May 21, 2022 15:43
wonjoolee95 added a commit that referenced this pull request May 23, 2022
Development: successfully merging this pull request may close issue "PyTorch/XLA Codegen Migration".
2 participants