
wasm backend add LogicalNot, LogicalOr, LogicalXor #6431

Merged
7 commits merged into tensorflow:master on May 31, 2022

Conversation

homfen
Contributor

@homfen homfen commented May 20, 2022

  1. Add wasm backend support for LogicalNot, LogicalOr, and LogicalXor.
  2. Update the Bazel version to 5.1.1 for macOS 12.3.

This change is Reviewable

@google-cla

google-cla bot commented May 20, 2022

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

For more information, open the CLA check for this pull request.

@pyu10055 pyu10055 requested a review from mattsoulanille May 20, 2022 04:33
Member

@mattsoulanille mattsoulanille left a comment


Thanks for the PR, @homfen. It looks like your editor has changed the format of several files in this PR, making it difficult to review due to the large number of changes. Could you please update this PR so it doesn't make formatting changes? Once it's updated, I'll review it for content. Thanks!

@homfen
Contributor Author

homfen commented May 27, 2022

@mattsoulanille updated

Member

@mattsoulanille mattsoulanille left a comment


Thanks for the update. I had several comments on this PR, but I didn't know the right solutions to them. I didn't want to point you towards a bad solution, so I ended up addressing them and making the changes myself. I've committed them to this PR.

Reviewable status: 0 of 1 approvals obtained (waiting on @homfen)


tfjs-backend-wasm/src/cc/binary.h line 35 at r1 (raw file):

template <class I, class O>
void binary_not(const I *a_buf, const size_t a_size, O *out_buf, O operation(I),
                const size_t *a_shape_ptr, const size_t a_rank);

This template is likely not necessary, since unary functions are very simple.


tfjs-backend-wasm/src/cc/binary.h line 68 at r1 (raw file):

void compare_not(const int a_id, const size_t *a_shape_ptr,
                 const size_t a_shape_len, const int out_id,

This should be defined in unary.h

Code quote:

void compare_not(const int a_id, const size_t *a_shape_ptr,
                 const size_t a_shape_len, const int out_id,
                 bool operation(bool));

tfjs-backend-wasm/src/cc/binary.cc line 83 at r1 (raw file):

template <class I, class O>
void binary_not(const I *a_buf, const size_t a_size, O *out_buf,

I don't think unary ops can be broadcast, so this logic shouldn't be necessary.

Code quote:

template <class I, class O>
void binary_not(const I *a_buf, const size_t a_size, O *out_buf,
                O operation(I), const size_t *a_shape_ptr,
                const size_t a_rank) {
  std::vector<size_t> a_shape(a_shape_ptr, a_shape_ptr + a_rank);
  const std::vector<size_t> result_strides =
      tfjs::util::compute_strides(a_shape);
  const size_t result_size = tfjs::util::size_from_shape(a_shape);
  const std::vector<size_t> a_strides = tfjs::util::compute_strides(a_shape);
  const std::vector<size_t> a_broadcast_dims =
      tfjs::util::get_broadcast_dims(a_shape, a_shape);

  if (a_broadcast_dims.size() == 0) {
    for (size_t i = 0; i < result_size; ++i) {
      out_buf[i] = operation(a_buf[i % a_size]);
    }
  } else {
    for (size_t i = 0; i < result_size; ++i) {
      const std::vector<size_t> loc =
          tfjs::util::offset_to_loc(i, result_strides);

      std::vector<size_t> a_loc =
          std::vector<size_t>(loc.end() - a_rank, loc.end());
      for (size_t j = 0; j < a_broadcast_dims.size(); ++j) {
        const size_t d = a_broadcast_dims[j];
        a_loc[d] = 0;
      }
      const size_t a_idx = tfjs::util::loc_to_offset(a_loc, a_strides);

      out_buf[i] = operation(a_buf[a_idx]);
    }
  }
}

Member

@mattsoulanille mattsoulanille left a comment


LGTM

Reviewed 13 of 14 files at r1, 5 of 8 files at r3, 1 of 1 files at r4, 1 of 1 files at r5, all commit messages.
Reviewable status: :shipit: complete! 1 of 1 approvals obtained

@mattsoulanille mattsoulanille requested a review from lina128 May 31, 2022 19:02
Collaborator

@lina128 lina128 left a comment


Reviewable status: :shipit: complete! 2 of 1 approvals obtained (waiting on @homfen)


tfjs-backend-wasm/src/cc/kernels/LogicalNot.cc line 1 at r5 (raw file):

/* Copyright 2019 Google LLC. All Rights Reserved.

This should be 2022.

Code quote:

2019

@mattsoulanille mattsoulanille merged commit 1a4b06f into tensorflow:master May 31, 2022
3 participants