
[webnn] Add test data files for expand and where operations. #43739

Merged: 2 commits from add_expand_where_json into web-platform-tests:master on Dec 22, 2023.

Conversation

BruceDai (Contributor):

I'm sorry that I missed uploading these two JSON test data files for expand and where operations.
@fdwr @Honry @huningxin PTAL, thanks.

@BruceDai:

Linked: #43559 and #43558.

"name": "output",
"shape": [24],
"data": [
-6.461850643157959,
@fdwr:

-6.461850643157959
-6.461850477468587

🤔 Why is the output value different from the input? Expand (and gather) perform no math; they just copy data around bitwise, so the data should be exactly identical.

I see the input data value is marked as float32 but is not actually representable as a float32 value. So we should coerce the float64 value to float32 before storing it in the JSON (e.g. write it into a Float32Array).

Otherwise it all LGTM.
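
A minimal sketch of the coercion suggested above (assuming Node or a browser console; values as quoted in this thread):

const f64Input = -6.461850477468587;              // float64 value stored in the JSON as "float32"
const f32Value = new Float32Array([f64Input])[0]; // rounds to the nearest float32
console.log(f32Value);                            // -6.461850643157959, matching the output above
console.log(f32Value === f64Input);               // false: the input was not float32-representable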

@BruceDai (Dec 21, 2023):

Here the original data is float64, randomly generated by the baseline scripts. For testing float32 precision, we set the target precision type (as on line 11); then, according to this input type, we cast the original data to the target type. Please see these code snippets, thanks.

// Usage: convert raw JSON data into a typed array matching the tensor type.
inputs[subInput.name] = getTypedArrayData(subInput.type, subInput.data);

const TypedArrayDict = {
  // workaround: use Uint16Array to hold float16 bit patterns,
  // since there is no native Float16Array here
  float16: Uint16Array,
  float32: Float32Array,
  int32: Int32Array,
  uint32: Uint32Array,
  int8: Int8Array,
  uint8: Uint8Array,
  int64: BigInt64Array,
};

const getTypedArrayData = (type, data) => {
  let outData;
  if (type === 'float16') {
    // workaround: convert each value to its half-precision bits (Uint16)
    outData = new TypedArrayDict[type](data.length);
    for (let i = 0; i < data.length; i++) {
      outData[i] = toHalf(data[i]);
    }
  } else if (type === 'int64') {
    // BigInt64Array requires BigInt values, so convert element by element
    outData = new TypedArrayDict[type](data.length);
    for (let i = 0; i < data.length; i++) {
      outData[i] = BigInt(data[i]);
    }
  } else {
    // for all other types the typed-array constructor coerces the values
    outData = new TypedArrayDict[type](data);
  }
  return outData;
};
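
toHalf is not shown in the snippet; a sketch of such a helper (not necessarily the one this script uses, and with simplified rounding) packs a number into its IEEE-754 binary16 bits:

const toHalf = (value) => {
  const f32 = new Float32Array(1);
  const u32 = new Uint32Array(f32.buffer);
  f32[0] = value;                       // round to nearest float32 first
  const x = u32[0];
  const sign = (x >>> 16) & 0x8000;     // sign bit moved to the half position
  const exp = (x >>> 23) & 0xff;        // float32 biased exponent
  const frac = x & 0x007fffff;          // float32 fraction bits

  if (exp === 0xff) {                   // Inf or NaN
    return sign | 0x7c00 | (frac ? 0x0200 : 0);
  }
  const halfExp = exp - 127 + 15;       // rebias exponent for binary16
  if (halfExp >= 0x1f) return sign | 0x7c00; // overflow -> signed infinity
  if (halfExp <= 0) {                   // subnormal half or zero
    if (halfExp < -10) return sign;     // too small -> signed zero
    return sign | ((frac | 0x00800000) >>> (14 - halfExp)); // truncated subnormal
  }
  // normal half: keep the top 10 fraction bits, round up on the dropped MSB
  let half = sign | (halfExp << 10) | (frac >>> 13);
  if (frac & 0x1000) half += 1;         // carry may bump exponent, which is fine
  return half;
};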

@fdwr (Dec 21, 2023):

...for testing precision float32, we set target precision...

Hmm, so if the baseline scripts are already properly casting the original input value (float64) to the target input value (float32) before writing out the JSON file, then why isn't expand.json line 19 equal to -6.461850477468587? Bug in the generation script? (It's confusing for the input value and the data value to be inconsistent.)

Similarly, if the generation script originally generated 42.3 as an input value for an int32 input tensor, I'd expect the generation script to have cast 42.3 to 42 for the input, e.g.

        "input": {
          "shape": [],
          "data": [
            42
          ],
          "type": "int32"
        }
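
The same typed-array coercion covers integer types too (a sketch; assigning into an Int32Array applies ToInt32, which truncates toward zero):

console.log(new Int32Array([42.3])[0]);  // 42
console.log(new Int32Array([-42.7])[0]); // -42 (truncation toward zero, not rounding)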

@BruceDai:

Bug in the generation script? (It's confusing for the input value and the data value to be inconsistent.)

Here the input value -6.461850477468587 on line 9 is actually float64, and the output value -6.461850643157959 on line 19 is float32, as expected when computing at float32 precision.

In the baseline generator script, we discussed feeding the original float64 input and getting output data at the given target precision type, and I followed the same approach for the inputs of the WPT tests. Maybe I misunderstood before.

So do you mean that I should modify the input data to align with the "float32" type on line 11? Thanks.

@BruceDai:

Similarly, if the generation script originally generated 42.3 as an input value for an int32 input tensor, I'd expect the generation script to have cast 42.3 to 42 for the input.

Currently the float64 input is only used for testing float32; we don't use float64 data with "int32" or other integer types.

@fdwr (Dec 21, 2023):

So do you mean that I should modify the input data to align with the "float32" type on line 11?

Yep! Any computations will of course remain in float64 (not that expand/where/gather have any computations). The DML tests do this too - they coerce input tensors to be compatible with their data type and compute in float64 precision, which rules out any mismatches due to premature casting, leaving mismatches solely to computation.
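
A small sketch of that reasoning (using the values from this thread, not the DML test code): once an input is pre-cast, a pure data-movement op like expand copies it bit for bit, so any mismatch that remains must come from real computation.

// After pre-casting, expand-style copying leaves values bit-identical.
const preCast = new Float32Array([-6.461850477468587])[0]; // -6.461850643157959 as float32
const expanded = new Array(24).fill(preCast);              // "expand" the scalar to shape [24]
console.log(expanded.every(v => v === preCast));           // true: exact copies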

@BruceDai (Dec 21, 2023):

not that expand/where/gather have any computations

Do I also need to update the input data to be float32 for data-movement operations like slice, pad, concat, split, reshape, transpose, and identity? Thanks.

@fdwr:

Pre-casting input tensor values to their tensor data type really ought to be generic to any operator, not just the data-movement ones, to avoid any potential extra ULP loss from that factor. A separate CR for that, though, as it would cause noise?
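
A hypothetical generator-side helper along those lines (the name and shape are assumptions, not code from this PR) could round-trip each input array through its declared typed array before the JSON is written:

const preCastData = (type, data) => {
  switch (type) {
    case 'float32':
      return Array.from(new Float32Array(data)); // round each value to float32
    case 'int32':
      return Array.from(new Int32Array(data));   // truncate toward zero
    // ...the remaining integer types follow the same pattern; float16 would
    // need a toHalf/fromHalf round trip since there is no Float16Array here
    default:
      return data;
  }
};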

@BruceDai:

Pre-casting input tensor values to their tensor data type really ought to be generic to any operator

Thanks. Let me first update the float32 input tensors in this PR, then follow up with a couple of PRs for the other existing operations.

@BruceDai force-pushed the add_expand_where_json branch from 82113ef to b2860d1 on December 21, 2023 at 13:29.
@BruceDai:

Updated. Please take another look, thanks a lot! @fdwr

@fdwr left a comment:

👍 Thanks Bruce.

@Honry (Contributor) left a comment:

LGTM, thanks!

@Honry merged commit 0fcffb8 into web-platform-tests:master on Dec 22, 2023. 19 checks passed.
marcoscaceres pushed a commit referencing this pull request on Feb 23, 2024:
* [webnn] Add test data files for expand and where operations.

* Pre-casting input data to be of float32 type for expand and where operations