
[webnn] Add float16 tests for WebNN concat op #42420

Merged
merged 2 commits into web-platform-tests:master on Nov 28, 2023

Conversation

@BruceDai (Contributor) commented Oct 8, 2023

I'm still working on scripts to generate test data & baseline data for float16 tests based on the previous float32 test data & baseline data.
I'm submitting this PR first to show an overview of testing float16 precision.
The newly added float16 tests in webnn/resources/test_data/concat.json use the same float64-precision input data previously used for testing float32 precision, together with expected baseline data in float16 precision.

I'm torn between the readability of Solution 1, which this PR uses, and the simplicity of Solution 2.

Solution 1 - positive: no need to modify the current test framework / negative: much duplicated test data

    {
      "name": "concat two float32 1D tensors of same shape along axis 0",
      "inputs": [
        {
          "name": "input1",
          "shape": [12],
          "data": [
            -0.39444134019222243,
            ... ...
            -0.6731740531810844
          ],
          "type": "float32"
        },
        {
          "name": "input2",
          "shape": [12],
          "data": [
            0.4918989118791477,
            ... ...
            0.1211843166661235
          ],
          "type": "float32"
        }
      ],
      "axis": 0,
      "expected": {
        "name": "output",
        "shape": [24],
        "data": [
          -0.3944413363933563,
          ... ...
          0.1211843192577362
        ],
        "type": "float32"
      }
    },
    {
      "name": "concat two float16 1D tensors of same shape along axis 0",
      "inputs": [
        {
          "name": "input1",
          "shape": [12],
          "data": [
            -0.39444134019222243,
           ... ...
            -0.6731740531810844
          ],
          "type": "float16"
        },
        {
          "name": "input2",
          "shape": [12],
          "data": [
            0.4918989118791477,
            ... ...
            0.1211843166661235
          ],
          "type": "float16"
        }
      ],
      "axis": 0,
      "expected": {
        "name": "output",
        "shape": [24],
        "data": [
          -0.39453125,
            ... ...
          0.12115478515625
        ],
        "type": "float16"
      }
    },

Solution 2 - positive: simple, no duplicated test data / negative: requires more changes to support this updated structure. For example: iterate over the precision keys of the "expected" dictionary to test both float32 and float16 precisions; prepare the corresponding TypedArray data for input/constant operands and output operand(s) according to the precision key; generate each test name with the precision word so it displays clearly in the UI; etc.

    {
      "name": "concat two 1D tensors of same shape along axis 0",
      "inputs": [
        {
          "name": "input1",
          "shape": [12],
          "data": [
            -0.39444134019222243,
            ... ...
            -0.6731740531810844
          ],
          "type": "float64"
        },
        {
          "name": "input2",
          "shape": [12],
          "data": [
            0.4918989118791477,
            ... ...
            0.1211843166661235
          ],
          "type": "float64"
        }
      ],
      "axis": 0,
      "expected": {
        "name": "output",
        "shape": [24],
        "data": {
           "float32": [ // baseline for test float32 precision
              -0.3944413363933563,
              ... ...
              0.1211843192577362
            ],
            "float16": [ // baseline for test float16 precision
              -0.39453125,
              ... ...
              0.12115478515625
            ]
         }
      }
    },
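Under Solution 2, the harness would expand each JSON entry into one sub-test per precision key of the "expected" dictionary. A hypothetical sketch of that expansion (names, helper, and the tiny inline test data are illustrative only, not the PR's actual code):

```javascript
// Hypothetical Solution 2 entry: one shared float64 input,
// per-precision expected baselines keyed by precision name.
const testCase = {
  name: 'concat two 1D tensors of same shape along axis 0',
  inputs: [{name: 'input1', shape: [2], data: [0.5, -0.25], type: 'float64'}],
  axis: 0,
  expected: {
    name: 'output',
    shape: [2],
    data: {
      float32: [0.5, -0.25],
      float16: [0.5, -0.25],
    },
  },
};

// Iterate the precision keys of "expected" and derive one concrete
// sub-test per precision, with the precision word in the test name
// so it shows clearly in the UI.
function expandByPrecision(test) {
  return Object.keys(test.expected.data).map((precision) => ({
    name: `${test.name} (${precision})`,
    precision,
    inputs: test.inputs,
    axis: test.axis,
    expectedData: test.expected.data[precision],
  }));
}

const subTests = expandByPrecision(testCase);
```

Each sub-test would then pick the matching TypedArray type (Float32Array, or Uint16Array for float16 bit patterns) when building operands.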

@fdwr PTAL, any suggestions welcome, thanks.


    /* This method is faster than the OpenEXR implementation (very often
     * used, eg. in Ogre), with the additional benefit of rounding, inspired
     * by James Tursa?s half-precision code. */
Review comment: Tursa's?

@BruceDai: Thanks, fixed.

    const getTypedArrayData = (type, data) => {
      let outData;
      if (type === 'float16') {
        // workaround to convert Float16 to Unit16
Review comment: Uint16

@BruceDai: Thanks, fixed.
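The "workaround to convert Float16 to Uint16" in the hunk above is typically a bit-level float32 → binary16 conversion, since JavaScript has no native float16 type. A minimal sketch of such a conversion with rounding, in the style widely attributed to James Tursa (names are illustrative, not the PR's exact helper):

```javascript
// Reusable views for reinterpreting a float32's bit pattern as an int32.
const floatView = new Float32Array(1);
const int32View = new Int32Array(floatView.buffer);

// Convert a JS number to its IEEE 754 binary16 bit pattern (a Uint16
// value), suitable for storing in a Uint16Array.
function toHalf(value) {
  floatView[0] = value;
  const x = int32View[0];

  let bits = (x >> 16) & 0x8000;  // sign bit
  let m = (x >> 12) & 0x07ff;     // mantissa bits plus one round bit
  const e = (x >> 23) & 0xff;     // float32 biased exponent

  if (e < 103) return bits;           // too small: signed zero
  if (e > 142) return bits | 0x7c00;  // overflow (and, in this simplified
                                      // sketch, NaN): signed infinity
  if (e < 113) {                      // subnormal binary16 range
    m |= 0x0800;                      // restore the implicit leading bit
    bits |= (m >> (114 - e)) + ((m >> (113 - e)) & 1);
    return bits;
  }
  bits |= ((e - 112) << 10) | (m >> 1);
  bits += m & 1;                      // round
  return bits;
}
```

For example, toHalf(1.0) yields 0x3C00, the binary16 bit pattern of 1.0.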

      actualBitwise = actual[i];
      // convert expected data of Float16 to Uint16
      expectedBitwise = toHalf(expected[i]);
    }
    distance = actualBitwise - expectedBitwise;
    distance = distance >= 0 ? distance : -distance;
Review comment: Isn't this just distance = Math.abs(distance)?
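As the reviewer notes, once actual and expected float16 values are both expressed as Uint16 bit patterns, the distance check collapses to a single absolute difference. A sketch under that assumption (the helper name is hypothetical):

```javascript
// ULP-style distance between two binary16 bit patterns of the same sign
// region: simply the absolute difference of the Uint16 values.
function bitwiseDistance(actualBitwise, expectedBitwise) {
  return Math.abs(actualBitwise - expectedBitwise);
}

// 0x3C00 is 1.0 in binary16; 0x3C01 is the next representable value,
// so the two patterns are exactly 1 ULP apart.
const distance = bitwiseDistance(0x3C01, 0x3C00);
```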

@fdwr left a comment: 👍

@Honry (Contributor) left a comment: LGTM, thanks!

@Honry Honry merged commit 6b961ec into web-platform-tests:master Nov 28, 2023
19 checks passed
4 participants