
Incorrect NaN handling for Min and Max operators on CPU with a single element input #21455

Closed
adamreeve opened this issue Jul 23, 2024 · 2 comments · Fixed by #21492
Labels: contributions welcome (lower priority issues for the core ORT teams), core runtime (issues related to core runtime)


adamreeve (Contributor) commented Jul 23, 2024

Describe the issue

The Min and Max operators should return NaN if any input is NaN, but the CPU provider incorrectly returns the value from the second input when that input is a single-element tensor and the first input is NaN. The GPU provider behaves correctly (since #19984).
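
As an aside, NumPy implements both behaviours, which makes the expected semantics easy to illustrate (plain NumPy here, not ONNX Runtime):

import numpy as np

print(np.minimum(np.nan, 10.0))  # nan  -- propagates NaN, as ONNX Min/Max should
print(np.fmin(np.nan, 10.0))     # 10.0 -- ignores NaN, like the buggy CPU path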

This issue was discovered when exporting a PyTorch model that used the torch.Tensor.clip method to ONNX.

To reproduce

import torch  # Not used but initializing the CUDA execution provider fails without it

import numpy as np
import onnx
from onnx.onnx_pb import TensorProto
import onnxruntime


num_cols = 1
num_rows = 3
operator = "Min"  # or "Max"

input = onnx.helper.make_tensor_value_info("input", TensorProto.FLOAT, ["N", num_cols])
output = onnx.helper.make_tensor_value_info("output", TensorProto.FLOAT, ["N", num_cols])

bound_const = onnx.helper.make_node(
        "Constant",
        inputs=[],
        outputs=["bound_const"],
        value=onnx.numpy_helper.from_array(np.array([10.0] * num_cols, dtype=np.float32)))

operator_node = onnx.helper.make_node(
        operator,
        inputs=["input", "bound_const"],
        outputs=["output"],
        )

graph_def = onnx.helper.make_graph(
        nodes=[bound_const, operator_node],
        name="test-model",
        inputs=[input],
        outputs=[output])

opset_import = onnx.helper.make_opsetid("", 17)

model_def = onnx.helper.make_model(
        graph_def,
        opset_imports=[opset_import],
        producer_name="test")

onnx.checker.check_model(model_def, full_check=True)

model_path = 'test_operator.onnx'
onnx.save(model_def, model_path)

input = np.random.randn(num_rows, num_cols).astype(np.float32)
input[0, :] = np.nan

cpu_session = onnxruntime.InferenceSession(model_path, providers=["CPUExecutionProvider"])
output = cpu_session.run(["output"], {"input": input})
print("CPU session output:")
print(output[0])

gpu_session = onnxruntime.InferenceSession(model_path, providers=["CUDAExecutionProvider"])
output = gpu_session.run(["output"], {"input": input})
print("GPU session output:")
print(output[0])

This outputs something like:

CPU session output:
[[10.       ]
 [-0.2347993]
 [ 2.0502098]]
GPU session output:
[[       nan]
 [-0.2347993]
 [ 2.0502098]]

The first row of the CPU session output should be nan.

If I change num_cols to anything > 1, the behaviour is correct and the first row is all NaNs for both the CPU and GPU providers. However, I can also reproduce the issue with num_cols > 1 if the number of rows is 1 and the order of the inputs to the operator is reversed, e.g. with the changes shown below.
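
For example, changing only these lines in the script above (everything else unchanged) also produces an incorrect, non-NaN output row from the CPU provider:

num_cols = 3
num_rows = 1

operator_node = onnx.helper.make_node(
        operator,
        inputs=["bound_const", "input"],  # input order reversed
        outputs=["output"],
        )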

Urgency

No response

Platform

Linux

OS Version

Fedora 40

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

github-actions bot added the ep:CUDA label Jul 23, 2024
skottmckay added the core runtime label and removed the ep:CUDA label Jul 23, 2024
skottmckay (Contributor) commented

May need to use cwiseMin to propagate a NaN.

e.g. for Min:

ProcessBroadcastSpanFuncs funcs{
    [](BroadcastHelper& per_iter_bh) {
      per_iter_bh.OutputEigen<T>() =
          per_iter_bh.EigenInput1<T>().array().min(per_iter_bh.ScalarInput0<T>());
    },
    [](BroadcastHelper& per_iter_bh) {
      per_iter_bh.OutputEigen<T>() =
          per_iter_bh.EigenInput0<T>().array().min(per_iter_bh.ScalarInput1<T>());
    },
    [](BroadcastHelper& per_iter_bh) {
      per_iter_bh.OutputEigen<T>() =
          per_iter_bh.EigenInput0<T>().array().min(per_iter_bh.EigenInput1<T>().array());
    }};

skottmckay added the contributions welcome label Jul 23, 2024
adamreeve (Contributor, Author) commented

I have a fix nearly ready for this, will make a PR soon.

tianleiwu pushed a commit that referenced this issue Sep 24, 2024
This makes min and max with NaN for either operand always return NaN for
float16 data, matching the behaviour of float and double.

The behaviour for floats and doubles was previously fixed for the CPU
provider in #21492 and the CUDA provider in #19984, but those PRs didn't
fix the behaviour for float16 because the tests caused ASan errors. The
memory access violations with float16 data have since been fixed in
#22135, so this PR is a follow-up that makes float16 min and max behave
the same as float and double for both the CPU and CUDA providers, now
that tests for this can be added.

### Motivation and Context

Relevant previous issues (not float16 specific):
* #21455
* onnx/onnx#6003
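
As a quick sanity check of the float16 behaviour described above, a minimal standalone script along these lines should print a NaN in the first element (an illustrative sketch, not taken from the PR's tests; it assumes a build that includes this fix):

import numpy as np
import onnx
from onnx.onnx_pb import TensorProto
import onnxruntime

# Single Min node over two float16 inputs.
node = onnx.helper.make_node("Min", inputs=["a", "b"], outputs=["out"])
graph = onnx.helper.make_graph(
        [node], "min-f16-check",
        [onnx.helper.make_tensor_value_info("a", TensorProto.FLOAT16, [2]),
         onnx.helper.make_tensor_value_info("b", TensorProto.FLOAT16, [2])],
        [onnx.helper.make_tensor_value_info("out", TensorProto.FLOAT16, [2])])
model = onnx.helper.make_model(graph, opset_imports=[onnx.helper.make_opsetid("", 17)])

sess = onnxruntime.InferenceSession(model.SerializeToString(),
                                    providers=["CPUExecutionProvider"])
a = np.array([np.nan, 1.0], dtype=np.float16)
b = np.array([2.0, 2.0], dtype=np.float16)
print(sess.run(["out"], {"a": a, "b": b})[0])  # expected with the fix: [nan 1.]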