
Upgrade test opsets and remove deprecated numpy and version usage #2018

Merged: 3 commits, Aug 4, 2022
11 changes: 5 additions & 6 deletions README.md
@@ -17,8 +17,8 @@ The common issues we run into we try to document here [Troubleshooting Guide](Tr

| Build Type | OS | Python | TensorFlow | ONNX opset | Status |
| --- | --- | --- | --- | --- | --- |
| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.7-3.10 | 1.13-1.15, 2.1-2.9 | 9-16 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=main) |
| Unit Test - Full | Linux, MacOS, Windows | 3.7-3.10 | 1.13-1.15, 2.1-2.9 | 9-16 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=main) | |
| Unit Test - Basic | Linux, MacOS<sup>\*</sup>, Windows<sup>\*</sup> | 3.7-3.10 | 1.13-1.15, 2.1-2.9 | 13-17 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=16&branchName=main) |
| Unit Test - Full | Linux, MacOS, Windows | 3.7-3.10 | 1.13-1.15, 2.1-2.9 | 13-17 | [![Build Status](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_apis/build/status/unit_test-matrix?branchName=main)](https://dev.azure.com/tensorflow-onnx/tensorflow-onnx/_build/latest?definitionId=18&branchName=main) | |
<br/>

## Supported Versions
@@ -27,7 +27,7 @@ The common issues we run into we try to document here [Troubleshooting Guide](Tr

tf2onnx will use the ONNX version installed on your system and will install the latest ONNX version if none is found.

We support and test ONNX opset-9 to opset-17. opset-6 to opset-8 should work but we don't test them.
We support and test ONNX opset-13 to opset-17. opset-6 to opset-12 should work but we don't test them.
By default we use ```opset-13``` for the resulting ONNX graph.

If you want the graph to be generated with a specific opset, use ```--opset``` in the command line, for example ```--opset 13```.
@@ -43,7 +43,6 @@ You can install tf2onnx on top of tf-1.x or tf-2.x.
### Python

We support Python ```3.7-3.10```.
Note that on windows for Python > 3.7 the protobuf package doesn't use the cpp implementation and is very slow - we recommend to use Python 3.7 for that reason.

## Prerequisites

@@ -83,7 +82,7 @@ or

```python setup.py develop```

tensorflow-onnx requires onnx-1.5 or better and will install/upgrade onnx if needed.
tensorflow-onnx requires onnx-1.9 or better and will install/upgrade onnx if needed.

To create a wheel for distribution:

@@ -100,7 +99,7 @@ To get started with `tensorflow-onnx`, run the `tf2onnx.convert` command, providi

The above command uses a default of `13` for the ONNX opset. If you need a newer opset, or want to limit your model to use an older opset then you can provide the `--opset` argument to the command. If you are unsure about which opset to use, refer to the [ONNX operator documentation](https://github.com/onnx/onnx/releases).

```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 16 --output model.onnx```
```python -m tf2onnx.convert --saved-model tensorflow-model-path --opset 17 --output model.onnx```

If your TensorFlow model is in a format other than `saved model`, then you need to provide the inputs and outputs of the model graph.

2 changes: 1 addition & 1 deletion ci_build/azure_pipelines/templates/job_generator.yml
@@ -5,7 +5,7 @@ parameters:
python_versions: ['3.7']
tf_versions: ['']
onnx_versions: ['']
onnx_opsets: ['17', '16', '15', '14', '13', '12', '11', '10', '9']
onnx_opsets: ['17', '16', '15', '14', '13']
onnx_backends: {onnxruntime: ['1.12.0']}
job: {}
run_setup: 'True'
2 changes: 1 addition & 1 deletion ci_build/azure_pipelines/templates/unit_test.yml
@@ -1,7 +1,7 @@
# Run unit test

parameters:
onnx_opsets: ['17', '16', '15', '14', '13', '12', '11', '10', '9']
onnx_opsets: ['17', '16', '15', '14', '13']
skip_tflite_tests: 'True'
skip_tfjs_tests: 'True'
skip_tf_tests: 'False'
13 changes: 1 addition & 12 deletions ci_build/azure_pipelines/unit_test-matrix.yml
@@ -7,18 +7,7 @@ stages:
parameters:
platforms: ['linux', 'windows']
python_versions: ['3.7']
tf_versions: ['1.14.0']
onnx_opsets: ['']
job:
steps:
- template: 'unit_test.yml'
report_coverage: 'True'

- template: 'templates/job_generator.yml'
parameters:
platforms: ['linux', 'windows']
python_versions: ['3.7']
tf_versions: ['1.15.2','2.1.0']
tf_versions: ['1.14.0', '1.15.2']
onnx_opsets: ['']
job:
steps:
6 changes: 3 additions & 3 deletions ci_build/azure_pipelines/unit_test.yml
@@ -80,7 +80,7 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
# TFJS tf 2.9
python_versions: ['3.9']
python_versions: ['3.10']
tf_versions: ['2.9.1']
onnx_opsets: ['']
skip_tfjs_tests: 'False'
@@ -93,7 +93,7 @@ stages:
- template: 'templates/job_generator.yml'
parameters:
# TFLite tf 2.9
python_versions: ['3.8']
python_versions: ['3.10']
tf_versions: ['2.9.1']
onnx_opsets: ['']
skip_tflite_tests: 'False'
@@ -162,7 +162,7 @@ stages:
python_versions: ['3.9']
platforms: ['windows']
tf_versions: ['2.9.1']
onnx_opsets: ['15']
onnx_opsets: ['16']
job:
steps:
- template: 'unit_test.yml'
16 changes: 8 additions & 8 deletions tests/common.py
@@ -9,7 +9,7 @@
import unittest
from collections import defaultdict

from distutils.version import LooseVersion
from packaging.version import Version
from parameterized import parameterized
import numpy as np
import tensorflow as tf
@@ -98,7 +98,7 @@ def _get_backend_version(self):
pass

if version:
version = LooseVersion(version)
version = Version(version)
return version

def __str__(self):
@@ -178,7 +178,7 @@ def check_opset_after_tf_version(tf_version, required_opset, message=""):
""" Skip if tf_version > max_required_version """
config = get_test_config()
reason = _append_message("conversion requires opset {} after tf {}".format(required_opset, tf_version), message)
skip = config.tf_version >= LooseVersion(tf_version) and config.opset < required_opset
skip = config.tf_version >= Version(tf_version) and config.opset < required_opset
return unittest.skipIf(skip, reason)


@@ -284,7 +284,7 @@ def check_tfjs_max_version(max_accepted_version, message=""):
except ModuleNotFoundError:
can_import = False
return unittest.skipIf(can_import and not config.skip_tfjs_tests and \
tensorflowjs.__version__ > LooseVersion(max_accepted_version), reason)
Version(tensorflowjs.__version__) > Version(max_accepted_version), reason)

def check_tfjs_min_version(min_required_version, message=""):
""" Skip if tjs_version < min_required_version """
@@ -296,20 +296,20 @@ def check_tfjs_min_version(min_required_version, message=""):
except ModuleNotFoundError:
can_import = False
return unittest.skipIf(can_import and not config.skip_tfjs_tests and \
tensorflowjs.__version__ < LooseVersion(min_required_version), reason)
Version(tensorflowjs.__version__) < Version(min_required_version), reason)

def check_tf_max_version(max_accepted_version, message=""):
""" Skip if tf_version > max_required_version """
config = get_test_config()
reason = _append_message("conversion requires tf <= {}".format(max_accepted_version), message)
return unittest.skipIf(config.tf_version > LooseVersion(max_accepted_version), reason)
return unittest.skipIf(config.tf_version > Version(max_accepted_version), reason)


def check_tf_min_version(min_required_version, message=""):
""" Skip if tf_version < min_required_version """
config = get_test_config()
reason = _append_message("conversion requires tf >= {}".format(min_required_version), message)
return unittest.skipIf(config.tf_version < LooseVersion(min_required_version), reason)
return unittest.skipIf(config.tf_version < Version(min_required_version), reason)


def skip_tf_versions(excluded_versions, message=""):
@@ -385,7 +385,7 @@ def check_onnxruntime_min_version(min_required_version, message=""):
config = get_test_config()
reason = _append_message("conversion requires onnxruntime >= {}".format(min_required_version), message)
return unittest.skipIf(config.is_onnxruntime_backend and
config.backend_version < LooseVersion(min_required_version), reason)
config.backend_version < Version(min_required_version), reason)
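The `LooseVersion` to `Version` swap running through these helpers is the core of the `packaging` migration: `distutils` is deprecated (slated for removal in Python 3.12), and comparing raw version strings orders them lexicographically rather than numerically. A minimal sketch of the difference, assuming only the `packaging` package is installed:

```python
# Sketch of why distutils.version.LooseVersion is replaced with
# packaging.version.Version: distutils is deprecated, and raw string
# comparison orders version numbers lexicographically.
from packaging.version import Version

# Lexicographic comparison gets multi-digit components wrong:
assert ("1.10" > "1.9") is False          # wrong as a version comparison
assert Version("1.10") > Version("1.9")   # release segments compare numerically

# Both operands must be wrapped: mixing a raw __version__ string with a
# parsed version object (as the old tfjs checks did) is unreliable.
assert Version("2.9.1") >= Version("1.15.2")
```

This is also why the tfjs checks above now wrap both sides of the comparison instead of comparing `tensorflowjs.__version__` directly against a parsed version.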


def skip_caffe2_backend(message=""):
4 changes: 2 additions & 2 deletions tests/run_pretrained_models.py
@@ -17,7 +17,7 @@
import zipfile
import random
from collections import namedtuple
from distutils.version import LooseVersion
from packaging.version import Version


import yaml
@@ -789,7 +789,7 @@ def main():
continue

if t.tf_min_version:
if tf_utils.get_tf_version() < LooseVersion(str(t.tf_min_version)):
if tf_utils.get_tf_version() < Version(str(t.tf_min_version)):
logger.info("Skip %s: %s %s", test, "Min TF version needed:", t.tf_min_version)
continue

38 changes: 19 additions & 19 deletions tests/test_backend.py
@@ -5,11 +5,11 @@

import os
import unittest
from distutils.version import LooseVersion
from itertools import product

import numpy as np
from numpy.testing import assert_almost_equal
from packaging.version import Version
import tensorflow as tf

from tensorflow.python.ops import lookup_ops
@@ -72,7 +72,7 @@
matrix_diag_part = tf.compat.v1.matrix_diag_part
fake_quant_with_min_max_args = tf.quantization.fake_quant_with_min_max_args
fake_quant_with_min_max_vars = tf.quantization.fake_quant_with_min_max_vars
elif LooseVersion(tf.__version__) >= "1.13":
elif Version(tf.__version__) >= Version("1.13"):
conv2d_backprop_input = tf.compat.v1.nn.conv2d_backprop_input
conv3d_transpose = tf.compat.v1.nn.conv3d_transpose
multinomial = tf.compat.v1.random.multinomial
@@ -86,7 +86,7 @@
quantize_and_dequantize = tf.compat.v1.quantization.quantize_and_dequantize
resize_nearest_neighbor = tf.compat.v1.image.resize_nearest_neighbor
resize_bilinear = tf.compat.v1.image.resize_bilinear
if LooseVersion(tf.__version__) >= "1.14":
if Version(tf.__version__) >= Version("1.14"):
resize_bilinear_v2 = tf.compat.v2.image.resize
is_nan = tf.math.is_nan
is_inf = tf.math.is_inf
@@ -1320,8 +1320,8 @@ def func(x1):

@check_onnxruntime_incompatibility("Add")
def test_logicaland(self):
x_val1 = np.array([1, 0, 1, 1], dtype=np.bool).reshape((2, 2))
x_val2 = np.array([0, 1, 1, 1], dtype=np.bool).reshape((2, 2))
x_val1 = np.array([1, 0, 1, 1], dtype=bool).reshape((2, 2))
x_val2 = np.array([0, 1, 1, 1], dtype=bool).reshape((2, 2))
def func(x1, x2):
mi = tf.logical_and(x1, x2)
return tf.identity(mi, name=_TFOUTPUT)
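The `np.bool` to `bool` substitutions in these tests track NumPy's removal of its scalar aliases: `np.bool` was deprecated in NumPy 1.20 and removed in 1.24, and was only ever an alias for the builtin `bool`, so the change is behavior-preserving. A small sketch:

```python
# np.bool was deprecated in NumPy 1.20 and removed in 1.24; it was merely an
# alias for the builtin bool, so passing bool as a dtype is equivalent.
import numpy as np

x_val = np.array([1, 0, 1, 1], dtype=bool).reshape((2, 2))

# NumPy stores booleans as its own scalar type, np.bool_ ...
assert x_val.dtype == np.bool_
# ... and the dtype also compares equal to the builtin bool.
assert x_val.dtype == bool
assert x_val.tolist() == [[True, False], [True, True]]
```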
@@ -3505,9 +3505,9 @@ def func(x):
def test_where_bool(self):
x_val = np.array([1, 2, -3, 4, -5], dtype=np.float32)
true_result = np.array([True, False, True, False, True],
dtype=np.bool)
dtype=bool)
false_result = np.array([False, True, False, True, True],
dtype=np.bool)
dtype=bool)
def func(x):
picks = tf.where(x > -1, true_result, false_result)
return tf.identity(picks, name=_TFOUTPUT)
@@ -3770,36 +3770,36 @@ def func(input_1, input_2):
self._run_test_case(func, [_OUTPUT], {_INPUT: input_val_1, _INPUT1: input_val_2}, rtol=1e-4)

def test_logical_not(self):
input_val = np.random.randint(0, 2, (10, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (10, 20)).astype(bool)
def func(x):
res = tf.logical_not(x)
return tf.identity(res, name=_TFOUTPUT)
self._run_test_case(func, [_OUTPUT], {_INPUT: input_val})

def test_reduce_all(self):
input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_all(input_tensor=x, keepdims=False)
res1 = tf.reduce_all(input_tensor=x, axis=[0], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})

input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(input_x):
res = tf.reduce_all(input_tensor=input_x, keepdims=True)
res1 = tf.reduce_all(input_tensor=input_x, axis=[0], keepdims=True)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})

def test_reduce_any(self):
input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=False)
res1 = tf.reduce_any(input_tensor=x, axis=[0], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})

input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=True)
res1 = tf.reduce_any(input_tensor=x, axis=[0], keepdims=True)
@@ -3808,14 +3808,14 @@ def func(x):

@check_opset_min_version(11, "ReduceMin")
def test_reduce_all_negative_axis(self):
input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_all(input_tensor=x, keepdims=False)
res1 = tf.reduce_all(input_tensor=x, axis=[-1], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})

input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(input_x):
res = tf.reduce_all(input_tensor=input_x, keepdims=True)
res1 = tf.reduce_all(input_tensor=input_x, axis=[-1], keepdims=True)
@@ -3824,14 +3824,14 @@ def func(input_x):

@check_opset_min_version(11, "ReduceSum")
def test_reduce_any_negative_axis(self):
input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=False)
res1 = tf.reduce_any(input_tensor=x, axis=[-1], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})

input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=True)
res1 = tf.reduce_any(input_tensor=x, axis=[-1], keepdims=True)
@@ -3841,15 +3841,15 @@ def func(x):
@check_opset_min_version(11, "ReduceSum")
@check_tf_min_version("1.15")
def test_reduce_any_empty_axis(self):
input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_any(input_tensor=x, keepdims=False)
res1 = tf.reduce_any(input_tensor=x, axis=[], keepdims=False)
return tf.identity(res, name=_TFOUTPUT), tf.identity(res1, name=_TFOUTPUT1)
self._run_test_case(func, [_OUTPUT, _OUTPUT1], {_INPUT: input_val})

def test_reduce_all_scalar_axis(self):
input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
def func(x):
res = tf.reduce_all(input_tensor=x, keepdims=False)
res1 = tf.reduce_all(input_tensor=x, axis=0, keepdims=False)
@@ -3859,7 +3859,7 @@ def func(x):
@check_opset_min_version(13, "ReduceSum")
@check_tf_min_version("1.15")
def test_reduce_any_nonconst_axis(self):
input_val = np.random.randint(0, 2, (2, 20)).astype(np.bool)
input_val = np.random.randint(0, 2, (2, 20)).astype(bool)
y_val = np.array([1], np.int32)
def func(x, y):
res = tf.reduce_any(input_tensor=x, axis=y, keepdims=False)
4 changes: 2 additions & 2 deletions tests/test_onnx_shape_inference.py
@@ -353,7 +353,7 @@ def test_if(self):
sub = else_subgraph.make_node("Sub", [INPUT1, INPUT3])
else_subgraph.add_graph_output(sub.output[0])

cond = graph.make_const("cond", np.array(True, dtype=np.bool))
cond = graph.make_const("cond", np.array(True, dtype=bool))
branches = {"then_branch": then_subgraph, "else_branch": else_subgraph}
if_node = graph.make_node("If", [cond.output[0]], branches=branches)

@@ -381,7 +381,7 @@ def test_loop(self):
subgraph.add_graph_output(out.output[0])

max_iter = graph.make_const("max_iter", np.array([10], dtype=np.int64))
cond_const = graph.make_const("cond_const", np.array([True], dtype=np.bool))
cond_const = graph.make_const("cond_const", np.array([True], dtype=bool))
branches = {"body": subgraph}
loop = graph.make_node("Loop", [max_iter.output[0], cond_const.output[0], INPUT1],
output_count=2, branches=branches)
6 changes: 3 additions & 3 deletions tests/test_optimizers.py
@@ -988,7 +988,7 @@ def _define_loop_graph(external_inputs):

def _make_loop(external_inputs, outputs):
trip_cnt = self._make_onnx_const(np.array(10, dtype=np.int64), "trip_cnt")
cond = self._make_onnx_const(np.array(True, dtype=np.bool), "cond")
cond = self._make_onnx_const(np.array(True, dtype=bool), "cond")
sub_graph = _define_loop_graph(external_inputs)
loop_node = helper.make_node("Loop", ["trip_cnt", "cond", "cond"], outputs,
name="loop", body=sub_graph)
@@ -1779,7 +1779,7 @@ def test_identity_in_subgraph_non_graph_output(self):
),
)

cond_value = np.array(True, dtype=np.bool)
cond_value = np.array(True, dtype=bool)
node3 = helper.make_node(
'Constant',
inputs=[],
@@ -1788,7 +1788,7 @@ def test_identity_in_subgraph_non_graph_output(self):
name='cond_value',
data_type=TensorProto.BOOL,
dims=iter_num_value.shape,
vals=cond_value.flatten().astype(np.bool).tolist(),
vals=cond_value.flatten().astype(bool).tolist(),
),
)
