This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

[Model Compression] Expand export_model arguments: dummy input and onnx opset_version #3968

Merged

5 commits merged into microsoft:master on Jul 26, 2021

Conversation

@xiaowu0162 (Contributor) commented on Jul 21, 2021

Fix #3964.

@xiaowu0162 (Contributor, Author) commented:

Ready for review @J-shang @QuanluZhang

@J-shang J-shang requested review from QuanluZhang and J-shang July 21, 2021 04:24
```python
# Excerpt from the diff under review (guarding `if` reconstructed for context):
if dummy_input is None:
    device = torch.device('cpu')
    input_data = torch.Tensor(*input_shape).to(device)
else:
    input_data = dummy_input
```
Contributor

What if the user sets both dummy_input and device?

Contributor (Author)

I think device should be ignored in that case. I have updated the docstring accordingly.

Contributor

I recommend we also do `input_data = dummy_input.to(device)`; otherwise it may confuse users if they set device and we silently ignore it.

Contributor (Author)

But I think that operation may fail when, e.g., dummy_input is a tuple.
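The tuple case raised here could be handled with a small recursive helper — a hypothetical sketch, not part of the merged PR (`to_device` is an invented name for illustration):

```python
import torch


def to_device(dummy_input, device):
    """Recursively move a tensor, or a tuple/list of tensors, to a device.

    Hypothetical helper showing one way to support tuple dummy inputs;
    a plain dummy_input.to(device) would raise AttributeError on a tuple.
    """
    if isinstance(dummy_input, torch.Tensor):
        return dummy_input.to(device)
    if isinstance(dummy_input, (tuple, list)):
        return type(dummy_input)(to_device(t, device) for t in dummy_input)
    return dummy_input


# Usage: a tuple of tensors moved as a unit, preserving the tuple type.
cpu = torch.device('cpu')
pair = (torch.zeros(2), torch.ones(3))
moved = to_device(pair, cpu)
```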

@J-shang J-shang requested a review from ultmaster July 21, 2021 07:15
@QuanluZhang QuanluZhang merged commit 68818a3 into microsoft:master Jul 26, 2021
Successfully merging this pull request may close these issues.

Use higher opset_version when exporting ONNX models
4 participants