
TF to ONNX export fails with CLI using example from docs #18495

Closed
cchan-lm opened this issue Aug 5, 2022 · 4 comments

cchan-lm commented Aug 5, 2022

System Info

  • transformers version: 4.21.1
  • Platform: Linux-4.15.0-187-generic-x86_64-with-debian-buster-sid
  • Python version: 3.7.5
  • Huggingface_hub version: 0.8.1
  • PyTorch version (GPU?): not installed (NA)
  • Tensorflow version (GPU?): 2.7.0 (False)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: No
  • Using distributed or parallel set-up in script?: No

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

  1. Save a TF transformers model locally (following the example at https://huggingface.co/docs/transformers/serialization):
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Load tokenizer and TensorFlow weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
tf_model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
# Save to disk
tokenizer.save_pretrained("local-tf-checkpoint")
tf_model.save_pretrained("local-tf-checkpoint")
  2. Use the CLI to export to ONNX and observe the failure: python -m transformers.onnx --model=local-tf-checkpoint onnx/
  3. Add --framework to export successfully: python -m transformers.onnx --model=local-tf-checkpoint --framework=tf onnx/ (a quick way to sanity-check the resulting file is sketched below)
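
To confirm that the --framework=tf export actually produced a usable graph, a minimal check with onnxruntime can be run against the exported file. This sketch assumes onnxruntime is installed and that the CLI wrote its default output file, onnx/model.onnx:

```python
# Minimal sanity check of the exported ONNX model with onnxruntime.
# Assumes `pip install onnxruntime` and the default output path onnx/model.onnx.
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("local-tf-checkpoint")
session = ort.InferenceSession("onnx/model.onnx")

# Tokenize a sample sentence as NumPy arrays and feed them to the ONNX graph.
inputs = tokenizer("Exporting to ONNX works!", return_tensors="np")
outputs = session.run(None, dict(inputs))
print([o.shape for o in outputs])
```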

Expected behavior

Once the model directory has been provided, the export should detect that a TF model is being used. There should be no dependency on PyTorch (PyTorch is not installed in this environment). Instead, I get this error: RuntimeError: Cannot export model to ONNX using PyTorch because no PyTorch package was found.

Either transformers should be updated to detect the framework automatically, or the docs at https://huggingface.co/docs/transformers/serialization should state that --framework=tf is required for TensorFlow models.
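
One possible way the CLI could infer the framework is to look at which weight file save_pretrained() left in the checkpoint directory instead of defaulting to PyTorch. The helper below is only a sketch with a hypothetical name, not the actual transformers implementation:

```python
import os

# Hypothetical helper (not the actual transformers API): infer the framework of
# a local checkpoint from the standard weight file names save_pretrained() uses.
def guess_framework(checkpoint_dir: str) -> str:
    if os.path.isfile(os.path.join(checkpoint_dir, "tf_model.h5")):
        return "tf"
    if os.path.isfile(os.path.join(checkpoint_dir, "pytorch_model.bin")):
        return "pt"
    raise ValueError(f"No TensorFlow or PyTorch weights found in {checkpoint_dir}")

# For the checkpoint saved above, this should return "tf".
print(guess_framework("local-tf-checkpoint"))
```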

@cchan-lm cchan-lm added the bug label Aug 5, 2022
@cchan-lm cchan-lm changed the title Export TF to ONNX fails with CLI using example from docs TF to ONNX export fails with CLI using example from docs Aug 5, 2022
@LysandreJik (Member) commented

Hmmm that's interesting, indeed!

The docs should be updated, but it would be nice to also support this out of the box. Would you like to try your hand at a PR?

cc @lewtun @michaelbenayoun @JingyaHuang for knowledge

cchan-lm commented Aug 9, 2022

Sure, I can try making a PR for it! Will be doing so from my personal account, @rachthree.

github-actions bot commented Sep 5, 2022

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

cchan-lm commented Sep 5, 2022

Because PR #18615 has been merged, I'm considering this closed.

@cchan-lm cchan-lm closed this as completed Sep 5, 2022