Add ONNX export for gpt_neo models #12788
Comments
Hello @softworkz, the documentation for that feature is available here: https://huggingface.co/transformers/master/serialization.html#configuration-based-approach

The goal of #11786 isn't to add support for exporting a fixed set of models to ONNX - it is to add configurations that allow very simple exports to ONNX. If a model (official or unofficial) is unsupported, adding support for it locally should be as simple as defining a configuration, as explained in the document linked above. It is perfectly possible that applying the same configuration as GPT-2 to GPT Neo would work - feel free to give it a try. If you're having a hard time using the new export approach, please let us know, as we're eager for feedback. Thank you.
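To illustrate what "defining a configuration" means here, below is a minimal, self-contained sketch of the configuration pattern described in the linked documentation. The class name `GPTNeoOnnxConfig` and the `inputs`/`outputs` properties are assumptions modeled on the GPT-2 example in the docs, not the actual transformers API; the real class would subclass the library's ONNX config base class.

```python
# Hypothetical sketch of a configuration-based ONNX export, assuming the
# pattern from the transformers serialization docs: a config class declares
# the model's graph inputs/outputs and which axes are dynamic.
from collections import OrderedDict

class GPTNeoOnnxConfig:
    """Assumed shape of an ONNX export config for GPT Neo (illustrative only)."""

    @property
    def inputs(self):
        # Axis 0 is the batch dimension, axis 1 the sequence dimension;
        # naming them marks both as dynamic in the exported ONNX graph.
        return OrderedDict([
            ("input_ids", {0: "batch", 1: "sequence"}),
            ("attention_mask", {0: "batch", 1: "sequence"}),
        ])

    @property
    def outputs(self):
        return OrderedDict([
            ("last_hidden_state", {0: "batch", 1: "sequence"}),
        ])

config = GPTNeoOnnxConfig()
print(list(config.inputs))  # the declared graph input names, in order
```

Since GPT-2 and GPT Neo take the same inputs (`input_ids`, `attention_mask`), reusing GPT-2's axis declarations is a plausible starting point, which is why trying the same configuration may work out.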
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Status
A recent PR (#11786) adds support for exporting a number of models to ONNX.
Among those is the gpt2 model but not gpt_neo.
Question
I'm wondering whether it would be sufficient to simply apply the same changes as were made for gpt2.
Is there any specific reason why gpt_neo was left out?
@LysandreJik