
Is there any possibility of optimizing the flair model (e.g., INT8 quantization)? #2317

Closed
abhipn opened this issue Jun 25, 2021 · 1 comment
Labels: wontfix (This will not be worked on)

Comments

abhipn commented Jun 25, 2021

I have been using Flair in our production environment for some time now, and I haven't faced any issues so far. The problem is that not every organization uses a GPU for inference, and CPU inference is not ideal when latency becomes important.

I was wondering whether there is a way to convert flair.pt to flair.onnx and apply integer quantization in the process; a small trade-off of accuracy for performance is not actually a bad idea. I have gone through the docs, but couldn't find any reference to optimization, distillation, etc.

If someone has managed to do it, I would really appreciate it if you could share the details.
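
For anyone landing here later, a minimal sketch of the most direct route: PyTorch's built-in dynamic quantization applied to a loaded Flair model, which stays entirely in PyTorch (no ONNX export needed). The model name below is illustrative, and whether `predict` works unchanged after the module swap depends on your Flair/PyTorch versions; treat this as a starting point, not a verified recipe:

```python
import torch
from flair.data import Sentence
from flair.models import SequenceTagger

# Load a pretrained tagger; "flair/ner-english" is an illustrative
# model name, substitute your own flair.pt checkpoint.
tagger = SequenceTagger.load("flair/ner-english")
tagger.eval()

# Dynamic quantization: weights of Linear and LSTM layers are stored
# as INT8 and dequantized on the fly during CPU inference. Embedding
# layers are left in float, so the accuracy hit is usually small.
quantized_tagger = torch.quantization.quantize_dynamic(
    tagger, {torch.nn.Linear, torch.nn.LSTM}, dtype=torch.qint8
)

# Quantized kernels are CPU-only; run a quick smoke test there.
sentence = Sentence("George Washington went to Washington.")
quantized_tagger.predict(sentence)
print(sentence)
```

As for the ONNX half of the question: the export itself is the harder part, since Flair models consume Sentence objects rather than tensors, so typically only the underlying torch modules can go through `torch.onnx.export`. If you do end up with a flair.onnx graph, onnxruntime can apply dynamic INT8 quantization to it directly; the file paths below are placeholders:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Rewrites the exported graph with INT8 weights; paths are placeholders.
quantize_dynamic("flair.onnx", "flair-int8.onnx", weight_type=QuantType.QInt8)
```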


stale bot commented Oct 24, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Oct 24, 2021
stale bot closed this as completed Nov 1, 2021