Investigate long loading time of LM #2384
Comments
We see a 6-second difference just from loading the LM on cold caches:
Using
@reuben Tested on Pixel 2, after rebooting, using SpeechModule (part of androidspeech): no more multi-second blocking of the UI when starting inference.
FTR, also checking with
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
On some architectures (RPi, Android) there is a noticeable several-second delay when enabling the language model. This only happens on the first run. The LM should already be mmap()'d, so we should investigate what is actually happening; this cripples the user experience.