Make autoimport cache generation non-blocking #499
Conversation
@ccordoba12 before diving further into this, I wanted to run the general idea past you. What is the problem?
That's why I suggest the following solution.
CC @bagel897 since she worked on rope a lot. Any thoughts, folks?
Is the database consistent during generation?
I'd be in favor of using throttling for this, if possible, to let users know what's happening with autoimport in the meantime.
I think this is the right thing to do to avoid blocking the server on …
@ccordoba12 the PR is ready for review. I throttled the progress reporting to 500 ms. Progress begin and progress end messages are unaffected by it. I think this is more than enough. I like how responsive the language server is now :)
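For reference, throttling the progress reporting can be done with a small decorator along these lines. This is only a sketch (the actual helper in pylsp/_utils.py may differ), and report_progress is a hypothetical example of a callback being throttled:

```python
import time
from functools import wraps


def throttle(seconds=0.5):
    """Skip calls made less than `seconds` after the previous accepted call."""
    def decorator(func):
        last_call = 0.0

        @wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal last_call
            now = time.monotonic()
            if now - last_call >= seconds:
                last_call = now
                return func(*args, **kwargs)

        return wrapper

    return decorator


@throttle(0.5)
def report_progress(percentage):
    # Hypothetical throttled callback: intermediate updates within 500 ms are dropped.
    print(f"autoimport cache: {percentage}% done")
```

With a 0.5 s interval, intermediate progress notifications are dropped, while the begin and end messages can simply be sent without going through the decorator.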
Thanks @tkrabel for your work on this! Two small comments for you; the rest looks good to me.
Last comment for you @tkrabel, then this should be ready.
pylsp/_utils.py
import time
from functools import wraps
Please move these two imports to the standard library section above.
I'm wondering: can't we automate that somehow? :)
isort maybe, but I'm not really sure.
Looks good to me now, thanks @tkrabel!
I'll release 1.10.0 at the beginning of next week with your and @smacke's improvements.
For a medium-sized virtual environment, rope takes 10-20 seconds to create the module cache. During this time, the language server is unresponsive.
This is not a problem on a local machine, where the cache is unlikely to be emptied, but it poses a challenge in cloud environments such as Databricks, where the server runs on a cloud VM that gets preempted regularly.
This PR makes cache initialization non-blocking, so the server responds immediately after it receives the initialized message.
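The idea can be pictured with the following sketch (illustrative only; start_cache_generation, _build_autoimport_cache, and report_progress are hypothetical names, not the actual pylsp or rope API): the expensive cache build runs on a background thread so the request loop stays responsive.

```python
import threading


def _build_autoimport_cache(report_progress):
    # Placeholder for rope's (potentially 10-20 s) module cache generation.
    ...


def start_cache_generation(report_progress):
    # Run the expensive cache build on a daemon thread so the server can
    # keep answering requests right after the initialized message.
    thread = threading.Thread(
        target=_build_autoimport_cache,
        args=(report_progress,),
        daemon=True,
    )
    thread.start()
    return thread
```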
How is this tested?