
Slow completion (maybe jedi cache) #823

Closed
ANtlord opened this issue Jun 22, 2020 · 6 comments

ANtlord commented Jun 22, 2020

Hello!

I have a weird issue with completion. In a simple script import os; os. completion takes about a second or a second and a half. It happens only with this language server; language servers for other languages work fine.

The first thing I did was check the speed of Jedi itself. It shows quite reasonable results: the first completion takes 0.69s, the second one 0.12s. When I try to get completion in my editor (Neovim) it takes about a second every time. It looks like the Jedi cache is somehow ignored, or the language server does something else on top.

The second thing I tried was tracing system calls with strace. I get the following system call statistics when the cursor stands after the dot at the end of the string import os; os.

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 46.90    0.000340           0       427           read
 29.10    0.000211           0       395           write
 21.66    0.000157           1        85        15 stat
  1.93    0.000014           0        98        88 openat
  0.14    0.000001           0        10           close
  0.14    0.000001           0        19           fstat
  0.14    0.000001           0         9         9 ioctl
  0.00    0.000000           0        18           lseek
  0.00    0.000000           0         2           getdents64
------ ----------- ----------- --------- --------- ----------------
100.00    0.000725           0      1063       112 total

Total time is 0.000725s, which is quite good too. The only thing I'm concerned about is the number of openat calls. Some process tries to open a lot of .pyi files which don't exist, but I'm not sure whether that is the cause of the problem.

The third thing I tried was using VSCode, but unfortunately I can't figure out how to install the language server for that editor.

Unfortunately I don't know the protocol, so I can't measure the response time of the language server, and I don't know what else I can check to find the bottleneck (a rough stdio timing sketch follows the Jedi benchmark below).

Tech info:
Python 3.8.3
Linux kernel 5.6.19-300.fc32.x86_64
OS: Fedora 32
Editor: Neovim 0.4.3
Language client: https://github.com/autozimu/LanguageClient-neovim

Jedi benchmark

# Two consecutive completions of the same source; the second should benefit from Jedi's cache.
import jedi
from datetime import datetime
before = datetime.now()
jedi.Script('import os; os.').complete()
after1 = datetime.now()
jedi.Script('import os; os.').complete()
after2 = datetime.now()
print(after1 - before, after2 - after1)
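
For completeness, here is a rough way to time the same completion through the language server itself over stdio. This is only a sketch, not a proper LSP client: it assumes pyls is on PATH, sends just the handful of messages needed for one completion request, and skips the rest of the handshake.

# Minimal stdio LSP round-trip timer for pyls (sketch, not a full client).
import json
import subprocess
import time

def send(proc, payload):
    body = json.dumps(payload).encode()
    proc.stdin.write(b"Content-Length: %d\r\n\r\n" % len(body) + body)
    proc.stdin.flush()

def read_message(proc):
    # Read headers up to the blank line, then the JSON body.
    length = 0
    while True:
        line = proc.stdout.readline().strip()
        if not line:
            break
        if line.lower().startswith(b"content-length:"):
            length = int(line.split(b":")[1])
    return json.loads(proc.stdout.read(length))

def wait_for(proc, request_id):
    # Skip diagnostics/log notifications until our response arrives.
    while True:
        msg = read_message(proc)
        if msg.get("id") == request_id:
            return msg

text = "import os; os."
uri = "file:///tmp/example.py"
proc = subprocess.Popen(["pyls"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

send(proc, {"jsonrpc": "2.0", "id": 1, "method": "initialize",
            "params": {"processId": None, "rootUri": None, "capabilities": {}}})
wait_for(proc, 1)
send(proc, {"jsonrpc": "2.0", "method": "initialized", "params": {}})
send(proc, {"jsonrpc": "2.0", "method": "textDocument/didOpen",
            "params": {"textDocument": {"uri": uri, "languageId": "python",
                                        "version": 1, "text": text}}})

for request_id in (2, 3):  # two requests, to see whether anything is cached
    start = time.perf_counter()
    send(proc, {"jsonrpc": "2.0", "id": request_id,
                "method": "textDocument/completion",
                "params": {"textDocument": {"uri": uri},
                           "position": {"line": 0, "character": len(text)}}})
    wait_for(proc, request_id)
    print("completion took %.2fs" % (time.perf_counter() - start))

proc.terminate()
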
@bastianbeischer

I don't have "proof" (or done a systematic investigation) but I subjectively also noticed a noticeable slowdown of either pyls or jedi recently. I found that

python-language-server==0.31.10
jedi==0.15.2

don't have those slowdowns. Maybe you can check whether you see the same, and compare profiles with those versions?


ANtlord commented Jun 30, 2020

It doesn't work for a script consisting of import os; os. even after switching to those versions.


astier commented Aug 4, 2020

I also noticed that when using jedi for completion directly:

  1. It is generally faster than pyls
  2. It becomes significantly faster after the first completion

whereas pyls just stays slow even after the first completion.


ANtlord commented Aug 4, 2020

True, true. As I wrote in the PR that shows the bottlenecks, the issue is related to fetching a lot of "unnecessary" information. It fetches documentation and references for the symbols (methods, fields, etc.) from Jedi, which could be reasonable, for example if you show it within a completion popup as VSCode does. In any case, this fetching is much slower than fetching the symbols only. Maybe there is a better way to use the Jedi API, but I don't know it.
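
To illustrate, here is a rough micro-benchmark (a sketch of my own, not the code from the PR; it assumes the jedi 0.17+ API and that numpy is installed). It first lists the completion names, then fetches a docstring and signatures for every item, which is roughly what building the detailed labels and documentation does.

# Rough micro-benchmark: listing completion names vs. also fetching the
# docstring and signatures for every item (what the detailed label needs).
import time
import jedi

code = "import numpy as np; np."
script = jedi.Script(code)

start = time.perf_counter()
completions = script.complete(1, len(code))
names_only = time.perf_counter() - start

start = time.perf_counter()
for c in completions:
    c.docstring(raw=True)
    c.get_signatures()
with_details = time.perf_counter() - start

print("names only: %.2fs, with docs/signatures: %.2fs" % (names_only, with_details))
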


asif-mahmud commented Oct 12, 2020

Is it possible to utilize this jedi setting - jedi.settings.call_signatures_validity? It looks like a good way to keep the cache in memory for a longer period of time, therefore allowing faster autocompletion. I have asked a related question at the jedi repo as well; here's the link - davidhalter/jedi#1679 (comment)
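
If that setting helps, raising it should be a one-liner, something like the sketch below (as far as I can tell the default is only a few seconds):

import jedi
# Sketch: extend how long jedi keeps cached call signatures valid,
# hoping repeated completion requests can reuse the cache.
jedi.settings.call_signatures_validity = 30.0
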

@krassowski

Here is some additional profiling using a modified test_numpy_completions test case on Python 3.6:

def test_numpy_completions(config, workspace):
    doc_numpy = "import numpy as np; np."
    com_position = {'line': 0, 'character': len(doc_numpy)}
    doc = Document(DOC_URI, workspace, doc_numpy)
    import cProfile
    import jedi
    cProfile.runctx(
        'pyls_jedi_completions(config, doc, com_position)', globals(), locals(),
        f'pyls_jedi_{jedi.__version__}_numpy_completions_1.prof'
    )
    cProfile.runctx(
        'pyls_jedi_completions(config, doc, com_position)', globals(), locals(),
        f'pyls_jedi_{jedi.__version__}_numpy_completions_2.prof'
    )
    items = pyls_jedi_completions(config, doc, com_position)

    assert items
    assert any(['array' in i['label'] for i in items])
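
The resulting .prof files can then be inspected, for example with pstats from the standard library (the icicle graphs further below were rendered from the same data with a graphical viewer); the filename below assumes the jedi 0.17.2 run:

# Print the 20 most expensive functions (by cumulative time) from one of the
# profiles written by the test above.
import pstats

stats = pstats.Stats('pyls_jedi_0.17.2_numpy_completions_1.prof')
stats.sort_stats('cumulative').print_stats(20)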

The very first (zeroth) run after installation can take as long as 25 seconds (!) on a machine with 12 cores, plenty of RAM and a fast SSD drive. After that it comes down to a quite predictable ~12 seconds on the first run and ~6 seconds on consecutive runs (thanks to the Jedi cache). I also ran it with jedi 0.18 (currently pyls is not compatible with it, but it is possible to run this test case after changing the version requirements) and there might be an improvement, but not a huge one:

              jedi 0.17.2   jedi 0.18.0
first run     12.5s         9.58s
second run    6.81s         6.54s

Without going into the details, the conclusion is in agreement with what @ANtlord described in #826: the get_signatures() call made in _label() is expensive. While my pull request (#905) eliminates the need to call Completion.docstring() for all the suggestions at once, the _label() slowness is not yet addressed. I believe it is in users' interest to be able to turn the enhanced label off, as it slows the completion enormously. The new LSP version 3.16 allows resolving the label for a single item only using completionItem/resolve; it would be optimal to defer this expensive operation that way.
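
As a rough illustration of that deferral (my own self-contained sketch against the jedi API, not the actual pyls code), the cheap listing pass and the per-item resolve pass could be split like this:

# Sketch of deferring expensive work to completionItem/resolve: list
# completions with cheap labels only, and compute docstring/signature
# details just for the single item the client later asks to resolve.
import jedi

_last_completions = []  # kept between the listing and resolve steps

def list_completions(code, line, column):
    # Cheap pass: plain names, no get_signatures()/docstring() calls.
    global _last_completions
    _last_completions = jedi.Script(code).complete(line, column)
    return [{'label': c.name, 'data': {'index': i}}
            for i, c in enumerate(_last_completions)]

def resolve_completion(item):
    # Expensive pass, paid for one selected item only.
    c = _last_completions[item['data']['index']]
    item['documentation'] = c.docstring(raw=True)
    signatures = c.get_signatures()
    if signatures:
        item['detail'] = signatures[0].to_string()
    return item

code = "import os; os."
items = list_completions(code, 1, len(code))
print(len(items), "items listed cheaply")
print(resolve_completion(items[0]))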

The upstream slowness is being tracked in davidhalter/jedi#1059, I believe.

Icicles (profiler screenshots)

jedi 0.17.2 - first run: [screenshot]
jedi 0.17.2 - second run: [screenshot]
jedi 0.18.0 - first run: [screenshot]
jedi 0.18.0 - second run: [screenshot]
