
c.Completer.use_jedi instruction may be pointing to the wrong config file #531

Closed
bolliger32 opened this issue Feb 18, 2021 · 24 comments · Fixed by #532

Comments

@bolliger32
Contributor

What I am trying to do...

Speed up tab completion by telling IPython not to use Jedi

How I would like to learn how to do it...

Following the Readme

How the project might keep the docs accurate...

The README says to modify c.Completer.use_jedi in ipython_kernel_config.py, but I believe this config setting actually belongs in ipython_config.py.
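For reference, the line in question looks roughly like this (assuming a default profile; the path below is the usual location, and ipython locate profile will print yours):

    # ~/.ipython/profile_default/ipython_config.py
    c.Completer.use_jedi = False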

@krassowski
Member

You are right, thank you! Would you like to submit a PR to change it?

@bolliger32
Contributor Author

Great! Just submitted. Relatedly, I'm not sure how this applies when using a non-ipykernel kernel (e.g. xeus-python). This config parameter doesn't seem to affect the kernel when using xeus-python.

E.g. calling %config IPCompleter shows that use_jedi is False when running with ipykernel but True when running with xeus-python (after making the appropriate changes to ipython_config.py).

I think the latest xeus-python does not use Jedi? Could be wrong about that. Does that imply that this fix is not needed if using xeus-python?
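One way to double-check which value actually took effect is to ask the running kernel directly with the %config magic; this is a generic sketch, not specific to either kernel:

    %config IPCompleter                     # show the current IPCompleter settings
    %config IPCompleter.use_jedi = False    # override for the current session only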

@krassowski
Member

This is a good question. The most recent version of Xeus Python added support for IPython config, and the previous versions did use Jedi; it is still listed in the requirements, but the xinspect.cpp file is gone and the GitHub search function does not find code references to jedi on master (but it might be wrong). @martinRenou would you have any pointers here?

@krassowski
Member

Looking again at the dependencies table, the direct dependency on jedi is now gone but there is IPython, which would still use jedi if available, so the config in theory should be still needed and useful if the complete request is relayed to IPython. But this is me just guessing. I hope Martin might be able to help.

@bolliger32
Contributor Author

Got it, makes sense - it is strange that specifying c.Completer.use_jedi = False (and/or c.IPCompleter.use_jedi = False) in the IPython config file does not seem to propagate to the runtime configuration when using xeus-python. At least, running %config IPCompleter still suggests that this parameter is True.

@martinRenou
Member

> the direct dependency on jedi is now gone but there is IPython, which would still use jedi if available, so the config in theory should be still needed and useful if the complete request is relayed to IPython

Indeed, we used to rely on jedi for custom completion logic, but we now let IPython handle everything.

> The readme says to modify c.Completer.use_jedi in ipython_kernel_config.py

IIRC xeus-python should ignore ipython_kernel_config.py, as that is the ipykernel config, but it does use ipython_config.py.

> Got it, makes sense - it is strange that specifying c.Completer.use_jedi = False (and/or c.IPCompleter.use_jedi = False) in the IPython config file does not seem to propagate to the runtime configuration when using xeus-python. At least, running %config IPCompleter still suggests that this parameter is True.

Would you mind opening an issue to xeus-python if you think it's a problem? We might be missing something.

@bolliger32
Contributor Author

Thanks @martinRenou! I just opened issue jupyter-xeus/xeus-python#417. Not sure if this is actually an issue or not, as I'm a bit confused myself. See the other issue for the details of my confusion.

@krassowski
Member

As for the completion speed, have you tried our pyls fork too? Is the completion speed satisfactory for you after that, or do you think more work is needed before we shift priority to adding new features?

@bolliger32
Contributor Author

bolliger32 commented Feb 19, 2021

So I just started using jupyterlab-lsp (incredibly powerful btw, thank you!). I have it deployed in 3 places: on my local machine, in a Kubernetes-backed JupyterHub deployment, and on a remote server where I start JupyterLab and then connect from my local browser via SSH port forwarding. On the JupyterHub and local machines, the completion speed is fine for me with or without the fork. On the remote machine accessed via SSH forwarding, it's pretty unusable (talking >10-15s, and a lot of the time the list seems incomplete and/or missing metadata - it just shows <unknown> for the object type). I tried the pyls fork and it didn't seem to help at all. I'm not sure if this has to do with the SSH-based setup, or maybe something else. Curious if you have any initial thoughts or if there's more info I could provide to help check that out. It's not the exact same environment on all machines, so I imagine there could be other issues besides just the SSH difference causing the delay.

@bollwyvl
Collaborator

That it works at all with all those pipes is a miracle! 🙏 🌈 🦄

But seriously: we can't really do much to debug that, and certainly won't be able to test it. You could look at the actual websocket logs alongside some logs from your server, but without some heavy-duty opencensus/jaeger traces that track a keypress all the way to the rendered completion icons, there are just too many factors.

@krassowski
Member

> it just shows <unknown> for the object type

These are the completions from the IPython kernel. I have already opened ipython/ipython#12820 for it and will be happy to contribute a fix if the maintainers agree that my ideas go in the right direction.

> if this has to do with the SSH-based setup?

It might. I strongly believe that installing the fork and the latest version of the extension should help there. The bottlenecks differ depending on circumstances: on a local machine the actual jedi work is the bottleneck (it is slow to resolve labels, so on the fork we cache labels for numpy/matplotlib/pandas/etc.), while on a remote machine the bottleneck might be the transfer of data, since fetching documentation for hundreds of completions can take time (so on the fork we do not send documentation for all completion items on the first request, but only for the one that is selected in the GUI, plus the ones below/above to pre-fetch).

As just mentioned by @bollwyvl, we would really need some profiling to help more, whether from the developer tools in your browser (both a JavaScript profile and a websocket message trace would help, though the latter might be easier to get from jupyter lab --debug with the language servers configured for debugging too) or from the tools mentioned above.

A much easier test you could do right now is to compare completion in Python files and in notebooks (if both are painfully slow, we can confirm that the kernel is not the culprit).

@bolliger32
Contributor Author

bolliger32 commented Feb 19, 2021

Thanks @bollwyvl and @krassowski. Trying to do some basic manual profiling as suggested (comparing the difference between Python files and notebooks, and between the fork and the current release of pyls). Caching is messing up the comparison though. For instance, on my local machine with the current pyls release, pd.<tab> took 14s the first time and 2s the second time (after restarting jlab, which I hoped would reset the cache). Any suggestions on how to get a meaningful comparison? As for the --debug output, I'm not getting much from jupyter lab --debug. I tried that yesterday and there didn't seem to be much info other than a bunch of [pyls] Handling a message outputs. How does one configure the language server for debugging? Would it be specifying a debug flag in the argv property when creating jupyterlab-lsp settings?

@krassowski
Member

krassowski commented Feb 19, 2021

If the [pyls] Handling a message lines have a timestamp attached, this is exactly what we are after. If you clone the pyls fork and run the tests (pytest -k numpy should be enough; it might require running pip install and installing some dependencies first), the cProfile .profile files will be generated. Could you upload those here please?
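If you want to peek at them yourself first, the .profile files can be read with the standard pstats module; a minimal sketch (the filename below is just an example):

    import pstats
    stats = pstats.Stats("test_numpy_completions.profile")  # replace with one of the generated files
    stats.sort_stats("cumulative").print_stats(20)          # top 20 entries by cumulative time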

There is a jedi cache file somewhere that can be removed...
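A minimal sketch for locating it, assuming jedi is importable in the same environment (removing the printed directory resets the cache):

    import jedi.settings
    print(jedi.settings.cache_directory)  # jedi's on-disk cache; safe to delete to start fresh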

@bolliger32
Contributor Author

bolliger32 commented Feb 19, 2021

OK, which files are you looking for? Here's a dump of what I think you want (from my local machine with the fork cloned): Archive.zip. Note that one of the two selected tests (the hover test) fails with this traceback:

workspace = <pyls.workspace.Workspace object at 0x10b906e80>

    def test_numpy_hover(workspace):
        # Over the blank line
        no_hov_position = {'line': 1, 'character': 0}
        # Over 'numpy' in import numpy as np
        numpy_hov_position_1 = {'line': 2, 'character': 8}
        # Over 'np' in import numpy as np
        numpy_hov_position_2 = {'line': 2, 'character': 17}
        # Over 'np' in np.sin
        numpy_hov_position_3 = {'line': 3, 'character': 1}
        # Over 'sin' in np.sin
        numpy_sin_hov_position = {'line': 3, 'character': 4}
    
        doc = Document(DOC_URI, workspace, NUMPY_DOC)
    
        contents = ''
        assert contents in pyls_hover(doc, no_hov_position)['contents']
    
        contents = 'NumPy\n=====\n\nProvides\n'
        hov_1 = pyls_hover(doc, numpy_hov_position_1)['contents']
        assert hov_1['kind'] == 'markdown'
        assert contents in hov_1['value']
    
        contents = 'NumPy\n=====\n\nProvides\n'
        hov_2 = pyls_hover(doc, numpy_hov_position_2)['contents']
        assert hov_2['kind'] == 'markdown'
        assert contents in hov_2['value']
    
        contents = 'NumPy\n=====\n\nProvides\n'
        hov_3 = pyls_hover(doc, numpy_hov_position_3)['contents']
        assert hov_3['kind'] == 'markdown'
        assert contents in hov_3['value']
    
        contents = 'Trigonometric sine, element-wise.\n\n'
        hov_sin = pyls_hover(doc, numpy_sin_hov_position)['contents']
>       assert hov_sin['kind'] == 'markdown'
E       TypeError: string indices must be integers

@bolliger32
Contributor Author

And here's the same on the remote machine (it gets the same error on the tests):
Archive.zip

@krassowski
Member

Oh, sorry, I forgot about this. There is a bug in jedi related to the fresh numpy 1.20 release: davidhalter/jedi#1746

Both of the attached files are useful; I will analyse them over the weekend, thank you (the -1 is without cache, the -2 is with cache).

I now see you have Jedi 0.17.2 installed. You could give 0.18 a go (you will need to upgrade IPython); in my benchmark 0.18 was slightly faster. You can safely ignore the warning that pyls requires 0.17.2 when using the fork (I have not changed it, to avoid a clash with another upstream PR that does that, but prior to removing Python 2 support).
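If you do want to try it, the upgrade itself is just something along these lines (the version specifiers are intentionally loose here, since the compatible combinations are still moving):

    pip install --upgrade "jedi>=0.18" ipython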

@bolliger32
Contributor Author

Cool, thanks so much! Let me know if there's any other useful info I can provide - good to know that the fork is compatible with jedi 0.18.

@bolliger32
Contributor Author

FYI I am getting issues with jedi 0.18. My pyls won't initialize and the JupyterLab logs are saying there's a dependency conflict with jedi.

@bollwyvl
Collaborator

bollwyvl commented Feb 20, 2021 via email

@bolliger32
Contributor Author

@bollwyvl this was mostly in response to @krassowski's comment that the new pyls fork was compatible with 0.18 and just issues warnings - I just wanted to point out that I still seem to be getting errors even on that fork.

@krassowski
Member

> FYI I am getting issues with jedi 0.18. My pyls won't initialize and the JupyterLab logs are saying there's a dependency conflict with jedi.

Ah, yes! Sorry about not being clear / saying it in a misleading way. The codebase of the pyls fork is compatible with Jedi 0.18 and even with Jedi master, but there is a pin that needs to be manually changed for pyls to work, so it will not work if you install it as-is; it requires cloning -> changing the pin in setup.py -> pip install . Plus the IPython version stuff. In any case, 0.18 support is not production-ready yet; it's more like "if you really want to check it out for a benchmark, then it is possible".
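For anyone following along, the workflow described above is roughly as follows (the fork URL is omitted here on purpose, and the exact requirement string may differ slightly; check setup.py in your clone):

    git clone <fork-url>
    cd python-language-server
    # in setup.py, relax the jedi pin, e.g. 'jedi>=0.17.2,<0.18.0' -> 'jedi>=0.17.2'
    pip install .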

@bolliger32
Contributor Author

@krassowski ah! got it. I just made the change in that pin and it seems to be working now!

@g6ai

g6ai commented Apr 4, 2021

@bolliger32 Could you kindly share the package versions to pin and/or other necessary setup? Thanks!

@bolliger32
Contributor Author

@g6ai I just dropped the <0.18.0 pin here
