Choosing from a list of possible notebook kernels is confusing compared to Jupyter #7373
Potential solution discussed:
Part of the investigation for this was finding out how many people are using the Python extension alongside Jupyter when viewing or editing ipynb files containing Python cells. The result was overwhelming: 99.7% have both installed. This is both because we recommend it and because it makes logical sense. So based on that, here's a proposal.
This mechanism gives the user a strong hint that they need to set the Python extension's default interpreter. I believe that if no interpreter has been set, the Python extension will prompt the user to do so; not sure if that would require work on our end. Otherwise the list of kernels will appear identical to what they see in any other notebook implementation. We would register the currently selected interpreter as a kernel if and when the user saves the ipynb.

Bonus: if we can detect that the user started VS Code with an activated conda or venv environment, list that as another option to select, something like "VS Code's Python Environment". Would be great if we could say what the name of that environment is.

Another option is to not make the Python extension a requirement. If we do that, we would still strongly recommend installing it in the usual locations, but if the user chooses not to, they'll never see "Python Extension Interpreter" in the kernel selection. They would only see the registered kernels and/or the environment VS Code was started in (if we go the Bonus route).
I think we need the telemetry first before we can remove all the custom kernels that show up. I have an alternate proposal:
What are the custom kernels? What kernels are you referring to? The interpreters that the Python extension finds?
Yes. All interpreters are allowed at the moment. Your proposal forces a user to set the 'active' interpreter in Python first before picking the 'Python Extension Interpreter' |
I disagree that we need to know whether users need to see all the Python interpreters in our list. If they want to choose a different one for another notebook, they would select a new Python interpreter, then run a cell in the new notebook. We would let the Python extension continue to handle all interpreter operations.
Your proposal also makes it impossible to have two notebooks open using different kernels. The telemetry in #7376 is to check if anybody needs this though. |
How is that? I open one ipynb. I start it using the default Python interpreter. I open a second ipynb, then choose a different default Python interpreter, then run a cell in it.
So changing the active interpreter kills the kernel? Seems off to me. |
I'm not following. Why would the kernel in notebook A, started from one interpreter, need to be killed when starting notebook B after changing the default Python interpreter?
You're also going to have to teach people to do that - maybe. Will they understand the active interpreter is the one running in the kernel? What happens when I change the active interpreter? Does it
I feel like there'd still be confusion as to what interpreter is being used in a notebook. It's not the active interpreter; that's only what will be used if you actually hit run. However, IntelliSense (at the moment) uses the active one (well, if you haven't run yet), or the one the kernel was started with if you hit run and then switched.
There's a disconnect between what's happening in the notebook and what is shown as the 'active' interpreter. I feel like we'd need some way to show this. Maybe we could update the kernel name as soon as it runs. Not sure if that would work on next open though. |
Yeah, I hear you. However, the current model isn't very easy to understand either. A couple of thoughts:

First, we could register the interpreter chosen to start the notebook as a new kernel. I think we've done that in the past, but in this case it would be intended to be seen by the customer (unlike what I think we had before).

Second, I think we should indicate the situation in the kernel specification at the top right. Not sure how, but it doesn't seem like a real hard problem to figure out. Additionally, another selection in the kernel picker could be "Select default Python Interpreter", which would basically just invoke the Python extension's command.

To be clear, I'm also assuming that if a notebook has focus, the status bar will not show the active interpreter. That should be the case now, but I've seen it show up there at times even when I only have a notebook focused.

The point here is that people who use other notebook products, like Jupyter, essentially select their interpreter up front by starting the appropriate conda or venv environment shell and running from there. This proposal follows that basic idea. It's not like Jupyter peruses the user's machine to find all the other conda environments they might want to install jupyter into as part of its kernel picker.
Disagree here. The current model is VERY clear about what interpreter you're starting for your kernel. What's not clear is why it doesn't work or why I need to pick it in the first place.
This sounds even worse? How does the user know which interpreter they're using then? I believe you specified this:
AFAIK, this is harder than you think. There are problems with changing notebook controllers after they've been given to VS Code. We had started down this path before.
This sounds even worse than having the picker at the bottom. How do I know what effect this is going to have on my notebook? I think we need a spec for this idea. I don't think the flow of it works. Or maybe a GIF like Miguel does.
It all depends on what the UI we land on looks like. It's not clear in my mind what the proposal actually looks like or how it flows through the different conditions.
Yes, this can happen today IFF:
Of course other behaviors are possible, but the above two are closest to what we have today or have had in the past. |
I agree with @rchiodo that it would make sense to work with @misolori and the UX team to figure out the best flow for this in VS Code. We should also check PyCharm, Jupyter Notebook, and JupyterLab to see if there are any existing patterns we could leverage. As an example, the kernel-change flow in a Jupyter Notebook lists the default Python version first, and then user-defined kernels.
Same! I believe I've seen this behavior too, @greazer:
To be clear, the list of kernels shown in the Jupyter image above only reflects those kernels found to be registered in a specific location on disk. For example, on Windows it's %appdata%\jupyter\kernels. It does not reflect all environments the user may have defined in their Python installation(s). To me, this is where the confusion comes into play.
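To illustrate the discovery model described above, here is a minimal sketch of how a kernelspec lookup of this kind works: it only finds kernels registered under a kernels directory on disk, not every Python environment on the machine. The function name is invented for illustration, not Jupyter's actual implementation.

```python
import json
from pathlib import Path

def find_kernelspecs(kernels_dir):
    """Return {kernel_name: parsed kernel.json} for each kernelspec
    registered under kernels_dir (e.g. %APPDATA%\\jupyter\\kernels on
    Windows). Environments without a kernel.json are never discovered."""
    specs = {}
    for spec_file in Path(kernels_dir).glob("*/kernel.json"):
        # The directory name is the kernel's registered name.
        specs[spec_file.parent.name] = json.loads(spec_file.read_text())
    return specs
```

A conda environment with no kernelspec directory would simply not appear in the returned dict, which is exactly the gap the Python extension fills by enumerating interpreters itself.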
Suggestions
A new suggestion to add (though the Controller API would need to change): it would be nice to take some things off the list entirely and tuck them behind a "More Kernels" item. Something like interpreters that aren't the active workspace interpreter, or global kernels that have never been picked.
"More Kernels" would hold the other items. Anything picked from "More Kernels" once would then start to show up in the top list.
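As a rough sketch of the suggestion above (the function and parameter names are invented, not the extension's API), the picker could partition kernels like this:

```python
def partition_kernels(all_kernels, previously_picked, workspace_interpreter):
    """Split kernels into a short top list and a "More Kernels" overflow.
    The active workspace interpreter and anything the user has picked
    before stay visible; everything else is hidden behind "More Kernels".
    Hypothetical sketch, not the Jupyter extension's implementation."""
    top, more = [], []
    for kernel in all_kernels:
        if kernel == workspace_interpreter or kernel in previously_picked:
            top.append(kernel)
        else:
            more.append(kernel)
    return top, more
```

Once a kernel is chosen from the overflow, adding it to `previously_picked` is enough to promote it to the top list on the next render, matching the "picked once, then shown" behavior described above.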
Discussion points:
Suggestion:
Suggestions:
Problems
Addressed by microsoft/vscode#135502
Is there a chance you could move the controller kind property from proposed API to released? We have the same UI problem with Julia that motivated this issue in the first place, i.e. a pretty unruly mess of kernels from the Julia extension and the Jupyter extension. It would be great if we could group things in the same way that the Jupyter/Python combination can.
In Jupyter classic and Lab, the list of kernels presented represents those runtimes registered with kernelspec.json files, plus one special default, "Python 3 (ipykernel)". When a user chooses this last one, the kernel used is whatever the active Python environment was when Jupyter was launched.
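For reference, a registered kernelspec is just a small kernel.json file on disk. Per the Jupyter kernelspec format, a typical entry for the default ipykernel looks roughly like this (the exact interpreter path varies by installation):

```json
{
  "argv": ["python", "-m", "ipykernel_launcher", "-f", "{connection_file}"],
  "display_name": "Python 3 (ipykernel)",
  "language": "python"
}
```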
In VS Code, most users don't start it from an active Python environment, so the above behavior isn't possible. Instead, we currently show the list of custom kernels registered by the user (via kernelspec.json files), in addition to every other Python environment the Python extension can find on the system.
This can be highly confusing to a user who may have been working with Jupyter classic or Lab using the default kernel. When they open their notebook in VS Code and have to choose a kernel, they'll be presented with a long list of possibilities. This can be startling and confusing, even IF it makes technical sense.