
Is there any instruction to use it with (a fork of) notebook ? #3

Open
Carreau opened this issue Oct 22, 2018 · 3 comments

@Carreau (Contributor) commented Oct 22, 2018

Or any other clients.

I'm currently trying to write a SlurmKernelManager, and this seems more adequate as it is async all the way.

I can also try to "just figure it out", or maybe it's not time yet?

@takluyver (Owner)

The changes to use this have already landed on jupyter_kernel_test's master branch, and I started a pull request for nbconvert. I don't think there's any fork of the notebook server using it yet; that will probably be a bigger job.

@takluyver (Owner)

Actually, this reminds me that there's an API question I've been wanting to get a second opinion on:

When a client makes a request to a kernel, the reply can have status: 'ok' or status: 'error' (or 'abort'). At present, if the reply status is 'error', the client raises an ErrorInKernel exception. However, both in jupyter_kernel_test and in the PR for nbconvert, I've ended up catching this exception and handling the message object as if it had been returned.

So I'm wondering if the client object should just return the reply message, whatever its status is, and let the application code deal with checking the status.
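To make the trade-off concrete, here's a minimal sketch of the two options. The `FakeClient`, the `execute_raising`/`execute_returning` helpers, and the `ErrorInKernel` class here are stand-ins I wrote for illustration, not the real client API; only the reply-message shape (`content['status']`, `ename`, `evalue`, `traceback`) follows the Jupyter messaging spec.

```python
class ErrorInKernel(Exception):
    """Raised when a reply message has status 'error' (the current design)."""
    def __init__(self, reply):
        super().__init__(reply['content'].get('evalue', 'error in kernel'))
        self.reply = reply

class FakeClient:
    """Stand-in client that returns a canned execute_reply message."""
    def __init__(self, reply):
        self._reply = reply
    def execute(self, code):
        return self._reply

def execute_raising(client, code):
    """Option A (current): raise ErrorInKernel on an 'error' reply."""
    reply = client.execute(code)
    if reply['content']['status'] == 'error':
        raise ErrorInKernel(reply)
    return reply

def execute_returning(client, code):
    """Option B (proposed): always return the reply; caller checks status."""
    return client.execute(code)

error_reply = {'content': {'status': 'error', 'ename': 'ZeroDivisionError',
                           'evalue': 'division by zero', 'traceback': []}}
client = FakeClient(error_reply)

# With option A, application code ends up catching the exception just to
# pull the reply back out of it:
try:
    reply = execute_raising(client, '1/0')
except ErrorInKernel as e:
    reply = e.reply

# With option B, it simply branches on the status field:
reply2 = execute_returning(client, '1/0')
if reply2['content']['status'] == 'error':
    pass  # handle the error reply directly
```

If consumers like jupyter_kernel_test and nbconvert always end up in the `except` branch anyway, option B removes the boilerplate at the cost of callers having to remember to check `status` themselves.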

@takluyver (Owner)

I now have a WIP branch where I'm making the notebook server use this.
