The whole lab gets stuck when opening a big notebook that contains lots of output #533
Comments
If you're already using Lab 3.0+, the underlying server comes from the
If this resolves your issue, I think we might want to consider switching the default in cc: @mwakaba2
This issue is more obvious when accessing remote directories that require network interaction; it looks like the main thread is blocking. Code that generates large output can reproduce the issue.
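The thread does not include the reporter's actual cell, but the scenario described above can be sketched with a hypothetical reproducer like this (the function name and line count are illustrative, not from the issue):

```python
# Hypothetical reproducer (not from the thread): a cell like this writes a
# huge number of output lines into the saved .ipynb, and reopening that
# notebook later forces the whole output area to be loaded at once.
def make_big_output(n_lines: int) -> None:
    for i in range(n_lines):
        print(f"line {i}: filler text to inflate the saved notebook output")

# A realistic reproduction would use a much larger count, e.g. 200_000.
make_big_output(1_000)
```

Running such a cell, saving, and reopening the notebook is the pattern the reporter describes as freezing the UI.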
To verify which server is in use, there should be an informational message logged to the console of the form:
or
The version information isn't as important as the application name that precedes it. Thanks.
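As an alternative to reading the console log, a hedged way to see which server backend is available locally is to query the installed package metadata (the package names `jupyter_server` and `notebook` are the real PyPI names; whether each is present depends on your environment):

```python
# Sketch: report which server packages are installed, and at what version.
from importlib.metadata import version, PackageNotFoundError

def server_package_versions(packages=("jupyter_server", "notebook")):
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None  # package not installed in this environment
    return found

print(server_package_versions())
```

This only tells you what is installed, not which backend the running Lab instance picked, so the startup log message remains the authoritative check.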
Steps to reproduce
the dump result:
A lot of ThreadPoolExecutors were created, and after I shut down all of the kernels, these ThreadPoolExecutors were not destroyed. I don't know where these thread pools were created.
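The behaviour described above — executor threads lingering after the work is done — is standard for `concurrent.futures`: worker threads are spawned lazily and stay alive until the pool is shut down or garbage-collected. A minimal standalone demonstration (the `demo-pool` prefix is purely illustrative):

```python
# Why idle ThreadPoolExecutor workers show up in a thread dump: workers are
# created lazily and stay alive until the pool is shut down explicitly.
import threading
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2, thread_name_prefix="demo-pool")
pool.submit(lambda: None).result()  # forces at least one worker to spawn

# Enumerate live threads; lingering executor workers carry the pool's prefix.
worker_names = [t.name for t in threading.enumerate() if "demo-pool" in t.name]
print(worker_names)

pool.shutdown(wait=True)  # only now do the worker threads exit
```

So seeing many such threads in a dump points at pools that were created but never shut down, which matches what the reporter observed.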
If you did not "warm up" the server by starting 1 kernel, I would suggest doing that. I suspect at least one of these is to monitor for culling - which won't start until the first kernel starts. So perhaps you could dump the threads following the start of the first kernel as well. Also, is this last comment conflating the other issue (in jupyter_server) you opened (#530)? This issue only talks about opening a notebook that contains a large output area, yet you talk about threads related to "all of the kernels" - so I'm a little confused.
Question
The whole lab gets stuck when opening a big notebook that contains lots of output.
I know showing lots of data in a notebook is not ideal, but opening a notebook should not cause the whole service to get stuck. Are there any optimization measures to avoid this?
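One general mitigation for the "main thread is blocking" symptom discussed earlier is to move the blocking read and parse of a large `.ipynb` file off the event loop. This is a hypothetical sketch, not jupyter_server's actual implementation (`load_notebook` is an invented name):

```python
import asyncio
import json

async def load_notebook(path: str) -> dict:
    # Hypothetical sketch: push the blocking read/parse of a large .ipynb
    # onto a worker thread so the event loop serving other requests stays
    # responsive while the file is loaded.
    loop = asyncio.get_running_loop()

    def _read() -> dict:
        with open(path, encoding="utf-8") as f:
            return json.load(f)

    return await loop.run_in_executor(None, _read)
```

Under this pattern, other HTTP handlers on the same loop can still be served while the large file is being parsed in the default executor.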
Originally opened as jupyter/notebook#6077 by @icankeep, migration requested by @kevin-bates