Large notebook issues with saving #17017
Comments
Reproducer from the screenshot:

```python
for i in range(10**8):
    print("qwertyuiopasdfghjklzxcvbnm")
```

Note: on my machine, using the reproducer to generate the data requires increasing the iopub limits. Even at 10**6 I see that the server throttles sending the outputs.
To effectively disable the iopub limits I restarted the server with
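The original comment's exact flags are not preserved here; a plausible way to raise the limits via `jupyter_server_config.py` is sketched below. The two rate-limit traits are standard Jupyter Server options, though which class owns them has varied across server versions, so treat this as a sketch rather than a verbatim recipe:

```python
# jupyter_server_config.py -- sketch for effectively disabling iopub throttling.
# Trait names are the standard Jupyter Server rate limits; on some versions
# they live on ZMQChannelsWebsocketConnection rather than ServerApp.
c.ServerApp.iopub_data_rate_limit = 1e12  # bytes/sec (default is 1e6)
c.ServerApp.iopub_msg_rate_limit = 1e6    # messages/sec (default is 1e3)
```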
I was able to reproduce this with a notebook of 100 MB. Indeed, significant time is spent just serializing the notebook content to JSON in `jupyterlab/packages/services/src/contents/index.ts` (lines 1316 to 1325 at commit 761d34f).
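To get a feel for the magnitude, here is a rough Python analogue of what that `JSON.stringify`-style serialization has to do for a ~100 MB model; the structure is illustrative, not exact nbformat:

```python
import json
import time

# Hypothetical stand-in for a large notebook model: a single stream output
# holding ~100 MB of text. Field names are approximate, not exact nbformat.
model = {
    "cells": [
        {"outputs": [{"output_type": "stream",
                      "text": "qwertyuiopasdfghjklzxcvbnm\n" * 4_000_000}]}
    ]
}

start = time.perf_counter()
blob = json.dumps(model)  # one synchronous call, like JSON.stringify in the browser
print(f"serialized {len(blob) / 1e6:.0f} MB in {time.perf_counter() - start:.2f} s")
```

In the browser the equivalent work runs synchronously on the main thread, which is what freezes the UI.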
With RTC enabled it is a no-op that still wastes a lot of time on the main thread. #16900 could solve it for the RTC case.
Technically, the standards allow using
We could still implement it with the jupyverse stack, but it would require many defensive conditions to avoid failing on the most popular stack.
We can break up the
In principle this sounds right. However, saving has some side effects:
One could argue that these side effects should only happen when the user triggers a save manually. I think this is right, though for (b) we may want to poll in the background so that users are not left in the dark and later presented with a conflict because they worked on an outdated version of the file.
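A rough sketch of what such a background freshness poll could look like, expressed here against the Jupyter Server contents REST API (the base URL, token, path, and interval are placeholders; in JupyterLab this would live client-side):

```python
import time
import requests

BASE = "http://localhost:8888"                     # placeholder server URL
HEADERS = {"Authorization": "token <your-token>"}  # placeholder token
PATH = "big.ipynb"

def last_modified() -> str:
    # content=0 requests metadata only, so the poll stays cheap even when
    # the notebook itself is hundreds of MB.
    r = requests.get(f"{BASE}/api/contents/{PATH}",
                     params={"content": "0"}, headers=HEADERS)
    r.raise_for_status()
    return r.json()["last_modified"]

seen = last_modified()
while True:
    time.sleep(30)  # arbitrary poll interval
    current = last_modified()
    if current != seen:
        print("File changed on disk since last check; surface a conflict warning.")
        seen = current
```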
Description
Large notebooks take a long time to save: the heap grows very large temporarily and the main thread is blocked. This may be considered acceptable when a user explicitly requests a save, but with autosave it happens periodically and outside the user's control, freezing the UI.
Reproduce
Create a large notebook with a lot of strings (or any notebook that ends up being a few hundred MB); a way to generate one programmatically is sketched below. This happens both with and without ydoc.
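For instance, a minimal sketch that writes a ~100 MB notebook straight to disk with nbformat, so neither the kernel nor the iopub limits are involved (the size target and filename are arbitrary choices):

```python
# Sketch: generate a ~100 MB notebook directly with nbformat.
import nbformat
from nbformat import v4 as nbf

nb = nbf.new_notebook()
cell = nbf.new_code_cell(
    source='for i in range(10**6):\n    print("qwertyuiopasdfghjklzxcvbnm")'
)
# Attach ~100 MB of stream output without ever running a kernel.
cell["outputs"] = [
    nbf.new_output(
        output_type="stream",
        name="stdout",
        text="qwertyuiopasdfghjklzxcvbnm\n" * 4_000_000,  # ~108 MB
    )
]
nb["cells"] = [cell]
nbformat.write(nb, "big.ipynb")
```

Opening `big.ipynb` in JupyterLab and triggering a save (or waiting for autosave) should reproduce the freeze.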
Expected behavior
Ideally the following is true:
Context