Large amounts of output lead to uninterruptible kernel hang #372
Comments
Duplicate of #243, I think.
Eek, sorry about that. Btw, I failed to mention that it doesn't always get to the same point in the count; usually it makes it to somewhere in the 11000–13000 range. Sometimes it only makes it to 1; here is the terminal output for that case:
These both seem to run fine for me:
Looking at the IPython messages in the terminal, it seems to be breaking the command output up into smaller messages more reliably for some reason.
Hmm, weird, it might be a different issue then.
So, I added debug lines on either side of the sleep in the read loop, and when I get it to hang (e.g. with the original example) the wake doesn't get printed, so 2) maybe the sleep wasn't returning for some reason? 3) Is it hitting the deadlock mentioned in #243 and JuliaLang/julia#8789, because nothing is being read off STDOUT before 64KB is put on it? In which case, 4) I wonder if we shouldn't remove the sleep.
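A rough illustration of point 3 (not IJulia's actual code, just a sketch in Julia 0.4-era style to match the thread; read_stdout is simply the name used elsewhere in this discussion):

```julia
# Illustrative only: once STDOUT is redirected to a pipe, a write larger than
# the OS pipe buffer (~64KB on OSX/Linux) may block the writing task unless
# something is concurrently draining the read end.
read_stdout, = redirect_stdout()   # STDOUT now feeds an in-process pipe

write(STDOUT, "x"^(1 << 17))       # ~128KB; may block here if no task is
                                   # reading from read_stdout in parallel
```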
Instead of sleeping during the read loop to throttle the number of stream messages sent, we now continually read the read_stdout and read_stderr streams (whose buffers are limited to 64KB on OSX/Linux, and possibly 4KB on Windows) and append the data read into our own IOBuffers for STDOUT/STDERR. The rate of sending is now controlled by a stream_interval parameter: a stream message is sent at most once every stream_interval seconds (currently 0.1). The exception is when the buffer reaches the size specified by max_size (currently 10KB), in which case it is sent immediately; this avoids overly large stream messages being sent.

Also improves flush() so that it has a very high chance of flushing data already written to stdout/stderr.

The above changes fix JuliaLang#372, JuliaLang#342, JuliaLang#238 and JuliaLang#347. Adds timestamps to the debug logging, along with the task the vprintln call is made from. Fixes using ?keyword (e.g. ?try).
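Roughly the shape of the loop being described, as a sketch only — not the actual IJulia implementation. send_stream is a hypothetical stand-in for sending a Jupyter stream message, and the code is written in Julia 0.4-era style to match the thread:

```julia
# Rough sketch of the buffering/throttling scheme described above; illustrative only.
const stream_interval = 0.1      # send a stream message at most once per 0.1s
const max_size = 10 * 1024       # ...unless 10KB has accumulated, then send now

function watch_stream(rd::IO, name::AbstractString)
    buf = IOBuffer()
    nbuf = 0                     # bytes currently buffered
    last_send = time()
    while !eof(rd)
        data = readavailable(rd)             # keep draining so the 64KB OS pipe
        write(buf, data)                     # buffer never fills up
        nbuf += length(data)
        if nbuf >= max_size || time() - last_send >= stream_interval
            send_stream(name, takebuf_string(buf))   # hypothetical sender
            nbuf = 0
            last_send = time()
        end
    end
end

# e.g. run concurrently for both redirected streams:
# @async watch_stream(read_stdout, "stdout")
# @async watch_stream(read_stderr, "stderr")
```

The key point is that the OS pipe is drained continuously, so a writer can never block on a full 64KB buffer; the throttling now happens on the sending side rather than by pausing the reads.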
Hi, love IJulia, brilliant work.
Currently running into a bug, I've managed to whittle reproduction down to this:
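(The actual snippet is not preserved here; judging by the later comments about the count reaching the 11000–13000 range, it was presumably a simple print loop along these lines — a hypothetical reconstruction, not the original code.)

```julia
# Hypothetical reconstruction of the missing repro: print enough lines to push
# well past the 64KB pipe buffer discussed later in the thread.
for i in 1:100000
    println(i)
end
```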
Putting that in a notebook leads to the kernel "hanging" (it shows the * in the notebook) and being uninterruptible (via the stop button), so every time I output "too much" I need to restart the kernel (restarting works fine, as does saving, while this is happening).
That's the picture after scrolling to the bottom of the output.
With verbose = true, the last message is the output up to the number that the notebook shows:
I'm on IJulia master (and have run Pkg.build("IJulia") since getting the latest julia and Pkg.checkout("IJulia")), OSX 10.9, julia 0.4.0, jupyter 4.0.6 (installed via pip). Presumably this is related to #347, but I thought I'd post separately since the symptoms seem slightly more severe. Cheers.