stream messages come in irregular chunks #342

Closed
ghost opened this issue Aug 22, 2015 · 30 comments · Fixed by #392

Comments

@ghost

ghost commented Aug 22, 2015

There is something unusual in the way IJulia is responding to the whos() command. Usually, in a fresh environment the output should be something like this:

julia> whos()
Base                          Module
Core                          Module
Main                          Module
ans                           Nothing

However, when talking to the kernel directly, the output arrives as a couple of stream messages: the first contains only one of the variables or modules in the environment, and the second contains the rest. For example:

First message:

 'text': u'Base                          Module'},

and the second:

  'text': u'\nCompat                        Module\nCore                          Module\nIJulia                        Module\nIPythonDisplay                Module\nJSON                          Module\nMain                          Module\nNettle                        Module\nZMQ                           Module\n'},

This is the complete session:

In [10]: kc.execute('whos()')
Out[10]: 'f118c20a-b199-4263-b133-fc801293b591'

In [11]: kc.get_iopub_msg(timeout=1)
Out[11]:
{'buffers': [],
 'content': {u'execution_state': u'busy'},
 'header': {u'msg_id': u'd00a1991-1c78-40d1-84cb-c88a830478d5',
  u'msg_type': u'status',
  u'session': u'????',
  u'username': u'jlkernel',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'd00a1991-1c78-40d1-84cb-c88a830478d5',
 'msg_type': u'status',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 15, 46, 15, 447034),
  u'msg_id': u'f118c20a-b199-4263-b133-fc801293b591',
  u'msg_type': u'execute_request',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user'}}

In [12]: kc.get_iopub_msg(timeout=1)
Out[12]:
{'buffers': [],
 'content': {u'code': u'whos()', u'execution_count': 1},
 'header': {u'msg_id': u'28393ec3-1d4a-4d6f-b9fc-1a9f1e040809',
  u'msg_type': 'execute_input',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'28393ec3-1d4a-4d6f-b9fc-1a9f1e040809',
 'msg_type': 'execute_input',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 15, 46, 15, 447034),
  u'msg_id': u'f118c20a-b199-4263-b133-fc801293b591',
  u'msg_type': u'execute_request',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user'}}

In [13]: kc.get_iopub_msg(timeout=1)
Out[13]:
{'buffers': [],
 'content': {u'name': u'stdout',
  'text': u'Base                          Module'},
 'header': {u'msg_id': u'99c055ad-94be-405e-983e-f8237785b653',
  u'msg_type': u'stream',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'99c055ad-94be-405e-983e-f8237785b653',
 'msg_type': u'stream',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 15, 46, 15, 447034),
  u'msg_id': u'f118c20a-b199-4263-b133-fc801293b591',
  u'msg_type': u'execute_request',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user'}}

In [14]: kc.get_iopub_msg(timeout=1)
Out[14]:
{'buffers': [],
 'content': {u'name': u'stdout',
  'text': u'\nCompat                        Module\nCore                          Module\nIJulia                        Module\nIPythonDisplay                Module\nJSON                          Module\nMain                          Module\nNettle                        Module\nZMQ                           Module\n'},
 'header': {u'msg_id': u'265c7a5a-bf9e-4dda-952a-66c4c7c22305',
  u'msg_type': u'stream',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'265c7a5a-bf9e-4dda-952a-66c4c7c22305',
 'msg_type': u'stream',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 15, 46, 15, 447034),
  u'msg_id': u'f118c20a-b199-4263-b133-fc801293b591',
  u'msg_type': u'execute_request',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user'}}

In [15]: kc.get_iopub_msg(timeout=1)
Out[15]:
{'buffers': [],
 'content': {u'execution_state': u'idle'},
 'header': {u'msg_id': u'6aa2944a-e315-48dc-9078-e2c70a666836',
  u'msg_type': u'status',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'jlkernel',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'6aa2944a-e315-48dc-9078-e2c70a666836',
 'msg_type': u'status',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 15, 46, 15, 447034),
  u'msg_id': u'f118c20a-b199-4263-b133-fc801293b591',
  u'msg_type': u'execute_request',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user'}}

I'm not sure why IJulia is using stream messages. As far as I know, other kernels (IPython, IR) are using msg_type: execute_result instead. Is this a bug?

@yuyichao
Contributor

I'm not an expert on the Jupyter protocol, but this seems to be the correct behavior given that whos() prints to standard output.

@ghost
Author

ghost commented Aug 22, 2015

Are you sure? Other kernels simply output an execute result; using stream messages for output that isn't really a stream seems odd to me. In any case, I can see that the same issue happens when I print values in a loop:

In [20]: kc.get_iopub_msg(timeout=1)
Out[20]:
{'buffers': [],
 'content': {u'code': u'for i in 1:10\nprintln(i)\n;end',
  u'execution_count': 2},
 'header': {u'msg_id': u'bb405cac-4d81-4df1-a80a-cc20fd4643a1',
  u'msg_type': 'execute_input',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'bb405cac-4d81-4df1-a80a-cc20fd4643a1',
 'msg_type': 'execute_input',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 12, 53, 893909),
  u'msg_id': u'8f21e358-ef55-436d-9faa-dc341e566bf9',
  u'msg_type': u'execute_request',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user'}}

In [21]: kc.get_iopub_msg(timeout=1)
Out[21]:
{'buffers': [],
 'content': {u'name': u'stdout', 'text': u'1\n'},
 'header': {u'msg_id': u'4707f974-4397-4b46-99b5-60082e8d91d9',
  u'msg_type': u'stream',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'4707f974-4397-4b46-99b5-60082e8d91d9',
 'msg_type': u'stream',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 12, 53, 893909),
  u'msg_id': u'8f21e358-ef55-436d-9faa-dc341e566bf9',
  u'msg_type': u'execute_request',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user'}}

In [22]: kc.get_iopub_msg(timeout=1)
Out[22]:
{'buffers': [],
 'content': {u'name': u'stdout', 'text': u'2\n3\n4\n5\n6\n7\n8\n9\n10\n'},
 'header': {u'msg_id': u'd1ac7901-818a-44da-8895-e04b7143e50f',
  u'msg_type': u'stream',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'd1ac7901-818a-44da-8895-e04b7143e50f',
 'msg_type': u'stream',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 12, 53, 893909),
  u'msg_id': u'8f21e358-ef55-436d-9faa-dc341e566bf9',
  u'msg_type': u'execute_request',
  u'session': u'592fe3e1-eeed-4094-b61d-878ee6023940',
  u'username': u'user'}}

As before, there is an initial message with a single value and then the other values.

@yuyichao
Contributor

Do you have a full session?

@ghost
Author

ghost commented Aug 22, 2015

Using IJulia or other kernels?

@yuyichao
Contributor

Whatever kernels....

@ghost
Author

ghost commented Aug 22, 2015

Okay. Let me send you another session.

@yuyichao
Contributor

The stream message type should be used for stdout according to this?

@yuyichao
Contributor

My understanding is that execute_result should be used for the result and stream should be used for stdout/stderr, so the message type IJulia uses seems to be correct.

Also, since the output of print and whos is not guaranteed to be atomic, it should be allowed to go out in any number of pieces.
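For reference, here is a hand-written illustration of the two kinds of content the Jupyter messaging spec defines, written as Julia Dict literals (not captured from a real kernel):

# `stream` content: raw stdout/stderr text. The protocol allows this text to
# arrive split across any number of stream messages.
stream_content = Dict(
    "name" => "stdout",
    "text" => "Base                          Module\n",
)

# `execute_result` content: the value of the executed expression, keyed by
# MIME type, together with the execution counter.
execute_result_content = Dict(
    "execution_count" => 1,
    "data" => Dict("text/plain" => "3.14"),
    "metadata" => Dict(),
)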

@ghost
Author

ghost commented Aug 22, 2015

Yes, I think you're right. The only difference is that IJulia produces one initial message containing a single value. That's what's weird, but the stream type is okay. Would it be possible to fix that part? I'm not sure how many values are needed to require more than one message, though.

By the way, there was a small interruption when typing input [5]. Anyway, this is the session:

In [1]: from IPython.kernel import MultiKernelManager

In [2]: km = MultiKernelManager()

In [3]: id = km.start_kernel('julia-0.3')

In [4]: kn = km.get_kernel(id)

In [5]: kcStarting kernel event loops.

   ...:
KeyboardInterrupt

In [5]: kc = kn.client()

In [6]: kc.start_channels()

In [7]: kc.wait_for_ready()

In [8]: kc.execute('whos()')
Out[8]: '4fddc9dd-7e7e-47cb-9b9b-a8f1529a18fc'

In [9]: kc.get_shell_msg(timeout=1)
Out[9]:
{'buffers': [],
 'content': {u'execution_count': 1,
  u'payload': [],
  u'status': u'ok',
  u'user_expressions': {}},
 'header': {u'msg_id': u'9c71ba60-8b1c-4246-b8aa-d049034f4933',
  u'msg_type': u'execute_reply',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'9c71ba60-8b1c-4246-b8aa-d049034f4933',
 'msg_type': u'execute_reply',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 34, 33, 34449),
  u'msg_id': u'4fddc9dd-7e7e-47cb-9b9b-a8f1529a18fc',
  u'msg_type': u'execute_request',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user'}}

In [10]: kc.get_iopub_msg(timeout=1)
Out[10]:
{'buffers': [],
 'content': {u'execution_state': u'busy'},
 'header': {u'msg_id': u'ea883645-dddf-4dad-8d76-2ba4912933ca',
  u'msg_type': u'status',
  u'session': u'????',
  u'username': u'jlkernel',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'ea883645-dddf-4dad-8d76-2ba4912933ca',
 'msg_type': u'status',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 34, 33, 34449),
  u'msg_id': u'4fddc9dd-7e7e-47cb-9b9b-a8f1529a18fc',
  u'msg_type': u'execute_request',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user'}}

In [11]: kc.get_iopub_msg(timeout=1)
Out[11]:
{'buffers': [],
 'content': {u'code': u'whos()', u'execution_count': 1},
 'header': {u'msg_id': u'8524e965-4afe-4cab-bddd-4b8137c45dfc',
  u'msg_type': 'execute_input',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'8524e965-4afe-4cab-bddd-4b8137c45dfc',
 'msg_type': 'execute_input',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 34, 33, 34449),
  u'msg_id': u'4fddc9dd-7e7e-47cb-9b9b-a8f1529a18fc',
  u'msg_type': u'execute_request',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user'}}

In [12]: kc.get_iopub_msg(timeout=1)
Out[12]:
{'buffers': [],
 'content': {u'name': u'stdout',
  'text': u'Base                          Module'},
 'header': {u'msg_id': u'ba92f49f-8a1a-40a1-9b2d-e9f444d2c409',
  u'msg_type': u'stream',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'ba92f49f-8a1a-40a1-9b2d-e9f444d2c409',
 'msg_type': u'stream',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 34, 33, 34449),
  u'msg_id': u'4fddc9dd-7e7e-47cb-9b9b-a8f1529a18fc',
  u'msg_type': u'execute_request',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user'}}

In [13]: kc.get_iopub_msg(timeout=1)
Out[13]:
{'buffers': [],
 'content': {u'name': u'stdout',
  'text': u'\nCompat                        Module\nCore                          Module\nIJulia                        Module\nIPythonDisplay                Module\nJSON                          Module\nMain                          Module\nNettle                        Module\nZMQ                           Module\n'},
 'header': {u'msg_id': u'f414fe4b-deb8-452d-a0de-2d9847d233db',
  u'msg_type': u'stream',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'f414fe4b-deb8-452d-a0de-2d9847d233db',
 'msg_type': u'stream',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 34, 33, 34449),
  u'msg_id': u'4fddc9dd-7e7e-47cb-9b9b-a8f1529a18fc',
  u'msg_type': u'execute_request',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user'}}

In [14]: kc.get_iopub_msg(timeout=1)
Out[14]:
{'buffers': [],
 'content': {u'execution_state': u'idle'},
 'header': {u'msg_id': u'e96fefee-b89a-480a-adc4-b3618475a689',
  u'msg_type': u'status',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'jlkernel',
  'version': '5.0'},
 'metadata': {},
 'msg_id': u'e96fefee-b89a-480a-adc4-b3618475a689',
 'msg_type': u'status',
 'parent_header': {u'date': datetime.datetime(2015, 8, 22, 17, 34, 33, 34449),
  u'msg_id': u'4fddc9dd-7e7e-47cb-9b9b-a8f1529a18fc',
  u'msg_type': u'execute_request',
  u'session': u'3fac532e-3ebc-42d2-b997-42e9126dd253',
  u'username': u'user'}}

@yuyichao
Contributor

The only difference is that IJulia produces one initial message containing a single value. That's what is weird but the stream type is okay. Would it be possible to fix that part?

My guess is that you want to send out the output as early as possible so that the user can see real-time updates from the program. I'll wait for others to comment on this, though, since I'm not 100% sure.

@ghost
Author

ghost commented Aug 22, 2015

Well, the idea is to have the kernels behave as similarly as possible. There is a really small delay between receiving the first message with a single value (in this case, 1\n) and the next message. I'm not sure if others see that delay.

UPDATE:

The delay is more noticeable when you increase the number of iterations. For example:

for i in 1:100000
println(i)
end

@stevengj
Member

There are three separate issues here:

  • In Julia, print and other I/O statements are buffered and are processed asynchronously by IJulia (in a separate "green" thread), which is why stdout may get sent in pieces. After sending one stream message, the stdout thread sleeps for 0.1s in order to allow output to accumulate (so that we don't send zillions of tiny stream messages in cases where there are lots of little print calls). A sketch of this pattern follows the list.
    • We could wait until the end of code execution to read the stdout, to lump it into a single stream message, but this is not acceptable — during execution of an input cell that takes a lot of time, we want I/O to display continual updates, rather than waiting until the end of the code execution.
  • It would be nice if the whos() function in Julia returned its result as a datastructure, rather than printing to stdout. This would cause the result to be displayed via the execute_reply, but would be useful for other processing too. (In general, I think non-I/O functions should avoid relying on side effects.)
  • I'm not sure why you expect the whos() command to behave the same across kernels. There is no reason to expect a function like this to even exist in all kernels, much less to behave identically.
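A minimal Julia sketch of the throttling pattern described in the first bullet above; watch_stdout, send_stream, and read_stdout are hypothetical stand-ins, not IJulia's actual code:

# Hypothetical sketch: a green task that forwards captured stdout to the
# frontend as stream messages. After each send it sleeps for 0.1s so that
# many small print() calls get batched instead of producing one message each.
function watch_stdout(read_stdout::IO, send_stream::Function)
    @async while isopen(read_stdout)
        chunk = String(readavailable(read_stdout))  # whatever has accumulated so far
        isempty(chunk) || send_stream("stdout", chunk)
        sleep(0.1)  # throttle: at most ~10 stream messages per second
    end
end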

@ghost
Author

ghost commented Aug 24, 2015

I understand your first point. Obviously, it is not sensible to send many tiny stream messages. Unfortunately, for some reason that is exactly what we have: the first stream message is always tiny, consisting of a single element. It would be much better to have a larger threshold for sending stream messages.

I don't think other kernels wait for all the output in order to send a single huge stream message, but their partitioning is more uniform. I don't have a concrete example right now, but IPython puts hundreds of elements in each stream message.

I'm definitely not suggesting that whos() should behave the same across kernels; rather, I think kernels should behave in a similar way.

@stevengj
Member

I think the first message is small because the stdout thread has already been asleep for a while, so it wakes up the instant bytes are written. One solution might be to put the stdout task to sleep for 100ms just before executing a code cell.

@stevengj
Member

But I'm not sure what problem that would solve. Even if we sleep the stdout task, you cannot in general rely on output occurring within a single stream message.

@stevengj
Member

Nor can you rely on the partitioning always being uniform. Why should you care?

@ghost
Author

ghost commented Aug 24, 2015

As I said, the first message produces a delay in showing the output. In the previous example, printing numbers in a loop, the first number is shown, then there is a small delay, and finally the other numbers arrive as usual. It gives the impression that something is not as smooth as it should be.

If the output needs more than a single stream message, that's perfectly okay.

The idea is not to have stream messages of exactly the same length. They only need to be reasonable in their length.

@yuyichao
Contributor

They only need to be reasonable in their length.

What's not reasonable about the current version? It seems like a pretty good compromise between showing results as the output is generated and not sending too many messages.

@ghost
Author

ghost commented Aug 24, 2015

Sending a single element in the first message causes a delay in the output. The first time I launched the notebook, I saw that delay immediately and thought something was wrong with the installation; when it kept happening, I assumed it was a quirk of the language.

Your reasoning is correct. But if you want a compromise between responsiveness and not being wasteful with the number of messages sent, you wouldn't choose to send a single value in the first message and then 100 in the next one.

@stevengj changed the title from "whos() outputs msg_type: stream" to "stream messages come in irregular chunks" on Sep 4, 2015
@amellnik

Does anyone have a work-around for this -- perhaps some way to add in a slight pause between the execution of cells when running multiple cells at once?

@amellnik

Also, I noticed that the cell's output in IJulia shows up before most of the things sent to STDOUT with println and similar, leading to outputs like this:

[screenshot: the cell's result is displayed above most of its println output]

@stevengj
Member

@amellnik, see @vtjnash's suggestion at the end of #347. I haven't had a chance to try it out yet, but it looks promising.

@amellnik

I'm afraid I can't really follow the discussion there, but calling ccall(:SwitchToThread, Void, stdcall, ()) from within IJulia results in an error:

LoadError: syntax: ccall argument types must be a tuple; try "(T,)" 
while loading In[7], in expression starting on line 4

If this should be added into IJulia itself, I can't tell where.

@stevengj
Member

I think he meant ccall(:SwitchToThread, stdcall, Void, ()) (on Windows only).

@amellnik

I might have misunderstood, but running

using DataFrames
df = DataFrame(A=[1,2], B=["one", "two"]) 
println("There are ", nrow(df), " rows in the dataframe")
ccall(:SwitchToThread, stdcall, Void, ());yield();yield()
df

produces the same results as above.

@stevengj
Member

Add a call to flush(STDOUT) before the ccall.
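Putting the suggestions in this thread together, the workaround cell would look roughly like this (untested sketch; the ccall is Windows-only):

using DataFrames
df = DataFrame(A=[1,2], B=["one", "two"])
println("There are ", nrow(df), " rows in the dataframe")

flush(STDOUT)                              # push any buffered output out first
ccall(:SwitchToThread, stdcall, Void, ())  # Windows only: yield the OS thread
yield(); yield()                           # let the stdout-watching task send its message
df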

@damiendr

+1 for this. With the mixup of STDERR/STDOUT across multiple cells, Run All produces a very messy notebook.

@damiendr

I think the following is related and more problematic, as it leads to the loss of some output.

I keep running into the problem that cells that produce a lot of output or take a long time to run have truncated output in the notebook. For instance, there might be an exception and only the first two lines of the error are shown:

[interleaved stdout and stderr lines]

LoadError: type Expr has no field name
while loading In[4], in expression starting on line 1
[!!missing backtrace!!]

[more stdout and stderr lines]

The backtrace does appear when the same code is run outside of the notebook.

If there was another cell queued for execution, then the missing output sometimes appears in the output of that cell. If not, the missing output is lost.

Note that I'm using the example of exceptions because it is the most annoying one (can't use the notebook for debugging), but it happens with all sorts of outputs, for instance printing a float number and getting "3" as the cell's output, with ".1415" appended to the next cell's output.

@JobJob
Contributor

JobJob commented Nov 11, 2015

I'm working on a fix for these and related IO problems. In the meantime, does adding a sleep(0.11) just above these two lines:

https://github.com/JuliaLang/IJulia.jl/blob/master/src/execute_request.jl#L224
and
https://github.com/JuliaLang/IJulia.jl/blob/master/src/execute_request.jl#L244

work as a temporary measure?

(FYI: it's 0.11 just to be slightly longer than the sleep in the read task at https://github.com/JuliaLang/IJulia.jl/blob/master/src/stdio.jl#L42, which is designed to avoid sending huge numbers of messages to Jupyter when a lot of output is happening. This should allow the STDOUT- and STDERR-watching tasks to wake up, capture output, and send it to Jupyter before the result of the execution is sent via the execute_request task.)
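In other words, the temporary patch amounts to something like this just before each of the lines linked above (sketch only; the surrounding IJulia code is elided):

# Pause slightly longer than the 0.1s sleep in the stdio read task, so that
# pending stdout/stderr is captured and sent before the result/reply goes out.
sleep(0.11)
# ... existing line that sends the execute_result / execute_reply message ...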

@damiendr

Thanks for working on it!

  • it does fix the issue of output appearing in the wrong cell.
  • it does not fix the issue of the missing backtrace

Investigating further, I find that if I wrap the call in

try
    run()
catch
    @show Base.catch_backtrace()
end

then catch_backtrace() is empty in the notebook but correct in the REPL. So I guess that's a separate issue. EDIT: seems to be a Julia issue, not an IJulia issue. Opened one at JuliaLang/julia#13947

JobJob added a commit to JobJob/IJulia.jl that referenced this issue Nov 27, 2015
Instead of sleeping during the read loop to throttle the number of
stream messages sent, we now continually read the read_stdout and
read_stderr streams (whose buffers are limited to 64KB on OSX/Linux, 4KB
on Windows?) and append the data read into our own IOBuffers for STDOUT/STDERR.

The rate of sending is now controlled by a stream_interval parameter; a
stream message is now sent at most once every stream_interval seconds
(currently 0.1). The exception is when the buffer has reached the size
specified in max_size (currently 10KB), in which case it is sent
immediately, to avoid overly large stream messages.

Improves flush() so that it will have a very high chance of flushing data
already written to stdout/stderr.

The above changes fix JuliaLang#372 JuliaLang#342 JuliaLang#238 JuliaLang#347

Adds timestamps and the task from which the vprintln call was made to the
debug logging.

Fixes using ?keyword (e.g. ?try)
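A rough Julia sketch of the send policy this commit message describes; send_stream is a hypothetical stand-in and the constants are the values quoted above, not the actual implementation:

const stream_interval = 0.1  # send a stream message at most this often (seconds)
const max_size = 10 * 1024   # ...unless this many bytes have already accumulated

# Decide whether the locally buffered stdout should be flushed to the frontend.
# Returns the time of the last send so the caller can keep tracking it.
function maybe_send!(buf::IOBuffer, last_send::Float64, send_stream::Function)
    pending = buf.size
    if pending >= max_size || (pending > 0 && time() - last_send >= stream_interval)
        send_stream("stdout", String(take!(buf)))  # take! also empties the buffer
        return time()
    end
    return last_send
end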