STORM-513 check heartbeat from multilang subprocess #286
asfgit merged 10 commits into apache:master
Conversation
* Spout
  * ShellSpout sends "next" to the subprocess continuously
  * the subprocess sends "sync" back to ShellSpout when "next" is received
  * so we can treat "sync", or any message from the subprocess, as a heartbeat
* Bolt
  * ShellBolt sends tuples to the subprocess only when they are available
  * so we need to send a dedicated "heartbeat" tuple
  * the subprocess sends "sync" back to ShellBolt when the "heartbeat" tuple is received
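For illustration only, here is a minimal Java-side sketch of the idea that any message coming back from the subprocess can refresh the heartbeat. The class and field names are hypothetical, not the identifiers used in ShellSpout/ShellBolt or in this PR:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: names are illustrative, not the actual ShellSpout/ShellBolt fields.
class SubprocessHeartbeat {
    private final AtomicLong lastHeartbeatMillis = new AtomicLong(System.currentTimeMillis());

    /** Any message read back from the subprocess ("sync", "emit", "log", ...) refreshes the heartbeat. */
    void onSubprocessMessage() {
        lastHeartbeatMillis.set(System.currentTimeMillis());
    }

    /** How long the subprocess has been silent; a checker can compare this to a timeout. */
    long millisSinceLastHeartbeat() {
        return System.currentTimeMillis() - lastHeartbeatMillis.get();
    }
}
```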
As the person who filed the issue, thanks for coming up with a solution so quickly!
@dan-blanchard You're welcome! Actually it doesn't let the subprocess send heartbeats periodically on its own, so it can't cover some situations.
When doing debug logging, try to refrain from using +. Either surround the call with isDebugEnabled() or use "current time : {}, last heartbeat : {} ..."
@itaifrenkel Oh, you're right! It's slf4j, so we should use {} placeholders to log efficiently.
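For reference, a minimal sketch of the two options mentioned above (guarding with isDebugEnabled() vs. slf4j {} placeholders); the class and variable names are illustrative:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class HeartbeatLoggingExample {
    private static final Logger LOG = LoggerFactory.getLogger(HeartbeatLoggingExample.class);

    void logHeartbeat(long now, long lastHeartbeat) {
        // Concatenation builds the string even when DEBUG is disabled:
        //   LOG.debug("current time : " + now + ", last heartbeat : " + lastHeartbeat);

        // Option 1: guard the call explicitly.
        if (LOG.isDebugEnabled()) {
            LOG.debug("current time : " + now + ", last heartbeat : " + lastHeartbeat);
        }

        // Option 2 (simpler): let slf4j defer formatting with {} placeholders.
        LOG.debug("current time : {}, last heartbeat : {}", now, lastHeartbeat);
    }
}
```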
@itaifrenkel Thanks for pointing out the spots I missed and the points to improve!
- You should consult the committers about whether they are happy with another thread, or whether you are expected to use tick tuples. A pro for another thread is that maybe a multilang bolt would want to use tick tuples too. But still...
- 1 thread should be enough.
- This thread must not keep the process alive when main exits (as in tests), so it should be daemonized. The way to do it, AFAIK, is this:
heartBeatExecutor = MoreExecutors.getExitingScheduledExecutorService(Executors.newScheduledThreadPool(1))
- AFAIK, ShellSpout's nextTuple() waits indefinitely if the subprocess hangs (or doesn't send anything).
(The same thing applies to ShellBolt.)
That's what "sync" is for.
So we should check the heartbeat from a new thread.
Please correct me if I'm wrong. :)
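A minimal sketch of what such a check thread could do, assuming a hypothetical lastHeartbeatMillis timestamp that is refreshed whenever the subprocess responds; the timeout handling and the way the worker is killed are assumptions, not the exact logic merged in this PR:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: runs periodically on the heartbeat thread.
class HeartbeatChecker implements Runnable {
    private final AtomicLong lastHeartbeatMillis;
    private final long timeoutMillis;

    HeartbeatChecker(AtomicLong lastHeartbeatMillis, long timeoutMillis) {
        this.lastHeartbeatMillis = lastHeartbeatMillis;
        this.timeoutMillis = timeoutMillis;
    }

    @Override
    public void run() {
        long sinceLast = System.currentTimeMillis() - lastHeartbeatMillis.get();
        if (sinceLast > timeoutMillis) {
            // The real component would report the error and halt the worker
            // (the "suicide" discussed in this thread) instead of just throwing.
            throw new RuntimeException(
                    "subprocess heartbeat timeout: " + sinceLast + " ms since last response");
        }
    }
}
```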
@itaifrenkel I agree with points 2 and 3. I'll update it.
new ScheduledThreadPoolExecutor(1) --> Executors.newScheduledThreadPool(1)
@itaifrenkel
MoreExecutors.getExitingScheduledExecutorService() takes a ScheduledThreadPoolExecutor, not a ScheduledExecutorService. I tried to make that change, but the compiler complained.
You could cast, but it's OK as it is.
@itaifrenkel OK, I don't like explicit casts, so I'll leave it as is.
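To illustrate the compile issue discussed here: Guava's MoreExecutors.getExitingScheduledExecutorService() is declared to take a ScheduledThreadPoolExecutor, so you can either construct one directly or cast the Executors factory result (which happens to return that concrete type). A sketch under those assumptions, with an illustrative heartbeat interval:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.google.common.util.concurrent.MoreExecutors;

class HeartbeatScheduling {
    // Daemonized executor that will not keep the JVM alive when main exits (e.g. in tests).
    private final ScheduledExecutorService heartBeatExecutor =
            MoreExecutors.getExitingScheduledExecutorService(new ScheduledThreadPoolExecutor(1));

    // The explicit cast also compiles, since the factory returns a ScheduledThreadPoolExecutor:
    private final ScheduledExecutorService castVariant =
            MoreExecutors.getExitingScheduledExecutorService(
                    (ScheduledThreadPoolExecutor) Executors.newScheduledThreadPool(1));

    void start(Runnable heartbeatCheck, long intervalSeconds) {
        heartBeatExecutor.scheduleAtFixedRate(heartbeatCheck, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}
```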
+1
I reread the code and think that we just need to flip an AtomicBoolean here (effectively a priority queue of size 1 for heartbeats). The reason is that the size of the _pendingWrites queue is Config.TOPOLOGY_SHELLBOLT_MAX_PENDING, which by its name is the number of real tuples to retrieve from the disruptor queue. We set it to 1 to optimize for the shortest latency... which would cause this thread to block... which means you cannot share this thread between bolts even if you wanted to... which we need to think about, whether it is an issue or not. A stronger argument in favor of a priority mechanism for heartbeats is that the rate of heartbeat messages will not be skewed by the length of the queue.
@itaifrenkel Oh, I see. I didn't know that option exists. Thanks for letting me know!
Then we can flip (turn on) a heartbeat flag meaning "it's time to send a heartbeat", as you said, and let the BoltWriter.run() loop take care of it first. It would of course be flipped back (turned off) after sending.
What do you think?
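A rough sketch of that flag-flipping approach, assuming a single writer thread draining a bounded pendingWrites queue; the helper names (sendHeartbeatToSubprocess, writeTupleToSubprocess) are hypothetical placeholders, not the merged ShellBolt code:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch only.
class BoltWriterSketch {
    private final LinkedBlockingQueue<Object> pendingWrites =
            new LinkedBlockingQueue<>(1); // e.g. TOPOLOGY_SHELLBOLT_MAX_PENDING set to 1
    private final AtomicBoolean heartbeatRequested = new AtomicBoolean(false);

    /** Called by the scheduled heartbeat task: never blocks, only flips the flag. */
    void requestHeartbeat() {
        heartbeatRequested.set(true);
    }

    /** One iteration of the writer loop: the heartbeat is handled before pending tuples. */
    void writeLoopIteration() throws InterruptedException {
        if (heartbeatRequested.compareAndSet(true, false)) {
            sendHeartbeatToSubprocess();            // hypothetical helper
        }
        Object write = pendingWrites.poll(1, TimeUnit.SECONDS);
        if (write != null) {
            writeTupleToSubprocess(write);          // hypothetical helper
        }
    }

    private void sendHeartbeatToSubprocess() { /* send the "heartbeat" tuple */ }
    private void writeTupleToSubprocess(Object tuple) { /* serialize and write the tuple */ }
}
```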
That sounds very good 👍
@itaifrenkel I've updated the PR to reflect it.
@ptgoetz Is there a shared ScheduledExecutor in Storm? It seems redundant for each bolt to dedicate a thread to the scheduler, especially now that the task itself would be non-blocking.
* the size of _pendingWrites is Config.TOPOLOGY_SHELLBOLT_MAX_PENDING
* if users set this to 1 or another strict value, heartbeat requests can affect performance
@itaifrenkel No, not one that's available for use via the bolt API, but it's an interesting idea. You could effectively do the same by making the scheduler static (1 per worker/JVM), but that feels kind of hacky.
Since there were additional commits added to the pull request, we need to give it more time for others to review before merging, but I am still +1 for the patch.
Can this PR be included in 0.9.3, or should it be pending for the next version?
+1 |
@HeartSaVioR Thanks for the patch. Can you upmerge the changes?
@harshach Sure. I'll upmerge it.
I had a chance to discuss this PR with @clockfly online yesterday, and he also pointed out that if the subprocess is too busy, it cannot send heartbeats in time, which I stated at the start of this PR. Actually it would be better to let the subprocess have its own heartbeat thread and send heartbeats periodically.
Since I'm not a JavaScript (Node.js) guy, and I'm a beginner at Ruby, I cannot cover those two implementations (including the .js one) myself. Btw, Nimbus / Supervisor can detect a dead process caused by a subprocess hang within up to SUPERVISOR_WORKER_TIMEOUT_SECS * 2 plus a bit (maybe), because there are two heartbeat checks: ShellProcess checks the subprocess (and commits suicide if the subprocess cannot respond), and Nimbus / Supervisor checks the ShellProcess.
Conflicts: storm-core/src/multilang/py/storm.py
* also fixes diverged py files (there are 3 storm.py files, but they seem to have diverged)
OK, I've upmerged. |
Please comment on STORM-528 if you have resolved the py file divergence problem.
@HeartSaVioR @clockfly I think we need to keep the multilang protocol implementation as simple as possible. A full roundtrip of heartbeat messages is not that bad, as long as it does not add too much latency. If you would like an optimization for the roundtrip messages, then you could consider any emit as a heartbeat, and trigger the heartbeat roundtrip only if there are not enough emits from the bolt. It makes the Java code more complicated :(, but achieves similar goals, and leaves the multilang implementation simpler :). All in all I think this commit is good, and we could discuss various optimizations later on.
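For what it's worth, a hedged sketch of that optimization (not part of this PR): count emits on the Java side, and only trigger the explicit heartbeat round trip when nothing was emitted during the interval. All names are illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch only: this optimization was not implemented in this PR.
class EmitAwareHeartbeat {
    private final AtomicLong emitsSinceLastCheck = new AtomicLong(0);

    /** Any emit from the subprocess already proves it is alive. */
    void onSubprocessEmit() {
        emitsSinceLastCheck.incrementAndGet();
    }

    /** Called periodically: only request the heartbeat round trip if the bolt was silent. */
    void periodicCheck(Runnable requestHeartbeatRoundTrip) {
        if (emitsSinceLastCheck.getAndSet(0) == 0) {
            requestHeartbeatRoundTrip.run();
        }
    }
}
```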
@itaifrenkel @clockfly Also, the changes to the multilang protocol introduced in this PR should be documented when we announce the 0.9.3 release.
@itaifrenkel OK, I've commented on STORM-528.
Maybe we can have a discussion about multilang on the mailing list or in another issue.
We should document this feature at http://storm.apache.org/documentation/Multilang-protocol.html
@HeartSaVioR I wanted to fix the multilang docs some time ago for Node.js support, but couldn't find a way to provide a pull request?
AFAIK, there was a discussion about moving the documentation (including the website) to git, started by @ptgoetz and +1'd by many committers. Personally I'm heavily inspired by the Redis documentation, where the documentation is treated the same as the code.
@HeartSaVioR I am working on running some tests on this PR. I tried to build Storm with your changes in and I am getting these failures. Can you please check if you see any of these issues? Thanks.
java.lang.Exception: Shell Process Exception: Exception in bolt: undefined method
    at backtype.storm.task.ShellBolt.handleError(ShellBolt.java:188) [classes/:na]
    at backtype.storm.task.ShellBolt.access$1100(ShellBolt.java:69) [classes/:na]
    at backtype.storm.task.ShellBolt$BoltReaderRunnable.run(ShellBolt.java:331) [classes/:na]
    at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
90960 [Thread-1055] ERROR backtype.storm.task.ShellBolt - Halting process: ShellBolt died.
java.lang.RuntimeException: backtype.storm.multilang.NoOutputException: Pipe to subprocess seems to be broken! No output read.
Serializer Exception:
+1 (again) @harshach The problem you saw was due to two of the
+1 |
Related issue link: https://issues.apache.org/jira/browse/STORM-513
It seems that ShellSpout and ShellBolt don't check the subprocess, and report heartbeats based only on their own state.
The subprocess could hang, but that doesn't affect ShellSpout / ShellBolt; they just stop working on tuples.
It's better to check heartbeats from the subprocess, and kill the worker if the subprocess stops working.