Feat(batching): refactor; troubleshoot back pressure #630
Conversation
Hello @hrmthw, Thanks for updating this PR. There are currently no PEP 8 issues detected in this PR. Cheers! 🍻 Comment last updated at 2020-05-11 05:56:49 UTC
Force-pushed from 279597e to 4b528b0 (Compare)
bentoml/marshal/dispatcher.py (Outdated)
    def log_outbound_time(self, info):
        if info[0] < 5:  # skip all small batch
            self.o_stat = FixedBucket(self.N_OUTBOUND_SAMPLE)
add inline comments here to explain these variables?
bentoml/marshal/dispatcher.py (Outdated)
    def __init__(self, max_wait_time, max_size, shared_sema: callable = None):
    def __init__(
        self,
        max_expected_time: int,
nit: rename to max_latency_in_ms?
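A hypothetical sketch of what the suggested rename could look like; the surrounding parameter names are carried over from the old signature shown above and may not match the final code:

class ParadeDispatcher:  # illustrative wrapper only, not the PR's final code
    def __init__(
        self,
        max_latency_in_ms: int,        # renamed from max_expected_time (was max_wait_time)
        max_size: int,                 # maximum batch size
        shared_sema: callable = None,  # optional shared semaphore factory
    ):
        self.max_latency_in_ms = max_latency_in_ms
        self.max_size = max_size
        self.shared_sema = shared_sema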
bentoml/marshal/marshal.py (Outdated)
@@ -141,6 +141,7 @@ def __init__(
        self.setup_routes_from_pb(self.bento_service_metadata_pb)
        if psutil.POSIX:
            import resource
update the docstring in MarshalService class to reflect the changes in this PR?
The usage is almost the same as before.
I thought we changed the semantics of mb_max_latency?
previously it was closer to max_wait_time?
@parano just updated the docstring
import time


class FixedBucket:
consider a deque with max length for this? e.g. from collections import deque; deque(maxlen=100)
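For reference, a minimal sketch of the suggested replacement, assuming FixedBucket only needs to retain the most recent N samples (the method names here are illustrative, not taken from the PR):

from collections import deque

class FixedBucket:
    """Keep only the most recent `size` samples."""

    def __init__(self, size: int):
        self._data = deque(maxlen=size)  # old entries are evicted automatically

    def put(self, value):
        self._data.append(value)

    @property
    def data(self):
        return list(self._data)

With maxlen set, append() silently discards the oldest element once the deque is full, so no manual index bookkeeping is needed.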
This is not used anymore?
bentoml/marshal/dispatcher.py (Outdated)

class Optimizer:
    N_OUTBOUND_SAMPLE = 500
    N_OUTBOUND_WAIT_SAMPLE = 20
    N_OUTBOUND_SAMPLE = 50
add inline comments about these variables
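One possible shape for the requested comments; the interpretations below are guesses based on the variable names, not taken from the PR:

class Optimizer:
    # number of recent outbound call (batch execution) time samples
    # kept for estimating per-batch latency
    N_OUTBOUND_SAMPLE = 50
    # number of recent queue wait-time samples kept for estimating
    # how long requests wait before being dispatched
    N_OUTBOUND_WAIT_SAMPLE = 20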
Force-pushed from 1e4e514 to 54dcd0e (Compare)
Codecov Report
@@            Coverage Diff             @@
##           master     #630      +/-   ##
==========================================
- Coverage   54.40%   54.14%   -0.26%
==========================================
  Files         102      103       +1
  Lines        7827     7890      +63
==========================================
+ Hits         4258     4272      +14
- Misses       3569     3618      +49
Continue to review full report at Codecov.
Force-pushed from 8d65e36 to 2aac052 (Compare)
* Feat(batching): refactor; troubleshoot back pressure
* Chore: add dispatcher benchmark
* Fix(batching): freeze at beginning with small max_expected_time
* Style: docs for batching
* Chore(batching): handle exceptions out of parade dispatcher
* Fix: event loop lazy loading
(Thanks for sending a pull request! Please make sure to read the contribution guidelines, then fill out the blanks below.)
What changes were proposed in this pull request?
Does this close any currently open issues?
How was this patch tested?