fix (#315): correctly increment this.cer #317
Conversation
lgtm
Actually we need tests
@@ -112,10 +112,11 @@ function Client (opts) {

    this.emit('response', resp.headers.statusCode, resp.bytes, resp.duration, this.rate)

    const rpi = (this.cer + pipelining) % pipelining
rpi = request pipelining index
cer = current expected response

Separated rpi and cer because all pipelining requests are fired at once, without waiting for responses, and the following happens (with pipelining = 2):
- Both requests fired, cer is 0
- Response for the first one comes back
- After processing it, we call _doRequest with the incremented cer
- Incremented cer is now 1, so _doRequest tries to fire the same request again, re-setting its startTime to now, which messes up the duration calculation when its response comes

Separating rpi ensures that after getting the response for the first request, we fire the first one again, because the second is already running right now.
Does it make sense? 😅
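The flow above can be sketched as a tiny standalone example. The helper name nextIndices is hypothetical (the real client keeps cer as an instance field), but the rpi expression is the one from the diff:

```javascript
// Hypothetical simplification of the fix described above; not autocannon's
// actual implementation. With pipelining = 2 all slots fire up front, so when
// a response arrives we re-fire the slot it belongs to (rpi), while cer moves
// on to the next in-flight response we expect.
function nextIndices (cer, pipelining) {
  const rpi = (cer + pipelining) % pipelining // request pipelining index: slot to re-fire
  const nextCer = (cer + 1) % pipelining      // current expected response advances
  return { rpi, nextCer }
}

// With pipelining = 2 and cer = 0, the first response re-fires slot 0
// while cer advances to 1 to await the second in-flight response.
console.log(nextIndices(0, 2)) // { rpi: 0, nextCer: 1 }
console.log(nextIndices(1, 2)) // { rpi: 1, nextCer: 0 }
```

Because cer is always kept below pipelining, rpi here equals cer; keeping them as two names makes it explicit which index is "slot to fire" versus "response we expect".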
Totally, put it in a comment ;).
All of this code was written at a time when I understood that function character length had a direct impact on whether it got optimised. Newer versions of v8 are not limited by this afaik. If I were refactoring this code now, I would recommend rewriting the variables as their full names, and adding many, many more comments.

rpi = requestPipeliningIndex
cer = currentExpectedResponse
After looking back at this code for the first time in years, I was struggling to understand the variable meanings myself. I would even begin to rename many functions (even anonymous ones), to give the code more contextual meaning. I'd aim to make it readable/understandable first, then measure the impact. My biggest mistake writing the code in autocannon was not doing that during initial implementations, so I'm advising to learn from that 😅
I am happy to refactor this in a follow up PR, if you'd like.
@salmanm your explanation above makes sense to me, I just want to make sure there isn't an implicit assumption in this statement:
Separating rpi ensures that after getting response for first request, we fire first one again because second is already running right now.
This is not assuming that requests will take similar times to execute, right? In other words, this supports the case whereby, with pipelining 2:
- fire first request
- fire second request
- second response comes back before the first
- first response comes back
client.on('response', (statusCode, length, duration) => {
  t.equal(statusCode, 200, 'status code matches')
  t.ok(duration > 500 && duration < 600)
this is probably going to make it a brittle test at some point. I understand that it's tricky to test though, but relying on timing in tests is something I would try to avoid. I am not sure how to do it though, apart from changing the way the test is written altogether
We're doing such time based assertions in a lot of places in these tests, so I don't think this is any less appropriate than that. But if you have any clever idea on how to improve, I'd be happy to hear it.
Not really. I think the only improvement to reduce possible brittleness is to only assert that it takes longer than 500ms, because setTimeout is guaranteed to trigger after, and not before, the specified time.
Our calculations were giving ~half results, so not checking whether it takes less than 600ms might not catch it if the calculations break in the future and start giving 2x time. We can increase it to 700 if builds start to fail.
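The bounded check being debated can be expressed as a plain predicate. This is a sketch (the hypothetical name assertDuration is mine; the real test uses tap's t.ok), showing why both bounds carry information:

```javascript
// Sketch of the two-sided timing assertion discussed above (hypothetical
// helper, not the actual test code). setTimeout guarantees firing no earlier
// than the delay, so the lower bound is reliable; the upper bound exists to
// catch a regression where durations start reporting roughly 2x the real time.
function assertDuration (duration, lowerMs = 500, upperMs = 600) {
  if (duration <= lowerMs) throw new Error(`duration ${duration}ms not > ${lowerMs}ms`)
  if (duration >= upperMs) throw new Error(`duration ${duration}ms not < ${upperMs}ms`)
  return true
}

console.log(assertDuration(550)) // true
```

Dropping the upper bound would make the test less brittle, at the cost of missing a "durations doubled" regression, which is the trade-off argued above.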
fair enough!
> @salmanm your explanation above makes sense to me, I just want to make sure there isn't an implicit assumption in this statement:
>
> Separating rpi ensures that after getting response for first request, we fire first one again because second is already running right now.
>
> This is not assuming that requests will take similar times to execute, right? In other words, this supports the case whereby, with pipelining 2:
> - fire first request
> - fire second request
> - second response comes back before the first
> - first response comes back
Not at all, it's based on the fact that the pipelining logic has already fired a number of requests in the beginning and we just want to avoid making redundant calls. Yes, it supports the case you described. Besides, the existing logic does not seem to care if the second response came first.
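That burst-then-refire behaviour can be simulated in a few lines. This is a hypothetical simplification (makeClient and fire are my names, not autocannon's) showing that each response re-fires only its own slot:

```javascript
// Simplified sketch of the pipelined burst described above; not autocannon's
// actual code. All `pipelining` slots fire once up front, and each incoming
// response re-fires exactly one slot (rpi), so no redundant calls are made
// regardless of how long each individual request takes.
function makeClient (pipelining, fire) {
  let cer = 0 // current expected response
  // initial burst: fire every slot once
  for (let i = 0; i < pipelining; i++) fire(i)
  return {
    onResponse () {
      const rpi = (cer + pipelining) % pipelining // slot whose response completed
      fire(rpi)                                   // re-fire only that slot
      cer = (cer + 1) % pipelining
    }
  }
}

const fired = []
const client = makeClient(2, (slot) => fired.push(slot))
client.onResponse() // first response → re-fire slot 0
client.onResponse() // second response → re-fire slot 1
console.log(fired)  // [0, 1, 0, 1]
```

Note that with HTTP/1.1 pipelining the server must return responses in request order anyway, which is consistent with the remark that the existing logic does not care which response "finished" first.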
Fixes #315