
fix (#315): correctly increment this.cer #317

Merged 3 commits on Dec 18, 2020
8 changes: 6 additions & 2 deletions lib/httpClient.js
@@ -112,10 +112,14 @@ function Client (opts) {

this.emit('response', resp.headers.statusCode, resp.bytes, resp.duration, this.rate)

// Separating rpi ensures that after getting the response for the first request, we fire
// the first one again, because the second is already running right now (with pipelining=2).
// See the detailed explanation at https://github.com/mcollina/autocannon/pull/317#discussion_r543678831
const rpi = (this.cer + pipelining) % pipelining
Collaborator Author
rpi = request pipelining index
cer = current expected response

I separated rpi and cer because all pipelined requests are fired at once without waiting for a response, and the following happens (with pipelining = 2):

  • Both requests are fired
  • cer is 0
  • The response for the first comes back
  • After processing it, we call _doRequest with the incremented cer
  • The incremented cer is now 1
  • _doRequest tries to fire the same request again, resetting its startTime to now, which messes up the duration calculation when its response comes

Separating rpi ensures that after getting the response for the first request, we fire the first one again, because the second is already running right now.

Does it make sense? 😅
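A minimal sketch of the difference (editor's illustration, not code from this PR; it just mimics the two expressions shown in the diff, assuming pipelining = 2):

```js
// Illustrative sketch only -- not autocannon's implementation. It mimics the
// old and new bookkeeping from the diff to show why the old expression never
// advanced `cer` (issue #315) and which slot each version would fire again.
const pipelining = 2

function oldStyle (cer) {
  // `cer = cer++` evaluates the postfix increment (the old value) and then
  // assigns it back, so cer is left unchanged unless the ternary resets it.
  let c = cer
  c = c === pipelining - 1 ? 0 : c++
  return { nextCer: c, refired: c } // old code called _doRequest(this.cer)
}

function newStyle (cer) {
  const rpi = (cer + pipelining) % pipelining // slot whose response was just processed
  const nextCer = (cer + 1) % pipelining      // slot whose response we expect next
  return { nextCer, refired: rpi }            // new code calls _doRequest(rpi)
}

console.log(oldStyle(0)) // { nextCer: 0, refired: 0 } -- cer is stuck at 0
console.log(newStyle(0)) // { nextCer: 1, refired: 0 } -- refire the finished slot
```

In this sketch the old expression leaves cer stuck at 0, so the same slot keeps getting refired, while the new version always refires the slot whose response was just processed and advances cer separately.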

Owner
Totally, put it in a comment ;).

Collaborator
@GlenTiki Dec 16, 2020
All of this code was written at a time when I understood that function character length had a direct impact on whether it would be optimised. Newer versions of V8 are not limited by this, afaik. If I were refactoring this code now, I would recommend rewriting the variables as their full names and adding many, many more comments.

rpi = requestPipeliningIndex
cer = currentExpectedResponse

After looking back at this code for the first time in years, I was struggling to understand the variable meanings myself. I would even go as far as renaming many functions (even anonymous ones) to give the code more contextual meaning. I'd aim to make it readable/understandable first, then measure the impact. My biggest mistake when writing the code in autocannon was not doing that during the initial implementation, so I'm advising you to learn from that 😅

I am happy to refactor this in a follow up PR, if you'd like.

Contributor

@salmanm your explanation above makes sense to me, I just want to make sure there isn't an implicit assumption in this statement:

Separating rpi ensures that after getting the response for the first request, we fire the first one again, because the second is already running right now.

This is not assuming that requests will take similar times to execute, right? In other words, this supports the case whereby, with pipelining 2:

  • fire first request
  • fire second request
  • second response comes back before the first
  • first response comes back
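Purely as an editor's illustration of that scenario (it does not settle the question above), here is a sketch that traces the new bookkeeping from the diff over successive response events; note that the arithmetic is driven only by the order in which responses are processed, never by which request produced them:

```js
// Sketch only -- not autocannon code. Applies the new bookkeeping from the
// diff once per processed response, with pipelining = 2. Which upstream
// request produced the response never enters the calculation.
const pipelining = 2
let cer = 0 // current expected response

function onResponseProcessed () {
  const rpi = (cer + pipelining) % pipelining // slot that gets fired again
  cer = (cer + 1) % pipelining                // slot expected next
  return { refired: rpi, nowExpecting: cer }
}

// Two requests are in flight; process the responses as they are handled:
console.log(onResponseProcessed()) // { refired: 0, nowExpecting: 1 }
console.log(onResponseProcessed()) // { refired: 1, nowExpecting: 0 }
```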

resp.body = ''
resp.bytes = 0
this.cer = this.cer === pipelining - 1 ? 0 : this.cer++
this._doRequest(this.cer)
this.cer = (this.cer + 1) % pipelining
this._doRequest(rpi)
}

this._connect()
8 changes: 7 additions & 1 deletion test/helper.js
@@ -36,7 +36,13 @@ function startServer (opts) {
}

res.statusCode = statusCode
res.end(typeof body === 'function' ? body(req) : body)
const reply = () => res.end(typeof body === 'function' ? body(req) : body)

if (opts.delayResponse) {
setTimeout(reply, opts.delayResponse)
} else {
reply()
}
}

server.unref()
25 changes: 22 additions & 3 deletions test/httpClient.test.js
@@ -63,6 +63,25 @@ test('client calls a https server twice', (t) => {
})
})

test('client calculates correct duration when using pipelining', (t) => {
t.plan(4)

const lazyServer = helper.startServer({ delayResponse: 500 })
const opts = lazyServer.address()
opts.pipelining = 2
const client = new Client(opts)
let count = 0

client.on('response', (statusCode, length, duration) => {
t.equal(statusCode, 200, 'status code matches')
t.ok(duration > 500 && duration < 600)
Contributor
This is probably going to make it a brittle test at some point. I understand that it's tricky to test, but relying on timing in tests is something I would try to avoid. I'm not sure how to do it differently, though, apart from changing the way the test is written altogether.

Collaborator Author
We're doing time-based assertions like this in a lot of places in these tests, so I don't think this one is any less appropriate. But if you have a clever idea on how to improve it, I'd be happy to hear it.

Contributor
Not really. I think the only improvement to reduce possible brittleness is to assert only that it takes longer than 500ms, because setTimeout is guaranteed to trigger after, and not before, the specified time.

Collaborator Author
Our calculations were giving roughly half the expected duration, so dropping the check that it takes less than 600ms might not catch it if the calculations break in the future and start reporting 2x the time. We can increase the bound to 700ms if builds start to fail.

Contributor
fair enough!
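For reference, a sketch contrasting the two assertion styles discussed above (editor's illustration using tap, the framework these tests use, with a stand-in duration instead of a real client):

```js
// Illustrative sketch only -- contrasting the two assertion styles discussed
// above. Uses tap and a stand-in duration rather than a real autocannon client.
const test = require('tap').test

test('duration assertion styles (sketch)', (t) => {
  t.plan(2)
  const duration = 510 // stand-in for a measured response duration in ms

  // Lower bound only: setTimeout fires at or after the delay, so this cannot
  // be flaky, but it would not catch a regression that doubles the duration.
  t.ok(duration >= 500, 'duration includes the 500ms server delay')

  // Bounded check (the style used in the test above): also catches a duration
  // that is far too large, at the cost of possible flakiness on slow machines.
  t.ok(duration > 500 && duration < 600, 'duration stays close to the 500ms delay')
})
```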


if (++count === 2) {
client.destroy()
}
})
})

test('client calls a tls server without SNI servername twice', (t) => {
t.plan(4)

@@ -891,17 +910,17 @@ test('client invokes appropriate onResponse when using pipelining', (t) => {
t.deepEqual(responses, ['POST'])
break
case 2:
// 3rd was sent as 1st is finished, receiving 2st
// 3rd was sent as 1st is finished, receiving 2nd
t.same(client.getRequestBuffer().toString(), makeResponseFromBody({ server, method: 'PUT' }), 'current should be third request')
t.deepEqual(responses, ['POST', 'GET'])
break
case 3:
// 1st was resent, receiving 3st
// 1st was resent, receiving 3rd
t.same(client.getRequestBuffer().toString(), makeResponseFromBody({ server, method: 'POST' }), 'current should be first request')
t.deepEqual(responses, ['POST', 'GET', 'PUT'])
break
case 4:
// 2st was resent, receiving 1st
// 2nd was resent, receiving 1st
t.same(client.getRequestBuffer().toString(), makeResponseFromBody({ server, method: 'GET' }), 'current should be second request')
t.deepEqual(responses, ['POST', 'GET', 'PUT', 'POST'])
client.destroy()