Merge pull request #559 from hatoo/wait_ongoing_requests_after_deadline
Add `--wait-ongoing-requests-after-deadline` option
hatoo authored Aug 17, 2024
2 parents c3ff290 + 7b4732f commit 109c171
Showing 4 changed files with 143 additions and 58 deletions.
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,9 @@
# Unreleased

- Add `--wait-ongoing-requests-after-deadline` option
- Add `--db-url` option to save results to SQLite database
- Add `--dump-urls` option to debug random URL generation

# 1.4.5 (2024-05-29)

- Some performance improvements
128 changes: 86 additions & 42 deletions README.md
@@ -90,48 +90,92 @@ Arguments:
<URL> Target URL.

Options:
-n <N_REQUESTS> Number of requests to run. [default: 200]
  -c <N_CONNECTIONS>             Number of connections to run concurrently. You may need to increase the open-file limit for a larger `-c`. [default: 50]
-p <N_HTTP2_PARALLEL> Number of parallel requests to send on HTTP/2. `oha` will run c * p concurrent workers in total. [default: 1]
  -z <DURATION>                  Duration for the application to send requests. If duration is specified, n is ignored.
                                 When the duration is reached, ongoing requests are aborted and counted as "aborted due to deadline".
                                 Examples: -z 10s, -z 3m.
-q <QUERY_PER_SECOND> Rate limit for all, in queries per second (QPS)
--burst-delay <BURST_DURATION> Introduce delay between a predefined number of requests.
Note: If qps is specified, burst will be ignored
--burst-rate <BURST_REQUESTS> Rates of requests for burst. Default is 1
Note: If qps is specified, burst will be ignored
      --rand-regex-url           Generate URLs with the rand_regex crate; the dot character is disabled for each query, e.g. http://127.0.0.1/[a-z][a-z][0-9]. Currently, dynamic scheme, host, and port do not work well with keep-alive. See https://docs.rs/rand_regex/latest/rand_regex/struct.Regex.html for details of the syntax.
      --max-repeat <MAX_REPEAT>  A parameter for '--rand-regex-url'. The max_repeat parameter gives the maximum number of extra repeats the x*, x+ and x{n,} operators can expand to. [default: 4]
      --latency-correction       Correct latency to avoid the coordinated omission problem. Ignored if -q is not set.
--no-tui No realtime tui
-j, --json Print results as JSON
--fps <FPS> Frame per second for tui. [default: 16]
-m, --method <METHOD> HTTP method [default: GET]
-H <HEADERS> Custom HTTP header. Examples: -H "foo: bar"
  -t <TIMEOUT>                   Timeout for each request. Defaults to infinite.
-A <ACCEPT_HEADER> HTTP Accept Header.
-d <BODY_STRING> HTTP request body.
-D <BODY_PATH> HTTP request body from file.
-T <CONTENT_TYPE> Content-Type.
-a <BASIC_AUTH> Basic authentication, username:password
--http-version <HTTP_VERSION> HTTP version. Available values 0.9, 1.0, 1.1.
--http2 Use HTTP/2. Shorthand for --http-version=2
--host <HOST> HTTP Host header
--disable-compression Disable compression.
-r, --redirect <REDIRECT> Limit for number of Redirect. Set 0 for no redirection. Redirection isn't supported for HTTP/2. [default: 10]
--disable-keepalive Disable keep-alive, prevents re-use of TCP connections between different HTTP requests. This isn't supported for HTTP/2.
      --no-pre-lookup            Do *not* perform a DNS lookup at the beginning to cache it
--ipv6 Lookup only ipv6.
--ipv4 Lookup only ipv4.
--insecure Accept invalid certs.
--connect-to <CONNECT_TO> Override DNS resolution and default port numbers with strings like 'example.org:443:localhost:8443'
--disable-color Disable the color scheme.
--unix-socket <UNIX_SOCKET> Connect to a unix socket instead of the domain in the URL. Only for non-HTTPS URLs.
--vsock-addr <VSOCK_ADDR> Connect to a VSOCK socket using 'cid:port' instead of the domain in the URL. Only for non-HTTPS URLs.
--stats-success-breakdown Include a response status code successful or not successful breakdown for the time histogram and distribution statistics
-h, --help Print help
-V, --version Print version
-n <N_REQUESTS>
Number of requests to run. [default: 200]
-c <N_CONNECTIONS>
Number of connections to run concurrently. You may need to increase the open-file limit for a larger `-c`. [default: 50]
-p <N_HTTP2_PARALLEL>
Number of parallel requests to send on HTTP/2. `oha` will run c * p concurrent workers in total. [default: 1]
-z <DURATION>
Duration for the application to send requests. If duration is specified, n is ignored.
On HTTP/1, when the duration is reached, ongoing requests are aborted and counted as "aborted due to deadline".
You can change this behavior with the `-w` option.
Currently, on HTTP/2, ongoing requests are waited for when the duration is reached; the `-w` option is ignored.
Examples: -z 10s, -z 3m.
-w, --wait-ongoing-requests-after-deadline
When the duration is reached, ongoing requests are waited for instead of being aborted
-q <QUERY_PER_SECOND>
Rate limit for all, in queries per second (QPS)
--burst-delay <BURST_DURATION>
Introduce delay between a predefined number of requests.
Note: If qps is specified, burst will be ignored
--burst-rate <BURST_REQUESTS>
Rates of requests for burst. Default is 1
Note: If qps is specified, burst will be ignored
--rand-regex-url
Generate URLs with the rand_regex crate; the dot character is disabled for each query, e.g. http://127.0.0.1/[a-z][a-z][0-9]. Currently, dynamic scheme, host, and port do not work well with keep-alive. See https://docs.rs/rand_regex/latest/rand_regex/struct.Regex.html for details of the syntax.
--max-repeat <MAX_REPEAT>
A parameter for '--rand-regex-url'. The max_repeat parameter gives the maximum number of extra repeats the x*, x+ and x{n,} operators can expand to. [default: 4]
--dump-urls <DUMP_URLS>
Dump the generated target URLs <DUMP_URLS> times to debug --rand-regex-url
--latency-correction
Correct latency to avoid the coordinated omission problem. Ignored if -q is not set.
--no-tui
Disable the real-time TUI
-j, --json
Print results as JSON
--fps <FPS>
Frames per second for the TUI. [default: 16]
-m, --method <METHOD>
HTTP method [default: GET]
-H <HEADERS>
Custom HTTP header. Examples: -H "foo: bar"
-t <TIMEOUT>
Timeout for each request. Defaults to infinite.
-A <ACCEPT_HEADER>
HTTP Accept Header.
-d <BODY_STRING>
HTTP request body.
-D <BODY_PATH>
HTTP request body from file.
-T <CONTENT_TYPE>
Content-Type.
-a <BASIC_AUTH>
Basic authentication, username:password
--http-version <HTTP_VERSION>
HTTP version. Available values: 0.9, 1.0, 1.1.
--http2
Use HTTP/2. Shorthand for --http-version=2
--host <HOST>
HTTP Host header
--disable-compression
Disable compression.
-r, --redirect <REDIRECT>
Limit the number of redirects. Set 0 for no redirection. Redirection is not supported for HTTP/2. [default: 10]
--disable-keepalive
Disable keep-alive, preventing re-use of TCP connections between different HTTP requests. Not supported for HTTP/2.
--no-pre-lookup
Do *not* perform a DNS lookup at the beginning to cache it
--ipv6
Lookup only ipv6.
--ipv4
Lookup only ipv4.
--insecure
Accept invalid certs.
--connect-to <CONNECT_TO>
Override DNS resolution and default port numbers with strings like 'example.org:443:localhost:8443'
--disable-color
Disable the color scheme.
--unix-socket <UNIX_SOCKET>
Connect to a unix socket instead of the domain in the URL. Only for non-HTTPS URLs.
--stats-success-breakdown
Include a response status code successful or not successful breakdown for the time histogram and distribution statistics
--db-url <DB_URL>
Write successful requests to an SQLite database URL, e.g. test.db
-h, --help
Print help
-V, --version
Print version
```
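Several of the options above take structured string values. As one illustration, a `--connect-to` override of the form 'example.org:443:localhost:8443' could be split as below. This is a hypothetical sketch, not oha's actual parser, and it ignores IPv6 literals, which themselves contain colons:

```rust
// Hypothetical sketch of splitting a --connect-to spec of the form
// "REQUESTED_HOST:REQUESTED_PORT:TARGET_HOST:TARGET_PORT".
// Note: a real parser must also handle IPv6 literals, which contain ':'.
fn parse_connect_to(spec: &str) -> Option<(&str, u16, &str, u16)> {
    let parts: Vec<&str> = spec.split(':').collect();
    if parts.len() != 4 {
        return None;
    }
    Some((
        parts[0],
        parts[1].parse().ok()?, // requested port
        parts[2],
        parts[3].parse().ok()?, // target port
    ))
}

fn main() {
    assert_eq!(
        parse_connect_to("example.org:443:localhost:8443"),
        Some(("example.org", 443, "localhost", 8443))
    );
    assert_eq!(parse_connect_to("not-a-spec"), None);
    println!("ok");
}
```

Returning `None` for malformed specs lets the caller fall back to normal DNS resolution or reject the flag with an error message.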
# JSON output
53 changes: 38 additions & 15 deletions src/client.rs
@@ -1237,6 +1237,7 @@ pub async fn work_until(
dead_line: std::time::Instant,
n_connections: usize,
n_http2_parallel: usize,
wait_ongoing_requests_after_deadline: bool,
) {
let client = Arc::new(client);
if client.is_http2() {
@@ -1353,18 +1354,25 @@
tokio::time::sleep_until(dead_line.into()).await;
is_end.store(true, Relaxed);

for f in futures {
f.abort();
if let Err(e) = f.await {
if e.is_cancelled() {
report_tx.send(Err(ClientError::Deadline)).unwrap();
if wait_ongoing_requests_after_deadline {
for f in futures {
let _ = f.await;
}
} else {
for f in futures {
f.abort();
if let Err(e) = f.await {
if e.is_cancelled() {
report_tx.send(Err(ClientError::Deadline)).unwrap();
}
}
}
}
};
}
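All three `work_until*` variants share this drain logic: once the deadline fires, `is_end` is flipped and the worker futures are either awaited (`-w`) or aborted, with each cancelled task reported as `ClientError::Deadline`. The sketch below is a simplified, hypothetical model of that behavior using std threads in place of tokio tasks; the `run` function and the four-worker setup are illustrative, not oha's API:

```rust
use std::sync::atomic::{AtomicBool, Ordering::Relaxed};
use std::sync::{mpsc, Arc};
use std::thread;
use std::time::Duration;

enum Outcome {
    Ok,       // request completed and is counted
    Deadline, // "aborted due to deadline"
}

// Simplified model of oha's deadline handling: four workers each have one
// request in flight when the deadline fires. Returns (ok, aborted) counts.
fn run(wait_ongoing_requests_after_deadline: bool) -> (usize, usize) {
    let is_end = Arc::new(AtomicBool::new(false));
    let (tx, rx) = mpsc::channel::<Outcome>();

    let workers: Vec<_> = (0..4)
        .map(|_| {
            let is_end = Arc::clone(&is_end);
            let tx = tx.clone();
            let wait = wait_ongoing_requests_after_deadline;
            thread::spawn(move || {
                // Simulated in-flight request (much longer than the deadline).
                thread::sleep(Duration::from_millis(100));
                // Without -w the request is dropped at the deadline, like an
                // aborted tokio task whose result is never reported.
                if wait || !is_end.load(Relaxed) {
                    let _ = tx.send(Outcome::Ok);
                }
            })
        })
        .collect();

    // The deadline fires while all four requests are still in flight.
    thread::sleep(Duration::from_millis(10));
    is_end.store(true, Relaxed);

    if !wait_ongoing_requests_after_deadline {
        // Default path: report each ongoing request as a deadline error,
        // mirroring `report_tx.send(Err(ClientError::Deadline))`.
        for _ in &workers {
            let _ = tx.send(Outcome::Deadline);
        }
    }
    // With -w this join is the only wait; the requests finish and are counted.
    for w in workers {
        let _ = w.join();
    }
    drop(tx);

    let (mut ok, mut aborted) = (0, 0);
    for outcome in rx {
        match outcome {
            Outcome::Ok => ok += 1,
            Outcome::Deadline => aborted += 1,
        }
    }
    (ok, aborted)
}

fn main() {
    println!("-w:      {:?}", run(true));
    println!("default: {:?}", run(false));
}
```

With the flag set, every in-flight request is counted normally; without it, each one surfaces as an "aborted due to deadline" error, corresponding to the `e.is_cancelled()` branch in the real code.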

/// Run until dead_line by n workers limit to qps works in a second
#[allow(clippy::too_many_arguments)]
pub async fn work_until_with_qps(
client: Client,
report_tx: flume::Sender<Result<RequestResult, ClientError>>,
@@ -1373,6 +1381,7 @@ pub async fn work_until_with_qps(
dead_line: std::time::Instant,
n_connections: usize,
n_http2_parallel: usize,
wait_ongoing_requests_after_deadline: bool,
) {
let rx = match query_limit {
QueryLimit::Qps(qps) => {
@@ -1530,18 +1539,25 @@
tokio::time::sleep_until(dead_line.into()).await;
is_end.store(true, Relaxed);

for f in futures {
f.abort();
if let Err(e) = f.await {
if e.is_cancelled() {
report_tx.send(Err(ClientError::Deadline)).unwrap();
if wait_ongoing_requests_after_deadline {
for f in futures {
let _ = f.await;
}
} else {
for f in futures {
f.abort();
if let Err(e) = f.await {
if e.is_cancelled() {
report_tx.send(Err(ClientError::Deadline)).unwrap();
}
}
}
}
}
}

/// Run until dead_line by n workers limit to qps works in a second with latency correction
#[allow(clippy::too_many_arguments)]
pub async fn work_until_with_qps_latency_correction(
client: Client,
report_tx: flume::Sender<Result<RequestResult, ClientError>>,
@@ -1550,6 +1566,7 @@ pub async fn work_until_with_qps_latency_correction(
dead_line: std::time::Instant,
n_connections: usize,
n_http2_parallel: usize,
wait_ongoing_requests_after_deadline: bool,
) {
let (tx, rx) = flume::unbounded();
match query_limit {
@@ -1706,11 +1723,17 @@
tokio::time::sleep_until(dead_line.into()).await;
is_end.store(true, Relaxed);

for f in futures {
f.abort();
if let Err(e) = f.await {
if e.is_cancelled() {
report_tx.send(Err(ClientError::Deadline)).unwrap();
if wait_ongoing_requests_after_deadline {
for f in futures {
let _ = f.await;
}
} else {
for f in futures {
f.abort();
if let Err(e) = f.await {
if e.is_cancelled() {
report_tx.send(Err(ClientError::Deadline)).unwrap();
}
}
}
}
16 changes: 15 additions & 1 deletion src/main.rs
@@ -55,11 +55,20 @@ struct Opts {
n_http2_parallel: usize,
#[clap(
help = "Duration of application to send requests. If duration is specified, n is ignored.
When the duration is reached, ongoing requests are aborted and counted as \"aborted due to deadline\"
On HTTP/1, when the duration is reached, ongoing requests are aborted and counted as \"aborted due to deadline\".
You can change this behavior with the `-w` option.
Currently, on HTTP/2, ongoing requests are waited for when the duration is reached; the `-w` option is ignored.
Examples: -z 10s, -z 3m.",
short = 'z'
)]
duration: Option<Duration>,
#[clap(
help = "When the duration is reached, ongoing requests are waited",
short,
long,
default_value = "false"
)]
wait_ongoing_requests_after_deadline: bool,
#[clap(help = "Rate limit for all, in queries per second (QPS)", short = 'q')]
query_per_second: Option<usize>,
#[arg(
@@ -516,6 +525,7 @@ async fn main() -> anyhow::Result<()> {
start + duration.into(),
opts.n_connections,
opts.n_http2_parallel,
opts.wait_ongoing_requests_after_deadline,
)
.await
}
@@ -532,6 +542,7 @@ async fn main() -> anyhow::Result<()> {
start + duration.into(),
opts.n_connections,
opts.n_http2_parallel,
opts.wait_ongoing_requests_after_deadline,
)
.await
} else {
@@ -546,6 +557,7 @@ async fn main() -> anyhow::Result<()> {
start + duration.into(),
opts.n_connections,
opts.n_http2_parallel,
opts.wait_ongoing_requests_after_deadline,
)
.await
}
@@ -561,6 +573,7 @@ async fn main() -> anyhow::Result<()> {
start + duration.into(),
opts.n_connections,
opts.n_http2_parallel,
opts.wait_ongoing_requests_after_deadline,
)
.await
} else {
@@ -572,6 +585,7 @@ async fn main() -> anyhow::Result<()> {
start + duration.into(),
opts.n_connections,
opts.n_http2_parallel,
opts.wait_ongoing_requests_after_deadline,
)
.await
}
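On the CLI side, the new behavior is just a boolean flag threaded through every `work_until*` call site above. A tiny, hypothetical std-only sketch of recognizing it is shown below; the real code derives the flag with clap (`short`, `long`, `default_value = "false"`), so this hand-rolled check is illustrative only:

```rust
// Hypothetical sketch; oha actually derives this flag with clap.
fn wait_flag_set(args: &[&str]) -> bool {
    args.iter()
        .any(|a| *a == "-w" || *a == "--wait-ongoing-requests-after-deadline")
}

fn main() {
    // Defaults to false when the flag is absent.
    assert!(!wait_flag_set(&["oha", "-z", "10s", "http://localhost:3000"]));
    // Either the short or the long form enables it.
    assert!(wait_flag_set(&["oha", "-z", "10s", "-w", "http://localhost:3000"]));
    println!("ok");
}
```

Because the flag is a plain `bool` with a false default, existing invocations keep the abort-at-deadline behavior unchanged.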
