fix(bindings): ConfigPool should always yield associated connections #4708

Merged · 7 commits · Aug 19, 2024
4 changes: 4 additions & 0 deletions bindings/rust/bench/Cargo.toml
@@ -60,3 +60,7 @@
[[bench]]
name = "resumption"
harness = false

[[bench]]
name = "connection_creation"
harness = false
44 changes: 44 additions & 0 deletions bindings/rust/bench/benches/connection_creation.rs
@@ -0,0 +1,44 @@
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

Contributor
If you didn't already, it's probably a good idea to re-run this with a more recent allocator (e.g., jemalloc) and see how much that changes the results. Still, ~2 microseconds of difference feels to me like there is probably no reason to use a connection pool -- maybe we have plans that suggest we can get more out of pooling than we do today in this benchmark?

Contributor Author
Rerunning the benchmarks with jemalloc, I got the following results:

```
Connection Creation/connection reuse
                        time:   [751.49 ns 751.64 ns 751.78 ns]
                        change: [-3.3691% -3.3445% -3.3186%] (p = 0.00 < 0.05)
                        Performance has improved.

Connection Creation/connection allocation
                        time:   [2.2163 µs 2.2170 µs 2.2178 µs]
                        change: [-23.654% -23.616% -23.581%] (p = 0.00 < 0.05)
                        Performance has improved.
```

Essentially, jemalloc improved the connection allocation/free performance by ~23%.

I agree that ~2 microseconds is a relatively small improvement.

> maybe we have plans that suggest we can get more out of pooling than we do today in this benchmark?

I'm not aware of any, although we're certainly open to suggestions! My goal with this benchmark was to have some empirical data to help highlight the benefit or lack thereof.

My understanding is that connection reuse was one of the older features of s2n-tls, so perhaps the performance benefit was more significant with older platforms/allocators?
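
For anyone reproducing this: a minimal sketch of the allocator swap, assuming the `tikv-jemallocator` crate is added to the bench dependencies (the exact setup isn't shown in this thread):

```rust
// Assumed addition to bench/Cargo.toml: tikv-jemallocator = "0.5"
// Route all heap allocations in the benchmark binary through jemalloc
// instead of the system allocator.
use tikv_jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;
```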

Contributor
Connection reuse is specifically intended for situations where allocation is expensive. There would be no reason to reuse connections with a modern allocator like jemalloc. Unfortunately most of our customers don't use jemalloc ;)

use criterion::{criterion_group, criterion_main, Criterion};
use s2n_tls::{
    config::Config,
    connection::Builder,
    enums::Mode,
    pool::{ConfigPool, ConfigPoolBuilder, PooledConnection},
};
use std::sync::Arc;

fn connection_wipe(connection_pool: &Arc<ConfigPool>) {
    // get a connection from the pool
    let conn = PooledConnection::new(connection_pool).unwrap();
    // "drop" the connection, wiping it and returning it to the pool
    drop(conn);
}

fn connection_new(config: &Config) {
    let conn = config
        .build_connection(s2n_tls::enums::Mode::Server)
        .unwrap();
    drop(conn);
}

fn connection_creation(c: &mut Criterion) {
    let mut group = c.benchmark_group("Connection Creation");
    let config = s2n_tls::config::Builder::new().build().unwrap();
    let connection_pool = ConfigPoolBuilder::new(Mode::Server, config.clone()).build();

    group.bench_function("connection reuse", |b| {
        b.iter(|| connection_wipe(&connection_pool));
    });

    group.bench_function("connection allocation", |b| {
        b.iter(|| connection_new(&config));
    });

    group.finish();
}

criterion_group!(benches, connection_creation);
criterion_main!(benches);
84 changes: 80 additions & 4 deletions bindings/rust/s2n-tls/src/pool.rs
@@ -9,6 +9,13 @@
//! memory can be reused by calling
//! [Connection::wipe()](`crate::connection::Connection::wipe()`).
//!
//! On modern systems with reasonably performant allocators, the benefits of reusing
//! connections are reduced. Connection reuse is specifically intended for customers
//! who are sensitive to allocations or for whom allocations are more expensive.
//! Customers are encouraged to run their own benchmarks to determine the exact
//! performance benefit. As a starting point, a simple benchmark comparing allocation
//! against reuse can be found in `bench/benches/connection_creation.rs`.
//!
//! The [`Pool`] trait allows applications to define an
//! [Object pool](https://en.wikipedia.org/wiki/Object_pool_pattern) that
//! wipes and stores connections after they are dropped.
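
A minimal usage sketch of the pattern described above — adapted from the new benchmark in this PR, with unwraps for brevity:

```rust
use s2n_tls::{
    enums::Mode,
    pool::{ConfigPoolBuilder, PooledConnection},
};

fn main() {
    // Build a pool that yields server connections backed by one shared config.
    let config = s2n_tls::config::Builder::new().build().unwrap();
    let pool = ConfigPoolBuilder::new(Mode::Server, config).build();

    // Check a connection out of the pool; dropping it wipes the connection
    // and returns it to the pool for reuse.
    let conn = PooledConnection::new(&pool).unwrap();
    drop(conn);
}
```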
@@ -117,6 +124,12 @@ impl<T: Pool> Pool for Arc<T> {
    }
}

/// A pool of Connections. Not a pool of Configs.
///
/// Connections yielded from the pool will always be associated with `config`
/// from [ConfigPoolBuilder::new].
///
/// For discussion of the expected performance benefits, see [self].
#[derive(Debug)]
pub struct ConfigPool {
    mode: Mode,
@@ -150,6 +163,8 @@ impl ConfigPoolBuilder {
/// When the number of connections created exceeds the `max_pool_size`,
/// excess reclaimed connections are dropped instead of stored
/// in the pool.
///
/// If this is not specified, the max pool size defaults to `usize::MAX`.
    pub fn set_max_pool_size(&mut self, max_pool_size: usize) -> &mut Self {
        self.0.max_pool_size = max_pool_size;
        self
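
A short sketch of capping the pool with the builder method above (the cap value of 128 is arbitrary):

```rust
use s2n_tls::{enums::Mode, pool::ConfigPoolBuilder};

fn main() -> Result<(), s2n_tls::error::Error> {
    let config = s2n_tls::config::Builder::new().build()?;
    let mut builder = ConfigPoolBuilder::new(Mode::Server, config);
    // Reclaimed connections beyond 128 are dropped instead of stored in the pool.
    builder.set_max_pool_size(128);
    let _pool = builder.build();
    Ok(())
}
```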
@@ -185,9 +200,12 @@ impl Pool for ConfigPool {
             Err(_) => None,
         };
         let conn = match from_pool {
-            // Wiping a connection doesn't wipe the config,
-            // so we don't need to reset the config.
-            Some(conn) => conn,
+            // Wiping a connection doesn't wipe the config, but callbacks might
+            // have swapped the config, so we reset it anyway.
+            Some(mut conn) => {
+                conn.set_config(self.config.clone())?;
+                conn
+            }
             // Create a new connection with the stored config.
             None => self.config.build_connection(self.mode)?,
         };
@@ -211,7 +229,12 @@
 #[cfg(test)]
 mod tests {
     use super::*;
-    use crate::config::Config;
+    use crate::{
+        callbacks::ClientHelloCallback,
+        config::{self, Config},
+        connection, error,
+        testing::TestPair,
+    };

    #[test]
    fn config_pool_single_connection() -> Result<(), Box<dyn std::error::Error>> {
@@ -319,4 +342,57 @@ mod tests {

        Ok(())
    }

    // A yielded connection should always be associated with `config` in the
    // config pool, even if the connection's config is swapped by callbacks.
    #[test]
    fn yielded_connection_associated_config() -> Result<(), error::Error> {
        fn associated_config_has_ch_callback(conn: &connection::Connection) -> bool {
            conn.config()
                .unwrap()
                .context()
                .client_hello_callback
                .is_some()
        }

        struct ConfigSwapCallback(Config);
        impl ClientHelloCallback for ConfigSwapCallback {
            fn on_client_hello(
                &self,
                connection: &mut Connection,
            ) -> crate::callbacks::ConnectionFutureResult {
                connection.set_config(self.0.clone())?;
                Ok(None)
            }
        }

        let empty_config = config::Builder::new().build()?;

        let mut config_with_callback = config::Builder::new();
        let dead_end_callback = ConfigSwapCallback(empty_config);
        config_with_callback.set_client_hello_callback(dead_end_callback)?;
        let config_with_callback = config_with_callback.build()?;

        let config_with_pooled_connections =
            ConfigPoolBuilder::new(Mode::Server, config_with_callback).build();

        let server_from_pool = config_with_pooled_connections.take()?;
        let client = Connection::new_client();
        let mut pair = TestPair::from_connections(client, server_from_pool);

        assert!(associated_config_has_ch_callback(&pair.server));
        assert!(pair.handshake().is_err());
        assert!(!associated_config_has_ch_callback(&pair.server));

        config_with_pooled_connections.give(pair.server);

        let server_from_pool = config_with_pooled_connections.take()?;
        // the reused connection once again has the callback
        assert!(associated_config_has_ch_callback(&server_from_pool));
        config_with_pooled_connections.give(server_from_pool);

        // assert that only a single connection was being reused
        assert_eq!(config_with_pooled_connections.pool_size(), 1);
        Ok(())
    }
}
42 changes: 20 additions & 22 deletions bindings/rust/s2n-tls/src/testing.rs
@@ -220,24 +220,25 @@ impl TestPair {
     }
 
     pub fn from_configs(client_config: &config::Config, server_config: &config::Config) -> Self {
+        // import in smallest namespace to avoid collision with config::Builder
+        use crate::connection::Builder;
+
+        let client = client_config.build_connection(enums::Mode::Client).unwrap();
+        let server = server_config.build_connection(enums::Mode::Server).unwrap();
+
+        Self::from_connections(client, server)
+    }
+
+    pub fn from_connections(
+        mut client: connection::Connection,
+        mut server: connection::Connection,
+    ) -> Self {
         let client_tx_stream = Box::pin(Default::default());
         let server_tx_stream = Box::pin(Default::default());
 
-        let client = Self::register_connection(
-            enums::Mode::Client,
-            client_config,
-            &client_tx_stream,
-            &server_tx_stream,
-        )
-        .unwrap();
-
-        let server = Self::register_connection(
-            enums::Mode::Server,
-            server_config,
-            &server_tx_stream,
-            &client_tx_stream,
-        )
-        .unwrap();
+        Self::register_connection(&mut client, &client_tx_stream, &server_tx_stream).unwrap();
+
+        Self::register_connection(&mut server, &server_tx_stream, &client_tx_stream).unwrap();
 
         let io = TestPairIO {
             server_tx_stream,
@@ -254,14 +255,11 @@
     /// in unit tests. However, this will cause calls to `poll_shutdown` to return
     /// Poll::Pending until the blinding delay elapses.
     fn register_connection(
-        mode: enums::Mode,
-        config: &config::Config,
+        conn: &mut connection::Connection,
         send_ctx: &Pin<Box<LocalDataBuffer>>,
         recv_ctx: &Pin<Box<LocalDataBuffer>>,
-    ) -> Result<connection::Connection, error::Error> {
-        let mut conn = connection::Connection::new(mode);
-        conn.set_config(config.clone())?
-            .set_blinding(Blinding::SelfService)?
+    ) -> Result<(), error::Error> {
+        conn.set_blinding(Blinding::SelfService)?
             .set_send_callback(Some(Self::send_cb))?
             .set_receive_callback(Some(Self::recv_cb))?;
         unsafe {
@@ -282,7 +280,7 @@
                 recv_ctx as &LocalDataBuffer as *const LocalDataBuffer as *mut c_void,
             )?;
         }
-        Ok(conn)
+        Ok(())
     }

    /// perform a TLS handshake between the connections