feat(iroh-relay)!: use explicit key cache #3053
Conversation
# Conflicts:
#   Cargo.lock
#   iroh-base/Cargo.toml
#   iroh-base/src/key.rs
iroh-relay/Cargo.toml (outdated)

@@ -96,6 +96,7 @@ url = { version = "2.5", features = ["serde"] }
 webpki = { package = "rustls-webpki", version = "0.102" }
 webpki-roots = "0.26"
 data-encoding = "2.6.0"
+lru = "0.12.5"
Purely a drive-by comment, but I've recently had another reason to add an LRU cache to iroh-relay, so I think this dependency would have made it in here at some point anyway.
Documentation for this PR has been generated and is available at: https://n0-computer.github.io/iroh/pr/3053/docs/iroh/
Last updated: 2024-12-16T18:57:52Z
I think a default of maybe 128 or so would be good, as it is still very small but will improve perf for most basic endpoint usages.
128 for clients: this will help a bit but not use much memory.
1024 * 1024 for servers: this will consume 32 MB and work for up to 1 million connections.
Force-pushed from 81228c6 to 015b28a.
I made 128 the default for clients and 1024 * 1024 the default for the http server. Presumably for the server 32 megabytes (or 56 megabytes if you add the 2 ptrs that LruEntry adds and the key) is no big deal, and it will work for up to 1M connections.
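As a back-of-the-envelope check of those numbers (assumed per-entry layout, not measured; the constants below are illustrative):

```rust
fn main() {
    const ENTRIES: usize = 1024 * 1024;
    const KEY_BYTES: usize = 32; // one 32-byte public key per entry
    const LIST_PTRS: usize = 2 * 8; // assumed: the two LRU list pointers per entry

    // ~32 MiB of raw key material alone; list pointers and hash-map bookkeeping
    // push the total toward the ~56 MB mentioned above.
    println!("keys only:            {} MiB", (ENTRIES * KEY_BYTES) >> 20);
    println!("keys + list pointers: {} MiB", (ENTRIES * (KEY_BYTES + LIST_PTRS)) >> 20);
}
```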
Also add DerpCodec::test as a default for tests.

impl DerpCodec {
    #[cfg(test)]
    pub fn test() -> Self {
pub(crate)?
Does it matter, given this is cfg(test)?
I am having a hard time finding the place where this is configured for a non-server endpoint, can you point me to that?
It is configurable using the ClientBuilder, with a default of 128 as you suggested.

/// Create a new [`ClientBuilder`]
pub fn new(url: impl Into<RelayUrl>) -> Self {
    ClientBuilder {
        is_preferred: false,
        address_family_selector: None,
        is_prober: false,
        server_public_key: None,
        url: url.into(),
        protocol: Protocol::Relay,
        #[cfg(any(test, feature = "test-utils"))]
        insecure_skip_cert_verify: false,
        proxy_url: None,
        key_cache_capacity: 128,
    }
}

...

/// Set the capacity of the cache for public keys.
pub fn key_cache_capacity(mut self, capacity: usize) -> Self {
    self.key_cache_capacity = capacity;
    self
}

Is that not the right place?
It is, I was just being blind.
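As a side note, here is a hedged usage sketch of the builder option discussed above; the import paths, example URL, and chosen capacity are assumptions, not taken from the PR:

```rust
use iroh_relay::client::ClientBuilder; // assumed module path
use iroh_relay::RelayUrl; // assumed re-export; RelayUrl is defined in iroh-base

fn main() {
    let url: RelayUrl = "https://relay.example.org"
        .parse()
        .expect("valid relay URL");
    // The default key cache capacity is 128; a client expecting to see many
    // distinct peers through the relay could raise it explicitly.
    let _builder = ClientBuilder::new(url).key_cache_capacity(1024);
}
```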
Description
This wires up an explicit key cache to replace the implicit one that was removed in #3051.
The default for a key cache is Disabled. A disabled key cache has a size of 1 pointer and otherwise zero performance overhead. I have removed the Default instance for both KeyCache and DerpProtocol so you don't accidentally pass the default despite having a cache available.
We use the lru crate for the cache for now. Please comment if it should be something else.
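To make the design concrete, here is a minimal sketch of what such an lru-backed key cache can look like. It is not the PR's implementation: everything except the `lru` and `ed25519-dalek` APIs is made up for illustration, and the PR itself keys the cache by PublicKey (which is what the Borrow instance mentioned below is for). It also shows why the Disabled state costs only one pointer: an `Option<Arc<...>>` is pointer-sized thanks to niche optimization.

```rust
use std::num::NonZeroUsize;
use std::sync::{Arc, Mutex};

use ed25519_dalek::{SignatureError, VerifyingKey};
use lru::LruCache;

/// Illustrative key cache: maps raw key bytes to already-validated keys, so bytes
/// that were seen before skip the validation step. Disabled is represented as
/// `None`, making the whole struct the size of a single pointer.
#[derive(Clone)]
pub struct KeyCache(Option<Arc<Mutex<LruCache<[u8; 32], VerifyingKey>>>>);

impl KeyCache {
    /// A cache that stores nothing and adds no lookup overhead.
    pub fn disabled() -> Self {
        Self(None)
    }

    /// A bounded LRU cache, e.g. 128 entries for clients, 1024 * 1024 for servers.
    pub fn new(capacity: usize) -> Self {
        let capacity = NonZeroUsize::new(capacity).expect("capacity must be non-zero");
        Self(Some(Arc::new(Mutex::new(LruCache::new(capacity)))))
    }

    /// Returns the validated key, consulting the cache first when it is enabled.
    pub fn key_from_bytes(&self, bytes: [u8; 32]) -> Result<VerifyingKey, SignatureError> {
        let Some(cache) = &self.0 else {
            // Disabled: validate on every call.
            return VerifyingKey::from_bytes(&bytes);
        };
        let mut cache = cache.lock().expect("not poisoned");
        if let Some(key) = cache.get(&bytes) {
            return Ok(key.clone()); // cache hit: validation skipped
        }
        let key = VerifyingKey::from_bytes(&bytes)?; // validates the key bytes
        cache.put(bytes, key.clone());
        Ok(key)
    }
}
```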
Benchmarks have shown that conversion from a [u8; 32] to a VerifyingKey is relatively cheap, so the sole purpose of the cache is to avoid re-validating incoming public keys.
We add a Borrow instance to PublicKey so we can use it as a cache key.
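That Borrow instance is what lets a cache keyed by PublicKey be probed with the raw incoming bytes, before any key is constructed or validated. Below is a self-contained sketch with a stand-in PublicKey type (the real one lives in iroh-base and wraps a validated ed25519 key):

```rust
use std::borrow::Borrow;
use std::num::NonZeroUsize;

use lru::LruCache;

// Stand-in for iroh's PublicKey, only so the example compiles on its own.
#[derive(Clone, Hash, PartialEq, Eq)]
struct PublicKey([u8; 32]);

impl PublicKey {
    fn as_bytes(&self) -> &[u8; 32] {
        &self.0
    }
}

// With PublicKey: Borrow<[u8; 32]>, `lru` (like HashMap) accepts a &[u8; 32] for
// lookups in a cache keyed by PublicKey. Hash and Eq must agree between the key
// and its byte representation for this to be correct, which holds here.
impl Borrow<[u8; 32]> for PublicKey {
    fn borrow(&self) -> &[u8; 32] {
        self.as_bytes()
    }
}

fn main() {
    let mut cache: LruCache<PublicKey, ()> = LruCache::new(NonZeroUsize::new(128).unwrap());
    cache.put(PublicKey([7u8; 32]), ());

    // An incoming frame only gives us raw bytes; the cache can be checked without
    // building (and validating) a PublicKey first.
    let incoming: [u8; 32] = [7u8; 32];
    assert!(cache.contains(&incoming));
}
```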
Some performance measurements:
So just from validating incoming keys, without a cache you would be limited to ~250 000 msgs/s per thread. At a message size of 1 KiB that would be ~250 MB/s, which is not great.
With the cache, deserialization can do 14 000 000 msgs/s, which means that this is no longer a bottleneck.
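For context, those rates translate into rough per-operation costs (back-of-the-envelope only, not output from the PR's benchmarks):

```rust
fn main() {
    // ~250_000 validations/s per thread -> ~4 µs per uncached key validation
    println!("{:.1} µs per validation", 1e6 / 250_000.0);
    // ~14_000_000 msgs/s with the cache -> ~70 ns per cached lookup
    println!("{:.0} ns per cached lookup", 1e9 / 14_000_000.0);
    // 250_000 msgs/s at 1 KiB each -> ~244 MiB/s, i.e. the quoted ~250 MB/s
    println!("{:.0} MiB/s", 250_000.0 * 1024.0 / (1024.0 * 1024.0));
}
```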
Breaking Changes
Notes & open questions
Change checklist