Improve ConfigBuilder (#74)
* Correct ConfigBuilder

* Correct CI and doc links

* Correct CI
AgeManning authored Jul 7, 2021
1 parent b4ae852 commit f180c5b
Showing 10 changed files with 57 additions and 39 deletions.
13 changes: 12 additions & 1 deletion .github/workflows/build.yml
@@ -22,6 +22,8 @@ jobs:
         run: cargo test --all --release --tests
   test-all-features:
     runs-on: ubuntu-latest
+    container:
+      image: rust
     needs: cargo-fmt
     steps:
       - uses: actions/checkout@v2
@@ -33,6 +35,15 @@ jobs:
     runs-on: ubuntu-latest
     needs: cargo-fmt
     steps:
-      - uses: actions/checkout@v1
+      - uses: actions/checkout@v2
       - name: Lint code for quality and style with Clippy
         run: cargo clippy
+  check-rustdoc-links:
+    name: Check rustdoc intra-doc links
+    runs-on: ubuntu-latest
+    container:
+      image: rust
+    steps:
+      - uses: actions/checkout@v2
+      - name: Check rustdoc links
+        run: RUSTDOCFLAGS="--deny broken_intra_doc_links" cargo doc --verbose --workspace --no-deps --document-private-items
21 changes: 15 additions & 6 deletions src/config.rs
@@ -174,12 +174,6 @@ impl Discv5ConfigBuilder {
         self
     }
 
-    /// The timeout for an entire query. Any peers discovered before this timeout are returned.
-    pub fn query_timeout(&mut self, timeout: Duration) -> &mut Self {
-        self.config.query_timeout = timeout;
-        self
-    }
-
     /// The timeout after which a `QueryPeer` in an ongoing query is marked unresponsive.
     /// Unresponsive peers don't count towards the parallelism limits for a query.
     /// Hence, we may potentially end up making more requests to good peers.
@@ -188,6 +182,12 @@
         self
     }
 
+    /// The timeout for an entire query. Any peers discovered before this timeout are returned.
+    pub fn query_timeout(&mut self, timeout: Duration) -> &mut Self {
+        self.config.query_timeout = timeout;
+        self
+    }
+
     /// The number of retries for each UDP request.
     pub fn request_retries(&mut self, retries: u8) -> &mut Self {
         self.config.request_retries = retries;
@@ -294,6 +294,14 @@ impl Discv5ConfigBuilder {
         self
     }
 
+    /// Set the default duration for which nodes are banned. These timeouts are checked every 5 minutes,
+    /// so the precision will be to the nearest 5 minutes. If set to `None`, bans from the filter
+    /// will last indefinitely. Default is 1 hour.
+    pub fn ban_duration(&mut self, ban_duration: Option<Duration>) -> &mut Self {
+        self.config.ban_duration = ban_duration;
+        self
+    }
+
     /// A custom executor which can spawn the discv5 tasks. This must be a tokio runtime, with
     /// timing support.
     pub fn executor(&mut self, executor: Box<dyn Executor + Send + Sync>) -> &mut Self {
@@ -333,6 +341,7 @@ impl std::fmt::Debug for Discv5Config {
         let _ = builder.field("ip_limit", &self.ip_limit);
         let _ = builder.field("incoming_bucket_limit", &self.incoming_bucket_limit);
         let _ = builder.field("ping_interval", &self.ping_interval);
+        let _ = builder.field("ban_duration", &self.ban_duration);
         builder.finish()
     }
 }
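The reordered and newly added setters chain like the rest of the builder. A minimal usage sketch follows; `query_timeout`, `request_retries` and `ban_duration` are taken from the diff above, while the `new()` constructor and `build()` finalizer are conventional builder assumptions not shown in this commit.

use discv5::Discv5ConfigBuilder;
use std::time::Duration;

fn example_config() {
    // `new()` and `build()` are assumed; the three setters are from the diff.
    let config = Discv5ConfigBuilder::new()
        .query_timeout(Duration::from_secs(30))
        .request_retries(2)
        // `Some(..)` bans for the given duration; `None` bans indefinitely.
        .ban_duration(Some(Duration::from_secs(60 * 60)))
        .build();
    // The Debug impl patched above now prints the ban_duration field too.
    println!("{:?}", config);
}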
8 changes: 4 additions & 4 deletions src/discv5.rs
@@ -2,15 +2,15 @@
 //!
 //! This provides the main struct for running and interfacing with a discovery v5 server.
 //!
-//! A [`Discv5`] struct needs to be created either with an [`Executor`] specified in the
-//! [`Discv5Config`] via the [`Discv5ConfigBuilder`] or in the presence of a tokio runtime that has
+//! A [`Discv5`] struct needs to be created either with an [`crate::executor::Executor`] specified in the
+//! [`Discv5Config`] via the [`crate::Discv5ConfigBuilder`] or in the presence of a tokio runtime that has
 //! timing and io enabled.
 //!
-//! Once a [`Discv5`] struct has been created the service is started by running the [`start()`]
+//! Once a [`Discv5`] struct has been created the service is started by running the [`Discv5::start`]
 //! function with a UDP socket. This will start a discv5 server in the background listening on the
 //! specified UDP socket.
 //!
-//! The server can be shut down using the [`shutdown()`] function.
+//! The server can be shut down using the [`Discv5::shutdown`] function.
 
 use crate::{
     error::{Discv5Error, QueryError, RequestError},
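A rough lifecycle sketch matching the corrected links. Only `Discv5::start` and `Discv5::shutdown` are named by these docs; the constructor, its arguments, and the async signature of `start` are assumptions here.

use discv5::Discv5;
use std::net::SocketAddr;

// Assumes `start` is async (hence the tokio runtime requirement above) and
// that a `Discv5` instance has already been constructed elsewhere.
async fn run(mut discv5: Discv5, listen_addr: SocketAddr) {
    // Spawns the background server task listening on the given UDP socket.
    discv5.start(listen_addr).await.expect("discv5 server should start");

    // ... issue queries, add ENRs, etc. ...

    // Stops the background server task.
    discv5.shutdown();
}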
8 changes: 4 additions & 4 deletions src/handler/mod.rs
@@ -2,14 +2,14 @@
 //!
 //! The [`Handler`] is responsible for establishing and maintaining sessions with
 //! connected/discovered nodes. Each node, identified by its [`NodeId`], is associated with a
-//! [`Session`]. This service drives the handshakes for establishing the sessions and associated
+//! `Session`. This service drives the handshakes for establishing the sessions and associated
 //! logic for sending/requesting initial connections/ENR's to/from unknown peers.
 //!
 //! The [`Handler`] also manages the timeouts for each request and reports back RPC failures,
 //! and received messages. Messages are encrypted and decrypted using the
-//! associated [`Session`] for each node.
+//! associated `Session` for each node.
 //!
-//! An ongoing established connection is abstractly represented by a [`Session`]. A node that provides an ENR with an
+//! An ongoing established connection is abstractly represented by a `Session`. A node that provides an ENR with an
 //! IP address/port that doesn't match the source is considered invalid. A node that doesn't know
 //! their external contactable addresses should set their ENR IP field to `None`.
 //!
@@ -18,7 +18,7 @@
 //!
 //! # Usage
 //!
-//! Interacting with a handler is done via channels. A Handler is spawned using the [`spawn()`]
+//! Interacting with a handler is done via channels. A Handler is spawned using the [`Handler::spawn`]
 //! function. This returns an exit channel, a sending and receiving channel respectively. If the
 //! exit channel is dropped or fired, the handler task gets shut down.
 //!
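The exit/send/receive triple described above can be sketched with plain tokio primitives. Everything below is hypothetical scaffolding illustrating the channel pattern, not the real `Handler::spawn` signature.

use tokio::sync::{mpsc, oneshot};

// Hypothetical message types standing in for the handler's real ones.
struct HandlerIn(String);
struct HandlerOut(String);

// Spawns a toy handler task and returns (exit, sender, receiver), mirroring
// the triple the docs above describe.
fn spawn_toy_handler() -> (
    oneshot::Sender<()>,
    mpsc::Sender<HandlerIn>,
    mpsc::Receiver<HandlerOut>,
) {
    let (exit_tx, mut exit_rx) = oneshot::channel::<()>();
    let (in_tx, mut in_rx) = mpsc::channel::<HandlerIn>(16);
    let (out_tx, out_rx) = mpsc::channel::<HandlerOut>(16);

    tokio::spawn(async move {
        loop {
            tokio::select! {
                // Firing or dropping the exit channel shuts the task down.
                _ = &mut exit_rx => break,
                maybe = in_rx.recv() => match maybe {
                    Some(HandlerIn(msg)) => {
                        let _ = out_tx.send(HandlerOut(format!("ack: {msg}"))).await;
                    }
                    None => break, // all senders dropped
                },
            }
        }
    });

    (exit_tx, in_tx, out_rx)
}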
13 changes: 5 additions & 8 deletions src/kbucket.rs
@@ -33,14 +33,11 @@
 //! Pending entries are inserted lazily when their timeout is found to be expired
 //! upon querying the `KBucketsTable`. When that happens, the `KBucketsTable` records
 //! an [`AppliedPending`] result which must be consumed by calling [`take_applied_pending`]
-//! regularly and / or after performing lookup operations like [`entry`] and [`closest`].
+//! regularly and / or after performing lookup operations like [`entry`] and [`closest_keys`].
 //!
-//! [`entry`]: kbucket::KBucketsTable::entry
-//! [`closest`]: kbucket::KBucketsTable::closest
-//! [`AppliedPending`]: kbucket::AppliedPending
-//! [`KBucketsTable`]: kbucket::KBucketsTable
-//! [`take_applied_pending`]: kbucket::KBucketsTable::take_applied_pending
-//! [`PendingEntry`]: kbucket::PendingEntry
+//! [`entry`]: KBucketsTable::entry
+//! [`closest_keys`]: KBucketsTable::closest_keys
+//! [`take_applied_pending`]: KBucketsTable::take_applied_pending
 
 // [Implementation Notes]
 //
@@ -172,7 +169,7 @@ pub enum InsertResult<TNodeId> {
         /// disconnected and whose corresponding peer should be checked for connectivity
         /// in order to prevent it from being evicted. If connectivity to the peer is
         /// re-established, the corresponding entry should be updated with
-        /// [`NodeStatus::Connected`].
+        /// [`bucket::ConnectionState::Connected`].
         disconnected: Key<TNodeId>,
     },
     /// The node existed and the status was updated.
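The drain-after-lookup pattern described above is easiest to see with a toy stand-in; only the `take_applied_pending` name comes from the docs, and the types here are invented for illustration.

use std::collections::VecDeque;

// Toy stand-in for `KBucketsTable`, holding records of applied pending entries.
struct ToyTable {
    applied_pending: VecDeque<String>,
}

impl ToyTable {
    // Mirrors `KBucketsTable::take_applied_pending`: pop one recorded result.
    fn take_applied_pending(&mut self) -> Option<String> {
        self.applied_pending.pop_front()
    }
}

fn main() {
    let mut table = ToyTable {
        applied_pending: VecDeque::from([String::from("node-a")]),
    };
    // After lookup operations like `entry` or `closest_keys`, consume the
    // results regularly so applied insertions/evictions are observed.
    while let Some(applied) = table.take_applied_pending() {
        println!("pending entry applied: {applied}");
    }
}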
17 changes: 8 additions & 9 deletions src/kbucket/bucket.rs
@@ -171,8 +171,7 @@ pub enum InsertResult<TNodeId> {
         /// The key of the least-recently connected entry that is currently considered
         /// disconnected and whose corresponding peer should be checked for connectivity
         /// in order to prevent it from being evicted. If connectivity to the peer is
-        /// re-established, the corresponding entry should be updated with
-        /// [`NodeStatus::Connected`].
+        /// re-established, the corresponding entry should be updated with a connected status.
         disconnected: Key<TNodeId>,
     },
     /// The attempted entry failed to pass the filter.
@@ -498,23 +497,23 @@ where
     ///
     /// The status of the node to insert determines the result as follows:
     ///
-    /// * `NodeStatus::ConnectedIncoming` or `NodeStatus::ConnectedOutgoing`: If the bucket is full and either all nodes are connected
-    ///   or there is already a pending node, insertion fails with `InsertResult::Full`.
+    /// * [`ConnectionState::Connected`] for both directions: If the bucket is full and either all nodes are connected
+    ///   or there is already a pending node, insertion fails with [`InsertResult::Full`].
     ///   If the bucket is full but at least one node is disconnected and there is no pending
-    ///   node, the new node is inserted as pending, yielding `InsertResult::Pending`.
+    ///   node, the new node is inserted as pending, yielding [`InsertResult::Pending`].
     ///   Otherwise the bucket has free slots and the new node is added to the end of the
     ///   bucket as the most-recently connected node.
     ///
-    /// * `NodeStatus::Disconnected`: If the bucket is full, insertion fails with
-    ///   `InsertResult::Full`. Otherwise the bucket has free slots and the new node
+    /// * [`ConnectionState::Disconnected`]: If the bucket is full, insertion fails with
+    ///   [`InsertResult::Full`]. Otherwise the bucket has free slots and the new node
     ///   is inserted at the position preceding the first connected node,
     ///   i.e. as the most-recently disconnected node. If there are no connected nodes,
     ///   the new node is added as the last element of the bucket.
     ///
     /// The insert can fail if a provided bucket filter does not pass. If a node is attempted
-    /// to be inserted that doesn't pass the bucket filter, `InsertResult::FailedFilter` will be
+    /// to be inserted that doesn't pass the bucket filter, [`InsertResult::FailedFilter`] will be
     /// returned. Similarly, if the inserted node would violate the `max_incoming` value, the
-    /// result will return `InsertResult::TooManyIncoming`.
+    /// result will return [`InsertResult::TooManyIncoming`].
     pub fn insert(&mut self, node: Node<TNodeId, TVal>) -> InsertResult<TNodeId> {
         // Prevent inserting duplicate nodes.
         if self.position(&node.key).is_some() {
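A sketch of consuming these outcomes. The enum is a simplified local stand-in for `InsertResult<TNodeId>`: only `Full`, `Pending`, `FailedFilter` and `TooManyIncoming` are named by the doc comment above; the success variant's name and the `String` key are illustrative.

// Simplified stand-in for the documented insertion outcomes.
enum InsertOutcome {
    Inserted, // assumed success variant
    Full,
    Pending { disconnected: String }, // real code uses Key<TNodeId>
    FailedFilter,
    TooManyIncoming,
}

fn handle(outcome: InsertOutcome) {
    match outcome {
        InsertOutcome::Inserted => println!("added as most-recently connected"),
        InsertOutcome::Full => println!("bucket full: all connected, or a pending node exists"),
        InsertOutcome::Pending { disconnected } => {
            // Probe `disconnected`; if it answers, mark it connected so the
            // pending node doesn't evict it.
            println!("pending insert, check connectivity of {disconnected}");
        }
        InsertOutcome::FailedFilter => println!("rejected by the bucket filter"),
        InsertOutcome::TooManyIncoming => println!("would exceed max_incoming"),
    }
}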
4 changes: 2 additions & 2 deletions src/lib.rs
@@ -27,7 +27,7 @@
 //! * Handler - The protocol's communication is encrypted with `AES_GCM`. All node communication
 //!   undergoes a handshake, which results in a [`Session`]. [`Session`]'s are established when
 //!   needed and get dropped after a timeout. This section manages the creation and maintenance of
-//!   sessions between nodes and the encryption/decryption of packets from the socket. It is realised by the [`Handler`] struct and it runs in its own task.
+//!   sessions between nodes and the encryption/decryption of packets from the socket. It is realised by the [`handler::Handler`] struct and it runs in its own task.
 //! * Service - This section contains the protocol-level logic. In particular it manages the
 //!   routing table of known ENR's, topic registration/advertisement and performs various queries
 //!   such as peer discovery. This section is realised by the [`Service`] struct. This also runs in
@@ -115,7 +115,7 @@ pub mod permit_ban;
 mod query_pool;
 pub mod rpc;
 pub mod service;
-mod socket;
+pub mod socket;
 
 #[macro_use]
 extern crate lazy_static;
5 changes: 3 additions & 2 deletions src/query_pool/peers.rs
@@ -39,12 +39,13 @@
 //!
 //! A peer iterator can be finished prematurely at any time through `finish`.
 //!
-//! [`Finished`]: peers::PeersIterState::Finished
+//! [`Finished`]: QueryState::Finished
 
 pub mod closest;
 pub mod predicate;
 
-/// The state of the query reported by [`Query::next`].
+/// The state of the query reported by [`closest::FindNodeQuery::next`] or
+/// [`predicate::PredicateQuery::next`].
 #[derive(Debug, Clone, PartialEq, Eq)]
 pub enum QueryState<TNodeId> {
     /// The query is waiting for results.
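Query iterators reporting a `QueryState` are usually consumed in a driver loop. Below is a hedged sketch with a local mirror of the enum; only the `Finished` variant is confirmed by this diff, and `Waiting` is inferred from the doc comment above.

// Local mirror of the reported states, for illustration only.
enum QueryState<TNodeId> {
    Waiting(Option<TNodeId>), // a peer ready to contact, or nothing yet
    Finished,
}

fn drive(states: impl IntoIterator<Item = QueryState<u64>>) -> Vec<u64> {
    let mut contacted = Vec::new();
    for state in states {
        match state {
            QueryState::Waiting(Some(peer)) => contacted.push(peer), // send a request
            QueryState::Waiting(None) => { /* pending responses; yield and poll again */ }
            QueryState::Finished => break, // done, or `finish` was called early
        }
    }
    contacted
}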
2 changes: 1 addition & 1 deletion src/socket/filter/cache.rs
@@ -5,7 +5,7 @@
 //!
 //!
 //! |                              | Enforced Time |
-//! [x,x,x,x,x,x,x,x,x,x,x,x,x,x,x,x]
+//! \[x,x,x,x,x,x,x,x,x,x,x,x,x,x,x,x\]
 //!
 //! The enforced time represents one second's worth of elements. The target aims to limit the
 //! number of elements that can be inserted within the enforced time. The length of the list is
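The diagram describes a list holding one second's worth of received elements, with insertions admitted only while the count inside that window stays under a target. A self-contained sketch of that sliding-window idea (not the crate's actual cache implementation):

use std::collections::VecDeque;
use std::time::{Duration, Instant};

/// Sliding window: admit a new element only while the number of elements
/// received within the enforced time (one second) is below `target`.
struct WindowLimiter {
    times: VecDeque<Instant>,
    target: usize,
    enforced: Duration,
}

impl WindowLimiter {
    fn new(target: usize) -> Self {
        Self { times: VecDeque::new(), target, enforced: Duration::from_secs(1) }
    }

    fn try_insert(&mut self) -> bool {
        let now = Instant::now();
        // Evict elements that have fallen outside the enforced time.
        while self.times.front().map_or(false, |t| now.duration_since(*t) > self.enforced) {
            self.times.pop_front();
        }
        if self.times.len() < self.target {
            self.times.push_back(now);
            true
        } else {
            false // over the rate limit for this window
        }
    }
}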
5 changes: 3 additions & 2 deletions src/socket/filter/config.rs
@@ -4,8 +4,9 @@ use super::rate_limiter::RateLimiter;
 pub struct FilterConfig {
     /// Whether the packet filter is enabled or not.
     pub enabled: bool,
-    /// Set up various rate limits for unsolicited packets. See the [`rate_limiter`] module for
-    /// further details on constructing rate limits. See the `Default` implementation for default
+    /// Set up various rate limits for unsolicited packets. See the
+    /// [`crate::RateLimiterBuilder`] for
+    /// further details on constructing rate limits. See the [`Default`] implementation for default
     /// values.
     pub rate_limiter: Option<RateLimiter>,
     /// The maximum number of node-ids allowed per IP address before the IP address gets banned.
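Since the doc comment points readers at the `Default` implementation, individual fields can presumably be overridden on top of it. A minimal sketch; the import path is assumed from the now-public `socket` module, and no field names beyond those visible above are used.

use discv5::socket::FilterConfig; // assumed re-export path

fn enabled_filter() -> FilterConfig {
    FilterConfig {
        enabled: true,       // turn the packet filter on
        rate_limiter: None,  // skip unsolicited-packet rate limits entirely
        ..Default::default() // keep defaults for the remaining fields
    }
}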
