Conversation
I marked as "in-progress" because tests have to be written.
client/network/src/gossip.rs
Outdated
}

/// Abstraction around `NetworkService` that permits removing the `B` and `H` parameters.
trait AbstractNotificationSender {
I suggest removing this trait. The two type parameters `B` and `H` are only necessary in the private `spawn_task` method, and adding those is far less code than this trait plus impl. And your calls to `NetworkService::notification_sender` are not dynamically dispatched.
It's necessary for `DirectedGossipPrototype` to work. See your other comment.
client/network/src/gossip.rs
Outdated
/// Utility. Generic over the type of the messages. Holds a [`NetworkService`] and a [`PeerId`].
/// Provides a [`DirectedGossipPrototype::build`] function that builds a [`DirectedGossip`].
#[derive(Clone)]
pub struct DirectedGossipPrototype {
This is `pub`, so I assume this will be used from somewhere external? Could you explain this a little more please? Currently it is not in use, and the `DirectedGossip` and task construction could happen directly in `DirectedGossip::new`.
I admit that this type is specifically targeted at the Polkadot use case.
In Polkadot's code, the so-called network bridge communicates via messages with other subsystems. For example, when we connect, a `PeerConnected` message is sent.
The network bridge doesn't know the type of the networking messages that the various subsystems would manipulate (the `M` generic in this PR), and the various subsystems don't know what the `B` and `H` generics would be.
This `DirectedGossipPrototype` would easily plug into that scheme. We would send a `DirectedGossipPrototype` to the subsystems, without the need for the subsystem to know what `B` and `H` would be.
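To make the idea concrete, here is a self-contained toy sketch of the type erasure being described; every name in it (`Service`, `AbstractSender`, `Prototype`) is a simplified stand-in for illustration, not the actual code in this PR.

```rust
use std::marker::PhantomData;
use std::sync::Arc;

// Stands in for the `B`/`H`-generic network service.
struct Service<B, H>(PhantomData<(B, H)>);

// The generic-free surface that the prototype actually needs.
trait AbstractSender: Send + Sync {
    fn send(&self, bytes: Vec<u8>);
}

impl<B: Send + Sync + 'static, H: Send + Sync + 'static> AbstractSender for Service<B, H> {
    fn send(&self, _bytes: Vec<u8>) {
        // Forward to the real notification sender here.
    }
}

// No `B` or `H` in sight: this is what the bridge can hand to the subsystems.
#[derive(Clone)]
struct Prototype {
    service: Arc<dyn AbstractSender>,
}

fn main() {
    // Built where `B` and `H` are known (e.g. inside the network bridge)...
    let service: Arc<dyn AbstractSender> = Arc::new(Service::<u32, u64>(PhantomData));
    let prototype = Prototype { service };
    // ...and used where they are not (e.g. inside a subsystem).
    prototype.service.send(b"hello".to_vec());
}
```

This is only meant to show how the two generics can be hidden behind a trait object; in this PR that role is played by `AbstractNotificationSender`.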
Should Polkadot-specific code not be put in Polkadot?
> The network bridge doesn't know the type of the networking messages that the various subsystems would manipulate (the `M` generic in this PR), and the various subsystems don't know what the `B` and `H` generics would be.

I am actually changing this right now, so we may not need this type. Will push a PR shortly.
paritytech/polkadot#1535 - now the network bridge is aware of the specific information flowing over the network, and we may be able to avoid these prototypes. However, different subsystems will still want `Sender`s for different types. The sender for a specific variant should also encode as `|x| ValidationProtocolV1::SomeConcreteVariant(x).encode()`, which I think is covered by the encode-fn part of the API here.
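As a small illustration of that encode-fn shape, here is a hedged sketch assuming SCALE encoding via `parity-scale-codec` (with its derive feature); the `ValidationProtocolV1` enum and its variant are made up for the example.

```rust
use parity_scale_codec::Encode;

// Hypothetical wire enum for one subsystem; only here to show the closure's shape.
#[derive(Encode)]
enum ValidationProtocolV1 {
    SomeConcreteVariant(Vec<u8>),
}

fn main() {
    // The closure a subsystem would hand over as its encode-fn.
    let encode = |x: Vec<u8>| ValidationProtocolV1::SomeConcreteVariant(x).encode();
    let bytes = encode(vec![1, 2, 3]);
    println!("{} bytes on the wire", bytes.len());
}
```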
> Should Polkadot-specific code not be put in Polkadot?

To answer this specifically: the way I see it, this code isn't really Polkadot-specific.
This code provides a convenient-to-use high-level API on top of a low-level primitive, and people are free to use the convenient API if it suits them, or the low-level primitive if it doesn't.
I think that, in general, we want to be as "universal" as possible when designing low-level code, but as long as the lower-level primitives are exposed, higher-level code can be a bit more targeted towards certain use cases.
Co-authored-by: Toralf Wittner <tw@dtex.org>
@@ -16,6 +16,7 @@ targets = ["x86_64-unknown-linux-gnu"]
prost-build = "0.6.1"

[dependencies]
async-std = { version = "1.6.2", features = ["unstable"] }
Do I understand correctly that this uses the `unstable` feature to have `futures-timer`? If so, should we not do that across the entire crate for all usage of `futures-timer`?
No, it is for `Condvar`. It is unfortunate that we have to depend on the `unstable` feature, but I couldn't find any crate other than `async-std` that provides an asynchronous `Condvar`.
If this is something we want to avoid, one can use a channel as a condvar.
use futures::channel::mpsc::channel;
use futures::{SinkExt, StreamExt};

fn main() {
    let (mut tx, mut rx) = channel(0);
    let producer = async move {
        // Produce something and put it to place X.
        // Signal the consumer that that something can be obtained.
        let _ = tx.send(()).await;
    };
    let consumer = async move {
        // Wait for the signal. Maybe include a timeout like you do today with 10 secs.
        let _ = rx.next().await;
        // Access place X.
    };
    // Drive both to completion (added here so the snippet compiles and runs).
    futures::executor::block_on(futures::future::join(producer, consumer));
}
Problem is that this can't be used in the `Drop` implementations, and thus cleanup would depend on the 10 sec timeout.
> Problem is that this can't be used in the Drop implementations [...]

It does not always need to. When the `Sender` is dropped, the receiver will notice and the task can terminate. Instead of `let _ = rx.next().await;` one would write `if rx.next().await.is_some() { ... }`. In the `Drop` impl of `QueueGuard` one can use `Sender::try_send`.
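A minimal sketch of that suggestion, with hypothetical names (`QueueGuard` and the task body below are stand-ins, not the PR's code): the guard wakes the background task through `try_send` in its `Drop` impl, and the task stops on its own once every sender is gone.

```rust
use futures::channel::mpsc;
use futures::StreamExt;

struct QueueGuard {
    wake_up: mpsc::Sender<()>,
}

impl Drop for QueueGuard {
    fn drop(&mut self) {
        // `try_send` never blocks; if the channel is already full the task is
        // already scheduled to wake up, so losing the extra signal is fine.
        let _ = self.wake_up.try_send(());
    }
}

async fn background_task(mut rx: mpsc::Receiver<()>) {
    // `None` means every sender (and thus every guard) was dropped: clean up and stop.
    while rx.next().await.is_some() {
        // Drain the shared queue and send messages out here.
    }
}

fn main() {
    let (tx, rx) = mpsc::channel(1);
    let guard = QueueGuard { wake_up: tx };
    drop(guard); // wakes the task once, then lets it terminate
    futures::executor::block_on(background_task(rx));
}
```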
I find it a bit weird to use a channel, which involves an additional `Arc`, `Mutex` and `Vec`, just to wake up a task, rather than a `Waker`.
But I have now also tried using a `Waker`, and the implementation is considerably more tricky and difficult to read, because of potential race conditions, the need for manual polling within an `async` function, and having to implement your own `Waker`.
Before going on, I'd like to understand what is wrong with the `Condvar` solution, as a `Condvar` is exactly the tool that is designed for this specific job.
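For reference, here is a minimal self-contained sketch of the `Condvar` pattern being defended, assuming `async-std` with its `unstable` feature enabled; the queue and message types are placeholders, not the PR's actual implementation.

```rust
use async_std::sync::{Condvar, Mutex};
use std::sync::Arc;

fn main() {
    // Shared state: the message queue plus the condvar used to wake the flusher.
    let shared = Arc::new((Mutex::new(Vec::<String>::new()), Condvar::new()));

    let pusher = {
        let shared = shared.clone();
        async move {
            let (queue, condvar) = &*shared;
            queue.lock().await.push("hello".to_string());
            condvar.notify_one();
        }
    };

    let flusher = {
        let shared = shared.clone();
        async move {
            let (queue, condvar) = &*shared;
            let mut guard = queue.lock().await;
            // Wait until there is something to send; `wait` releases the lock meanwhile.
            while guard.is_empty() {
                guard = condvar.wait(guard).await;
            }
            // Send the queued messages out here.
            guard.clear();
        }
    };

    async_std::task::block_on(futures::future::join(flusher, pusher));
}
```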
/// Pushes a message to the queue, or discards it if the queue is full.
///
/// The returned `Future` is expected to be ready quite quickly.
pub async fn queue_or_discard(&self, message: M)
Why offer two ways to `queue_or_discard`? Is writing `self.lock_queue().await.push_or_discard(message);` as a user not fine as well?
This API could return a `Result<(), M>` or similar as well.
It's a convenient shortcut.
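For illustration, here is a toy, self-contained version of the two equivalent call styles under discussion; the types below are stand-ins for the ones in `client/network/src/gossip.rs`, not the real implementation.

```rust
use async_std::sync::{Mutex, MutexGuard};

struct QueueSender<M> {
    queue: Mutex<Vec<M>>,
    capacity: usize,
}

struct QueueLock<'a, M> {
    inner: MutexGuard<'a, Vec<M>>,
    capacity: usize,
}

impl<'a, M> QueueLock<'a, M> {
    fn push_or_discard(&mut self, message: M) {
        if self.inner.len() < self.capacity {
            self.inner.push(message);
        }
    }
}

impl<M> QueueSender<M> {
    async fn lock_queue(&self) -> QueueLock<'_, M> {
        QueueLock { inner: self.queue.lock().await, capacity: self.capacity }
    }

    // The convenience shortcut under discussion.
    async fn queue_or_discard(&self, message: M) {
        self.lock_queue().await.push_or_discard(message);
    }
}

fn main() {
    async_std::task::block_on(async {
        let sender = QueueSender { queue: Mutex::new(Vec::new()), capacity: 2 };
        // The two call styles are equivalent:
        sender.queue_or_discard(1).await;
        sender.lock_queue().await.push_or_discard(2);
        sender.queue_or_discard(3).await; // discarded: the queue is already full
        assert_eq!(sender.queue.lock().await.len(), 2);
    });
}
```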
impl<'a, M: Send + 'static> QueueLock<'a, M> {
    /// Pushes a message to the queue, or discards it if the queue is full.
    pub fn push_or_discard(&mut self, message: M) {
Same here. It would be good to get back the `message` if the queue is full.
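As a sketch of what "getting the message back" could look like (purely illustrative, not the PR's API), the push could return a `Result<(), M>`:

```rust
// Illustrative free function; in the PR the equivalent method lives on the lock type.
fn push_or_discard<M>(queue: &mut Vec<M>, capacity: usize, message: M) -> Result<(), M> {
    if queue.len() < capacity {
        queue.push(message);
        Ok(())
    } else {
        // Hand the message back so the caller can decide what to do with it.
        Err(message)
    }
}

fn main() {
    let mut queue = Vec::new();
    assert!(push_or_discard(&mut queue, 1, "a").is_ok());
    assert_eq!(push_or_discard(&mut queue, 1, "b"), Err("b"));
}
```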
But what would you do with the message that is returned? Put it in another queue?
The only sane things you can do when the queue is full are either discarding the message or force-closing the connection.
Additionally, what if the connection with the remote is closed? Are we supposed to return the message then as well? If so, that is very problematic, because we can't detect this.
> The only sane things you can do when the queue is full are either discarding the message or force-closing the connection.

Isn't `retain` another option? We could decide to do that based on the type of the message we are trying to send.
And is there really no sane thing you can do? Why can't you wait for space to appear in the queue, or something like that?
The way I see `retain` is that you'd call it all the time, even if there is space in the buffer. As far as I can tell, when a message is obsolete, there is no point in leaving it in the queue anyway.

> Why can't you wait for space to appear in the queue or something like that?

The entire reason for this API to exist is to remove the need for any waiting. See also this paragraph.
Ultimately, there has to be code somewhere that holds some sort of `HashMap<PeerId, DirectedGossip>`. If it needs to send a message to one of the peers and its buffer is full, then it shouldn't wait for this peer, but instead continue its processing.
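A toy sketch of that broadcast pattern, with `PeerId` and the sender type as stand-ins for the real ones: the owner of the map pushes where there is room and never blocks on a slow peer.

```rust
use std::collections::HashMap;

type PeerId = u64;

struct DirectedGossip {
    queue: Vec<Vec<u8>>,
    capacity: usize,
}

impl DirectedGossip {
    fn push_or_discard(&mut self, message: Vec<u8>) {
        if self.queue.len() < self.capacity {
            self.queue.push(message);
        } // otherwise the message is dropped; the broadcast loop is never blocked
    }
}

fn broadcast(peers: &mut HashMap<PeerId, DirectedGossip>, message: &[u8]) {
    for sender in peers.values_mut() {
        sender.push_or_discard(message.to_vec());
    }
}

fn main() {
    let mut peers = HashMap::new();
    peers.insert(1, DirectedGossip { queue: Vec::new(), capacity: 1 });
    peers.insert(2, DirectedGossip { queue: vec![b"old".to_vec()], capacity: 1 });
    broadcast(&mut peers, b"hello"); // peer 2's queue is full, so its copy is dropped
}
```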
It would be nice to have a way to wait, although I agree that the API for that should be used sparingly so you don't degrade to the performance of the slowest peer. There are cases where we don't want to drop messages, for instance when responding to a validator's request.
> There are cases where we don't want to drop messages

I believe that everything that would fall in this category should be covered by request-response protocols.
Rather than adding a way to wait, I could restore the `push_unbounded` method that I removed after a review.
I have addressed all concerns, except for the […]
@rphmeier Do you confirm that the […]
Yes, after paritytech/polkadot#1537 is in
bot rebase
Rebasing.
Any comment on @mxinden's suggestion to use a channel instead of a `Condvar`?
If we do indeed want to not depend on […]
bot rebase
Rebasing.
Friendly reviewing ping
This looks good to me overall.
I am still in favor of using a channel instead of an unstable `Condvar` (see https://github.com/paritytech/substrate/pull/6803/files#r466192207). As far as I can tell, this would also remove the need for the `stop_task` `AtomicBool` (see https://github.com/paritytech/substrate/pull/6803/files#r466298691). I wouldn't block merging the pull request for that reason, though.
client/network/src/gossip.rs
Outdated
mod tests;

/// Notifications sender for a specific combination of network service, peer, and protocol.
pub struct QueueSender<M> {
Suggested change:
- pub struct QueueSender<M> {
+ pub struct QueuedSender<M> {

Not sure whether this is a typo or intentional. In case it is the former, I would prefer `QueuedSender`.
/// is in total control of the buffer. Messages can only ever be sent out after the [`QueueGuard`]
/// is dropped.
#[must_use]
pub struct QueueGuard<'a, M> {
👍
impl<'a, M: Send + 'static> QueueLock<'a, M> {
    /// Pushes a message to the queue, or discards it if the queue is full.
    pub fn push_or_discard(&mut self, message: M) {
Co-authored-by: Max Inden <mail@max-inden.de>
bot merge
Trying merge.
Provides the primitive that will make it possible to tackle paritytech/polkadot#1453.

Adds a new `gossip` module in `sc-network`, containing `DirectedGossip`. The way it works is what is described here.

Here's how you would plug it into Polkadot, regarding paritytech/polkadot#1453:

- The network bridge creates a `DirectedGossipPrototype`. This type doesn't have any template parameter.
- The `NetworkBridgeEvent::PeerConnected` enum variant should contain a new field of type `DirectedGossipPrototype` (or some abstraction of it). The bridge passes the newly-created object there.
- The subsystem that receives the `PeerConnected` event turns the prototype into an actual `DirectedGossip`, passing the protocol name, queue size, and messages type.
- Messages are then sent using `push_or_discard`, `push_unbounded`, or `retain`.
- When a `PeerDisconnected` message is received, the subsystem discards the `DirectedGossip` for this peer.
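To tie the list above together, here is a hedged, self-contained sketch of that flow; `NetworkBridgeEvent`, `Prototype`, and `DirectedGossip` below are simplified stand-ins rather than the actual Polkadot/Substrate types.

```rust
use std::collections::HashMap;
use std::marker::PhantomData;

type PeerId = u64;

// No template parameters, so it can cross subsystem boundaries untyped.
#[derive(Clone)]
struct Prototype;

struct DirectedGossip<M>(PhantomData<M>);

impl Prototype {
    fn build<M>(self, _protocol: &str, _queue_size: usize) -> DirectedGossip<M> {
        DirectedGossip(PhantomData)
    }
}

impl<M> DirectedGossip<M> {
    fn push_or_discard(&mut self, _message: M) {}
}

enum NetworkBridgeEvent {
    PeerConnected(PeerId, Prototype),
    PeerDisconnected(PeerId),
}

fn main() {
    let mut peers: HashMap<PeerId, DirectedGossip<String>> = HashMap::new();
    let events = vec![
        NetworkBridgeEvent::PeerConnected(7, Prototype),
        NetworkBridgeEvent::PeerDisconnected(7),
    ];
    for event in events {
        match event {
            NetworkBridgeEvent::PeerConnected(peer, proto) => {
                // The subsystem turns the prototype into a sender for its message type.
                let mut sender = proto.build::<String>("/my/protocol/1", 16);
                sender.push_or_discard("hello".to_string());
                peers.insert(peer, sender);
            }
            NetworkBridgeEvent::PeerDisconnected(peer) => {
                // Discard the sender for this peer.
                peers.remove(&peer);
            }
        }
    }
}
```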