Defensive ops API design #221
Please, go ahead :)
So after some experimentation, I've hit the following blockers:
TLDR is that this can't be done in a backwards-compatible way (afaict on a Friday night 😛). If I'm missing something, please let me know! Otherwise, if this refactor is deemed worthy of a breaking change (which IMO it is, lest these traits get even larger and more unwieldy) then I can open a PR with the changes and relevant fixes (mostly just replacing …).
Arithmetic crates like num_traits should support big nums, so they should pass by reference. You might as well pass by value if you're just trying to be even more precise than regular Rust about how you handle u64s.
IMO, the traits should take their parameters by reference, since the traits should be usable by types that aren't `Copy`.
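As a minimal sketch of what that could look like (the trait name, method signature, and `u64` impl below are illustrative assumptions, not the actual `frame_support` definitions):

```rust
/// Hypothetical by-reference variant of a defensive saturating-add trait.
/// Borrowing `rhs` keeps the trait usable for big-integer types that are
/// expensive (or impossible) to copy.
pub trait DefensiveSaturatingAddRef: Sized {
    fn defensive_saturating_add(&self, rhs: &Self) -> Self;
}

impl DefensiveSaturatingAddRef for u64 {
    fn defensive_saturating_add(&self, rhs: &Self) -> Self {
        // A real implementation would go through frame_support's defensive
        // machinery; here we just assert in debug builds and saturate otherwise.
        self.checked_add(*rhs).unwrap_or_else(|| {
            debug_assert!(false, "defensive_saturating_add overflowed");
            u64::MAX
        })
    }
}
```

For `u64` the borrow costs nothing, while a big-integer type could implement the same trait without cloning its operands.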
* Add Prometheus and Grafana to Docker Compose
* Expose relay's Prometheus metrics port
* Use Docker network references instead of localhost. When containers are on the same network they don't communicate over localhost; they instead refer to each other by container name.
* Move dashboard components into deployment folder
* Update folder structure for Grafana and Prometheus config files. The new folder structure more closely matches the defaults expected by Grafana and Prometheus, which allows us to clean up the paths in our docker-compose file a bit.
* Add documentation about Prometheus and Grafana
* Refer to Prometheus server instead of node
This PR updates litep2p to the latest release.

- `KademliaEvent::PutRecordSucess` is renamed to fix word typo
- `KademliaEvent::GetProvidersSuccess` and `KademliaEvent::IncomingProvider` are needed for bootnodes on DHT work and will be utilized later

### Added

- kad: Providers part 8: unit, e2e, and `libp2p` conformance tests ([#258](paritytech/litep2p#258))
- kad: Providers part 7: better types and public API, public addresses & known providers ([#246](paritytech/litep2p#246))
- kad: Providers part 6: stop providing ([#245](paritytech/litep2p#245))
- kad: Providers part 5: `GET_PROVIDERS` query ([#236](paritytech/litep2p#236))
- kad: Providers part 4: refresh local providers ([#235](paritytech/litep2p#235))
- kad: Providers part 3: publish provider records (start providing) ([#234](paritytech/litep2p#234))

### Changed

- transport_service: Improve connection stability by downgrading connections on substream inactivity ([#260](paritytech/litep2p#260))
- transport: Abort canceled dial attempts for TCP, WebSocket and Quic ([#255](paritytech/litep2p#255))
- kad/executor: Add timeout for writting frames ([#277](paritytech/litep2p#277))
- kad: Avoid cloning the `KademliaMessage` and use reference for `RoutingTable::closest` ([#233](paritytech/litep2p#233))
- peer_state: Robust state machine transitions ([#251](paritytech/litep2p#251))
- address_store: Improve address tracking and add eviction algorithm ([#250](paritytech/litep2p#250))
- kad: Remove unused serde cfg ([#262](paritytech/litep2p#262))
- req-resp: Refactor to move functionality to dedicated methods ([#244](paritytech/litep2p#244))
- transport_service: Improve logs and move code from tokio::select macro ([#254](paritytech/litep2p#254))

### Fixed

- tcp/websocket/quic: Fix cancel memory leak ([#272](paritytech/litep2p#272))
- transport: Fix pending dials memory leak ([#271](paritytech/litep2p#271))
- ping: Fix memory leak of unremoved `pending_opens` ([#274](paritytech/litep2p#274))
- identify: Fix memory leak of unused `pending_opens` ([#273](paritytech/litep2p#273))
- kad: Fix not retrieving local records ([#221](paritytech/litep2p#221))

See release changelog for more details: https://github.com/paritytech/litep2p/releases/tag/v0.8.0

cc @paritytech/networking

Signed-off-by: Alexandru Vasile <alexandru.vasile@parity.io>
Co-authored-by: Dmitry Markin <dmitry@markin.tech>
The current API design for the `DefensiveSaturating` trait isn't great - it's unnecessarily monolithic. It should be split into several smaller traits, and then `DefensiveSaturating` can be retained as a super trait so this won't be a breaking change.

I propose `DefensiveSaturating{Add,Sub,Mul}`, and then `DefensiveSaturatingInc` as a super trait of `DefensiveSaturatingAdd + One` and `DefensiveSaturatingDec` as a super trait of `DefensiveSaturatingSub + One`. This will allow users more granular control over the functionality they need, and removes this issue: https://github.com/paritytech/substrate/blob/master/frame/support/src/traits/misc.rs#L367-L368 (why does a type need to be `Mul` to use `defensive_saturating_sub`?)

I'm happy to implement this if it's accepted 🙂
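A rough sketch of how the proposed split could look; names and signatures here are illustrative assumptions mirroring the proposal, not the actual `frame_support` code, and `num_traits::One` stands in for the `One` bound:

```rust
use num_traits::One; // assumed stand-in for the `One` trait used by the real code

// Per-operation traits, so a bound only pulls in the operation it needs.
pub trait DefensiveSaturatingAdd: Sized {
    fn defensive_saturating_add(self, other: Self) -> Self;
}
pub trait DefensiveSaturatingSub: Sized {
    fn defensive_saturating_sub(self, other: Self) -> Self;
}
pub trait DefensiveSaturatingMul: Sized {
    fn defensive_saturating_mul(self, other: Self) -> Self;
}

// Umbrella trait plus blanket impl, intended to keep existing
// `T: DefensiveSaturating` bounds compiling.
pub trait DefensiveSaturating:
    DefensiveSaturatingAdd + DefensiveSaturatingSub + DefensiveSaturatingMul
{
}
impl<T> DefensiveSaturating for T where
    T: DefensiveSaturatingAdd + DefensiveSaturatingSub + DefensiveSaturatingMul
{
}

// Increment/decrement only require the matching operation plus `One`,
// so a type no longer needs `Mul` just to call `defensive_saturating_sub`.
pub trait DefensiveSaturatingInc: DefensiveSaturatingAdd + One {
    fn defensive_saturating_inc(&mut self);
}
pub trait DefensiveSaturatingDec: DefensiveSaturatingSub + One {
    fn defensive_saturating_dec(&mut self);
}
```

With a split like this, a pallet that only subtracts can write `T: DefensiveSaturatingSub` instead of pulling in the full trait and its `Mul`/`Add` requirements.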