Middleware for metrics #576
Conversation
```rust
/// Create a new `MethodSink` with a limited response size
pub fn new_with_limit(tx: mpsc::UnboundedSender<String>, max_response_size: u32) -> Self {
```
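To illustrate what a size-limited sink might look like, here is a minimal, hedged sketch. It is not jsonrpsee's actual implementation: the real `MethodSink` wraps tokio's `mpsc::UnboundedSender`, while a std channel and the `send_raw` method below stand in so the example runs without external crates.

```rust
use std::sync::mpsc;

/// Sketch of a sink that refuses to send responses larger than a limit.
struct MethodSink {
	tx: mpsc::Sender<String>,
	max_response_size: u32,
}

impl MethodSink {
	fn new_with_limit(tx: mpsc::Sender<String>, max_response_size: u32) -> Self {
		Self { tx, max_response_size }
	}

	/// Returns `false` (and sends nothing) when the response exceeds the limit.
	fn send_raw(&self, response: String) -> bool {
		if response.len() as u32 > self.max_response_size {
			return false;
		}
		self.tx.send(response).is_ok()
	}
}

fn main() {
	let (tx, rx) = mpsc::channel();
	let sink = MethodSink::new_with_limit(tx, 10);
	assert!(sink.send_raw("short".to_string()));
	assert!(!sink.send_raw("way too long to fit".to_string()));
	assert_eq!(rx.recv().unwrap(), "short");
}
```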
nice 👍
Co-authored-by: David <dvdplm@gmail.com>
* Fix try-build tests
* Add a middleware setter and an example
* Actually add the example
* Grumbles
* Use an atomic
* Set middleware with a constructor instead
* Resolve a todo
* Update ws-server/src/server.rs
* Update ws-server/src/server.rs
* Update ws-server/src/server.rs

Co-authored-by: Maciej Hirsz <1096222+maciejhirsz@users.noreply.github.com>
I like this, it's just a matter of adding the remaining code.
```rust
@@ -76,6 +76,10 @@ pub use jsonrpsee_types as types;
#[cfg(any(feature = "http-server", feature = "ws-server"))]
pub use jsonrpsee_utils::server::rpc_module::{RpcModule, SubscriptionSink};

/// TODO: (dp) any reason not to export this? narrow the scope to `jsonrpsee_utils::server`?
#[cfg(any(feature = "http-server", feature = "ws-server"))]
```
You can remove these feature guards; the utils crate itself has conditional compilation that should be sufficient, but I think you have to change utils to be a non-optional dependency for that to work.
utils/src/server/middleware.rs
Outdated
```rust
/// Intended to carry timestamp of a request, for example `std::time::Instant`. How the middleware
/// measures time, if at all, is entirely up to the implementation.
type Instant: Send + Copy;
```
I wonder whether `Instant` is the right name; I'd rather assume very little about how it will be used and call it something like `OnRequest` perhaps. In the example, for instance, it's used as a counter, which seems like a perfectly reasonable use case too :)

I also wonder about allowing `Clone` rather than `Copy`; would this have any performance implications if the type is copyable anyway? It would perhaps open up some additional use cases. (On the other hand, we can relax restrictions later, and there aren't many use cases for an `on_request` that knows nothing about the request anyway, so I don't mind either way really!)
Naming this one is hard, like really hard. Maybe it should be `X` to communicate its "whatever"-nature... (only half-joking).

Or we could take the opposite stance, remove it, and force `std::time::Instant` – after all, we're not committing to this being the final shape of middleware; all we need is enough to cover substrate's needs.
How about `Track`? `type Track = std::time::Instant` reads alright; `type Track = usize` is ok-ish. It probably violates all kinds of API guidelines, but this one is a bit special. :/
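To make the discussion concrete, here is a hedged sketch of a trait in this shape, with the associated type used both ways mentioned above. The method names `on_request`/`on_response` are assumptions approximating the PR's trait, not its exact API.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::time::Instant;

/// Sketch: the associated type carries whatever per-request state
/// the implementation wants threaded from request to response.
trait Middleware: Send + Sync {
	/// Intended to carry e.g. a request timestamp, but any `Send + Copy` type works.
	type Instant: Send + Copy;

	fn on_request(&self) -> Self::Instant;
	fn on_response(&self, started: Self::Instant);
}

/// One impl measures wall-clock time...
struct Timing;
impl Middleware for Timing {
	type Instant = Instant;
	fn on_request(&self) -> Instant { Instant::now() }
	fn on_response(&self, started: Instant) {
		let _elapsed = started.elapsed(); // record to metrics here
	}
}

/// ...while another uses it as a plain counter, as in the example mentioned above.
struct Counting(AtomicUsize);
impl Middleware for Counting {
	type Instant = usize;
	fn on_request(&self) -> usize { self.0.fetch_add(1, Ordering::Relaxed) }
	fn on_response(&self, _n: usize) {}
}

fn main() {
	let c = Counting(AtomicUsize::new(0));
	let first = c.on_request();
	let second = c.on_request();
	assert_eq!((first, second), (0, 1));
}
```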
utils/src/server/middleware.rs
Outdated
```rust
}

impl<A, B> Middleware for (A, B)
where
```
Is it worth a macro impl to cover tuples up to some certain size (just for better ergonomics for >2 middlewares)?
Probably, but not until we know we need it imo. :)
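For the record, such a macro is a small amount of code. This is a hedged sketch under a simplified stand-in trait (single method, no associated type), not the PR's actual trait; the idea is just that one `macro_rules!` arm can generate the tuple impl for any arity by fanning each call out to every element.

```rust
/// Simplified stand-in for the middleware trait.
trait Middleware {
	fn on_request(&self);
}

/// Generate a `Middleware` impl for a tuple of the given type parameters.
macro_rules! impl_middleware_for_tuple {
	($($m:ident),+) => {
		impl<$($m: Middleware),+> Middleware for ($($m,)+) {
			fn on_request(&self) {
				// Reuse the type-parameter names as bindings when destructuring.
				#[allow(non_snake_case)]
				let ($($m,)+) = self;
				$($m.on_request();)+
			}
		}
	};
}

impl_middleware_for_tuple!(A, B);
impl_middleware_for_tuple!(A, B, C);

use std::cell::Cell;

/// Toy middleware that counts calls.
struct Count(Cell<u32>);
impl Middleware for Count {
	fn on_request(&self) { self.0.set(self.0.get() + 1); }
}

fn main() {
	let trio = (Count(Cell::new(0)), Count(Cell::new(0)), Count(Cell::new(0)));
	trio.on_request();
	assert_eq!((trio.0).0.get(), 1);
	assert_eq!((trio.2).0.get(), 1);
}
```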
* Add an example of adding multiple middlewares.
* Update examples/multi-middleware.rs
* Update examples/Cargo.toml

Co-authored-by: Maciej Hirsz <1096222+maciejhirsz@users.noreply.github.com>
* Move `Middleware` to jsonrpsee-types
* Move Middleware trait to jsonrpsee-types
* Add some docs.
…ee into mh-metrics-middleware
A strange thing from me; on my M1 mac, when I run

Does anybody else run into this? Running latest stable rust: rustc 1.56.1 (59eed8a2a 2021-11-01)

Edit: I can see that the "time" feature is enabled for that crate, and so that enum should def exist. I guess for some reason it's being ignored when I run

Edit 2: Ah, I can see in my Cargo.lock that it's resolved to using

Edit 3: It looks like
nice catch, let's pin
```rust
};
return self.send_error(id, err);
} else {
return self.send_error(id, ErrorCode::InternalError.into());
```
I assume that the max size error is an error based on user input being too large? I wonder whether the `tracing::error!("Error serializing response: {:?}", err);` line should instead be above this InternalError bit, so we only get output for the unexpected errors?
It means that the `output` from an executed call exceeded the max limit, while the "request" itself was below the limit. So it depends what you mean by "user": the registered callback created a response bigger than the max limit.
Yes, or downgrade the error above to `warn` or something, but the `InternalError` path "should" be infallible/a bug IIRC.
```rust
}
};

middleware.on_disconnect();

// Drive all running methods to completion
method_executors.await;
```
Just a note that we should perhaps add a comment or something to make it clear to our future selves that we shouldn't return early before this await runs?
Looking great to me :) Love the examples and doc comments. Just a couple of small notes (the main things being ensuring tokio ^1.8 is used, and a comment about early returning before that `await`, if that seems sensible).
```rust
rx_batch.close();
let results = collect_batch_response(rx_batch).await;

if let Err(err) = sink.send_raw(results) {
	tracing::error!("Error sending batch response to the client: {:?}", err)
```
Hmm, if `send_raw` will only fail once the server is closed, is that why we don't register `on_response` when that fails? Otherwise, we seem to register failed responses too.
LGTM, clean
Still a bunch of loose ends:

Other than that it should be functional and is backwards compatible. I needed to add an extra boxed future in the async method call pipeline that I may be able to remove down the line (depending on how much I want to mess with the `FutureDriver`).