Implement gRPC remote endpoint #29
Conversation
@hawkw Most relevant: I'm not sure about the two-thread design in the subscriber. I think it makes sense, but at the same time it feels like a hack. Feedback would be appreciated!
In case people want to try it out: run the example from …
This is a great prototype, and I'm eager to spend some time playing around with it! I left comments on potential next steps, as well as a few minor code style nits.
It's great to see everything come together. Good work!
```rust
impl Store {
    fn new_span(&mut self, span: NewSpan) {
        // Update id mapping for span, see `Store` documentation
        self.id_map.insert(
```
Since span IDs are guaranteed by the subscriber not to be repeated, I think we should probably check that `id_map.insert` returns `None`, and warn if it doesn't --- that would indicate that the subscriber sent a `NewSpan` message for a span ID that already existed.
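A minimal, runnable sketch of that check. The names `InternalId` and `map_new_span` are illustrative, not the PR's actual API: a `Some` return from `insert` means the subscriber sent a `NewSpan` for an ID that was already mapped.

```rust
use std::collections::HashMap;

// Illustrative alias; the real type lives in the PR's subscriber code.
type InternalId = usize;

fn map_new_span(
    id_map: &mut HashMap<u64, InternalId>,
    span_id: u64,
    internal: InternalId,
) -> Option<InternalId> {
    let previous = id_map.insert(span_id, internal);
    if let Some(old) = previous {
        // Duplicate: warn instead of silently remapping.
        eprintln!(
            "warning: span id {} was already mapped to internal id {}",
            span_id, old
        );
    }
    previous
}

fn main() {
    let mut id_map = HashMap::new();
    assert_eq!(map_new_span(&mut id_map, 1, 0), None); // fresh ID
    assert_eq!(map_new_span(&mut id_map, 1, 7), Some(0)); // reused ID warns
}
```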
The subscriber won't guarantee that - the console itself upholds this invariant, exactly with this insert. This means that when a span ID is reused by the subscriber, this insert will return `Some(span)`. I should stress that in the documentation.

On the other hand, since we don't transmit `DropSpan`s, the mapping will not eagerly remove a now-unused ID. That's fine, though: the subscriber will also not reclaim that memory, but will store the index/ID for reuse.
```rust
#[derive(Clone)]
/// A factory for ConsoleForwarder
pub struct BackgroundThreadHandle {
```
Since `tracing` will often be used from asynchronous applications using futures, what do you think about implementing the server as a background task, rather than as a background thread? The background work could be a future spawned on an executor, and we could use futures-aware channels rather than `std` channels to communicate with it. In cases where users are not using futures and do want to have a dedicated background thread, we could wrap that future with a dedicated executor that runs in its own thread.

This approach is okay for now as a prototype, but we should consider changing this eventually.
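For reference, a minimal sketch of the current thread-based shape under discussion (names are illustrative, not the PR's actual API): a cloneable handle owns the sender half of a channel, and a dedicated background thread drains the receiver. In the futures-based variant suggested above, the drain loop would become a spawned task and the `std` channel a futures-aware one.

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative stand-in for the PR's BackgroundThreadHandle idea.
#[derive(Clone)]
struct Handle {
    tx: mpsc::Sender<String>,
}

fn spawn_background() -> (Handle, thread::JoinHandle<Vec<String>>) {
    let (tx, rx) = mpsc::channel();
    // The worker exits once every Handle clone (sender) is dropped.
    let worker = thread::spawn(move || rx.iter().collect());
    (Handle { tx }, worker)
}

fn main() {
    let (handle, worker) = spawn_background();
    handle.tx.send("event".to_string()).unwrap();
    drop(handle); // close the channel so the worker's loop ends
    assert_eq!(worker.join().unwrap(), vec!["event".to_string()]);
}
```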
This is what I meant by "feels like a hack".

I very much agree that this is the right approach; the honest answer is that I'm not fluent enough with futures/tokio to pull this off in one go. I will change this in a follow-up, but I will need some one-on-one time with the tokio docs :D
> I very much agree that this is the right approach; the honest answer is that I'm not fluent enough with futures/tokio to pull this off in one go. I will change this in a follow-up, but I will need some one-on-one time with the tokio docs :D
I'm also happy to provide guidance on this when you get to that point!
```rust
//!
//! # Thread overview:
//!
//! ```schematic,ignore
```
nice diagram! :)
```rust
#[derive(Default)]
pub(crate) struct Registry {
    pub spans: Vec<Span>,
    pub reusable: Vec<SpanId>,
```
A potential future optimization to think about (not necessary for now) is to store the free list of reusable IDs in the same `Vec` as active spans. We could use an enum like

```rust
enum SpanState {
    Active(Span),
    Free { next_id: SpanId },
}
```

and then track the most recently freed ID as a single `SpanId` field on the registry struct. When a span in the `spans` vec is closed, we replace it with the `SpanState::Free` variant with the `next_id` field set to the previous value of `Registry.next_id`, and set `Registry.next_id` to the closed span's ID. This would probably use space more efficiently. The `tracing-fmt` crate uses a similar strategy.
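To make the mechanism concrete, here is a runnable sketch of that free list, with `String` standing in for the real `Span` type and `Option<SpanId>` used for "no free slot" (a slight adaptation of the single-`SpanId` field described above). Closed slots are threaded into a linked list stored in the same `Vec` as active spans.

```rust
type SpanId = usize;

#[derive(Debug, PartialEq)]
enum SpanState {
    Active(String), // stand-in for the real `Span` type
    Free { next_free: Option<SpanId> },
}

#[derive(Default)]
struct Registry {
    spans: Vec<SpanState>,
    next_free: Option<SpanId>, // head of the free list
}

impl Registry {
    fn insert(&mut self, span: String) -> SpanId {
        match self.next_free {
            // Reuse the most recently freed slot, popping the free list.
            Some(id) => {
                if let SpanState::Free { next_free } = &self.spans[id] {
                    self.next_free = *next_free;
                }
                self.spans[id] = SpanState::Active(span);
                id
            }
            // No free slot: append a new one.
            None => {
                self.spans.push(SpanState::Active(span));
                self.spans.len() - 1
            }
        }
    }

    fn close(&mut self, id: SpanId) {
        // The closed slot becomes the new head of the free list.
        self.spans[id] = SpanState::Free { next_free: self.next_free };
        self.next_free = Some(id);
    }
}

fn main() {
    let mut registry = Registry::default();
    let a = registry.insert("a".to_string());
    let _b = registry.insert("b".to_string());
    registry.close(a);
    // The freed slot is reused; the Vec does not grow.
    assert_eq!(registry.insert("c".to_string()), a);
    assert_eq!(registry.spans.len(), 2);
}
```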
Honestly, I'm impressed!

If we change `InternalId`'s `usize` to `NonZeroUsize`, we might also enable niche optimizations, making the enum tag basically free, while saving `3 (Vec) - 1 (next_id) = 2` words.
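A quick check of the niche claim: because the all-zero bit pattern is never a valid `NonZeroUsize`, the compiler can use it as the discriminant, so no separate tag word is needed. (The `Option` sizes below are a stand-in for the hypothetical `SpanState` over a `NonZeroUsize`-based `InternalId`.)

```rust
use std::mem::size_of;
use std::num::NonZeroUsize;

fn main() {
    // With the niche, Option fits in a single word:
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());
    // Without it, the tag (plus alignment padding) costs a second word:
    assert_eq!(size_of::<Option<usize>>(), 2 * size_of::<usize>());
}
```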
Copy pasta error

Co-Authored-By: Eliza Weisman <eliza@buoyant.io>
FYI: if you put the example in an …

Eventually it might be nice to document how to run the console & examples in a README...
Some more notes that I hope will be helpful.
```rust
    updated: bool,
    id_counter: usize,
    id_map: HashMap<u64, InternalId>,
```
It looks like both `id_counter` and `id_map` are only written or read from in methods on `StoreHandle`, and never read from the UI thread. As far as I can tell, the UI thread will never need to touch them, as the thread that receives updates from the server handles the ID translation?

In that case, we should probably move them out of the shared state so that they don't need to be locked.
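A hypothetical sketch of that split (names mirror the diff above but the shapes are illustrative): the ID-translation state lives only on the update thread's handle, while the shared, locked store keeps just what the UI thread actually reads.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type InternalId = usize;

#[derive(Default)]
struct Store {
    updated: bool, // shared: read by the UI thread
}

#[derive(Default)]
struct StoreHandle {
    shared: Arc<Mutex<Store>>,
    // Owned by the update thread alone, so no lock is required:
    id_counter: usize,
    id_map: HashMap<u64, InternalId>,
}

impl StoreHandle {
    fn translate(&mut self, span_id: u64) -> InternalId {
        if let Some(&id) = self.id_map.get(&span_id) {
            return id;
        }
        let id = self.id_counter;
        self.id_counter += 1;
        self.id_map.insert(span_id, id);
        // Only the genuinely shared part needs the lock.
        self.shared.lock().unwrap().updated = true;
        id
    }
}

fn main() {
    let mut handle = StoreHandle::default();
    assert_eq!(handle.translate(10), 0);
    assert_eq!(handle.translate(11), 1);
    assert_eq!(handle.translate(10), 0); // already mapped
    assert!(handle.shared.lock().unwrap().updated);
}
```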
```rust
/// Locks and updates the underlying `Store`
pub fn handle(&self, variant: Variant) {
    let mut store = self.0.lock().unwrap();
```
Looking at the various functions this method calls, it seems like they have different requirements about being able to read and write different parts of the registry. For example, the `event` method is the only one that accesses the `store.events` field, while the `store.spans` field is written and read in other methods. It seems to me that these should be locked independently, so that the UI thread can read events while we are writing to the spans, and so on.

It's probably a good idea to introduce more granular locking on the different parts of the registry. For example, the `Store::event` method doesn't need to lock the `store.spans` field. We could also consider using a read-write lock on the …
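A hedged sketch of that per-field locking (types are illustrative, with `String` standing in for the real event and span types): events and spans sit behind separate locks, and spans use a `RwLock` so the UI thread can read concurrently with other readers.

```rust
use std::sync::{Mutex, RwLock};

#[derive(Default)]
struct Store {
    events: Mutex<Vec<String>>, // only `event` touches this
    spans: RwLock<Vec<String>>, // many readers, one writer
}

impl Store {
    fn event(&self, ev: String) {
        // Recording an event never touches the span lock.
        self.events.lock().unwrap().push(ev);
    }

    fn new_span(&self, span: String) {
        self.spans.write().unwrap().push(span);
    }

    fn span_count(&self) -> usize {
        // A UI-thread read; can run while `event` holds the events lock.
        self.spans.read().unwrap().len()
    }
}

fn main() {
    let store = Store::default();
    store.event("an event".to_string());
    store.new_span("a span".to_string());
    assert_eq!(store.span_count(), 1);
    assert_eq!(store.events.lock().unwrap().len(), 1);
}
```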
This patch got rather big 😄

Suggested review order:

1. `/proto/tracing.proto`
2. `/subscriber/*`, doccomments expand on the design
3. `/console/*`

Closes tokio-rs/console#9