This repository has been archived by the owner on May 23, 2023. It is now read-only.

Scope of OpenTracing, scope of API repositories, RPC or Tracing #33

Open
kriskowal opened this issue Jan 13, 2016 · 30 comments

@kriskowal
Contributor

We in Uber’s RPC group are embarking on a project to generalize context propagation from incoming requests to outgoing requests. We would like to start a discussion about the architecture and scope of OpenTracing.

@yurishkuro tells us that this undertaking is within the charter of the OpenTracing working group, which appears to be:

“Provide a library in various languages that has common types for trace spans and contexts, and the logic for propagating incoming spans to outgoing spans. Provide a sufficiently general library that bindings for arbitrary clients, servers, and trace reporters can be developed independently and swapped as need arises.”

Instrumenting specific clients (e.g., the Node.js built-in HTTP client), servers (e.g., the built-in Go HTTP server), and trace reporters (e.g., one in Python for Zipkin) is out of scope but some will likely be contributed to the commons anyway. Wire formats for transmitting spans and annotations appear to be out of scope.

Apart from dealing with trace propagation, the tracing library provides an affordance for simple context propagation, copying attributes from incoming spans to outgoing spans. For generalized RPC context propagation, copying from incoming to outgoing requests will not be sufficient for most attributes, and will be undesirable for performance in other cases, since each hop would bloat fan-out requests.

  • Request properties like an auth token and auth params should be copied from incoming to outgoing.
  • Request context properties like “caller”, “callee”, and “method” should not be propagated, but instead provided either at call sites of the instrumented RPC library or inferred from the RPC IDL (in our case, Thrift). For completeness, these request properties include retry flags, speculative execution configuration, and shard keys for handle-or-forward stateful services.
  • Request context properties like “timeout” need to be converted to a “deadline” on arrival, copied to outgoing request contexts, then converted back to “timeout” at the time of each attempt, bearing in mind that the IDL for the outgoing endpoint might have a shorter timeout than what remains.
  • Response context properties like invalidated cache keys would need to be merged.

A context propagation library will also be in a good position to forward cancellation to the downstream call graph and perhaps even abort local work.

We have a few options for stacking this architecture. The tracing library as it stands provides a tightly coupled solution for general context propagation down the call stack, but does not address cases like “timeout” propagation. The common case is that each kind of request context needs its own propagation logic.

  • We could separate the concern of trace propagation entirely from request context propagation, and use the trace library to serve one of many separable concerns (“cancellation”, “timeouts”, “auth”, “tracing”).
  • We could merge all of these concerns into a single tracing library. This is attractive if these concerns are sufficiently common.
  • We could retain the “basic” context propagation in the tracing library and layer the additional concerns in another library. This is less attractive because it is difficult to distinguish headers (like auth headers) that should be implicitly propagated from those that should be dropped. The currently proposed solution is a trace-context header prefix for all implicitly propagated headers, but auth token and param headers would typically not carry this prefix.
  • We could add annotations to headers that indicate one of several propagation strategies should be used for that header, e.g., copy, decrement, merge, append.
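For concreteness, here is a minimal sketch of this last option, per-header propagation strategies, including the timeout-to-deadline conversion from the earlier list. All names are hypothetical and this is not a proposal for the OpenTracing API itself:

```python
import time

# Hypothetical strategy functions; each decides how one header moves from the
# incoming request to an outgoing request.
def copy_value(value, state):
    return value                                   # e.g. an auth token

def remaining_timeout(value, state):
    # "decrement": convert the remaining deadline back into a per-attempt timeout,
    # never exceeding the timeout declared on the incoming request.
    # state["deadline"] is assumed to have been computed on arrival.
    remaining = state["deadline"] - time.time()
    return min(float(value), max(remaining, 0.0))

# Hypothetical registry: headers without an entry are simply dropped.
PROPAGATION_STRATEGIES = {
    "auth-token": copy_value,
    "timeout": remaining_timeout,
}

def outgoing_headers(incoming_headers, state):
    out = {}
    for name, value in incoming_headers.items():
        strategy = PROPAGATION_STRATEGIES.get(name)
        if strategy is not None:
            out[name] = strategy(value, state)
    return out
```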
@bhs
Contributor

bhs commented Jan 14, 2016

@kriskowal this is important stuff, thanks for reaching out... I almost wonder if it would make more sense to have a quick call or something (happy to take notes and summarize takeaways here)? If you're up for that I'm happy to schedule a hangout for all who are interested. Otherwise I can reply on github. Let me know.

@dkuebric

Agreed, this is important: context propagation is the defining feature of an open tracing standard. I would be interested in joining such a discussion.

@yurishkuro
Member

Want to write my view on this, as similar discussions are happening elsewhere.

First, I want to clarify the terminology, not because mine is better, but to avoid ambiguity.

I will refer to request context as a context propagated in-process, such as Go's net/context. Request context is not limited by wire serialization concerns and can store anything, including the OpenTracing Span object. The data stored in the request context has the lifespan of the request in the given process; it does not propagate over RPC.

I will refer to distributed context as a context propagated between processes, across RPC boundaries. Data stored in that context must be serializable, and once added to the context it remains visible both in-process and propagates to all levels of the distributed call tree. Or to put it in tracing terms: data in distributed context is propagated to all future children of the current Span and their descendants.

Finally, there is the RPC request itself, which is a logical abstraction of the message sent on the wire during a remote call. The format of the message is specific to a given RPC framework, but we expect it to support transmission of the distributed context, either as opaque data, or as part of the "headers" map.

Note that request context and request provide the propagation fabric, in-process and between processes respectively.

The various properties mentioned in the original post may belong to one or more of the contexts described above. For example:

  • "callee" and "method" only need to be in request

  • auth token should be in the distributed context, which itself is propagated by request and request context

  • timeout should be in the request, then transformed into a deadline and placed into request context, then potentially transformed again when making another downstream call with another request. Only the RPC framework would know how to do these transformations.

Now, which libraries should implement each context?

  • Request is 100% specific to RPC framework, so it's clear where it belongs.

  • Request context is mainly the in-process propagation technique. I discussed In-Process Request Context Propagation in the opentracing docs. It is often implemented by RPC frameworks (like Finagle), but we cannot assume that the whole SOA is using the same RPC framework, as any organization is likely to use some 3rd party software (e.g. data stores). In Go apps the request context is propagated by the application code. I don't think there is a single solution here, but we can come up with a set of guidelines to allow bridging different implementations, e.g. taking a request context maintained by the server-side RPC framework and transforming it into a client-side RPC request.

  • Distributed context doesn't really propagate itself; it relies on the two above to do it. Distributed context needs a vendor-neutral API to expose set/get methods and a means of marshaling in and out of the request. Because of the vendor-neutrality requirement, OpenTracing is a good candidate for providing a distributed context layered on top of the Span. The alternative is to define another independent API for distributed context (which in turn could be used by OpenTracing to store the current span); however, actually implementing such an API in all applications and RPC frameworks is almost as much work as implementing OpenTracing.
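To make the three layers concrete, here is a toy Python sketch. All names, including the header prefix, are illustrative rather than any mandated wire format:

```python
class Span(object):
    """Stand-in for a tracing span carrying distributed-context ("baggage") items."""
    def __init__(self, baggage=None):
        self.baggage = dict(baggage or {})   # serializable key/values, visible downstream

class RequestContext(object):
    """In-process only; never serialized. Can hold anything, e.g. a deadline or the Span."""
    def __init__(self, span, deadline=None):
        self.span = span
        self.deadline = deadline

BAGGAGE_PREFIX = "x-ctx-"   # illustrative prefix, not a standard

def inject(span, request_headers):
    # The distributed context rides inside the request (the wire-level message).
    for key, value in span.baggage.items():
        request_headers[BAGGAGE_PREFIX + key] = value

def extract(request_headers):
    # On the receiving side the same items are lifted back out of the request.
    baggage = {k[len(BAGGAGE_PREFIX):]: v
               for k, v in request_headers.items()
               if k.startswith(BAGGAGE_PREFIX)}
    return Span(baggage=baggage)
```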

@codefromthecrypt

I can see the temptation to put things like propagation of security tokens here, for lack of a propagation API. It is also tempting because this project began its life named the distributed context propagation API!

DCP is commonly implemented in RPC APIs, but I wouldn't go so far as to think those implementations are cursed to never be useful outside them, or that an effort similar to what we are doing here for tracing couldn't be done for context propagation.

It certainly feels like an accidental conflation to stick duties like auth token propagation into a tracing API.

Each new area of responsibility we consider (logging, attachment handling, arbitrary context propagation, etc.) adds weight to the effort and should be carefully considered. I have heard numerous complaints that people can't keep up as it is.

@michaelsembwever

I've mentioned in other places that DCP shouldn't be promoted for general application-domain coding.

Putting infrastructural things like the correlation-id, user-id, or auth-token into the DCP makes a lot of sense to me. Infrastructural systems like Zipkin, Kibana, and what-not can then effectively be used together.

The emphasis here is on infrastructural, and there is no clear-cut definition of it. Some users will use DCP more on the application side of things. That's OK. DCP is a valuable addition in my opinion, but I'm asking that we make sure documented examples focus on the obviously infrastructural.

When users do come along and use DCP for a crapload of application-domain stuff (instead of using properly designed parameterisation) and their tracing system grinds to a halt because of the added payload, we can say "look, DCP really isn't designed to do normal application parameterisation for you".

@yurishkuro
Member

To me, the question is not what kind of data people put in the DCP, but how DCP is achieved. If people find it useful to propagate an auth-token, it's their business. If they want to stuff 1MB of application data, ultimately that's their business too; all we can do is provide guidelines, which we did in the API for Trace Attributes.

@adriancole I think DCP implementations by RPC APIs are inherently non-portable. I can build my whole stack around finagle, or grpc, or tchannel, and then suddenly I need to talk to Cassandra, or Redis, or the Hadoop stack; then what? DCP has to be vendor-neutral, which rules out RPC APIs. But I can see a state where OpenTracing is available at all levels, including RPC frameworks and big data systems, so applications have a clear API for saving some value to a DCP and retrieving it 5 levels down in the SOA. And because OpenTracing defines the encoding API, it provides an interop bridge between different RPC APIs.

I do not dispute that DCP can exist without tracing, while tracing cannot exist without DCP. I just think that DCP by itself does not offer a strong enough incentive for all frameworks to implement it; tracing has a better story and can kill two birds with one stone.

@michaelsembwever

Let me be frank, but keep in mind this is directed ultimately at what we express, not what we provide.

If people find it useful to propagate an auth-token, it's their business.

No. If you provide an API, from there on the customer is always right.
You need to be explicit in what is the API's business and what is not.

so applications have a clear API for saving some value to a DCP and retrieving it 5 levels down in the SOA

In the application domain, this is shockingly terrible systems design, and I hope that we do not promote such a thing. It promotes bringing the past headaches of global variables into an even more complex world of distributed programming; I shudder from fingers to toes.

@yurishkuro
Member

You need to be explicit in what is the API's business and what is not.

How do you propose we do that, aside from a stern warning that we already have in the API docs?

and I hope that we do not promote such a thing.

Again, are you talking about documentation or the actual API? I don't see how what is being discussed here is different from, say, grpc Context, which warns against abusing it but does not have any API-level capability to prevent people from doing stupid things.

@michaelsembwever

How do you propose we do that, aside from a stern warning that we already have in the API docs?

Basically that, but clear rather than stern.

Provide examples that are clearly infrastructural examples and can't be confused with application code examples.

Provide documentation that clearly states the intended usage for DCP.

You can't enforce, or make explicit, everything in the API. A good example is the avoidance of null parameters: often the best you can do is clearly document that nulls are not acceptable input, and in the few situations where they are, those parameters will be annotated with @nullable.

@dkuebric

@yurishkuro I think we agree about context propagation, and I think we are very close to having a complete API for it already. The remaining piece is around a standard for implicit request context propagation, which I'll make the case for here.

The reason I think it is valuable to consider standards around what is referred to above as request context has to do with the interoperability/compatibility of instrumentation. If instrumentation happens separately in a library versus the framework it is used in, the instrumenters of each need to agree on where they will find the request context.

As an example, one could imagine that an application framework is instrumented by placing request context in some sort of framework-specific request-global field. If an HTTP client library or ORM is instrumented for use in conjunction with this framework, it must now somehow be framework-aware. This presents a serious reusability challenge to module-by-module instrumentation.

It could be decided that solving this problem is outside the scope of OT 1.0 because all users are expected to manually instrument their applications. This might make sense in terms of scope-limiting, but it will pose a serious challenge to library integration and adoption. I'd prefer a future where libraries can come with OT hooks built in (a la DTrace), allowing for reusable instrumentation. As far as I can tell, this requires some way for instrumentation developers to agree on a request context propagation mechanism.

In TraceView, we've gone with thread-locals in the majority of cases, monkey-patching extensively for evented systems and thread pools in order to keep the request context propagated. (In some cases, like the nginx or apache modules, the work is simple enough that per-request structs are more than sufficient for this request context storage.)

However, OT should not dictate a particular storage mechanism. It seems that the biggest variable in which request context propagation mechanism is appropriate is the concurrency model, which varies by application. Instead of demanding a particular context store, we might provide an API that allows instrumentation to discover where a request context is stored: it could be initialized by frameworks (things that call start_trace), and this knowledge of the storage mechanism can then be relied on by non-trace-starting instrumentation.

This isn't well thought out yet, but for discussion purposes, an example API might look like this:

  • set_request_context_store(write_handler, read_handler) - associates handler methods to read and write context information at a process level. Intended use is at the initialization of an application.
  • write_handler(ctx) - accepts a context and stores it in a way that will be available via subsequent call to read_handler
  • read_handler() - a callback to retrieve current context

You can imagine handlers that read/write thread-local, or in an evented framework, pull from a global which can manage per-request metadata.
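A rough Python sketch of that idea, assuming a thread-local store; every name here is hypothetical, not a proposed OpenTracing signature:

```python
import threading

_handlers = {"read": None, "write": None}
_local = threading.local()

def set_request_context_store(write_handler, read_handler):
    # Called once at application initialization to pick the storage mechanism.
    _handlers["write"] = write_handler
    _handlers["read"] = read_handler

def thread_local_write(ctx):
    _local.ctx = ctx

def thread_local_read():
    return getattr(_local, "ctx", None)

# A framework that starts traces registers the store it wants to use...
set_request_context_store(thread_local_write, thread_local_read)

# ...and an instrumented library, with no knowledge of the framework, can then do:
def instrumented_client_call(payload):
    ctx = _handlers["read"]()       # discover the current request context
    # attach the span / deadline from ctx to the outgoing call here
    return ctx, payload
```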

This might be the wrong design. But it seems important to have some sort of interface if we want to make instrumentation of libraries as portable as tracers. Thoughts?

@yurishkuro
Member

@dankosaur I agree that we should have some sort of a story about in-process propagation, although I would be very hesitant about making it a blocker for OT v1.0.

We at Uber are trying to solve this exact problem, e.g. in one scenario where a Python, Tornado-based server makes a downstream call over TChannel (Uber's RPC). The server is instrumented implicitly via monkey patching (lib here), and the context is propagated via a thread-local + Tornado StackContext combination (request_context.py). TChannel has no idea about it, because its own propagation mechanism is explicit (in all languages). We haven't quite solved it, but the approach I want to take is to have a static registration on the TChannel object (set at app init time) that takes a hook that can retrieve the tracing context from somewhere. This sounds very much like your read_handler, so I am glad we're thinking alike.

What's interesting about this approach is that the decision about the actual method of in-process propagation is in the hands of the application itself. That's important because it affects how the application does multi-threading internally. It would be good to have a way for the handler-based approach to also work with explicit propagation.

One other complication in this overall problem is that in order to support things like read_handler we need to agree on the API for the "context" object itself. Maybe it's not too difficult, since it's probably implemented as a map in most cases.
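Something along these lines, perhaps; this is not TChannel's actual API, just a hypothetical sketch of the statically registered hook plus a map-like context:

```python
class RpcClient(object):
    """Hypothetical RPC client told, once at init time, how to find the current context."""
    def __init__(self):
        self._context_provider = None

    def register_context_provider(self, provider):
        # provider: a zero-argument callable returning the current context as a dict.
        self._context_provider = provider

    def call(self, endpoint, body):
        headers = {}
        if self._context_provider is not None:
            headers.update(self._context_provider() or {})   # marshal context into the request
        # ... actually send endpoint/body/headers over the wire ...
        return endpoint, body, headers

# At application init time, wire the client to whatever in-process mechanism is in use,
# e.g. a thread-local- or StackContext-backed reader:
client = RpcClient()
client.register_context_provider(lambda: {"trace-id": "abc123"})
```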

I would like to stop at this point and wait for @kriskowal to say whether in-process propagation is the direction he wants to explore in this issue or spin it out into another one, since the original question was broader in scope, and I would like to settle that one first.

@dkuebric

Yes let's see if it makes sense to split this out.

I think it will be difficult to combine explicit and implicit propagation without a bit of extra work on the part of the explicit-propagation implementer: something like invoking the write_handler manually before any call into an implicitly-instrumented library, or just not using such libraries at all. But that's what one buys into with explicit propagation, I suppose, and one would choose it only in situations where it is absolutely necessary.
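A small illustration of that extra work, reusing hypothetical write/read handlers like those sketched above: code that otherwise passes the context explicitly publishes it right before calling into an implicitly-instrumented library.

```python
import threading

_local = threading.local()

def thread_local_write(ctx):
    _local.ctx = ctx

def thread_local_read():
    return getattr(_local, "ctx", None)

def orm_query(sql):
    # An implicitly-instrumented library finds the context via the read handler.
    return sql, thread_local_read()

def handle_request(ctx):
    # The explicitly-propagating code bridges into the implicit store here.
    thread_local_write(ctx)
    return orm_query("SELECT 1")
```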

@codefromthecrypt

I do not dispute that DCP can exist without tracing, while tracing cannot exist without DCP. I just think that DCP by itself does not offer a strong enough incentive for all frameworks to implement it; tracing has a better story and can kill two birds with one stone.

OK, but what we are saying by defining an API that is implicitly N APIs is "you have to pay to play." In other words, if you want to be an OpenTracing implementation, you now have to implement DCP and deal with bugs in this space forever. There is no scoped-down version which only needs to deal with incidental propagated state (as exists today in other tracers).

Two birds with one stone, perhaps; just saying that maybe not everyone can or wants to lift that stone.

An alternate route is to develop a DCP API separately from the tracer API. Not only is this more like what exists today, but, also like today, it buckets the complexities. Tracers who support the feature of arbitrary context propagation now have an API contract to rely on. People who want to participate in OT, but find the depth of responsibility daunting, can now at least have focused areas to contribute to.

@dkuebric

@adriancole I'm not convinced that play is "free" -- regardless of extended use-cases for DCP, I think every user of OT is going to have to implement context propagation for tracing, and guidance / a framework for doing so will help.

I can see the argument for limited scope: some will want to approach this as they might a statsd-type solution, manually instrumenting their app without holding much hope for an ecosystem of instrumentation. Given the limited set of instrumentation likely to be available at OT 1.0, that will certainly be the majority use-case at the beginning.

On the flip side, if there is a default notion of implicit context propagation (which can be disabled/configured for manual explicit propagation), that would provide both a model and potentially out-of-the-box convenience for instrumentors in many languages.

@bhs
Contributor

bhs commented Jan 25, 2016

@adriancole a few thoughts:

  1. I have always felt ambivalent about the "baggage" API. It's semantically profound (despite the seeming simplicity of the function signatures), yet also opens a Pandora's Box of failure scenarios given naive programmers (esp high up in the stack). This is why I wanted to separate them from the vanilla (and less interesting+dangerous) "span tags" in the API. There has been some pressure lately (e.g., your own comment at https://github.com/opentracing/opentracing-go/pull/40/files#diff-c36a63bd5139369ebe5b275a1fba1599R24) to consolidate the two. I initially separated them in order to clearly delineate them for both callers and implementors. Maybe this thread helps understand my motivation there?
  2. On that note, I am comfortable with the idea of certain OT implementations that don't support baggage/DCP. As mentioned elsewhere, this is a feature I want OT callers to be able to opt-out of, hence how the propagation fields are separated along this axis.

Practical idea: we could (?) add a "capabilities" section to the sort of implementation-introspection call proposed here: opentracing/opentracing.io#33 ... That would remove the "pay to play" requirement. (The other initial "capabilities" candidate (in my mind) would be human-readable log messages, btw) It would also allow us to be clear about what must be implemented (i.e., what's not a negotiable capability), as those features would not be present in the set of optional capabilities.
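Sketching what such an introspection call might return; this is purely illustrative, nothing like it exists in the OpenTracing APIs today:

```python
class ExampleTracer(object):
    def capabilities(self):
        # Optional features advertised by this implementation; anything not listed
        # here would be a required part of the spec rather than a capability.
        return {
            "baggage": False,               # DCP / trace attributes not supported
            "human_readable_logs": True,
        }

def supports_baggage(tracer):
    return tracer.capabilities().get("baggage", False)

tracer = ExampleTracer()
print(supports_baggage(tracer))   # False: callers can degrade gracefully
```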

@kriskowal
Contributor Author

By way of summary,

@yurishkuro proposes nomenclature for “request”, “request context”, “distributed context propagation”, and “in-process context propagation”. I’ll track these definitions hereafter.

@yurishkuro proposes two alternatives for solving “distributed context propagation”: one (A) is to piggy-back a key-value store API on spans, and the other (B) is to provide spans alone and leave the concern of incorporating them into “requests” and “traces” as an exercise for RPC libraries. I am in favor of the latter (B) because I am not convinced of the utility or wisdom of fanning an arbitrary key-value store out to all downstream requests.

I believe it unwise because it establishes an ad-hoc global namespace shared by multiple services (as argued by @michaelsembwever and @bensigelman). This will cause request sizes to bloat as they go downstream.

I also don’t believe DCP over requests will be useful. There are specific properties of “request contexts” (properties of incoming requests) that should be propagated in very specific ways to “requests” (properties of outgoing requests); copying is seldom sufficient. There are also corresponding response and response context properties, each with their own semantics, particularly when merging multiple responses into an aggregate response. Solving the problem of RPC propagation, together with submitting ad-hoc key-values scoped to spans to a trace collector, takes the wind out of propagating context with a key-value store on requests. RPC libraries would also miss some performance opportunities if they had to serialize and deserialize this bundle at every hop.

@dankosaur is in favor of solving context propagation. Would you be satisfied if we took a layered approach where OpenTracing focused on tracing and another collaborative and open-source project used OpenTracing as part of a comprehensive RPC library with pluggable transport protocols and pluggable encoding schemes?

@bensigelman I am open to a conference call. I believe @yurishkuro may be in the best position to organize a chat.

@yurishkuro
Member

Let's have a hangouts call this week, please pick the time slots that work: http://doodle.com/poll/5kvcvag2vkga38fz

@bhs
Contributor

bhs commented Jan 25, 2016

Thanks, @yurishkuro: I am confirming that the early time slots are 1pm PT, not 1pm ET, correct?

@yurishkuro
Member

@bensigelman correct - I had the timezone enabled on the poll, so you should see your local time. For me the first slot is 4pm EST.

@michaelsembwever

Nice comment @kriskowal

It leaves me torn between adding the DCP functionality via

  • additional layered API classes/structures, and
  • the additional/optional Span.setCascadingTag(..) method.

We share concerns around an ad-hoc global namespace across services. At the same time, effort is underway to reduce the concepts the end-user has to deal with for tracing; it is, and should be, a very simple domain to understand. Furthermore, the end-user knows this and, already having to deal with a horde of different libraries, will often have very limited patience for how complex tracing instrumentation is.

I was thinking that Span.setCascadingTag(..), according to the Specification, would be what implements DCP (given that we're convinced we need it), but that the Specification would leave room for some implementations to provide only a limited implementation in which cascading tags are applied in-process only. This gives us the layered API approach without forcing the whole DCP schema complexity upon the end-user; "cascading tag" is intuitive terminology.

To deal with the risk of request sizes bloating as they go downstream, a rough idea off the top of my head is for the Specification to impose a limit on the character length of the cascading tags map, along the lines that this is a limited context across services for tracing/system-level stuff. (There could be a way, like a system variable or something not so visible in the main API, to increase this limit if the end-user absolutely had to.)
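A rough sketch of that idea; both the method name and the limit are hypothetical, not part of any specification:

```python
MAX_CASCADING_CHARS = 512   # illustrative cap, not a proposed number

class Span(object):
    def __init__(self):
        self._cascading_tags = {}   # copied to all future children of this span

    def set_cascading_tag(self, key, value):
        projected = len(key) + len(value) + sum(
            len(k) + len(v) for k, v in self._cascading_tags.items())
        if projected > MAX_CASCADING_CHARS:
            raise ValueError("cascading tags would exceed the per-trace size limit")
        self._cascading_tags[key] = value
        return self
```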

The concern around request-->response changes in tags is an interesting one. Kinda hoping that isn't involved in the OpenTracing DCP proposal.

@yurishkuro
Member

Useful background reference on Distributed Context Propagation, from Rodrigo Fonseca, co-designer of X-Trace: http://www.cs.brown.edu/~rfonseca/pubs/fonseca-tracing-1973.pptx

@dkuebric

@kriskowal thanks for the summary. To clarify on my end, I am not sure that an RPC-library-based solution is the right approach. Certain companies have taken the step of standardizing on a single RPC framework, using it for both the send and receive sides of all services. However, in many cases I have seen, the distributed app is a hodgepodge of different libraries for communication over various channels (HTTP, AMQP, Kafka, etc.). If there's a basic standard for in-process context propagation, each RPC system can interoperate.

With the current APIs, a large part of DCP is already provided for: tracers can figure out how to marshal that context for the wire, and RPC libraries will have the responsibility to implement the APIs that do so. The missing piece is in-process propagation. That's why I suggest providing an API by which different RPC libraries, frameworks, and other instrumented libraries can agree on how context is implicitly propagated in-process.

I don't think it is necessary to force DCP schema complexity on the user as part of this: I suspect that in the majority of use-cases the Context (or "baggage" per the Fonseca paper) will only be used for propagating the span ID. The schema only needs to be considered to the extent that a user, or an RPC library, is explicitly making use of it. This is similar to the expectations with HTTP headers today.

As for the bloat issue: it makes sense to introduce safeguards, and a tracer (which takes care of marshalling the context for propagation) is a good interception point.

@bhs
Contributor

bhs commented Jan 25, 2016

@kriskowal, one correction:

I believe it unwise because it establishes an ad-hoc global namespace shared by multiple services (as argued by @michaelsembwever and @bensigelman). This will cause request sizes to bloat as they go downstream.

I don't think it's unwise per se; I think it's risky and should come with caveat emptor warnings for both implementors and users. I like the idea of a "capabilities" feature for OpenTracing in general, and this would be a perfect motivating use case.

The thing is, naming aside, this is a really easy feature to implement if the core tracing machinery is assumed to be there already (i.e., the situation we find ourselves in).

@blampe

blampe commented Jan 25, 2016

I agree @bensigelman. Tracing is a subset of the DCP problem, so it makes sense to expose some of the machinery we're using to solve it.

@yurishkuro
Member

Re video call, the only common time slot was this Fri, Jan 29, 4:00 PM EST. I will send invites.

@michaelsembwever and @dkuebric - do you want to drop me a note to ys at uber.com? I don't have your emails.

@codefromthecrypt

@adriancole a few thoughts:

Thanks, I often learn from them.

I have always felt ambivalent about the "baggage" API. It's semantically profound (despite the seeming simplicity of the function signatures), yet also opens a Pandora's Box of failure scenarios given naive programmers (esp high up in the stack). This is why I wanted to separate them from the vanilla (and less interesting+dangerous) "span tags" in the API. There has been some pressure lately (e.g., your own comment at https://github.com/opentracing/opentracing-go/pull/40/files#diff-c36a63bd5139369ebe5b275a1fba1599R24) to consolidate the two. I initially separated them in order to clearly delineate them for both callers and implementors. Maybe this thread helps understand my motivation there?

I am definitely getting more of this, especially as the several in-flight topics are settling down. For example, it is easy to simultaneously have a concern about how to name something and about whether we should expose it at all! The latter point really hit me when I saw the bit about propagating auth tokens.

I'm also becoming increasingly aware of the simplicity afforded by "not" taking this on. For example, zipkin impls have a fixed-length, easy contract for propagation; in pyramid_zipkin there's even a function create_headers_for_new_span. I guess what I'm getting at is that the non-obvious impact of variable-size, variable-purpose propagation is becoming more clear to me now, even if you raised the alarm earlier :P

On that note, I am comfortable with the idea of certain OT implementations that don't support baggage/DCP. As mentioned elsewhere, this is a feature I want OT callers to be able to opt-out of, hence how the propagation fields are separated along this axis.

Can't underscore enough that in practical terms this is a big win, as it makes implementing features incremental. Tracer owners getting requests to add something because their users want it makes the extra effort more meaningful.

Practical idea: we could (?) add a "capabilities" section to the sort of implementation-introspection call proposed here: opentracing/opentracing.io#33 ... That would remove the "pay to play" requirement. (The other initial "capabilities" candidate (in my mind) would be human-readable log messages, btw) It would also allow us to be clear about what must be implemented (i.e., what's not a negotiable capability), as those features would not be present in the set of optional capabilities.

Capabilities APIs aren't the easiest thing to get right, but I'm sure we will run into this sooner or later. For example, in jclouds we had to do this because not all cloud providers supported the same feature subsets. Same thing with OpenStack, and most portability APIs I've seen.

@bhs
Contributor

bhs commented Jan 26, 2016

Yeah. Not sure how others feel, but a capabilities struct – used responsibly! – seems like a net positive to me.

(Side note: it would also help with, e.g., your concern about the noop impls and trace attributes: opentracing/opentracing.io#27 (comment))

@rektide

rektide commented Jan 28, 2016

Hello. I started a Node.js Zipkin client/server project a while back (and switched to using Uber's new Thrift library!), and I have been very interested in the collision of tracing and log-based compute, so this ticket represents a compelling budding promise to me. I'd greatly appreciate being able to show up to better understand others' take on this; if it's not too much of an ask I'd like to be included in tomorrow's hangout. I take it the 4:00 time is PST? [ED: EST!] It's my username at gmail.com.

@yurishkuro
Member

@rektide I'll add you. Do you have a link to that client/server project?

@rektide

rektide commented Jan 29, 2016

The collector made it the furthest, but between the holiday season and laying eyes on OpenTracing, traction fell away, and I'm tempted to change tack on it (I'm very interested in applying Apache Flink). The Thrift bindings have some commits to push: https://github.com/rektide/node-openzipkin-thrift
