Support push model for service discovery #492
Conversation
sd/cache/observer.go (outdated):

var _ sd.Subscriber = &Observer{} // API check
var _ sd.Discoverer = &Observer{} // API check

// NewObserver crates a new Observer.
crates -> creates
Thanks for this. I'll need a little while before I can review it. Please don't get discouraged :)
@peterbourgon I can wait. At the moment I have wired our internal service discovery system to essentially the same API, so I know that this approach can be made to work.
@peterbourgon ping
@yurishkuro Thanks for the ongoing discussion and the PR. Please accept my apologies for the delay, I was inundated with other responsibilities in the past weeks. Before I give my feedback I'd like to establish a canonical set of requirements that this effort should satisfy. Here is my first guess:
How do you feel about that? Am I missing important details?
@peterbourgon yes, I think that covers it. The only "however" I'd add is that "both" is perhaps not a requirement. The sync / async models are two independent interfaces and an implementation can support either or both. As for strings vs. Endpoint, I think none of the discovery implementations needs to support the Endpoint API, because it can always be added as a composition of the string-based API and a cache + endpoint factory.
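As a rough sketch of that composition (illustrative only: stringInstancer, endpointAdapter, and newEndpointAdapter are hypothetical names, the Factory signature is simplified to return just an Endpoint, and locking and cache eviction are elided):

package sketch

import (
	"github.com/go-kit/kit/endpoint"
)

// stringInstancer is the string-based discovery API: any SD implementation
// only needs to provide this.
type stringInstancer interface {
	Instances() ([]string, error)
}

// endpointAdapter lifts a string-based source to the Endpoint level by
// pairing it with a user-supplied factory and a small per-instance cache.
type endpointAdapter struct {
	src     stringInstancer
	factory func(instance string) endpoint.Endpoint
	cache   map[string]endpoint.Endpoint // instance string -> previously built endpoint
}

func newEndpointAdapter(src stringInstancer, factory func(string) endpoint.Endpoint) *endpointAdapter {
	return &endpointAdapter{src: src, factory: factory, cache: map[string]endpoint.Endpoint{}}
}

func (a *endpointAdapter) Endpoints() ([]endpoint.Endpoint, error) {
	instances, err := a.src.Instances()
	if err != nil {
		return nil, err
	}
	endpoints := make([]endpoint.Endpoint, 0, len(instances))
	for _, instance := range instances {
		ep, ok := a.cache[instance]
		if !ok {
			ep = a.factory(instance) // build once, reuse on subsequent calls
			a.cache[instance] = ep
		}
		endpoints = append(endpoints, ep)
	}
	return endpoints, nil
}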
OK, great. So, I will attempt to describe the type catalog of the PR as it stands right now. We have
One could imagine, but we do not currently have,
We have also
So far this gets us instance strings, but in order to get actual endpoints we need to compose an endpoint Factory with
Finally we have
With these types, whoever will want to provide complete support for a new SD system will need to write
Whoever will want to use a concrete SD system to generate callable Endpoints will need to construct
This is roughly the same as before. The extra value is for those who want to subscribe to the raw feed of instance strings, they may now do so by leveraging the InstanceNotifier capabilities of the concrete Notifier struct. So, reviewing our requirements and how we meet them, we see
IMO there are too many new, intermediating types and concepts for our goals. I think there is an alternative that reduces the number of both of these things. Very roughly,
The rest follows, I hope, naturally. What do you think?
A few thoughts:
I think both Dispatcher and Observer are extraneous, but on reflection I think a lot of my feeling of being overwhelmed is indeed a function of the names. I would like to see each concrete package need to provide the least possible code; to me this seems like it is a (using old terminology) Subscriber struct, connecting to the SD system and implementing (using new terminology) sd.Instancer, and optionally sd.InstanceNotifier if the system supports pub/sub semantics. This job is so simple I don't think there is any need for package sd to provide an embeddable instance Cache or subscription Dispatcher helper or anything like that, but I may be convinced otherwise. If that's the contract for each concrete package then everything else can be provided by us in package sd, but I guess "everything else" reduces to a concrete type (name TBD, I guess) that wraps an Instancer and a Factory to implement the sd.Endpointer interface. It could even do something tricky; here I'm speculating, but something like:

func NewEndpointer(src Instancer, f Factory, ...) Endpointer {
	if notifier, ok := src.(InstanceNotifier); ok {
		c := make(chan []string)
		notifier.Register(c)
		return streamCachingEndpointer{c, f, ...}
	}
	return passthruEndpointer{src, f, ...}
}

Does this make sense? Am I oversimplifying, or ignoring any necessary complexity?
OK, I am convinced :)
(quotes rephrased)
Agreed. That's exactly what I have in this PR. We should have a common name for such a struct; the former "Subscriber" (and the new Observer) increase the mental overhead. Should the guideline be to name those
I think Dispatcher is a useful util to have, just like the Cache struct before; it's some 50 lines of code that every implementation would have to repeat to support channel-based observers. It could be renamed SimpleInstanceNotifier.
sgtm
@peterbourgon Btw, somewhat relevant prior art:

// Discoverer is an interface that wraps the Discover method.
type Discoverer interface {
	// Discover looks up the etcd servers for the domain.
	Discover(domain string) ([]string, error)
}
Force-pushed from 39f0c3e to d9e61b0.
OK, so iterating on all of that, how does this sound?

package sd

type Instancer interface {
	Instances() ([]string, error)
}

type InstanceNotifier interface {
	Register(chan []string)
	Deregister(chan []string)
}

type Endpointer interface {
	Endpoints() ([]endpoint.Endpoint, error)
}

// EndpointNotifier possible but elided for now.

type Factory func(string) endpoint.Endpoint

func NewEndpointer(src Instancer, f Factory) Endpointer {
	if notifier, ok := src.(InstanceNotifier); ok {
		return newStreamCachingEndpointer(notifier, f)
	}
	return newSimpleEndpointer(src, f)
}

package sd/internal/instance

type Cache struct { ... }
func (c *Cache) Update(instances []string) { ... }
func (c *Cache) Instances() ([]string, error) { ... }

type Notifier struct { ... }
func (n *Notifier) Update(instances []string) { ... }
func (n *Notifier) Register(c chan []string) { ... }
func (n *Notifier) Deregister(c chan []string) { ... }

package sd/whateversystem

import (
	"github.com/go-kit/kit/sd/internal/instance"
)

type Instancer struct {
	instance.Cache    // always
	instance.Notifier // optional, if the system supports pub/sub
}

func NewInstancer(c *whatever.Client, ...) *Instancer {
	i := &Instancer{...}
	go i.loop()
	return i
}

func (i *Instancer) loop() {
	for instances := range i.client.Stream() {
		i.Cache.Update(instances)    // always
		i.Notifier.Update(instances) // optional
	}
}

Key points:

WDYT?
Sounds good. I will update the PR to match. Will be traveling next week, so probably around the next weekend.
@peterbourgon I'm running into a couple of issues with
Thoughts?
re: 1, yep, I understand, and your idea could work. Another way could be to add a Close method to the Endpointer interface, with a no-op implementation for the non-streaming version. My intuition prefers that one, WDYT?

re: 2, I don't think that calling Cache.Instances with every request is a high cost; it's a function call and returning a slice (i.e. reference semantics). I'm happy to be proven wrong if you want to benchmark it against an alternative...

re: exposing TTL for polling-based implementations, I would expect it to be a parameter to, e.g., the dnssrv.NewInstancer constructor, because that's what's responsible for fetching new instance strings. And there we have complete freedom in designing the constructor API. But perhaps I'm missing something?
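As a sketch of that last point (illustrative only; the names, lookup signature, and constructor shape are assumptions rather than the actual dnssrv API), the refresh interval is simply a constructor argument and the cached instances are refreshed only when the ticker fires:

package dnssketch

import (
	"sync"
	"time"
)

// Instancer polls a lookup function on a fixed interval (the TTL).
type Instancer struct {
	mtx       sync.RWMutex
	instances []string
	name      string
	lookup    func(name string) ([]string, error)
	quit      chan struct{}
}

func NewInstancer(name string, ttl time.Duration, lookup func(string) ([]string, error)) *Instancer {
	i := &Instancer{name: name, lookup: lookup, quit: make(chan struct{})}
	go i.loop(time.NewTicker(ttl))
	return i
}

func (i *Instancer) loop(t *time.Ticker) {
	defer t.Stop()
	for {
		select {
		case <-t.C:
			if instances, err := i.lookup(i.name); err == nil {
				i.mtx.Lock()
				i.instances = instances // in the proposal above this would be instance.Cache.Update
				i.mtx.Unlock()
			}
		case <-i.quit:
			return
		}
	}
}

func (i *Instancer) Instances() ([]string, error) {
	i.mtx.RLock()
	defer i.mtx.RUnlock()
	return i.instances, nil
}

func (i *Instancer) Stop() { close(i.quit) }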
let's defer the Closer discussion as it depends on point 2. The cost is not in calling
The cost is in invoking Update() on every read of Endpoints, as opposed to the previous solution where Update was only called on TTL expiry (in the case of dnssrv). At minimum Update will do sorting of instances and comparing them with the current map in the cache - not terribly expensive, but just silly to do for data that changes 2-3 orders of magnitude slower than the frequency of calls to Endpoints() (which could be
Ah, right. Hm.
OK, further iteration — and bringing us closer to the current system, a bit...

package sd

type Instancer interface {
	Instances() ([]string, error)
	Register(chan<- []string)
	Deregister(chan<- []string)
}

type Endpointer interface {
	Endpoints() ([]endpoint.Endpoint, error)
}

// EndpointNotifier possible but elided for now.

type Factory func(string) endpoint.Endpoint

func NewEndpointer(src Instancer, f Factory) *SimpleEndpointer {
	return &SimpleEndpointer{src, f}
}

package sd/internal/instance

import (
	"reflect"
	"sort"
	"sync"
)

type Cache struct {
	mtx       sync.RWMutex
	instances []string
	reg       registry
}

func NewCache() *Cache {
	return &Cache{
		reg: registry{},
	}
}

func (c *Cache) Update(instances []string) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	sort.Strings(instances)
	if reflect.DeepEqual(instances, c.instances) {
		return
	}
	c.instances = instances
	c.reg.broadcast(instances)
}

func (c *Cache) Instances() []string {
	c.mtx.RLock()
	defer c.mtx.RUnlock()
	return c.instances
}

func (c *Cache) Register(ch chan<- []string) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.reg.register(ch)
}

func (c *Cache) Deregister(ch chan<- []string) {
	c.mtx.Lock()
	defer c.mtx.Unlock()
	c.reg.deregister(ch)
}

// registry is not goroutine-safe.
type registry map[chan<- []string]struct{}

func (r registry) broadcast(instances []string) {
	for c := range r {
		c <- instances
	}
}

func (r registry) register(c chan<- []string) {
	r[c] = struct{}{}
}

func (r registry) deregister(c chan<- []string) {
	delete(r, c)
}

package sd/whateversystem

import (
	"github.com/go-kit/kit/sd/internal/instance"
)

type Instancer struct {
	instance.Cache
}

func NewInstancer(c *whatever.Client, ...) *Instancer {
	i := &Instancer{...}
	go i.loop()
	return i
}

func (i *Instancer) loop() {
	for instances := range i.client.Stream() {
		i.Cache.Update(instances)
	}
}

In summary,

Does this improve things?
Yes, basically if we make
Sounds reasonable to me.
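For illustration, a SimpleEndpointer along those lines might consume the Instancer's channel like this, so that Endpoints() stays a cheap read and the factory only runs when the discovery system actually pushes a change (this assumes the Instancer and Factory definitions from the sketch above; the field layout is illustrative and differs from the two-field literal shown there):

package sd

import (
	"sync"

	"github.com/go-kit/kit/endpoint"
)

// SimpleEndpointer registers one channel with the Instancer and rebuilds its
// endpoint slice only on pushes; reads just copy a slice header under RLock.
type SimpleEndpointer struct {
	mtx       sync.RWMutex
	endpoints []endpoint.Endpoint
	ch        chan []string
}

func NewEndpointer(src Instancer, f Factory) *SimpleEndpointer {
	e := &SimpleEndpointer{ch: make(chan []string)}
	src.Register(e.ch)
	go e.loop(f)
	return e
}

func (e *SimpleEndpointer) loop(f Factory) {
	for instances := range e.ch {
		endpoints := make([]endpoint.Endpoint, 0, len(instances))
		for _, instance := range instances {
			endpoints = append(endpoints, f(instance))
		}
		e.mtx.Lock()
		e.endpoints = endpoints
		e.mtx.Unlock()
	}
}

func (e *SimpleEndpointer) Endpoints() ([]endpoint.Endpoint, error) {
	e.mtx.RLock()
	defer e.mtx.RUnlock()
	return e.endpoints, nil
}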
Force-pushed from d9e61b0 to dfc00d3.
sd/dnssrv/subscriber.go (outdated):

@@ -82,6 +82,11 @@ func (p *Subscriber) loop(t *time.Ticker, lookup Lookup) {
	}
}

// Instances implements the Discoverer interface.
ignore this, I have not updated dnssrv yet
sd/endpointer.go (outdated):

	"github.com/go-kit/kit/endpoint"
	"github.com/go-kit/kit/log"
	iEndpoint "github.com/go-kit/kit/sd/internal/endpoint"
I'm unhappy with the name clash here. Thoughts?
Yeah, naming another package endpoint is a non-starter.
// to the service discovery system, or within the system itself; an Endpointer
// may yield no endpoints without error.
type Endpointer interface {
	Endpoints() ([]endpoint.Endpoint, error)
Open question: I think it makes sense to remove the error return value. In practice the real Endpointer gets instances via channel, so it never gets errors from the sd implementation, and it never has any error to return here.
If the connection between a service and the SD system is broken, Endpoints should return an error. If that's not currently true, we should make it true. It's important that clients be able to distinguish "my connection to the rest of the world is broken" from "my dependency simply has no instances at the moment".
I agree that it's important in theory, but this is not how the current implementations behave; e.g. consul and dnssrv, as examples of internal push and pull models, both return nil for the error from their current Endpoints() method. If an implementation uses the push model both internally and in the Instancer (via internal/instance.Cache), how do you see it returning an error from the pull model's Endpoints() method? I.e. which error would it be, the last one received via push (such as lost connection to the sd backend)?

Note that I had a TBD in the Instancer interface about using richer types in the channels than plain []string. Doing it that way would at least allow the internal sd errors to bubble up to the Instancer subscribers. But it still runs into the same issue as above when the push model of the Instancer flips to the pull model of the Endpointer.
"I agree that it's important in theory, but this is not how the current implementations behave"

Yeah, that's a problem, and my bad :(

"If an implementation uses push model both internally and in the Instancer (via internal/instance.Cache), how do you see it returning an error from the pull model's Endpoints() method? I.e. which error would it be, the last one received via push (such as lost connection to sd backend)?"

If the subscription to the Consul server is broken, then for the duration of the break, I would expect Instancer to return some sd.UnavailableError, perhaps with a Cause or Reason error field with the specific error returned by the consul client library. Likewise if (say) 3 DNS SRV polls in a row fail, I would expect to get exactly the same sd.UnavailableError.

Is this enough to go on?
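For reference, a minimal sketch of what such an error type could look like (sd.UnavailableError was only proposed in this discussion; the shape below, including the Cause field, is speculative):

package sd

import "fmt"

// UnavailableError would signal that the connection to the discovery backend
// is broken, wrapping the concrete client error as the cause.
type UnavailableError struct {
	Cause error
}

func (e UnavailableError) Error() string {
	return fmt.Sprintf("service discovery unavailable: %v", e.Cause)
}

// Callers could then distinguish "SD unreachable" from "no instances":
func isSDUnavailable(err error) bool {
	_, ok := err.(UnavailableError)
	return ok
}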
@peterbourgon Take a look at the latest commit. I changed Instancer from pushing []string to pushing an Event struct that contains either instances or an error. The consul implementation is updated accordingly.

I have not introduced any special error; it is simply whichever error the underlying sd system returns, because it may be difficult to find common error patterns across all sd implementations. The semantics are simple: if a notification contains an error, previously pushed instances are unreliable, stale, "use at your own risk".

This leaves one open question. Typically discovery middleware should avoid breaking the application when temporarily losing connection to the discovery backend. However, in the current form losing a connection will trigger a push of an Event with the error, and the endpointCache will close all Endpoints, even though the actual service instances are probably just fine at their previously known locations. The previous pull model did not have that behavior because all errors from the SD backend were simply logged, without erasing the Endpoints. We can restore that (desired) behavior by not closing Endpoints in the endpointCache when an error is received; however, that effectively puts us back to the state the code was in yesterday, i.e. the Endpointer always returns Endpoints and never returns the error. If we change the API contract to say that the Endpointer may return both Endpoints and an error, then it's a breaking change, and in particular it breaks the lb implementations since they bail on error right away.

Thus, while I like the latest commit, I am back to my suggestion of removing the error from the Endpointer signature - it doesn't do any good. If the application needs to be notified of an issue with the discovery system and take some measures, it can easily register another channel on the Instancer. Maybe I am missing some use cases (like a decorator Endpointer that does health checks - but that seems like it should sit between Instancer and endpointCache).
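For context, the Event being described presumably has roughly this shape, with the earlier Register/Deregister signatures carrying Event instead of []string (a sketch based on this discussion; the exact field names may differ from the commit):

package sd

// Event is the payload pushed to subscribers: either a full snapshot of
// instance strings or an error from the discovery backend.
type Event struct {
	Instances []string
	Err       error
}

// Instancer channels then carry Events rather than plain []string.
type Instancer interface {
	Register(chan<- Event)
	Deregister(chan<- Event)
}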
I had typed a more thorough reply but in the end it boils down to: LGTM. So,

"Event struct . . ."

Yep, LGTM.

"I am back to my suggestion of removing the error from the Endpointer signature"

Also SGTM to remove the error from the Endpointer interface. It means you must choose what to do when the endpointCache receives a non-nil error Event from the Instancer. The most obvious choice is to leave the current behavior and immediately invalidate all of the Endpoints. If you want to do something smarter, like only invalidating them after a timeout, you'll need to parameterize that behavior in the endpointCache constructor. Without having made prototype implementations to see how each option feels, I don't have strong opinions about which choice is better.
Yeah, the internal/endpoint package is problematic...
sd/endpointer.go (outdated):

//
// Users are expected to provide their own factory functions that assume
// specific transports, or can deduce transports by parsing the instance string.
type Factory func(instance string) (endpoint.Endpoint, io.Closer, error)
Yeah, we can't have package sd/internal/endpoint.
sd/endpointer.go (outdated):

	return f(instance)
}
se := &simpleEndpointer{
	Cache: *iEndpoint.NewCache(factory, logger),
What prevents e.g. endpointCache from being defined in this package as a non-exported type?
yeah, I can do that.
@peterbourgon take a look at 54081bf. Your comment about the timeout made me realize that I can keep the error in the Endpointer signature and still have "fail open" behavior by default. I think this was the last open question. If we agree on the current approach I can go & fix the rest of the implementations and write tests.
@peterbourgon are we good with the current approach?
In general yes. I will probably have some code-organization-style feedback when it's complete but nothing structural.
Force-pushed from 1ff1ee7 to 449bebb.
@peterbourgon bring it on :-) I fixed all other sd implementations and all examples; it's ready for review & clean-up (& possibly more tests). Note a couple of TODOs, including one "thought experiment" in the examples which makes the initialization code a lot more compact. Also, the prior endpoints-based tests were flaky due to the Endpointer now being async. I've converted some of the tests (consul / eureka) to test just the Instancer behavior by calling
Yeah this actually looks pretty good so far. I was afraid the new abstractions would be awkward to use in practice, but they actually don't change very much. A few things especially re: embedding, otherwise mostly LGTM. Ping me again when you consider it done?
sd/endpointer.go (outdated):

// NewEndpointer creates an Endpointer that subscribes to updates from Instancer src
// and uses factory f to create Endpoints. If src notifies of an error, the Endpointer
// keeps returning previously created Endpoints assuming they are still good, unless
// this behavior is disabled with ResetOnError option.
InvalidateOnError
sd/endpointer.go (outdated):

// and uses factory f to create Endpoints. If src notifies of an error, the Endpointer
// keeps returning previously created Endpoints assuming they are still good, unless
// this behavior is disabled with ResetOnError option.
func NewEndpointer(src Instancer, f Factory, logger log.Logger, options ...EndpointerOption) Endpointer {
I'd prefer it if SimpleEndpointer were exported, and this returned *SimpleEndpointer. At the moment, users have no way to invoke Close...
sd/cache.go (outdated):

}

c.mtx.RUnlock()
c.mtx.Lock()
I think this is racy, and you better take the write Lock for the whole method.
I'd rather check again after W-lock. I think it's good to keep the top under R-lock only, since it's on the hot path and very likely to be used concurrently.
I agree that the current version is less contentious, but (unless I'm missing something) it's incorrect. Between line 129 and 130, another goroutine can e.g. invoke Update and change the value of c.err.
it can, but the behavior will be the same as if we had just one write-lock and that other update happened before we took the lock. It's a standard double-checked locking pattern. The only time it will be a problem is if in the future someone makes a code change so that L123 and L133 use different conditions.
Maybe I'm dense, but I don't believe you.
- Goroutine 1: enters Endpoints, takes RLock
- Goroutine 1: observes c.err != nil, skips early return
- Goroutine 1: releases RLock
- Goroutine 2: enters Update, takes Lock
- Goroutine 2: observes event.Err == nil, sets c.err = nil
- Goroutine 2: leaves Update, releases Lock
- Goroutine 1: takes Lock
- Goroutine 1: calls c.updateCache(nil) under the prior assumption that c.err != nil — but it isn't anymore
AFAIK, there's no generally safe way to "upgrade" read locks to write locks in Go.
func (t *thing) foo() {
	t.mtx.RLock()
	// ...
	t.mtx.RUnlock() // this is
	t.mtx.Lock()    // not safe
	// unless this code makes no assumptions from the previous block
	t.mtx.Unlock()
}
Are you looking at the latest code? When I navigate through the comment links, GitHub shows a stale commit. In the latest version the last step in your sequence won't happen because the code again checks for the same err == nil condition.
I guess I was looking at an old version. My mistake. I still don't believe the performance penalty of the Lock is worth this dance, but I defer to you :)
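For reference, the double-checked pattern being defended looks roughly like this (a generic sketch; the endpointCache field and method names are assumptions based on the review comments, not the actual sd/cache.go): the err == nil condition is re-checked after the write lock is taken, so nothing observed under the read lock is relied upon.

package sketch

import (
	"sync"

	"github.com/go-kit/kit/endpoint"
)

type endpointCache struct {
	mtx       sync.RWMutex
	err       error
	endpoints []endpoint.Endpoint
}

func (c *endpointCache) updateCache(instances []string) { /* rebuild endpoints; elided */ }

func (c *endpointCache) Endpoints() ([]endpoint.Endpoint, error) {
	c.mtx.RLock()
	if c.err == nil { // fast path: healthy, serve cached endpoints under the read lock
		defer c.mtx.RUnlock()
		return c.endpoints, nil
	}
	c.mtx.RUnlock()

	c.mtx.Lock()
	defer c.mtx.Unlock()
	if c.err == nil { // re-check: an Update may have cleared the error in the gap
		return c.endpoints, nil
	}
	c.updateCache(nil) // still in the error state: invalidate cached endpoints
	return nil, c.err
}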
sd/cache.go (outdated):

	c.err = nil
}

c.logger.Log("err", event.Err)
Should this be in an else block or something? I guess we don't want to see err=<nil> logged with every update...
was missing a return above
sd/endpointer.go (outdated):

}

type endpointerOptions struct {
	invalidateOnErrorTimeout *time.Duration
Pointers as ersatz Maybe types make me :( Can we have this be a value, and treat 0 as not set, please?
@peterbourgon zero value for this field is actually meaningful, i.e. it can be used to make the cache invalidate all endpoints as soon as an error event is received from sd. Whereas having a nil timeout means the endpoints will remain in use indefinitely, until updated by a real event from sd. Given that the pointer is an implementation detail of a private struct, how strongly do you feel about it?
Then I would prefer to see this expressed as
invalidateOnError bool
invalidateTimeout time.Duration
sd/cache.go (outdated):

// set new deadline to invalidate Endpoints unless non-error Event is received
c.invalidateDeadline = time.Now().Add(*c.options.invalidateOnErrorTimeout)
return
}
// Happy path.
if event.Err == nil {
	c.updateCache(event.Instances)
	c.invalidateDeadline = time.Time{}
	c.err = nil
	return
}

// Sad path. Something's gone wrong.
c.logger.Log("err", event.Err)
if c.options.invalidateOnErrorTimeout.IsZero() { // assuming you make the change suggested below
	return // keep the old instances
}
if c.err != nil { // I think this is a better sigil for the error state...
	return // already in the error state, do nothing & keep original error
}
c.err = event.Err
c.invalidateDeadline = time.Now().Add(c.options.invalidateOnErrorTimeout)

?
sd/endpointer.go (outdated):

}

type simpleEndpointer struct {
	endpointCache
I'm irrationally afraid of struct embedding, especially in cases like this when it's not strictly true that simpleEndpointer is-an endpointCache. Specifically I'm afraid of endpointCache methods leaking through to the public type definition. Would you mind making it a named member and deliberately plumbing through the methods you intend to support—even if, at the moment, that's most or all of them?
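Concretely, the request amounts to something like the following sketch (endpointCache stands in for the PR's unexported cache type, and whether Close belongs in the plumbed-through method set is an assumption):

package sketch

import "github.com/go-kit/kit/endpoint"

// endpointCache stands in for the PR's unexported cache type.
type endpointCache struct{ /* ... */ }

func (c *endpointCache) Endpoints() ([]endpoint.Endpoint, error) { return nil, nil }
func (c *endpointCache) Close() error                            { return nil }

// Embedding would leak every exported endpointCache method into
// SimpleEndpointer's public method set:
//
//	type SimpleEndpointer struct {
//		endpointCache
//	}
//
// A named field exposes only the methods that are deliberately plumbed through.
type SimpleEndpointer struct {
	cache *endpointCache
}

func (e *SimpleEndpointer) Endpoints() ([]endpoint.Endpoint, error) { return e.cache.Endpoints() }
func (e *SimpleEndpointer) Close() error                            { return e.cache.Close() }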
sd/etcd/instancer.go (outdated):

// Instancer yields instances stored in a certain etcd keyspace. Any kind of
// change in that keyspace is watched and will update the Instancer's Instancers.
type Instancer struct {
	instance.Cache
Likewise to the other comment, I'd feel safer if these weren't embedded but were regular named fields, with necessary methods plumbed through.
sd/zk/instancer.go (outdated):

@@ -40,17 +37,18 @@ func NewSubscriber(c Client, path string, factory sd.Factory, logger log.Logger)
instances, eventc, err := s.client.GetEntries(s.path)
if err != nil {
	logger.Log("path", s.path, "msg", "failed to retrieve entries", "err", err)
	// TODO why zk constructor exits when other implementations continue?
Yep, looks like inconsistency, please feel free to update.
@peterbourgon any other comments?
.gitignore (outdated):

glide.lock
glide.yaml
vendor/
Please remove these, if you don't mind.
coverage.bash (outdated):

@@ -7,7 +7,7 @@
set -e

function go_files { find . -name '*_test.go' ; }
function filter { grep -v '/_' ; }
function filter { grep -v -e '/_' -e vendor ; }
Likewise here.
Just those two minor things. Are you happy with the current state? — Thanks for bearing with me on this incredible journey 😲
Force-pushed from 7f9fba4 to f5bc66e.
@peterbourgon yeah, I think it's good to go. The main Instancer interface is very similar to what we have in Jaeger internally already, so it will be easy to adapt. I added a few more tests to bring coverage of the core functions close to 100%.
Support push model for service discovery
This is a POC to solve #475, #403, and #209, and based on #463/#478.
Main changes:
- examples/profilesvc/client/client.go to demonstrate how the new approach can be used

Some open questions: