Adding etcd #2
Yep, that's pretty much exactly what I was thinking. Also allowing daemons
That would be nice, or to have a cluster of daemons register themselves with a cluster of clients, all using different connection profiles (Comcast). Either way, it would be nice if they could all know about each other easily.
Alright, after looking this over, here is my proposal: create a new package, coordinator (a lightweight service discovery server).

To get things running you would start a coordinator:

    flotilla-coordinator --client-config testrun5-client.json --server-config testrun5-server.json

Start some clients:

    flotilla-client --coordinator http://localhost:81818181/ --flota "testrun5"

Start some servers:

    flotilla-server --coordinator 10.1.1.5 --flota "testrun5"

This is an interesting setup because it should let you do a fleet-based deployment of Docker containers to a cluster on EC2 or wherever (using CoreOS, of course). The flota groups the discrete units' configurations and test results. https://github.com/coreos/fleet

Regardless, the coordinator could also expand to having the metrics streamed to it as well, and to managing how each client/server should use comcast. If you like this path I'll try to get something thrown together; I just didn't want to go too far down the rabbit hole if you were going to hate it. @tylertreat
Are you seeing the coordinator as something that would go in place of etcd? It'd be easy enough to roll something simple, since it basically just needs to listen for a heartbeat from the daemons and add/remove them from the registry when they spin up or die. Just making sure I understand.

I'm trying to understand your example use case, but I'm not sure I completely follow. The clients are still kicking off the benchmarks, right? In this case, you'd be running three different benchmarks? Why are you starting the servers after the clients?

And yeah, having the coordinator aggregate results seems like it could be a good idea. The HdrHistograms can be merged, and eventually I'd like to be able to plot the results.
I understand your confusion now that I have actually walked through your code in more depth. Forget the coordinator for the moment (I was thinking of embedding etcd, but that is a hassle). This would be much simpler:

Start etcd somewhere both the client and the server are going to be able to talk to it:

    ./etcd

Then start the server:

    flotilla-server --coordinator 10.1.1.5 --flota "testrun5"

The server registers itself and establishes a heartbeat (a heartbeat isn't really needed for a first pass). It could do that here: Flotilla/flotilla-server/main.go, line 31 in fa5c030.

Then start the client:

    flotilla-client --coordinator 10.1.1.5 --flota "testrun5"

The client pulls down all active registered "peers"; it could do that here in the client.

It would be easy to expand this out to eventually share configuration details too, but since you are already sending the specifics of the test from the client to the servers directly, there is no reason to change all that just to have them pull it down from etcd.
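One way to lay this out in etcd is to namespace registrations by flota, so a recursive GET on one directory returns exactly the peers for that test run. The key scheme below is an assumption for illustration (the source doesn't specify one); `serverKey` and `peerAddr` are hypothetical helpers, and in the go-etcd client of that era the server would pair them with a TTL'd `Set` while the client does a recursive `Get`:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// serverKey builds the etcd key under which a daemon would register
// itself, grouped by flota, e.g. with the go-etcd client:
//   client.Set(serverKey(flota, addr), addr, ttl)
func serverKey(flota, addr string) string {
	return path.Join("/flotilla", flota, "servers", addr)
}

// peerAddr recovers the host:port from a key returned by a recursive
// GET on /flotilla/<flota>/servers, which is how the client would
// discover its peers.
func peerAddr(key string) string {
	return key[strings.LastIndex(key, "/")+1:]
}

func main() {
	k := serverKey("testrun5", "10.1.1.5:9500")
	fmt.Println(k)           // /flotilla/testrun5/servers/10.1.1.5:9500
	fmt.Println(peerAddr(k)) // 10.1.1.5:9500
}
```

With TTL'd keys, the "heartbeat" is just the server periodically re-`Set`ing its own key; a dead server's key expires and it drops out of the peer list automatically.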
Yeah, that sounds good to me.
K, I'll take a stab at it then.
Initial pass here. Ran into some flotilla-server compilation errors.
When I have everything working I'll do a PR |
Google loves to break their oauth2 lib...
Yeah, it was just too late to deal with it; I'll hack on it later.
Got the client pulling down the IP & port the daemons are setting, all via etcd. Got some cleanup to do and more testing, then I'll do a PR. This fixed the Google oauth2 problem: savorywatt@7be1801
What do you think about using etcd to share configuration for the messaging clusters that are spun up, so each daemon wrapper can pull down the configuration details it needs to know about how to set up the given messaging framework?
What were your thoughts on how you wanted this implemented?