Implement Quota/Rate Limiting #56
So this would require giving Kanali direct access to etcd, or spinning up a separate etcd cluster? Currently, k8s stores all secrets unencrypted in etcd by default (though in 1.7 they can be encrypted using the Provider plugins). You might mention that it is generally a terrible idea to give anything outside of k8s access to etcd, and that users should spin up a separate cluster for this, perhaps using the etcd-operator from CoreOS.
I should clarify some aspects. For all k8s resources (secrets, services, apiproxies, etc.), Kanali would continue to watch the apiserver. However, to persist minimal details about a request, etcd is being proposed. An example protobuf message might look like this:

```proto
// proto2 syntax is implied by the `required` fields.
syntax = "proto2";

message TrafficPoint {
  required string namespace = 1;
  required string proxyName = 2;
  required string keyName   = 3;
}
```

Concerning giving Kanali access to the same etcd that Kubernetes uses, Kanali would use a separate etcd key. This is a common practice seen amongst other Kubernetes add-ons like Flannel and Calico. Of course, nothing prevents someone from bootstrapping another etcd deployment that is Kanali specific, as Kanali just needs to know the endpoints and, potentially, the TLS connection details.

All Kanali instances would watch this etcd key using the etcd gRPC v3 client by CoreOS. Each time a Kanali instance receives traffic and writes it to etcd, all other Kanali instances will know about it and can update their in-memory data structures so that rate limiting and quota policies can be properly enforced.
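As a rough illustration of that watch loop, here is a minimal sketch using the CoreOS etcd v3 client; the key prefix and endpoint are hypothetical, not Kanali's actual layout:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/coreos/etcd/clientv3"
)

func main() {
	// Hypothetical key prefix; Kanali's actual key layout is not specified here.
	const prefix = "/kanali/traffic/"

	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Every Kanali instance watches the shared prefix; a PUT from any
	// instance shows up here, so local in-memory counters stay in sync.
	for resp := range cli.Watch(context.Background(), prefix, clientv3.WithPrefix()) {
		for _, ev := range resp.Events {
			log.Printf("traffic point %s = %s", ev.Kv.Key, ev.Kv.Value)
			// Decode the TrafficPoint protobuf here and update rate-limit state.
		}
	}
}
```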
@frankgreco reusing the Kubernetes etcd is actually discouraged per the Kubernetes documentation.
Also, Calico is actually in the process of moving to storing all of its data in the Kubernetes apiserver directly, via Typha; this has been on their roadmap for a while now. Regarding Flannel, their documentation explicitly says that using the Kubernetes apiserver, or a discrete etcd, is preferable.

Please consider using CRDs in the Kubernetes apiserver instead of etcd directly if feasible. I love the way this project is shaping up, but we use RBAC heavily and would not give an ingress access to all secrets in plaintext (by giving it direct access to etcd).
@SEJeff I did not know that Calico was moving to this! Great find! I see two main options here:

Use a CRD

Pros:
- Everything stays in the Kubernetes apiserver, so existing RBAC applies and no extra infrastructure is needed.

Cons:
- Lightweight details about a traffic point don't lend themselves well to a CRD, and every incoming request would need to create or update one.
- Communication with the apiserver would be json+http rather than protobuf.

Use a separate instance of etcd

Pros:
- etcd already uses raft for consensus, and its gRPC v3 client supports efficient watches.

Cons:
- Added complexity of deploying and operating a separate etcd cluster.

I'm leaning towards the etcd option but would love to get your opinion.
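To make the etcd option concrete, here is a minimal sketch of incrementing a quota counter atomically with an etcd compare-and-swap transaction, so concurrent Kanali instances never lose updates; the key layout and function name are hypothetical:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strconv"
	"time"

	"github.com/coreos/etcd/clientv3"
)

// incrementQuota bumps a per-key request counter via a compare-and-swap
// transaction. If another instance wins the race, the loop retries with
// the fresh value, so no increment is ever lost.
func incrementQuota(cli *clientv3.Client, key string) (int64, error) {
	for {
		resp, err := cli.Get(context.TODO(), key)
		if err != nil {
			return 0, err
		}
		var cur int64
		var cmp clientv3.Cmp
		if len(resp.Kvs) == 0 {
			// Key does not exist yet; succeed only if it still doesn't.
			cmp = clientv3.Compare(clientv3.CreateRevision(key), "=", 0)
		} else {
			cur, _ = strconv.ParseInt(string(resp.Kvs[0].Value), 10, 64)
			// Succeed only if nobody modified the key since we read it.
			cmp = clientv3.Compare(clientv3.ModRevision(key), "=", resp.Kvs[0].ModRevision)
		}
		txn, err := cli.Txn(context.TODO()).
			If(cmp).
			Then(clientv3.OpPut(key, strconv.FormatInt(cur+1, 10))).
			Commit()
		if err != nil {
			return 0, err
		}
		if txn.Succeeded {
			return cur + 1, nil
		}
	}
}

func main() {
	cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}, DialTimeout: 5 * time.Second})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	n, err := incrementQuota(cli, "/kanali/quota/default/my-proxy/my-key")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("requests so far:", n)
}
```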
Regarding json+http, protobuf support for comms with the apiserver landed in Kubernetes 1.3. The magic is just passing the right content-type header.

Also, I'm curious about the cons for lightweight details about a traffic point not lending itself to a CRD. I'd say Endpoint resources in k8s should never be user editable (except when using headless services to front non-k8s services). Forgive me for being dense, but I don't see how this is terribly different from endpoints. That said, if every single incoming request needs to update / create a CRD, that is indeed sad. I guess batching updates wouldn't be good enough, would it?

Regarding the complexity of using an external etcd, the etcd-operator is really good, and would be how I would suggest you recommend users go if they use Kanali + etcd.

Would you consider making the storage mechanism pluggable? Then you could implement both eventually, giving the user more flexibility. Just add decent unit tests so you don't break things!
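For reference, a minimal sketch of requesting protobuf from the apiserver with client-go (exact field locations have moved between releases, so treat this as illustrative):

```go
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	// Ask the apiserver for protobuf, falling back to JSON for resources
	// (such as CRDs) that have no protobuf encoding.
	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	cfg.ContentType = "application/vnd.kubernetes.protobuf"

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svcs, err := client.CoreV1().Services("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("listed %d services over protobuf", len(svcs.Items))
}
```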
I have been pulling my hair out for a while over why protobuf wouldn't work - if all I was missing was a header..... :) I agree that the Endpoints object does account for a lot of noise from the apiserver; however, there'd be exponentially more noise if there was one for every request. If the requests were batched, then you wouldn't have consensus amongst the different Kanali instances. I like the idea of having it pluggable. I can make everything conform to an interface and then have multiple implementations of that common interface.
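A minimal sketch of what that pluggable interface could look like (names are illustrative, not Kanali's actual API):

```go
package traffic

import "context"

// TrafficPoint mirrors the protobuf message proposed earlier in the thread.
type TrafficPoint struct {
	Namespace string
	ProxyName string
	KeyName   string
}

// Store is a hypothetical pluggable backend; etcd, NATS, or Redis
// implementations would all satisfy it.
type Store interface {
	// Record persists a single traffic point.
	Record(ctx context.Context, tp TrafficPoint) error
	// Watch streams traffic points written by any Kanali instance, so
	// local rate-limit counters can be kept consistent.
	Watch(ctx context.Context) (<-chan TrafficPoint, error)
	Close() error
}
```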
Use the source, Luke!^WFrank. That should work a bit better for you; glad to help where I can. In hindsight, it doesn't make much sense to use a CRD for this, just like you said; my only real suggestion is to document setting up a separate etcd cluster. If you do indeed make the storage backend pluggable, also consider nats, which we use. Keep kicking ass.
+1 for pluggability. Depending on what infrastructure you already have in place, you might use etcd, Redis, or whatever makes sense. On the initial raft train of thought, it should be relatively trivial to have one of those backends use www.serf.io so that things work out of the box.
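For instance, a backend could gossip traffic points peer-to-peer with Serf's Go library; a minimal sketch, with a hypothetical seed address and event name:

```go
package main

import (
	"log"

	"github.com/hashicorp/serf/serf"
)

func main() {
	// Receive membership changes and user events from the gossip cluster.
	events := make(chan serf.Event, 16)

	conf := serf.DefaultConfig()
	conf.EventCh = events

	cluster, err := serf.Create(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Hypothetical seed node; in practice this would be another Kanali pod.
	if _, err := cluster.Join([]string{"kanali-0.kanali:7946"}, true); err != nil {
		log.Printf("join failed (possibly the first node): %v", err)
	}

	// Broadcast a traffic point to all peers; coalescing is disabled so
	// individual requests are not dropped when counting.
	payload := []byte(`{"namespace":"default","proxyName":"my-proxy","keyName":"my-key"}`)
	if err := cluster.UserEvent("traffic-point", payload, false); err != nil {
		log.Fatal(err)
	}

	for e := range events {
		log.Printf("serf event: %s", e)
	}
}
```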
Quota and Rate Limiting features are currently in alpha. In order to make them stable, the following things need to hold true:
I was beginning to go down a path where Kanali would basically implement the raft algorithm to accomplish this. This, however, would be a great amount of effort, and I was worried I'd be reinventing the wheel when I could just use something already there. etcd, the default Kubernetes datastore, already uses raft as its consensus algorithm. Here is the current design approach: