
Add Support for Managing the Envoy Proxy Service #648

Closed
danehans opened this issue Oct 26, 2022 · 10 comments · Fixed by #1115

@danehans
Contributor

As of v0.2.0, the Kube Infra Manager manages a Service of type LoadBalancer to perform L4 load-balancing across the managed Envoy proxy fleet. Some users may wish to use Envoy for internal-only proxying, to configure various load-balancer settings, and so on. Envoy Gateway provides the EnvoyProxy API type for configuring the Envoy proxy infrastructure. This API should be expanded to support common Service configuration requirements.
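
For concreteness, a minimal sketch of the kind of Service configuration users currently have to apply by hand, e.g. for internal-only proxying on AWS; the selector and ports are illustrative, and the annotation shown is the in-tree AWS cloud-provider one, used here only as an example:

# Hypothetical hand-edited version of the managed Envoy Service,
# made internal-only via the in-tree AWS cloud-provider annotation.
apiVersion: v1
kind: Service
metadata:
  name: envoy
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: envoy          # illustrative selector
  ports:
  - name: http
    port: 80
    targetPort: 8080    # illustrative port mapping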

@danehans danehans added kind/enhancement New feature or request help wanted Extra attention is needed area/api API-related issues area/config Issues related to config management, e.g. Config Manager, Config Sources, etc. area/infra-mgr Issues related to the provisioner used for provisioning the managed Envoy Proxy fleet. provider/kubernetes Issues related to the Kubernetes provider labels Oct 26, 2022
@danehans danehans added this to the Backlog milestone Oct 26, 2022
@bmetzdorf

Thanks @danehans. I'm specifically interested in integration with third-party cloud providers, beginning with AWS and their NLB. Annotations like these will be needed on the Service:

service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
service.beta.kubernetes.io/aws-load-balancer-type: "external"

Since Envoy Gateway will handle TLS and HTTP(S) itself, we just need TCP and UDP traffic forwarded by the NLB. To avoid the NodePort double-hop penalty, we could use NLB IP-based targeting and Proxy Protocol to retain client IP information (UDP/QUIC TBD).

@arkodg
Contributor

arkodg commented Oct 26, 2022

@bmetzdorf this should be addressed with #377

@danehans
Contributor Author

Currently, an ELB gets created for the Envoy service when running EG in AWS. We should probably set service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp to put the ELB in TCP mode and service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" to enable proxy protocol by default (when running in AWS). Or should we default to NLB with the annotations specified by @bmetzdorf? As I mentioned in #377, I prefer creating an API for managing the Envoy network endpoints instead of passing service annotations. @LukeShu @skriss @youngnick @AliceProxy @Xunzhuo thoughts?
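
For reference, the defaults being suggested here would amount to the following fragment of the managed Service's metadata (annotation names and values as given above):

metadata:
  annotations:
    # Put the classic ELB into TCP (L4) mode.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    # Enable Proxy Protocol on all ELB ports so Envoy can recover client IPs.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"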

@danehans
Contributor Author

I've created a design doc to help facilitate discussion for supporting this use case. @bmetzdorf @arkodg @youngnick @skriss @LukeShu @AliceProxy @Xunzhuo PTAL.

@arkodg
Contributor

arkodg commented Nov 3, 2022

@danehans took a look at this review; I was unable to leave comments in there due to limited permissions, so I'm leaving them here. The doc proposes creating high-level load-balancer providers such as aws:

  • I worry that this level of vendor specificity will lead to the project supporting and maintaining many such load-balancer types/providers, and that a lot of time might get spent managing such driver code
  • Taking the aws case, there are many user-provided annotations, such as service.beta.kubernetes.io/load-balancer-source-ranges, that EG will not be able to compute beforehand

@danehans
Contributor Author

danehans commented Dec 1, 2022

> took a look at this review, was unable to leave comments in there due to limited permissions

Anyone should now be able to comment in the doc.

> I worry such level of vendor specificity will trigger the project supporting and maintaining many such loadbalancer types/providers and a lot of time might get spent managing such driver code

It does require more maintenance, but it provides a more consistent UX. Otherwise, users can freely pass annotations that may be invalid. We could validate the annotations, but that can turn into a difficult matrix to support.

> taking the aws case, there are many user inputted annotations such as service.beta.kubernetes.io/load-balancer-source-ranges that EG will not be able to compute beforehand

EG can perform some validation for this use case. For example, if a user specifies an invalid range such as 1.2.3.4/34, EG would not apply the config and would surface an EnvoyProxy status condition stating that the provided CIDR is invalid.
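
A sketch of that validation flow; the condition type, reason, and message below are hypothetical illustrations, not an existing EG API:

# User-provided annotation with an invalid CIDR (/34 exceeds IPv4's 32-bit prefix):
metadata:
  annotations:
    service.beta.kubernetes.io/load-balancer-source-ranges: "1.2.3.4/34"

# Hypothetical EnvoyProxy status condition EG could surface instead of applying the config:
status:
  conditions:
  - type: Accepted
    status: "False"
    reason: InvalidServiceAnnotation
    message: '"1.2.3.4/34" is not a valid CIDR: prefix length exceeds 32'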

@danehans
Contributor Author

danehans commented Dec 1, 2022

@bmetzdorf do you have any thoughts or preferences on the approach to resolving this issue?

@bmetzdorf

Hi @danehans and @arkodg,

I would lean a bit towards not managing and validating these vendor-specific configurations. There's a good argument to be made that having an API for the attributes of vendor-specific LoadBalancer types is technically the cleaner approach, but there's also a very good argument that this could (and probably would) lead to proliferation and a lot of maintenance effort for perhaps not much gain. If a vendor decided to support a new annotation, Envoy Gateway users would have to wait for a new release if we did not allow passing annotations through.

I can imagine (although I don't know) that the original designers of the K8s API (in general, but also specifically for Ingress and Service) faced the same question: should we include every possible configuration from every vendor as a first-class citizen in the API, and maintain and version all of them, or should we use annotations as a generic mechanism instead?

@youngnick
Contributor

I think there's a tradeoff here that we should be explicit about.

As @bmetzdorf and @arkodg noted above, having fully-fledged fields in a CRD has the disadvantage that adding support for new features requires an EG release. The advantage of fully-fledged fields is that EG can validate the information or use it to trigger other behavior. Blindly copying annotations around places a support burden on users.

On the other hand, having a map[string]string of annotations to add to loadbalancer services has a couple of downsides:

  • It becomes difficult to check for annotations that may interact poorly with other EG features
  • It ties the implementations strongly to LB services. What happens if, in the future, you can create a Gateway with a TCP route to replicate the functionality of a cloud-provider LB service? We have no insight into what the fields are, so we are unable to help.

The advantages here are the speed of development and the level of control given to users.

I don't have strong feelings either way here, except for the note that annotations are very sticky. Once people are using them, we will never be able to remove them. They're forever.

@arkodg
Contributor

arkodg commented Feb 6, 2023

@youngnick EG would provide an API for the user to append annotations to the managed Envoy Proxy Service's annotations field, so EG would not infer any meaning from the actual value of an annotation.
The only thing we need to be careful about here is to ensure a user cannot overwrite EG-generated annotations (annotations are one example; the same applies to other non-scalar fields like labels).
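
A sketch of the shape such an API could take; the fields under spec.provider.kubernetes are illustrative of the direction discussed here, not a settled schema:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        # User-supplied annotations appended to the managed Envoy Service;
        # EG-generated annotations would take precedence and could not be overwritten.
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
          service.beta.kubernetes.io/aws-load-balancer-type: "external"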

@arkodg arkodg modified the milestones: Backlog, 0.4.0-rc.1 Feb 15, 2023
arkodg added a commit to arkodg/gateway that referenced this issue Mar 8, 2023
This PR allows the user to add annotations to the
managed Envoy service as well as the Envoy pods using
the EnvoyProxy resource

* Fixes envoyproxy#377

* Closes envoyproxy#648

Signed-off-by: Arko Dasgupta <arko@tetrate.io>
@arkodg arkodg self-assigned this Mar 9, 2023
@arkodg arkodg removed the help wanted Extra attention is needed label Mar 9, 2023
@Xunzhuo Xunzhuo closed this as completed in 38470e0 Mar 9, 2023