[EKS] [request]: allow to configure metricsBindAddress for kube-proxy #657
Comments
Additional context illustrating the problem: https://github.com/helm/charts/tree/master/stable/prometheus-operator#kubeproxy
Quick hotfix:
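A minimal sketch of this kind of hotfix, assuming the cluster ships the default kube-proxy-config ConfigMap and that editing it in place is acceptable:

```sh
# Change the metrics bind address in the EKS-provided kube-proxy config
# (resource names assume the EKS defaults; adjust if yours differ).
kubectl -n kube-system edit configmap kube-proxy-config
#   metricsBindAddress: 127.0.0.1:10249   ->   metricsBindAddress: 0.0.0.0:10249

# kube-proxy only reads its config file at startup, so restart the pods.
kubectl -n kube-system rollout restart daemonset kube-proxy
```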
So what exactly prevents you from changing this configmap?
Nothing, except that it's entirely possible it'll get overwritten at some point in the future. EKS has no documentation on what's supported and unsupported when it comes to this kind of modification, so I'd like to know what kind of guarantees there would be that it won't be stomped on in the future.
The EKS cluster upgrade documentation explicitly instructs you to manually update kube-proxy, coredns and aws-node after a control plane upgrade. I highly doubt that AWS will touch these components in an automated fashion. We have been using custom configuration for all three (kube-proxy, coredns and aws-node) since we started using EKS and have never had a problem with this.
That's the daemonset, not the configuration. While I agree it's unlikely to be an issue, having previously worked on commercial Kubernetes distributions, making assumptions about what's safe to modify isn't always the best choice. I suppose I can always open a ticket.
I really need this feature!
Manually editing configurations is a workaround, not a solution. All configs should be coded and recorded in source control. Unless I'm missing something, it appears EKS users have no control over kube-proxy configuration, except for manual edits.
I think @devkid is correct, but we could use confirmation from the EKS team that automated changes will not be made to the kube-proxy configuration. Here is the problem I encountered when setting metricsBindAddress: when I tried to edit metricsBindAddress I found that I did not have a kube-proxy-config ConfigMap in my cluster. Here is my proposed solution: have a git repo with all of the baseline EKS manifests (including kube-proxy).
Update: Copying the DaemonSet and ConfigMaps from the newer cluster to the old one works. It was necessary to change one line in the kube-proxy ConfigMap's embedded kubeconfig, the API server endpoint: server: https://<foo>.<bar>.<region>.eks.amazonaws.com
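For anyone repeating this, a sketch of how to look up the right endpoint for the destination cluster (the cluster name is a placeholder); the kube-proxy ConfigMap embeds a kubeconfig whose server line must point at the cluster the manifests are applied to:

```sh
# Look up the API endpoint of the cluster you are applying the copied manifests to.
aws eks describe-cluster --name my-cluster \
  --query "cluster.endpoint" --output text

# Put that value into the copied kube-proxy ConfigMap's kubeconfig before applying:
#   server: https://<your-cluster-endpoint>
kubectl -n kube-system edit configmap kube-proxy
```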
Just to add, I have a cluster that was last upgraded in late Jan to version 1.14 but initially created in October at 1.10. It does not seem to have the kube-proxy-config ConfigMap.
100% agree with @damscott. Good write-up. There's a second ConfigMap (kube-proxy-config) involved on newer clusters as well.
Note that in my investigation of the clusters in my org (originally created on a wide range of EKS versions from 1.11 to 1.15), any EKS clusters created from 1.12 onwards use the kube-proxy-config ConfigMap. Any EKS clusters created up to 1.12 also have a slightly different kube-proxy DaemonSet. If you created your cluster pre-EKS 1.12, to be able to pass metricsBindAddress you either need to "upgrade" your daemonset to take in the kube-proxy-config configmap (which you'll need to create manually), or pass it in as a command-line argument. Unfortunately this "upgrade" of kube-proxy to the new approach is not documented.
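A sketch of the two routes described above, with resource names and the config key assumed to match what newer EKS clusters ship; the real ConfigMap carries more fields than this, so treat it as illustrative rather than the documented upgrade path:

```sh
# Route 1: give kube-proxy a KubeProxyConfiguration via a kube-proxy-config
# ConfigMap (the DaemonSet must also mount it and pass --config=<path>).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy-config
  namespace: kube-system
data:
  config: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    metricsBindAddress: 0.0.0.0:10249
EOF

# Route 2: keep the old-style DaemonSet and add the flag to its args instead:
#   --metrics-bind-address=0.0.0.0:10249
kubectl -n kube-system edit daemonset kube-proxy
```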
@damscott https://github.com/aws/eks-charts/tree/master/stable/aws-vpc-cni has some Helm charts for keeping aws-vpc-cni up to date. Would be great if they added similar charts for kube-proxy and coredns.
I haven't verified this, but CoreDNS already has a Helm chart, so it should just be a matter of getting the right parameters for that to match the default-installed version, then adding any changes you need.
I also have the same issue with the most recent EKS version (Kubernetes 1.16) and the prometheus-operator stable Helm chart. The root cause of the problem is that EKS likely ships an outdated default for the metricsBindAddress config value.
See also
This is still an issue with EKS 1.17. The metricsBindAddress still defaults to 127.0.0.1:10249. Would it be much of an issue to move it to 0.0.0.0:10249?
Original creator of the issue here. I thought I'd chime in on how I handle this these days. At this point I just maintain a kustomize base for kube-proxy that I apply to all clusters immediately after deploying, and it's managed via ArgoCD. You could do the same with a Helm chart or similar. At this point I think AWS has made it quite clear that it's unlikely they'll manage components inside the cluster, at least in the short to medium term. If anything, I'd bet that if they were to begin managing "core" components like kube-proxy, aws-node/cni, etc., you would opt in to EKS managing these components for you, and even then, it's likely to only be in new EKS clusters. This is just a guess. Their documentation provides instructions on managing the versions of various components in the cluster, so it makes me think it's unlikely there will be any automated management of this stuff for quite some time, so there's little risk in managing the versions and configuration of these resources yourself. As others suggested, I think a more likely avenue for supporting this feature is aws/eks-charts#156, #923, and the other issues where you would fully manage kube-proxy yourself.
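A rough sketch of what that setup can look like, assuming a plain kustomize directory tracked in git (the real base would carry the full set of kube-proxy manifests, not just the ConfigMap):

```sh
# Directory kept in git and synced by ArgoCD (or applied by hand):
#
#   kube-proxy/
#   ├── kustomization.yaml                 # resources: [configmap-kube-proxy-config.yaml]
#   └── configmap-kube-proxy-config.yaml   # copy of the EKS ConfigMap with
#                                          # metricsBindAddress: 0.0.0.0:10249
#
kubectl apply -k kube-proxy/
kubectl -n kube-system rollout restart daemonset kube-proxy
```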
Good news! EKS now manages components inside the cluster via EKS cluster addons. However, they have zero config. Can we please have addon config that would allow for things like this?
I made a dumb move and enabled the kube-proxy addon. It failed to install due to the modified kube-proxy ConfigMap. So to get the cluster back, I re-enabled the addon with the config-overwrite option checked. That installed kube-proxy again, but restored metricsBindAddress to the default. I hope AWS will soon add some option to keep the ConfigMap untouched, or will just change the default value of metricsBindAddress. Otherwise I'm putting the kube-proxy manifests into the GitOps repository like I should have done already and forgetting about all those addons.
Same experience here. Deploying an addon without overwrite enabled results in breaking everything.
Same boat here - switched to the add-on, lost my monitoring :( The add-ons are a great step forward for keeping these sorts of resources up to date, so I don't want to revert back to maintaining them manually. It would be really good to have a way to configure the add-on.
I'll second @Pluies here. Recently I tried this on my project and lost my monitoring. I see a couple of comments saying that AWS EKS Addons don't update the kube-proxy config on your cluster if you modify it, but unfortunately they do. It would be really good if AWS could make this configurable. Update as of 03/08/2021: I was checking this with the AWS support team and it seems this issue is now in the proposed stage, so I think we might get some solution on this from AWS soon.
I have to echo the above, especially as there is no way to roll back this change in production clusters without downtime and significant remedial action. In the meantime, if AWS are reading this - please remove this addon as the default way to upgrade a cluster, or at least include a warning. It's possible, as happened with us, that the upgrade appears fine on a test cluster only to break later in production, and rolling back isn't possible/practical.
Correct, kube-proxy on EKS v1.22 clusters (either unmanaged kube-proxy installed with a new 1.22 cluster, or the managed add-on) now exposes metrics by default. If you are upgrading an older cluster that still uses the unmanaged add-on, the manifest will not change.
What about existing clusters that are upgraded to 1.22?
Thanks @stevehipwell and mikestef9 for the update!
@mikestef9 I was going to say that the documentation for this was confusing as it called out "Amazon Managed Service for Prometheus" but you've already fixed that so thank you.
@mikestef9 (and relevant to @julianxhokaxhiu 's question): If you manually update the configmap on an older cluster (and/or an older-but-upgraded-to-1.22 cluster) will it still revert the configmap change back to the localhost bind address?
@TBBle for unmanaged add-ons, unless you know differently, the manifests aren't modified once the cluster has been provisioned. Or are you asking about managed add-ons?
I was referring to #657 (comment), #657 (comment), and #657 (comment). They're talking about managed addons. Reading #657 (comment) again, perhaps I misparsed it, and the change applies to the kube-proxy managed addon on all clusters; it's only the unmanaged add-ons where the cluster version is relevant, because it chooses which config map to go with initially and doesn't touch it again.
That's right. If you upgrade to the 1.22 kube-proxy managed add-on on an existing cluster upgraded to 1.22, it will contain the change to expose metrics.
What about old versions? I'm on version 1.21 - do I have to upgrade in order to monitor kube-proxy?
I tried editing the configMap, but the change didn't stick.
@zekena2 did you check if the field was being managed by EKS? AFAIK you can't use the managed kube-proxy add-on and change this value. You either need to manage it yourself or upgrade to v1.22.
Yes, it seems all the config-related stuff is managed by EKS, which basically means that nothing is configurable. Regardless of the metrics problem, I should be able to configure the addon as I want.
@zekena2 you could install the managed add-on for v1.21 and then disable it before patching. When you upgrade to v1.22 you can just re-enable the managed add-on and it should just work.
Thanks @stevehipwell, I got it working that way. I removed the add-on with "preserve on cluster" checked and then edited the configMap, and the change was preserved. I still think it's not an ideal way to configure the addon.
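In CLI terms, that workaround looks roughly like this; the console's "preserve on cluster" checkbox corresponds to the --preserve flag here, and the cluster name is a placeholder:

```sh
# Remove the managed add-on but leave kube-proxy running on the cluster.
aws eks delete-addon --cluster-name my-cluster --addon-name kube-proxy --preserve

# With EKS no longer reconciling the add-on, the edit sticks.
kubectl -n kube-system edit configmap kube-proxy-config   # metricsBindAddress: 0.0.0.0:10249
kubectl -n kube-system rollout restart daemonset kube-proxy
```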
According to https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html, starting from Kubernetes 1.22+ on EKS, the kube-proxy metricsBindAddress should default to 0.0.0.0:10249.
However, in both our clusters (prod is on 1.22 while staging is on 1.23) the metricsBindAddress is still set to 127.0.0.1:10249.
I tried installing the EKS add-on manually but that didn't change anything either.
Were those clusters installed at an earlier version and upgraded? If they were upgraded from a version before the change, then they won't have changed the setting as part of the upgrade. See #657 (comment)
Yes, they were upgraded from 1.21; however, we have only been using the managed add-on.
When I upgraded my clusters, I removed the add-on first and then re-added it via the AWS EKS web console, and it worked perfectly. Prometheus is able to fetch metrics from kube-proxy now. If you just upgrade the add-on it probably won't touch the config, so try the remove-and-re-add approach.
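The same remove-and-re-add flow should also be possible from the CLI, roughly as follows (cluster name is a placeholder); re-creating the add-on lays down the current default manifest, which on 1.22+ exposes metrics:

```sh
# Drop the managed add-on, keeping the existing kube-proxy pods in place...
aws eks delete-addon --cluster-name my-cluster --addon-name kube-proxy --preserve

# ...then re-create it, letting EKS overwrite the old configuration with the
# current defaults for the cluster's Kubernetes version.
aws eks create-addon --cluster-name my-cluster --addon-name kube-proxy \
  --resolve-conflicts OVERWRITE
```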
I have the managed addon, and upgrading to 1.22 resulted in the config still pointing at 127.0.0.1:10249.
@andrew-pickin-epi see the reply just above yours.
My point is you don't need to remove and re-apply. Using
Quick fix:
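Judging by the reply below, the fix is presumably along these lines, i.e. rewriting the bind address in the live ConfigMap (a sketch assuming the default EKS resource names and the stock 127.0.0.1:10249 value, not the original commands):

```sh
# Rewrite the bind address in the live kube-proxy config and restart the pods.
kubectl -n kube-system get configmap kube-proxy-config -o yaml \
  | sed 's/metricsBindAddress: 127.0.0.1:10249/metricsBindAddress: 0.0.0.0:10249/' \
  | kubectl apply -f -
kubectl -n kube-system rollout restart daemonset kube-proxy
```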
Indeed this resolved my issue as well: when updating from EKS 1.22 to 1.23 the proxy config still pointed to 127.0.0.1:10249, so after updating the ConfigMap the metrics were reachable again.
Community Note
Tell us about your request
I want to be able to monitor kube-proxy with Prometheus, but cannot because, by default, the metricsBindAddress is set to 127.0.0.1:10249, meaning it isn't accessible outside of the pod.
Which service(s) is this request for?
EKS
Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
I want to monitor kube-proxy. I cannot do this unless I can reconfigure kube-proxy in some way.
Are you currently working around this issue?
I am not.
Additional context
You can see the kube-proxy config created by EKS in the kube-system namespace via `k get cm -n kube-system kube-proxy-config -o yaml`.
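For completeness, a quick way to check what a given cluster currently has and to verify the change afterwards (the node IP is a placeholder; kube-proxy uses host networking, so the metrics port is served on the node itself):

```sh
# Current setting:
kubectl -n kube-system get configmap kube-proxy-config -o yaml | grep metricsBindAddress

# After switching to 0.0.0.0:10249, metrics are reachable on each node:
curl -s http://<node-ip>:10249/metrics | head
```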