Enable Kubernetes objects to be reported just once in a cluster #3274
Conversation
I think this is acceptable and not worse than the other solutions.
A few comments:
```diff
@@ -320,7 +322,7 @@ func setupFlags(flags *flags) {
 	flag.StringVar(&flags.probe.kubernetesClientConfig.User, "probe.kubernetes.user", "", "The name of the kubeconfig user to use")
 	flag.StringVar(&flags.probe.kubernetesClientConfig.Username, "probe.kubernetes.username", "", "Username for basic authentication to the API server")
 	flag.StringVar(&flags.probe.kubernetesNodeName, "probe.kubernetes.node-name", "", "Name of this node, for filtering pods")
-	flag.UintVar(&flags.probe.kubernetesKubeletPort, "probe.kubernetes.kubelet-port", 10255, "Node-local TCP port for contacting kubelet")
+	flag.UintVar(&flags.probe.kubernetesKubeletPort, "probe.kubernetes.kubelet-port", 10255, "Node-local TCP port for contacting kubelet (zero to disable)")
```
If I understood properly, after this PR:
If that's correct I think it would be more intuitive to something like:
(We could optionally drop the …) I think that exposing whether it tags or not is an implementation detail which may be confusing for the end user; indicating what the probe is used for would be more useful.
```diff
@@ -552,7 +559,7 @@ func (r *Reporter) podTopology(services []Service, deployments []Deployment, dae
 	}

 	var localPodUIDs map[string]struct{}
-	if r.nodeName == "" {
+	if r.nodeName == "" && r.kubeletPort != 0 {
 		// We don't know the node name: fall back to obtaining the local pods from kubelet
 		var err error
 		localPodUIDs, err = GetLocalPodUIDs(fmt.Sprintf("127.0.0.1:%d", r.kubeletPort))
```
This enables us to run Kubernetes probing on one node for the whole cluster.
This gives us the option of disabling the kubelet connection, by setting the port to zero.
So that pods can be reported centrally, find the pod host ID from their child containers.
I have rebased and updated the flag settings in line with what @2opremio suggested. Now it is:
Talking directly to kubelet is now disabled in both modes, although the code remains; removing it would be a separate PR.
Now you specify a role instead of controlling the internal behaviour
I tested this again in our staging cluster; backwards-compatibility is fine. Impact on our staging cluster was about a 10% reduction in CPU usage by probes. In bigger clusters the net impact should be better. I think this is good to go now.
Are there followup issues/PRs for:
#3242 is the latter. Yes, we are very keen to update the cloud.weave.works config, but we need to do a Scope release first, which was waiting on a review here.
We stop the per-host probes talking to Kubernetes and run an extra Deployment with one more probe process to collect all information for the cluster, which is less resource-intensive overall. This feature was added at #3274
This set of changes allows you to configure the probe on each node with Kubernetes probing disabled, and run one extra probe with Kubernetes enabled and processes, containers, etc., disabled. This is quite simple to arrange with one DaemonSet and one Deployment.
Benefits:
Disappointingly, the CPU-usage benefit wasn't huge when I tried it in our staging cluster, but I didn't spend long investigating why.
It has one disruptive change: pods never get tagged with a host ID, because a probe reporting Kubernetes from just one node in the cluster doesn't know what host ID has been given to any other node. The rendering code is changed to find the host ID on a child container node instead.
Alternatives considered to the above change: