Removed kubelet port flag. Node name now always needed from env/flag #3754
Conversation
Force-pushed 97cbdd3 to 19fae5b
Any pointers on why the integration tests aren't going through? It seems they can't resolve some files in vendor, and can't resolve a symbol in the docker package, "namespaceIPAddress".
Thanks for the PR! Integration tests are turned off on forks; see #2192. I have pushed your fork to the weaveworks/scope repo so the tests will now re-run.
In issue #3242 I noted:
Could you say what you did on those points? What happens if they don't supply a node name? There are now two modes to run the Scope probe with Kubernetes: the default every-node-captures-everything, or via
The nodeName flag can be provided manually and, if not, Scope will attempt to pull it from the KUBERNETES_NODENAME env variable. I think right now the reporter that calls the podTopology method will only get added if the kubernetes role is cluster.
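A minimal sketch of that fallback logic, not the actual probe wiring: the flag name `kubernetes.nodename` is illustrative, only the KUBERNETES_NODENAME env variable comes from the discussion above.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	// Illustrative flag name; the real probe's flag may differ.
	nodeName := flag.String("kubernetes.nodename", "", "name of the node this probe runs on")
	flag.Parse()

	name := *nodeName
	if name == "" {
		// Fall back to the env variable mentioned in the thread.
		name = os.Getenv("KUBERNETES_NODENAME")
	}
	if name == "" {
		fmt.Fprintln(os.Stderr, "warning: no node name supplied; local-pod filtering will be disabled")
	}
	fmt.Println("node name:", name)
}
```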
probe/kubernetes/reporter.go
Outdated
log.Warnf("No node name and cannot obtain local pods, reporting all (which may impact performance): %v", err) | ||
} | ||
// filter out non-local pods: we only want to report local ones for performance reasons. | ||
if r.nodeName == "" { |
I think, on rereading this, that the nodeName == "" check should be inside rather than outside the WalkPods function call. Right now this does not allow a cluster agent to get the node for any pod. If the check is inside, execution will move on to the pods.AddNode() call, which will attach a node to each pod.
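A sketch of the suggested reordering, with stand-in types: the real Reporter, Pod, WalkPods, and AddNode in probe/kubernetes look different, so treat this purely as an illustration of the control flow, not the actual code.

```go
package kubesketch

// Stand-in types for illustration only.
type Pod struct {
	UID      string
	NodeName string
}

type Reporter struct {
	nodeName string
}

// localPods stands in for the body of podTopology.
func (r *Reporter) localPods(all []Pod) []Pod {
	var out []Pod
	for _, p := range all { // stands in for client.WalkPods(func(p Pod) error { ... })
		// The nodeName check lives inside the walk: a cluster agent with an
		// empty nodeName keeps every pod and still reaches the AddNode step.
		if r.nodeName != "" && p.NodeName != r.nodeName {
			continue // skip non-local pods for performance
		}
		out = append(out, p) // stands in for pods.AddNode(p.GetNode(...))
	}
	return out
}
```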
Changed the order of this check in the latest commit.
It checks that the mode is not host, but there are three possibilities: host, cluster, or neither (kept for backwards compatibility, since using a role requires changing the way you deploy the probe).
@bboreham I'm assuming the "neither" role is the out-of-cluster deployment? Or is there a separate deployment mode for Kubernetes? If it's the out-of-cluster mode, then this check for kubelet won't have any impact. The code as-is will only fire for kubernetesRole=Cluster, and when it does it will add a node name for each pod in the list.
The three possibilities are:
I believe the change you have made will impact the first two. Still, we don't need to argue about theoreticals; my question is which of these have you tested, and what does it do when the node name is not supplied?
I've manually tested the named kubernetesRoles. Like you said, in the host role case this method isn't called and has no effect. In the cluster role case, WalkPods will run the addNode method for every pod if no nodeName is passed to it.
I'm not sure how to approach the first role, i.e. no role given. Is this still supported as a run mode even with new changes to the code such as this one? In the other thread there seem to be explicit calls for removing the kubelet code. If this is still supposed to support per-node reporting, maybe we can check the flags and log a warning at setup time? Do you think this is a reasonable way to signal to the user, or should there be something stronger than a log.Warn()?
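Purely to illustrate the setup-time check being proposed here: the flag and variable names below are hypothetical, not the probe's real configuration; only the idea of warning rather than failing comes from the thread.

```go
package main

import (
	"flag"
	"log"
	"os"
)

func main() {
	// Hypothetical flag names for the sketch.
	role := flag.String("kubernetes.role", "", "host, cluster, or empty for the legacy per-node mode")
	nodeName := flag.String("kubernetes.nodename", "", "node this probe runs on")
	flag.Parse()

	if *nodeName == "" {
		*nodeName = os.Getenv("KUBERNETES_NODENAME")
	}
	if *role == "" && *nodeName == "" {
		// Keep running; just make the degraded behaviour
		// (reporting all pods or none) visible to the user.
		log.Printf("warning: no kubernetes role or node name given; pod reports may include all pods")
	}
}
```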
Thanks for doing those tests. I think we have to expect people are still running Scope as a single DaemonSet with a blank role. Currently it works, without calling kubelet, so long as a node name is supplied. Logging a warning would be fine, together with either reporting no pods or all pods. The main thing I want to ensure is that it doesn't crash.
I've pushed a commit to your branch adding a warning.
Fix for issue #3242. The kubelet port flag is removed from the spec, and tests relying on mocked versions of the getlocalpods method have been changed to no longer rely on it. The kubelet API files, which used the insecure methods, have been removed.