
no connections between containers when probe.processes=false #2586

Open
rade opened this issue Jun 12, 2017 · 9 comments
Labels
accuracy: incorrect information is being shown to the user; usually a bug
bug: broken end user or developer functionality; not working as the developers intended it
k8s: pertains to integration with Kubernetes

Comments

@rade
Member

rade commented Jun 12, 2017

In a kubernetes cluster, containers do not have an associated IP address as far as docker is concerned. We rely purely on connection tracking at the process level to determine connectivity for containers, so when such tracking is disabled with probe.processes=false, no container connections are shown, which is unfortunate and surprising.

Note that we do track pod IPs and use those to figure out connections at the pod level.

NB: this is all based on my reading of the code and some scope reports from a k8s cluster. TODO: verify by running probes in such a cluster with probe.processes=false.

rade added the accuracy, bug and k8s labels on Jun 12, 2017
@rade
Member Author

rade commented Jun 12, 2017

Possible fixes:

  • map pod IPs "downward" to containers
  • determine container IPs by inspecting the network interfaces in the container netns (sketched at the end of this comment).

In either case we need to be mindful of the fact that pods typically have two containers - a pause container and the "real" container - which would trip over our "if two containers have the same IP we can't use that IP for determining connectivity" logic. So we'd have to special-case the pause container, e.g. only associate IPs with it, or filter it out. And obviously if a pod has more than one "real" container then all bets are off.
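A minimal sketch of the second option, assuming the probe knows the container's init pid and using the vishvananda/netns and vishvananda/netlink packages; this is only an illustration of the idea, not Scope's actual code:

```go
package main

import (
	"fmt"
	"net"
	"os"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

// containerIPs lists the non-loopback IPv4 addresses visible inside the
// network namespace of the given pid (e.g. the container's init process).
func containerIPs(pid int) ([]net.IP, error) {
	nsHandle, err := netns.GetFromPid(pid)
	if err != nil {
		return nil, err
	}
	defer nsHandle.Close()

	// Open a netlink socket inside that netns.
	handle, err := netlink.NewHandleAt(nsHandle)
	if err != nil {
		return nil, err
	}
	defer handle.Delete()

	links, err := handle.LinkList()
	if err != nil {
		return nil, err
	}

	var ips []net.IP
	for _, link := range links {
		addrs, err := handle.AddrList(link, netlink.FAMILY_V4)
		if err != nil {
			return nil, err
		}
		for _, addr := range addrs {
			if addr.IP.IsLoopback() {
				continue
			}
			ips = append(ips, addr.IP)
		}
	}
	return ips, nil
}

func main() {
	// Trivial demo: list the addresses in our own netns.
	ips, err := containerIPs(os.Getpid())
	if err != nil {
		panic(err)
	}
	fmt.Println(ips)
}
```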

@rade
Member Author

rade commented Jun 12, 2017

This issue won't manifest in some pod networks, e.g.

  • for weave net, scope has special code to determine container IPs
  • minikube appears to be using docker networking, so containers do end up with docker IPs

@unitymind

unitymind commented Jun 17, 2017

@rade, additionally, with probe.processes=false containers are not stacked by Services (Docker Engine swarm mode), and the related section and columns don't show up in the Weave Cloud UI.

@rade
Member Author

rade commented Nov 17, 2017

In either case we need to be mindful of the fact that pods typically have two containers - a pause container and the "real" container - which would trip over our "if two containers have the same IP we can't use that IP for determining connectivity" logic. So we'd have to special-case the pause container, e.g. only associate IPs with it, or filter it out.

That is already happening; probes annotate pause containers with does_not_make_connections, and Container2IP excludes such containers.
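For reference, a simplified sketch of that exclusion; the types and names below are made-up stand-ins, since the real Container2IP mapping works on Scope's report.Node values:

```go
package main

import "fmt"

// Simplified stand-ins for Scope's report types.
type container struct {
	ID     string
	IPs    []string
	Labels map[string]string
}

const doesNotMakeConnections = "does_not_make_connections"

// ipToContainer maps each container IP to a container ID, skipping
// containers annotated as not making connections (e.g. the k8s pause
// container), so the pod IP it shares with the "real" container can
// still be used to attribute connections.
func ipToContainer(containers []container) map[string]string {
	byIP := map[string]string{}
	for _, c := range containers {
		if _, skip := c.Labels[doesNotMakeConnections]; skip {
			continue
		}
		for _, ip := range c.IPs {
			byIP[ip] = c.ID
		}
	}
	return byIP
}

func main() {
	pod := []container{
		{ID: "pause", IPs: []string{"10.32.0.7"}, Labels: map[string]string{doesNotMakeConnections: "true"}},
		{ID: "app", IPs: []string{"10.32.0.7"}},
	}
	fmt.Println(ipToContainer(pod)) // map[10.32.0.7:app]
}
```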

@rade
Member Author

rade commented Dec 10, 2017

In #2943 (comment) @bboreham suggested that the probes could obtain container IPs in a similar fashion to how they determine weave IPs.

If we can get that to work, disabling process tracking in the interest of performance would become less debilitating for users.

@rade
Member Author

rade commented Dec 11, 2017

the probes could obtain container IPs in a similar fashion to how they determine weave IPs.

weaveutil container-addrs cbr0 weave:allids seems to do the trick.

@bboreham
Collaborator

That raises the question of how to know cbr0 is the correct bridge to look at.

You could simplify to just report any IP address in any namespace, without checking what bridge it belongs to.

@bboreham
Collaborator

bboreham commented Jun 8, 2018

Although #3207 enables some connections to be shown, many are not; this is believed to include containers in the host network namespace.

@rade
Member Author

rade commented Jun 8, 2018

The problem with host netns connections is that we cannot tie them to a container based on IP. Instead we use the pid. Specifically, we tag nodes in the process topology with a container node id based on looking up the pid in a pid->container map produced by the docker reporter. Crucially this also deals with situations where the process is a child of the primary container pid.

We could instead associate a set of pids with a container - the primary pid and all its children (recursively) - and in the renderer use that info to associate connections - which do carry pids - with containers. This does still require some proc walking to find the child pids; a sketch follows.
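A rough sketch of that proc walk: build a ppid map from /proc/<pid>/stat, then collect the primary pid's descendants. The field parsing assumes the usual stat layout; a real walker would also need to cope with processes appearing and exiting during the scan:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
	"strings"
)

// descendants returns the given pid together with all of its (recursive)
// children, from a single scan of /proc.
func descendants(root int) (map[int]struct{}, error) {
	children := map[int][]int{}
	entries, err := os.ReadDir("/proc")
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		pid, err := strconv.Atoi(e.Name())
		if err != nil {
			continue // not a pid directory
		}
		stat, err := os.ReadFile(filepath.Join("/proc", e.Name(), "stat"))
		if err != nil {
			continue // process exited during the scan
		}
		// /proc/<pid>/stat is "pid (comm) state ppid ..."; comm can contain
		// spaces, so parse from the last ')'.
		s := string(stat)
		i := strings.LastIndexByte(s, ')')
		if i < 0 {
			continue
		}
		fields := strings.Fields(s[i+1:])
		if len(fields) < 2 {
			continue
		}
		ppid, err := strconv.Atoi(fields[1]) // fields[0] is state, fields[1] is ppid
		if err != nil {
			continue
		}
		children[ppid] = append(children[ppid], pid)
	}

	// Breadth-first walk from the primary container pid.
	result := map[int]struct{}{root: {}}
	queue := []int{root}
	for len(queue) > 0 {
		p := queue[0]
		queue = queue[1:]
		for _, c := range children[p] {
			if _, seen := result[c]; !seen {
				result[c] = struct{}{}
				queue = append(queue, c)
			}
		}
	}
	return result, nil
}

func main() {
	pids, err := descendants(1) // e.g. the container's primary pid
	if err != nil {
		panic(err)
	}
	fmt.Println(len(pids), "pids in the tree")
}
```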
