bootstrapper in Kubernetes not able to get local containers #53
Hello, I'm working with @o0n1x trying to understand the issue. After looking at the source code, it seems to depend on finding local containers through the docker Python API; I saw evidence of this both in the bootstrapper and in the dashboard. However, because we're using Kubernetes, containers deployed by Kubernetes are not visible to the docker API, at least not by default, and we can't find a way to make them visible. Can you please help us make them visible, or propose an alternative? |
Hi @o0n1x @Nandinski sorry for the late response, we are currently on holiday. |
We are not running the experiment with Minikube; we are trying to use a regular Kubernetes cluster by following the other documented suggestion, with kubeadm. The difference is that Minikube exposes the Kubernetes containers through the docker API, but this does not happen with a normal Kubernetes cluster. By default, docker does not have access to Kubernetes-deployed containers; from my understanding, Minikube makes them available through docker as a convenience. And unfortunately, from what I can tell, Kollaps' Kubernetes deployment depends on docker to get access to the Kubernetes-deployed containers. One example in the code where this is needed is in the bootstrapper, when bootstrapping the dashboard. Before doing the dashboard bootstrap, the call self.low_level_client.containers() tries to get all the local containers through docker in order to find the dashboard container. While this works in Minikube, returning all the local containers, it does not work with a normal cluster: because docker has no view into the Kubernetes containers there, the call always returns an empty list, as described in the original issue.
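To illustrate what that discovery step boils down to, here is a minimal sketch using docker-py's low-level APIClient; filtering on "dashboard" in the container name is only our illustration, not Kollaps' exact logic:

```python
# Rough sketch of the discovery step, assuming docker-py's low-level APIClient.
# The "dashboard" name filter is illustrative, not Kollaps' actual matching code.
import docker

low_level_client = docker.APIClient(base_url="unix://var/run/docker.sock")

containers = low_level_client.containers()  # roughly equivalent to `docker ps`
dashboard = [c for c in containers
             if any("dashboard" in name for name in c.get("Names", []))]

print(f"docker sees {len(containers)} containers, {len(dashboard)} look like the dashboard")
# On our kubeadm cluster the list is empty, so the bootstrapper never finds the dashboard.
```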
When connected to Minikube's docker in a Minikube deployment, the call returns all the local containers, including the ones deployed by Kubernetes.
When connected to a master node's docker in a kubeadm deployment, with the pods running, the same call returns an empty list.
I'm not sure whether this container visibility is something we can configure in docker, but from what I could find, it does not seem possible to have docker show the Kubernetes containers the way it happens with Minikube. Are you using a docker setting that allows it to see the Kubernetes pods? To give more details on our setup: we are running Docker 24.0.5 and Kubernetes client/server 1.27, and we are trying to deploy the iperf3 example in a Kubernetes cluster started with kubeadm.
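For reference, a quick way to check which runtime the nodes actually report is the sketch below (it assumes the kubernetes Python client and a reachable kubeconfig; it is not part of Kollaps):

```python
# Sketch: print the container runtime each node reports, assuming the
# kubernetes Python client and a kubeconfig on the machine running this.
from kubernetes import client, config

config.load_kube_config()
for node in client.CoreV1Api().list_node().items:
    info = node.status.node_info
    print(node.metadata.name, info.container_runtime_version)
# If this prints something like "containerd://..." rather than "docker://...",
# the pods are not managed by the local docker daemon at all.
```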
At this point we try to reach the dashboard but can't because it is waiting for the bootstrapper to start it using
I'm not sure what the error at the end of the log is, but it only happens once and then normal behavior resumes in the while-true loop, as seen in the logs of the initial message. To summarize our issue: does the current Kubernetes support only work with Minikube? If not, how can we make the Kubernetes deployment work with kubeadm? See the sketch below for the kind of alternative we had in mind. Please let us know if you need any extra information. Thank you for the help. |
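In case it helps, one alternative we were thinking about is resolving the dashboard through the Kubernetes API instead of docker. This is only a rough sketch, assuming the kubernetes Python client and that the dashboard pod name contains "dashboard":

```python
# Rough sketch of an alternative: find the dashboard pod via the Kubernetes API
# instead of the docker API. Assumes the pod name contains "dashboard" and that
# this runs in-cluster, as the bootstrapper does.
from kubernetes import client, config

config.load_incluster_config()
high_level_client = client.CoreV1Api()

pods = high_level_client.list_namespaced_pod('default')
dashboard = [p for p in pods.items if "dashboard" in p.metadata.name]
for pod in dashboard:
    print(pod.metadata.name, pod.status.pod_ip, pod.status.phase)
```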
Thanks for the detailed description! As it stands we need Minikube for the reasons you mentioned. I will look into this and try to find a solution. |
Thank you for the reply. |
Hello again,
I have moved to another orchestrator, Kubernetes, but I have run into another problem. It seems the bootstrapper does not find the other containers using self.low_level_client.containers() in KubernetesBootstrapper.py, but it is able to get the pods using self.high_level_client.list_namespaced_pod('default') in the same file.
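Concretely, this is the behaviour we see with the two clients. The sketch below assumes docker-py and the kubernetes Python client; the client setup is assumed rather than copied verbatim from KubernetesBootstrapper.py:

```python
# Sketch of the two discovery calls we compared; client setup is assumed,
# not taken verbatim from KubernetesBootstrapper.py.
import docker
from kubernetes import client, config

low_level_client = docker.APIClient(base_url="unix://var/run/docker.sock")
config.load_incluster_config()  # running inside the bootstrapper pod
high_level_client = client.CoreV1Api()

print(low_level_client.containers())                     # always [] on our cluster
pods = high_level_client.list_namespaced_pod('default')  # the iperf3 pods do show up here
print([p.metadata.name for p in pods.items])
```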
I have attempted the same steps to start Kubernetes as shown in the orchestrators.md file and built the YAML file needed from the iperf3 example topology file. The problem persists after multiple attempts and on different devices. The experiment is done on a single device every time, so the network shouldn't be an issue.
The root of the problem is that the dashboard does not start and can't be accessed from the local device.
Logs in the bootstrapper pod:
and stops at that point.
Logs in the bootstrapper pod with debug logs I placed in KubernetesBootstrapper.py:
One note: it does not enter the first for loop for some reason; I placed a debug log there and it is not displayed.