Cluster API having networking issues while using the Docker provider #7330
@umesh168: This issue is currently awaiting triage. If CAPI contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
This isn't necessarily a networking error - when the client is timing out like that it means the cluster isn't contactable. That could be a network error, or it could mean that the target cluster isn't actually running. Can you provide the output of `docker ps`? It would be great to nail down the issue impacting you and see if there are steps that could be added to the troubleshooting guide to help with future debugging.
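A quick way to check both possibilities - this is a sketch, assuming the quick-start cluster name `capi-quickstart` (adjust to your own):

```sh
# Are the node and load balancer containers actually running?
docker ps --filter name=capi-quickstart

# What does the management cluster think the state is?
kubectl get cluster,machines -A
clusterctl describe cluster capi-quickstart
```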
@killianmuldoon here is the output of `docker ps`:

```
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
```

along with the following output:

```
Welcome to Ubuntu 22.04.1 LTS!
Queued start job for default target Graphical Interface.
```
Can you check the status of the Machine objects? Can you see if there are errors in the logs of CAPD?
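For reference, a minimal sketch of those two checks - the namespace and deployment name below are the `clusterctl init` defaults, so adjust if your install differs:

```sh
# Status and phase of all Machine objects.
kubectl get machines -A -o wide

# Logs of the Docker infrastructure provider (CAPD).
kubectl logs -n capd-system deployment/capd-controller-manager
```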
@killianmuldoon here are some more details from Docker Desktop - haproxy is not running.

containerd.log
I'm not sure what the issue is with the haproxy container - I've seen it fail due to resource exhaustion on Linux systems at times. Can you share what resources you're dedicating to Docker Desktop? Can you see what memory usage is like when you create a new cluster and haproxy fails? It would also be really helpful if you could check whether there's anything relevant in the CAPD infrastructure provider logs to help debug this issue.
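One way to watch this - a sketch, assuming CAPD's naming convention of `<cluster-name>-lb` for the load balancer container:

```sh
# Snapshot of per-container CPU/memory usage while the cluster comes up.
docker stats --no-stream

# Did the load balancer container exit, and was it OOM-killed?
docker inspect capi-quickstart-lb --format '{{.State.Status}} OOMKilled={{.State.OOMKilled}}'
docker logs capi-quickstart-lb
```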
The issue is that the IP in use, 172.18.0.2, is not accessible.
I really think the networking issue is the symptom rather than the cause here. The IP isn't reachable because the container isn't up. The first places I'd look for information are resource usage and the CAPD logs, as it really seems as if CAPD or Docker are having a hard time bringing up haproxy. To drill down on the issue it could be a good idea to run the same version of haproxy using `docker run` directly.
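A sketch of that experiment - `kindest/haproxy` is the image CAPD uses for the load balancer, but the tag below is an assumption, so match it to whatever `docker ps -a` shows for your `-lb` container:

```sh
# Run the same haproxy binary outside of CAPD to rule out an image
# or runtime problem; `-v` just prints version info and exits.
docker pull kindest/haproxy:v20220607-9a4d8d2a
docker run --rm --entrypoint haproxy kindest/haproxy:v20220607-9a4d8d2a -v
```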
/kind support

From a quick look, CAPD is failing to update the configuration inside the haproxy image. The CAPD logs should contain an error if this is what is happening, but this is a very uncommon error.
@umesh168 did you manage to resolve this?
@killianmuldoon the above issue has been resolved.
Happy to hear the problem has been fixed.
@fabriziopandini: Closing this issue.
@umesh168 How did you end up fixing it? I'm encountering something similar. |
@nilsanderselde try pulling the haproxy image manually.
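For anyone else landing here, that suggestion amounts to something like the following - the tag is an assumption, so check `docker ps -a` for the exact image CAPD tried to start:

```sh
docker pull kindest/haproxy:v20220607-9a4d8d2a
```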
I hit the same issue.
Steps to reproduce the issue

Follow the Cluster API quick start here; a condensed sketch of the commands follows.
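For context, the quick start condensed to a few commands - the flavor, version, and replica counts below are the quick-start defaults at the time and are assumptions about this particular run (the guide also sets some feature-gate environment variables, omitted here):

```sh
# Management cluster + Docker infrastructure provider.
kind create cluster
clusterctl init --infrastructure docker

# Generate and apply the workload cluster manifest.
clusterctl generate cluster capi-quickstart --flavor development \
  --kubernetes-version v1.25.0 \
  --control-plane-machine-count=3 \
  --worker-machine-count=3 > capi-quickstart.yaml
kubectl apply -f capi-quickstart.yaml
```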
What did you expect to happen:

It should show 3 replicas after running:

```sh
kubectl get kubeadmcontrolplane
```
Anything else you would like to add:

```
NAMESPACE   NAME                                          CLUSTER           NODENAME   PROVIDERID                               PHASE         VERSION   AGE
default     capi-quickstart-47d7b-vk7tg                   capi-quickstart              docker:////capi-quickstart-47d7b-vk7tg   Provisioned   v1.25.0   3h6m
default     capi-quickstart-md-0-fj7rr-7d46d6c57f-6gjl4   capi-quickstart                                                       Pending       v1.25.0   3h6m
default     capi-quickstart-md-0-fj7rr-7d46d6c57f-7c5jq   capi-quickstart                                                       Pending       v1.25.0   3h6m
default     capi-quickstart-md-0-fj7rr-7d46d6c57f-p4jb8   capi-quickstart                                                       Pending       v1.25.0   3h6m
```
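The worker Machines stay Pending because they are waiting on the control plane; the conditions on the KubeadmControlPlane object (named `capi-quickstart-47d7b` in the controller log further down) usually say why it is stuck:

```sh
# Overall control plane state, then the detailed conditions.
kubectl get kubeadmcontrolplane
kubectl describe kubeadmcontrolplane capi-quickstart-47d7b
```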
Environment:

- kubectl version:

```
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.0", GitCommit:"a866cbe2e5bbaa01cfd5e969aa3e033f3282a8a2", GitTreeState:"clean", BuildDate:"2022-08-23T17:44:59Z", GoVersion:"go1.19", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.2", GitCommit:"5835544ca568b757a8ecae5c153f317e5736700e", GitTreeState:"clean", BuildDate:"2022-09-22T05:28:27Z", GoVersion:"go1.19.1", Compiler:"gc", Platform:"linux/arm64"}
```

- OS (from /etc/os-release): macOS Monterey 12.5
When we check the logs for capi-kubeadm-control-plane-controller-manager:

```
E1003 10:22:18.784244 1 controller.go:182] "Failed to update KubeadmControlPlane Status" err="failed to create remote cluster client: error creating client and cache for remote cluster: error creating dynamic rest mapper for remote cluster \"default/capi-quickstart\": Get \"https://172.18.0.2:6443/api?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" controller="kubeadmcontrolplane" controllerGroup="controlplane.cluster.x-k8s.io" controllerKind="KubeadmControlPlane" kubeadmControlPlane="default/capi-quickstart-47d7b" namespace="default" name="capi-quickstart-47d7b" reconcileID=e2010afc-dac6-4acd-b60e-dc08944fb296 cluster="capi-quickstart"
```
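To confirm whether the endpoint in that error is actually being served, one hedged check is to probe it from inside the Docker network - container IPs like 172.18.0.2 are not directly reachable from a macOS host, and `kind` is the network CAPD attaches its containers to:

```sh
# Is the load balancer container up at all?
docker ps -a --filter name=capi-quickstart-lb

# Probe the control-plane endpoint from inside the docker network.
docker run --rm --network kind curlimages/curl \
  -k --max-time 5 https://172.18.0.2:6443/version
```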