Unable to add a master node from different region in a cluster #6297

Closed
MohamedShaj opened this issue Oct 19, 2022 · 3 comments
MohamedShaj commented Oct 19, 2022

Hi, I am trying to create a multi-master cluster across regions; for example, one master is in North Virginia and the other is in Mumbai.

Environmental Info:
K3s Version (both masters): v1.24.6+k3s1

Node(s) CPU architecture, OS, and Version:
Both - Ubuntu 22.04
N.Virginia master - Linux ip-xxxxxxx-aws #25-Ubuntu SMP Fri Sep 23 12:20:42 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
Mumbai master - Linux ip-xxxxxxx-aws #23-Ubuntu SMP Wed Aug 17 18:33:13 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:
2 masters

Describe the bug:
I am not able to add another master from a different region, even though I have opened all the ports in the firewall as well.

Oct 19 06:21:27 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:27Z" level=info msg="Module iptable_nat was already loaded"
Oct 19 06:21:27 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:27Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
Oct 19 06:21:27 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:27Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agen>
Oct 19 06:21:28 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:28Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: c>
Oct 19 06:21:29 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:29Z" level=info msg="Containerd is now running"
Oct 19 06:21:29 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:29Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=systemd --client-ca-file=/var/lib/ran>
Oct 19 06:21:29 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:29Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Oct 19 06:21:29 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:29Z" level=info msg="Handling backend connection request [ip-172-31-31-228]"
Oct 19 06:21:29 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:29Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
Oct 19 06:21:30 ip-172-31-31-228 k3s[5903]: time="2022-10-19T06:21:30Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"

Oct 19 06:34:13 ip-172-31-31-228 k3s[6401]: {"level":"warn","ts":"2022-10-19T06:34:13.901Z","logger":"etcd-client","caller":"v3@v3.5.3-k3s1/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc00061fdc0/172.31.73.9:2379">
Oct 19 06:34:13 ip-172-31-31-228 k3s[6401]: time="2022-10-19T06:34:13Z" level=error msg="Failed to get member list from etcd cluster. Will assume this member is already added"
Oct 19 06:34:13 ip-172-31-31-228 k3s[6401]: time="2022-10-19T06:34:13Z" level=info msg="Starting etcd to join cluster with members [ip-172-31-73-9-845db696=https://172.31.73.9:2380 ip-172-31-31-228-83ce5771=https://172.31.31.228:2380]"
Oct 19 06:34:13 ip-172-31-31-228 k3s[6401]: {"level":"info","ts":"2022-10-19T06:34:13.903Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://127.0.0.1:2380","https://172.31.31.228:2380"]}
Oct 19 06:34:13 ip-172-31-31-228 k3s[6401]: {"level":"info","ts":"2022-10-19T06:34:13.904Z","caller":"embed/etcd.go:479","msg":"starting with peer TLS","tls-info":"cert = /var/lib/rancher/k3s/server/tls/etcd/peer-server-client.crt, key = /var/lib/rancher/k3s/serv>
Oct 19 06:34:13 ip-172-31-31-228 k3s[6401]: {"level":"info","ts":"2022-10-19T06:34:13.904Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.31.31.228:2379"]}
Oct 19 06:34:13 ip-172-31-31-228 k3s[6401]: {"level":"info","ts":"2022-10-19T06:34:13.945Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.3","git-sha":"Not provided (use ./build instead of go build)","go-version":"go1.18.6","go->
Oct 19 06:34:13 ip-172-31-31-228 k3s[6401]: {"level":"info","ts":"2022-10-19T06:34:13.946Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","took":"216.453µs"}
Oct 19 06:34:14 ip-172-31-31-228 k3s[6401]: time="2022-10-19T06:34:14Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
Oct 19 06:34:14 ip-172-31-31-228 k3s[6401]: time="2022-10-19T06:34:14Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"

Steps To Reproduce:
N.V master installation - curl -sfL https://get.k3s.io | sh -s - --cluster-init
Mumbai master installation - export K3S_TOKEN="node secret"
curl -sfL https://get.k3s.io | sh -s - --server https://<n.v-masterip>:6443

Note: within the same VPC it works perfectly (a quick join-verification sketch follows below).
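For reference, one quick way to confirm whether the second server actually joined (a diagnostic sketch using standard k3s commands; node names will differ per environment):

```sh
# On either master: both nodes should appear Ready with the
# control-plane,etcd,master roles once the join succeeds
sudo k3s kubectl get nodes -o wide

# On the joining (Mumbai) master: follow the k3s service logs
# to watch for etcd join/peering errors like the ones above
sudo journalctl -u k3s -f
```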

brandond (Member) commented

Etcd is not designed to be operated over high-latency links. It should be used over local LAN links only. For this reason, k3s's embedded etcd cluster is configured to use the nodes' private addresses for peering. If the nodes cannot reach each other's private addresses, they will not be able to join the cluster.
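One way to check whether the private addresses are mutually reachable (a diagnostic sketch; the peer IP below is the first server's private address from the logs above, and nc/netcat is assumed to be installed):

```sh
# From the Mumbai node, probe the N.Virginia node's PRIVATE address on the
# ports embedded etcd and the k3s supervisor need:
#   2379/tcp = etcd client, 2380/tcp = etcd peer, 6443/tcp = k3s API/supervisor
nc -vz -w 5 172.31.73.9 2379
nc -vz -w 5 172.31.73.9 2380
nc -vz -w 5 172.31.73.9 6443
```

In two unpeered VPCs these 172.31.x.x addresses will not route between regions, which is exactly the failure mode described here.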

MohamedShaj (Author) commented Oct 19, 2022

> Etcd is not designed to be operated over high-latency links. It should be used over local LAN links only. For this reason, k3s's embedded etcd cluster is configured to use the nodes' private addresses for peering. If the nodes cannot reach each other's private addresses, they will not be able to join the cluster.

So what is the workaround for a multi-master cluster across different regions? Is there any workaround, or can this not be achieved in k3s?

Is this possible with embedded etcd?

What is the possible way? Please share the docs or steps for this!

brandond (Member) commented Oct 19, 2022

> Etcd is not designed to be operated over high-latency links.

> So what is the workaround for a multi-master cluster across different regions?

There is no workaround; etcd is not designed to be used like that.

If you have a site-to-site VPN or other overlay network between the two regions that allows the nodes to connect to each other at their private addresses, K3s should work - but I expect that etcd will not perform well.
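For completeness, here is one shape such an overlay setup could take (a sketch only, assuming a WireGuard or similar tunnel already exists between the nodes with addresses 10.0.0.1 and 10.0.0.2; even then, expect degraded etcd performance over inter-region latency, as noted above):

```sh
# First server: advertise the tunnel address so etcd peers over the VPN
curl -sfL https://get.k3s.io | sh -s - server --cluster-init \
  --node-ip 10.0.0.1

# Second server: join through the tunnel address
export K3S_TOKEN="node secret"
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://10.0.0.1:6443 \
  --node-ip 10.0.0.2
```

With --node-ip pointed at the tunnel, the nodes' internal addresses become mutually reachable, which is the condition described earlier for etcd peering.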
