Replies: 1 comment
I'm not the only one who has this issue; I lost a whole weekend trying to solve it without success 😅. It is possible to have multiple masters when they all run on the same Proxmox node ✅, but not when the VMs are distributed across the Proxmox cluster nodes. IMHO it could be a networking issue: in the first case all the VMs share the same NIC, and the first master node gets the VIP by default. Hello 👋 @timothystewart6, maybe you have an idea 💡 about this?
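For anyone else debugging this, here is a minimal sketch of how to check which node currently holds the VIP, assuming the VIP is managed by kube-vip as in this playbook; 192.168.30.222 is only a placeholder for whatever apiserver_endpoint you configured:

```sh
# Run on each master VM: the node that currently owns the VIP will show it on its NIC.
# 192.168.30.222 is a placeholder; substitute your configured apiserver_endpoint.
ip -4 addr show | grep 192.168.30.222

# From another machine on the same L2 segment: see which MAC address answers for the VIP.
arping -c 3 192.168.30.222
```

If the VIP never moves off the first master (or stops answering entirely) once the VMs sit on different Proxmox nodes, that would point to the networking theory above.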
Expected Behavior
I am trying to initially create a cluster with a single master and three workers, and then add two more masters in order to turn the cluster into HA mode. The reason is that I need a simple cluster at first and want to expand it later. I'm testing all of this in a Vagrant environment.
I do not know if this approach is correct, but (at least for me) it would be very convenient to be able to switch from a single-master cluster to an HA one simply by re-running the playbook.
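Concretely, the only change I expected to need is growing the master group in the inventory, roughly like this. The IP addresses are placeholders for my Vagrant machines, and the [master]/[node]/[k3s_cluster:children] group names are taken from the sample inventory as far as I understand it:

```ini
# Sketch with placeholder IPs. The first run had only 192.168.56.11 under
# [master]; the second run adds the two extra masters below and re-runs the playbook.
[master]
192.168.56.11
192.168.56.12
192.168.56.13

[node]
192.168.56.21
192.168.56.22
192.168.56.23

[k3s_cluster:children]
master
node
```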
Current Behavior
So I initially created a cluster with a single master and three workers, and it works perfectly. When I run the process again after adding the two extra masters to the hosts.ini file, I get an error during the check that all nodes have joined: after 20 retries the playbook stops. So I checked k3s-init.service on the master machines and I constantly see this error:
Steps to Reproduce
1. vagrant up
2. ansible-playbook site.yml -i inventory/my-cluster/hosts.ini
3. Add the two extra masters to hosts.ini and run the playbook again
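For reference, this is roughly how I watch the k3s-init unit on the new masters while the playbook retries; the machine names are just an assumption based on my Vagrantfile:

```sh
# Replace master-2 with whatever name your Vagrantfile gives the additional masters.
vagrant ssh master-2 -c "sudo journalctl -u k3s-init.service -f"

# Once the join succeeds, this should eventually list all six nodes.
vagrant ssh master-1 -c "sudo k3s kubectl get nodes -o wide"
```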
Context (variables)
Variables Used
all.yml
Hosts
hosts.ini
Vagrantfile
Possible Solution
I found in the k3s docs that, to convert an existing single-server installation to embedded etcd, it is necessary to restart the k3s server with the --cluster-init flag. I tried this but it does not seem to work for me; I'm probably doing something wrong.
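For reference, my understanding of the manual conversion the k3s docs describe is roughly the following, run on the existing first master outside of the playbook; the install-script invocation is my own sketch, not something the playbook does:

```sh
# Re-run the k3s install script so the server restarts with embedded etcd
# instead of the default SQLite datastore.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --cluster-init" sh -s -

# After the restart, the ROLES column for this node should include "etcd".
sudo k3s kubectl get nodes
```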