Using this playbook in AWS #254
-
Configuring a single master with 2 nodes in hosts.ini hangs at the task "Enable and check service".

Expected Behavior
Running the playbook should run all tasks through to completion, and worker nodes should be able to join the cluster successfully.

Current Behavior
As described above and in Steps to Reproduce, running the playbook hangs on that task.

Steps to Reproduce
Context (variables)
Operating system: Ubuntu 20.04
Hardware (Master): Total Memory: 2.0GiB
Hardware (Nodes): Total Memory: 1.0GiB

Variables Used
k3s_version: " v1.24.10+k3s1"
ansible_user: NA
systemd_dir: "/etc/systemd/system"
flannel_iface: "eth0"
apiserver_endpoint: "172.31.8.100"
k3s_token: "NA"
extra_server_args: >-
  {{ extra_args }}
  {{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
  --tls-san {{ apiserver_endpoint }}
  --disable servicelb
  --disable traefik
extra_agent_args: >-
  {{ extra_args }}
kube_vip_tag_version: "v0.5.7"
metal_lb_speaker_tag_version: "v0.13.7"
metal_lb_controller_tag_version: "v0.13.7"
metal_lb_ip_range: "172.31.8.80-172.31.8.90"

Hosts
[master]
172.31.8.144
[node]
172.31.2.207
172.31.12.34
[k3s_cluster:children]
master
node

Possible Solution
-
This has happened to me and my cluster of Raspberry Pi 4s with Ubuntu 22.04. My painstaking effort to solve this was to take all ten Pis' external boot SSDs and put Raspbian OS on them. I believe this failure is caused by an SFTP transfer failure, so I changed the Ansible config setting from 'smart' to 'true'. After hours of troubleshooting, I had to wipe all of the drives because they each had unique usernames; all my attempts to create new users with Ansible failed. Thus, new operating systems for everyone.
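A minimal sketch of that config change, assuming the setting meant is scp_if_ssh in ansible.cfg (an assumption; the reply doesn't name it). It accepts 'smart', True, or False: 'smart' tries SFTP and falls back to SCP, while True forces SCP and bypasses SFTP entirely:

[ssh_connection]
# force SCP for file transfers instead of SFTP
scp_if_ssh = True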
-
No joy. Not gonna get those hours back... Fresh Raspbian install, and I changed the Ansible SSH config back to 'smart'. Back to the SFTP failures and the freeze at 'Enable and check service'.
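For anyone debugging the same freeze, a minimal sketch of the usual next steps (the playbook and inventory paths here are assumptions, not from this thread):

# re-run with verbose output to see which connection step stalls
ansible-playbook site.yml -i inventory/hosts.ini -vvv
# on the stuck node, inspect the unit the task is waiting on
# (the server unit is k3s; agents run k3s-agent)
sudo systemctl status k3s
sudo journalctl -u k3s --no-pager -n 100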
-
After some additional research, perhaps my issue is related to MetalLB, because I am running on AWS and not on bare metal?
-
That's most likely it. MetalLB simulates a cloud load balancer, and in the cloud you already have a cloud load balancer :) MetalLB's docs say to just use EKS, so I assume that means no.
https://metallb.universe.tf/installation/clouds/
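Context for that link: MetalLB's layer 2 mode relies on ARP, which an AWS VPC does not honor, so the LoadBalancer IPs it assigns are never reachable. A hedged workaround on plain EC2 is to skip the LoadBalancer type and expose services as NodePort instead; the service name and labels below are hypothetical, and the NodePort range (30000-32767) has to be opened in the instances' security group:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical name
spec:
  type: NodePort
  selector:
    app: my-app           # hypothetical label
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080     # reachable at <node-ip>:30080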