
[Release-1.28] - k3s etcd-snapshot save fails on host with IPv6 only #9368

Closed
brandond opened this issue Feb 7, 2024 · 4 comments
brandond (Member) commented Feb 7, 2024

Backport fix for "Fix on-demand snapshots on ipv6-only nodes".

brandond changed the title from "[Release-1.28] - Fix on-demand snapshots on ipv6-only nodes" to "[Release-1.28] - k3s etcd-snapshot save fails on host with IPv6 only" Feb 7, 2024
brandond self-assigned this Feb 7, 2024
brandond added this to the v1.28.7+k3s1 milestone Feb 7, 2024
fmoral2 (Contributor) commented Feb 16, 2024

Validated on Version:

k3s version v1.28.7+k3s-f19db855 (f19db855)

Environment Details

Infrastructure
Cloud EC2 instance

Node(s) CPU architecture, OS, and Version:
SUSE Linux Enterprise Server 15 SP4

Cluster Configuration:
1 server node

Steps to validate the fix

  1. Install k3s on an IPv6-only node with the args in the config file, not on the CLI (see the config sketch after this list). The resulting node annotation is:
     k3s.io/node-args: '["server","--cluster-cidr","2001:cafe:42::/56","--service-cidr","2001:cafe:43::/108","--cluster-init","true","--node-ip","2600:1f1c:ab4:ee32:c44c:a8b3:4319:dad7","--write-kubeconfig-mode","644"]'
  2. Validate that the etcd snapshot works
  3. Validate nodes and pods
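A minimal /etc/rancher/k3s/config.yaml carrying the same settings as the node-args above (a sketch reconstructed from the annotation; /etc/rancher/k3s/config.yaml is the default k3s config file path):

# /etc/rancher/k3s/config.yaml - IPv6-only server settings from the annotation above
cluster-cidr: "2001:cafe:42::/56"
service-cidr: "2001:cafe:43::/108"
cluster-init: true
node-ip: "2600:1f1c:ab4:ee32:c44c:a8b3:4319:dad7"
write-kubeconfig-mode: "644"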

Validation Results:
$ k3s -v
k3s version v1.28.7+k3s-f19db855 (f19db855)
go version go1.21.7


$ kubectl get node -o yaml | grep node-args
      k3s.io/node-args: '["server","--cluster-cidr","2001:cafe:42::/56","--service-cidr","2001:cafe:43::/108","--cluster-init","true","--node-ip","2600:1f1c:ab4:ee32:c44c:a8b3:4319:dad7","--write-kubeconfig-mode","644"]'

$ kubectl get nodes,pods -A
NAME                       STATUS   ROLES                       AGE   VERSION
node/i-041ae49edb4c36e85   Ready    control-plane,etcd,master   43s   v1.28.7+k3s-f19db855

NAMESPACE     NAME                                          READY   STATUS      RESTARTS   AGE
kube-system   pod/coredns-6799fbcd5-n9v74                   1/1     Running     0          28s
kube-system   pod/helm-install-traefik-crd-479dm            0/1     Completed   0          28s
kube-system   pod/helm-install-traefik-q24kw                0/1     Completed   1          28s
kube-system   pod/local-path-provisioner-6c86858495-gw8pk   1/1     Running     0          28s
kube-system   pod/metrics-server-67c658944b-bb44j           0/1     Running     0          28s
kube-system   pod/svclb-traefik-8ef9be32-xfdht              2/2     Running     0          9s
kube-system   pod/traefik-f4564c4f4-cl4b5                   1/1     Running     0          9s




 $ sudo k3s etcd-snapshot save
WARN[0000] Unknown flag --cluster-cidr found in config.yaml, skipping 
WARN[0000] Unknown flag --service-cidr found in config.yaml, skipping 
WARN[0000] Unknown flag --cluster-init found in config.yaml, skipping 
WARN[0000] Unknown flag --node-ip found in config.yaml, skipping 
WARN[0000] Unknown flag --write-kubeconfig-mode found in config.yaml, skipping 
INFO[0000] Saving etcd snapshot to /var/lib/rancher/k3s/server/db/snapshots/on-demand-i-041ae49edb4c36e85-1708100498 
{"level":"info","ts":"2024-02-16T16:21:37.859227Z","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-i-041ae49edb4c36e85-1708100498.part"}
{"level":"info","ts":"2024-02-16T16:21:37.861482Z","logger":"client","caller":"v3@v3.5.9-k3s1/maintenance.go:212","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2024-02-16T16:21:37.861588Z","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://[::1]:2379"}
{"level":"info","ts":"2024-02-16T16:21:37.926281Z","logger":"client","caller":"v3@v3.5.9-k3s1/maintenance.go:220","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2024-02-16T16:21:37.936413Z","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://[::1]:2379","size":"3.0 MB","took":"now"}
{"level":"info","ts":"2024-02-16T16:21:37.936512Z","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-i-041ae49edb4c36e85-1708100498"}
INFO[0000] Reconciling ETCDSnapshotFile resources       
INFO[0000] Reconciliation of ETCDSnapshotFile resources complete 
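The "endpoint":"https://[::1]:2379" field in the log above is the key detail: the snapshot client now reaches etcd at a correctly bracketed IPv6 loopback address. A minimal Go sketch of the bracketing rule involved (illustrative only, not the actual k3s code):

package main

import (
	"fmt"
	"net"
)

func main() {
	// net.JoinHostPort wraps IPv6 literals in brackets, producing valid
	// endpoints like "https://[::1]:2379"; naive "host:port" concatenation
	// would yield the unparseable "https://::1:2379" on IPv6-only hosts.
	for _, host := range []string{"127.0.0.1", "::1", "2600:1f1c:ab4:ee32:c44c:a8b3:4319:dad7"} {
		fmt.Println("https://" + net.JoinHostPort(host, "2379"))
	}
}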


fmoral2 (Contributor) commented Feb 16, 2024

Working as expected when using the config file, but not with args on the CLI. After talking with @brandond, we are leaving that part for now so the rest of the fix can be released.

brandond (Member, Author) commented:
Moving this out to the next release to extend the fix to CLI args, not just config.

fmoral2 (Contributor) commented Apr 16, 2024

Validated on Version:

k3s-128 version v1.28.8+k3s-feb211d3 (feb211d3)

Environment Details

Infrastructure
Cloud EC2 instance

Node(s) CPU architecture, OS, and Version:
SUSE Linux Enterprise Server 15 SP4

Cluster Configuration:

  • 1 server

Steps to validate the fix

  1. Start k3s on a single IPv6-only node
  2. Start with the IPv6 args on the CLI, not in the config file
  3. Validate the etcd-snapshot save command and its new outputs

Validation Results:


$ sudo ./k3s-128 -v
k3s-128 version v1.28.8+k3s-feb211d3 (feb211d3)
go version go1.21.8

 
$ sudo ./k3s server --cluster-init \
  --cluster-cidr=2001:cafe:42::/56 \
  --service-cidr=2001:cafe:43::/108 \
  --write-kubeconfig-mode=644 \
  --node-ip=2600:1f1c:ab4:ee10:dbaf:215c:aaf4:ef8d


 $ sudo ./k3s-128 kubectl get nodes
NAME                  STATUS   ROLES                       AGE   VERSION
i-09a5f34b0e81e7fae   Ready    control-plane,etcd,master   56s   v1.28.8+k3s-feb211d3



$ sudo ./k3s-128 etcd-snapshot save
INFO[0000] Snapshot on-demand-i-09a5f34b0e81e7fae-1713296571 saved. 


$ sudo ./k3s-128 etcd-snapshot save --etcd-s3
FATA[0000] see server log for details: s3 bucket name was not set 



$ sudo ./k3s-128 etcd-snapshot save --etcd-s3 --etcd-s3-bucket foo
FATA[0000] see server log for details: failed to test for existence of bucket foo: 404 Not Found 
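Both failures above are expected negative tests, since no reachable bucket or credentials were supplied. For reference, a working S3 save would look roughly like this sketch (bucket name, region, and keys are placeholders, not values from this validation):

$ sudo ./k3s-128 etcd-snapshot save \
    --etcd-s3 \
    --etcd-s3-endpoint s3.amazonaws.com \
    --etcd-s3-region us-east-1 \
    --etcd-s3-bucket my-real-bucket \
    --etcd-s3-access-key <ACCESS_KEY> \
    --etcd-s3-secret-key <SECRET_KEY>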



$ sudo ./k3s-128 kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-6799fbcd5-jdwzj                   1/1     Running     0          6m
kube-system   helm-install-traefik-44xpk                0/1     Completed   2          6m1s
kube-system   helm-install-traefik-crd-nqwlb            0/1     Completed   0          6m1s
kube-system   local-path-provisioner-6c86858495-sd4tp   1/1     Running     0          6m
kube-system   metrics-server-54fd9b65b-ww87l            1/1     Running     0          6m
kube-system   svclb-traefik-1004102f-jmw9j              2/2     Running     0          5m29s
kube-system   traefik-7d5f6474df-fktcm                  1/1     Running     0          5m30s


fmoral2 closed this as completed Apr 16, 2024