etcd-snapshot loading config fails with "flag provided but not defined" #4449

Closed
ShylajaDevadiga opened this issue Nov 10, 2021 · 2 comments
Labels: kind/bug Something isn't working

@ShylajaDevadiga (Contributor)

Creating issue to track rancher/rke2#2103

$ sudo k3s etcd-snapshot
Incorrect Usage. flag provided but not defined: -token
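
A minimal config that reproduces this might look like the following (values are placeholders); the server-level token entry is forwarded to the etcd-snapshot subcommand, which does not define a -token flag:

# /etc/rancher/k3s/config.yaml (placeholder values)
token: my-cluster-token
etcd-s3: true
etcd-s3-bucket: my-bucket
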
@ShylajaDevadiga (Contributor, Author) commented Nov 15, 2021

Using k3s version v1.22.3+k3s-f18b3252, the snapshot is created and successfully uploaded to the S3 bucket using the parameters passed in config.yaml, but running the k3s etcd-snapshot command produces an Unknown flag warning for the token argument passed through the config, as shown in the output below.

Values passed to non-etcd-snapshot flags are exposed in the warning.

$ cat /etc/rancher/k3s/config.yaml 
token: <REDACTED>
etcd-s3: true
etcd-s3-bucket: <REDACTED>
etcd-s3-access-key: <REDACTED>
etcd-s3-secret-key: <REDACTED>
etcd-s3-region: us-east-2
$ sudo k3s etcd-snapshot
WARN[0000] Unknown flag --token=secret found in config.yaml, skipping 
INFO[0000] Managed etcd cluster bootstrap already complete and initialized 
INFO[0000] Applying CRD addons.k3s.cattle.io            
INFO[0000] Applying CRD helmcharts.helm.cattle.io       
INFO[0000] Applying CRD helmchartconfigs.helm.cattle.io 
INFO[0000] Saving etcd snapshot to /var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-172-31-8-234-1636963766 
{"level":"info","msg":"created temporary db file","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-172-31-8-234-1636963766.part"}
{"level":"info","logger":"client","msg":"opened snapshot stream; downloading"}
{"level":"info","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","logger":"client","msg":"completed snapshot read; closing"}
{"level":"info","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"2.1 MB","took":"now"}
{"level":"info","msg":"saved","path":"/var/lib/rancher/k3s/server/db/snapshots/on-demand-ip-172-31-8-234-1636963766"}
INFO[0000] Saving etcd snapshot on-demand-ip-172-31-8-234-1636963766 to S3 
INFO[0000] Checking if S3 bucket sonobuoy-results exists 
INFO[0000] S3 bucket sonobuoy-results exists            
INFO[0000] S3 upload complete for on-demand-ip-172-31-8-234-1636963766 
INFO[0000] Saving current etcd snapshot set to k3s-etcd-snapshots ConfigMap 

Restoring from the snapshot is successful

$ sudo k3s server  --cluster-reset --cluster-reset-restore-path=on-demand-ip-172-31-8-234-1636963766
...
INFO[0013] Reconciling bootstrap data between datastore and disk 
INFO[0013] Etcd is running, restart without --cluster-reset flag now. Backup and delete ${datadir}/server/db on each peer etcd server and rejoin the nodes 
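
On a multi-server cluster, a rough sketch of that per-peer cleanup step (not needed for this single-node test, and assuming the default data-dir of /var/lib/rancher/k3s) might be:

$ sudo systemctl stop k3s
$ sudo mv /var/lib/rancher/k3s/server/db /var/lib/rancher/k3s/server/db.bak   # back up and remove the old etcd data
$ sudo systemctl start k3s   # node rejoins the reset cluster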

$ sudo systemctl start k3s

$ kubectl get nodes
NAME              STATUS   ROLES                       AGE    VERSION
ip-172-31-8-234   Ready    control-plane,etcd,master   6m7s   v1.22.3+k3s-f18b3252

$ kubectl get pods -A
NAMESPACE     NAME                                     READY   STATUS      RESTARTS   AGE
kube-system   coredns-5484f6b4bb-z5244                 1/1     Running     0          6m2s
kube-system   helm-install-traefik--1-xmlmx            0/1     Completed   1          6m2s
kube-system   helm-install-traefik-crd--1-lwrnf        0/1     Completed   0          6m2s
kube-system   local-path-provisioner-64ffb68fd-j44ps   1/1     Running     0          6m2s
kube-system   metrics-server-9cf544f65-gsq4z           1/1     Running     0          6m2s
kube-system   svclb-traefik-n2clw                      2/2     Running     0          5m39s
kube-system   traefik-74dd4975f9-xfcbb                 1/1     Running     0          5m39s

@ShylajaDevadiga (Contributor, Author)

Validated the fix using k3s version v1.22.3+k3s-f1b429f9.
As discussed internally, non-etcd-snapshot flags in the config file will continue to produce an unknown flag warning, but the value of the flag is no longer displayed in the warning. Snapshots are uploaded successfully using the parameters passed in the config file.

$ sudo k3s etcd-snapshot
WARN[0000] Unknown flag --token found in config.yaml, skipping 
