Unable to uninstall single-node k3s cluster #1148

Closed
pixiake opened this issue Mar 23, 2022 · 10 comments
Labels: bug, good first issue, help wanted

Comments

pixiake (Collaborator) commented Mar 23, 2022

What version of KubeKey has the issue?

v2.0.0

What is your OS environment?

Ubuntu 20.04

KubeKey config file

No response

A clear and concise description of what happened.

I installed a k3s cluster with `./kk create cluster --with-kubernetes v1.21.6-k3s`. But if I don't specify a configuration file, I can't uninstall the k3s cluster with `./kk delete cluster`.

I think we could check in the delete cluster pipeline whether `k3s-uninstall.sh` exists, and if it does, uninstall k3s first.
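For illustration, a minimal local sketch of that check, assuming the script sits at its default install path (`/usr/local/bin/k3s-uninstall.sh`); KubeKey's actual delete pipeline runs its steps as tasks over SSH, not as a local helper like this:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Default location where the k3s installer drops its uninstall script (assumption).
const k3sUninstallScript = "/usr/local/bin/k3s-uninstall.sh"

// uninstallK3sIfPresent runs k3s-uninstall.sh when it exists; otherwise it is a
// no-op so the normal delete pipeline can continue.
func uninstallK3sIfPresent() error {
	if _, err := os.Stat(k3sUninstallScript); os.IsNotExist(err) {
		return nil // not a k3s node
	}
	cmd := exec.Command("/bin/bash", k3sUninstallScript)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := uninstallK3sIfPresent(); err != nil {
		fmt.Fprintln(os.Stderr, "k3s uninstall failed:", err)
		os.Exit(1)
	}
}
```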

Relevant log output

No response

Additional information

No response

pixiake added the bug label on Mar 23, 2022
24sama (Collaborator) commented May 18, 2022

/good-first-issue

ks-ci-bot (Collaborator) commented

@24sama:
This request has been marked as suitable for new contributors.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.

In response to this:

/good-first-issue

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

ks-ci-bot added the good first issue and help wanted labels on May 18, 2022
xiaods (Contributor) commented Jul 31, 2022

This is by design, not a bug. We need to discuss it here and agree on a good approach before doing anything.

Please read this file:

`/kubekey/pkg/pipelines/delete_cluster.go`

```go
func DeleteCluster(args common.Argument) error {
	var loaderType string
	if args.FilePath != "" {
		loaderType = common.File
	} else {
		loaderType = common.AllInOne
	}

	runtime, err := common.NewKubeRuntime(loaderType, args)
	if err != nil {
		return err
	}

	switch runtime.Cluster.Kubernetes.Type {
	case common.K3s: // <-- follow this logic: the loader populates the runtime from the config, so it knows the type is k3s
		if err := NewK3sDeleteClusterPipeline(runtime); err != nil {
			return err
		}
	case common.Kubernetes:
		if err := NewDeleteClusterPipeline(runtime); err != nil {
			return err
		}
	default:
		if err := NewDeleteClusterPipeline(runtime); err != nil {
			return err
		}
	}
	return nil
}
```

Are you sure uninstalling k3s failed?

xiaods (Contributor) commented Aug 2, 2022

@pixiake Any update on this issue?

pixiake (Collaborator, Author) commented Aug 4, 2022

@xiaods
There should be no problem if we use a configuration file, because `Kubernetes.Type` is specified in the configuration file.

If we use all-in-one mode, kk automatically generates a default configuration in which no `Kubernetes.Type` is specified.
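For reference, a config fragment that avoids the problem by pinning the type explicitly; a rough sketch assuming the v1alpha2 cluster config schema used by KubeKey v2, with only the relevant block shown:

```yaml
# Sketch of the relevant part of a KubeKey cluster config (schema assumed to be v1alpha2).
spec:
  kubernetes:
    version: v1.21.6-k3s
    type: k3s   # without this field, kk falls through to the default (kubeadm-based) delete pipeline
```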

xiaods (Contributor) commented Aug 7, 2022

I have reproduced this bug on a clean AWS server. Yes, the delete cluster step only performs the kubeadm delete steps, not `k3s-uninstall.sh`.

```
[root@ip-172-31-27-251 ~]# kk delete cluster


 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

01:00:21 UTC [GreetingsModule] Greetings
01:00:21 UTC message: [ip-172-31-27-251.ap-northeast-2.compute.internal]
Greetings, KubeKey!
01:00:21 UTC success: [ip-172-31-27-251.ap-northeast-2.compute.internal]
01:00:21 UTC [DeleteClusterConfirmModule] Display confirmation form
Are you sure to delete this cluster? [yes/no]: yes
01:00:24 UTC success: [LocalHost]
01:00:24 UTC [ResetClusterModule] Reset the cluster using kubeadm
01:00:24 UTC stdout: [ip-172-31-27-251.ap-northeast-2.compute.internal]
/bin/bash: /usr/local/bin/kubeadm: No such file or directory
01:00:24 UTC success: [ip-172-31-27-251.ap-northeast-2.compute.internal]
01:00:24 UTC [ClearOSModule] Reset os network config
01:00:24 UTC stdout: [ip-172-31-27-251.ap-northeast-2.compute.internal]
/bin/bash: ipvsadm: command not found
01:00:24 UTC stdout: [ip-172-31-27-251.ap-northeast-2.compute.internal]
Cannot find device "nodelocaldns"
01:00:24 UTC success: [ip-172-31-27-251.ap-northeast-2.compute.internal]
01:00:24 UTC [ClearOSModule] Uninstall etcd
01:00:24 UTC success: [ip-172-31-27-251.ap-northeast-2.compute.internal]
01:00:24 UTC [ClearOSModule] Remove cluster files
01:00:24 UTC stdout: [ip-172-31-27-251.ap-northeast-2.compute.internal]
rm: cannot remove ‘/var/lib/kubelet/pods/1dd82477-f7f7-414e-ad81-34f5863e03aa/volumes/kubernetes.io~projected/kube-api-access-6gp2k’: Device or resource busy
rm: cannot remove ‘/var/lib/kubelet/pods/798ba461-2142-4bb8-8513-69b76520b0f3/volumes/kubernetes.io~projected/kube-api-access-wcqbb’: Device or resource busy
rm: cannot remove ‘/var/lib/kubelet/pods/2360d265-3a18-4391-a4a9-b46c941b13bd/volumes/kubernetes.io~projected/kube-api-access-fzgzf’: Device or resource busy
01:00:25 UTC success: [ip-172-31-27-251.ap-northeast-2.compute.internal]
01:00:25 UTC [ClearOSModule] Systemd daemon reload
01:00:25 UTC success: [ip-172-31-27-251.ap-northeast-2.compute.internal]
01:00:25 UTC [UninstallAutoRenewCertsModule] UnInstall auto renew control-plane certs
01:00:25 UTC skipped: [ip-172-31-27-251.ap-northeast-2.compute.internal]
01:00:25 UTC Pipeline[DeleteClusterPipeline] execute successfully
```

xiaods (Contributor) commented Aug 7, 2022

I added a log statement in the DeleteCluster func:

```
[root@ip-172-31-27-251 amd64]# ./kk delete cluster
v1.23.9
============================================================
kubernetes
```

KubeKey initializes the runtime in memory and doesn't read a k8s version from anywhere, so it falls back to a default Kubernetes version and type in the runtime, which sends execution down the wrong delete cluster pipeline.

xiaods (Contributor) commented Aug 7, 2022

For a consistent design, `kk delete cluster` should add a `--with-kubernetes` argument to support a specified k8s version. That will resolve this concern.
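A rough sketch of how such a flag could be wired up with cobra; the option struct and command names here are illustrative, not KubeKey's actual code:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// deleteClusterOptions is an illustrative option struct, not KubeKey's real one.
type deleteClusterOptions struct {
	Kubernetes string // e.g. "v1.21.6-k3s"
}

func newDeleteClusterCmd() *cobra.Command {
	o := &deleteClusterOptions{}
	cmd := &cobra.Command{
		Use:   "cluster",
		Short: "Delete a cluster",
		RunE: func(cmd *cobra.Command, args []string) error {
			// A "-k3s" suffix in the version would let the all-in-one loader
			// pick the k3s delete pipeline instead of the kubeadm one.
			fmt.Println("deleting cluster, kubernetes:", o.Kubernetes)
			return nil
		},
	}
	cmd.Flags().StringVar(&o.Kubernetes, "with-kubernetes", "",
		"Specify the Kubernetes version that was installed by kk create cluster")
	return cmd
}

func main() {
	root := &cobra.Command{Use: "kk"}
	deleteCmd := &cobra.Command{Use: "delete"}
	deleteCmd.AddCommand(newDeleteClusterCmd())
	root.AddCommand(deleteCmd)
	_ = root.Execute()
}
```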

xiaods added a commit to xiaods/kubekey that referenced this issue Aug 7, 2022
fix issue: kubesphere#1148

Let the below cmd work:
```
./kk delete cluster --with-kubernetes v1.21.6-k3s
```

Signed-off-by: Deshi Xiao <xiaods@gmail.com>
xiaods (Contributor) commented Aug 7, 2022

@pixiake I hope my PR can fix your case.

24sama (Collaborator) commented Aug 11, 2022

Fixed by: #1426

24sama closed this as completed on Aug 11, 2022