deploy_minikube.sh: Adding support for Centos8 #6073
Conversation
sudo dnf -y update && sudo dnf -y install socat conntrack
dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io --nobest
why docker-ce and not podman and/or buildah?
Originally the script was deploying on Ubuntu (as it does for the Travis runs).
We can change to podman when running on CentOS.
By the look of it, minikube support for podman is still WIP.
We can see two things:
- Using the podman driver is experimental
- Even though the driver is podman, the runtime is docker, and changing it to cri-o still looks for docker ("Fedora 31 vm-driver=podman fail to start trying to start docker service", kubernetes/minikube#6795).
We are using MINIKUBE_VERSION=v1.8.2 and KUBERNETES_VERSION=v1.17.3 due to kubernetes/minikube#7828.
Running with podman results in:
+ main@./1.sh:70 cat /root/.minikube/config/config.json
{
"WantNoneDriverWarning": false,
"WantUpdateNotification": false,
"container-runtime": "cri-o",
"driver": "podman",
"vm-driver": "none"
}
+ main@./1.sh:72 minikube version
minikube version: v1.8.2
commit: eb13446e786c9ef70cb0a9f85a633194e62396a1
+ main@./1.sh:74 minikube start --kubernetes-version=v1.17.3
😄 minikube v1.8.2 on Centos 8.1.1911
▪ MINIKUBE_VERSION=v1.8.2
✨ Using the podman (experimental) driver based on user configuration
E0705 15:30:53.797867 10377 cache.go:106] Error downloading kic artifacts: error loading image: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Once podman support in minikube is better and we are able to upgrade the minikube version, we can switch to podman on CentOS 8.
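For reference, a sketch of the start invocation we would move to once the driver matures. The flag names (--driver, --container-runtime) are taken from newer minikube releases, and the helper name is made up here; it only assembles the command string, so it can be checked without minikube installed:

```shell
# Hypothetical helper: build the `minikube start` command line for the
# podman driver with the cri-o runtime. Printing the string instead of
# executing it keeps this testable on machines without minikube.
minikube_podman_start_cmd() {
    k8s_version="${1:?kubernetes version required}"
    printf 'minikube start --driver=podman --container-runtime=cri-o --kubernetes-version=%s\n' "$k8s_version"
}
```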
I have opened a new issue to track this: #6075
I have seen similar problems with minikube+podman, it does not really work well yet.
SELinux_status=$(sestatus | grep "SELinux status" | awk -F ":" '{print $2}' | xargs)
if [ "${SELinux_status}" == "enabled" ]
then
sudo setenforce 0
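As an aside, the sestatus parsing above can be pulled into a small function; this is a sketch (the function name is invented here) that reads sestatus output from stdin, so the extraction logic can be exercised even on a machine without SELinux:

```shell
# Hypothetical helper: extract the value of the "SELinux status" line
# from `sestatus` output. On a real host: sestatus | parse_selinux_status
parse_selinux_status() {
    awk -F ':' '/^SELinux status/ {gsub(/[[:space:]]/, "", $2); print $2}'
}
```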
oy :(
Why?
Currently, kubeadm on CentOS needs SELinux to be disabled.
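Note that `setenforce 0` only lasts until reboot. A sketch of a persistent variant, with the config path parameterized purely for illustration (on a real CentOS 8 host it would be /etc/selinux/config and require root):

```shell
# Hypothetical helper: switch SELinux to Permissive now and across
# reboots. The config path is a parameter so the sed logic can be
# tried on a scratch file instead of /etc/selinux/config.
relax_selinux() {
    config="${1:-/etc/selinux/config}"
    setenforce 0 2>/dev/null || true  # runtime switch; ignored if unprivileged
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$config"
}
```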
Does that happen with podman as well?
(I mean, somehow, magically it is working in RHEL, no?)
We should look into it once we can switch to podman (see previous comment #6073 (comment)).
There are some locations where minikube stores executables, and these try to access configuration files; SELinux prevents that. I am not sure whether the location that minikube uses is dynamic or predictable. In the second case, creating the directories in advance and setting appropriate labels might work.
This obviously needs some more research. Moving to Permissive mode is acceptable for the moment.
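If the location turns out to be predictable, the pre-labelling could look roughly like this. This is a dry-run sketch that only prints the commands, and the bin_t context type is a guess pending the research mentioned above (the actual type should come from inspecting the AVC denials):

```shell
# Hypothetical dry-run: emit the semanage/restorecon commands that
# would pre-label a known minikube directory. The bin_t context type
# is an assumption, not verified against minikube's actual needs.
selinux_label_cmds() {
    dir="${1:?directory required}"
    printf "semanage fcontext -a -t bin_t '%s(/.*)?'\n" "$dir"
    printf 'restorecon -Rv %s\n' "$dir"
}
```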
Adding dbg to the subject to debug the failed deploy on travis
Force-pushed from 9860f67 to ca28bcc
You should probably put kubectl in /usr/bin, like minikube.
When running in a Vagrant VM (as user 'vagrant'), kubectl does not work:
[vagrant@localhost vagrant]$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[vagrant@localhost vagrant]$ sudo /usr/local/bin/kubectl cluster-info
Kubernetes master is running at https://192.168.121.2:8443
KubeDNS is running at https://192.168.121.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
There is a /root/.kube directory, but not one in /home/vagrant.
This does not need to be a blocker in case everything is intended to be run as root.
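If non-root use ever becomes a requirement, the usual fix is to copy the root kubeconfig into the user's home. A sketch with paths parameterized for illustration (in the Vagrant case the source would be /root/.kube/config, the target /home/vagrant, run via sudo plus a chown to the target user):

```shell
# Hypothetical helper: copy an existing kubeconfig into a user's home
# so a plain `kubectl` invocation finds it at $HOME/.kube/config.
install_kubeconfig() {
    src="$1"
    dest_home="$2"
    mkdir -p "$dest_home/.kube"
    cp "$src" "$dest_home/.kube/config"
}
```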
For now, we will assume we are running as root.
With that being the case, things look good, but you'll have to place kubectl in a location where sudo kubectl works on CentOS (not /usr/local/bin).
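The reason /usr/local/bin fails under sudo is that sudo replaces PATH with its secure_path setting, which on CentOS is typically /sbin:/bin:/usr/sbin:/usr/bin (see /etc/sudoers). A small sketch that makes the lookup rule explicit, with the helper name invented here:

```shell
# Hypothetical check: is a directory one of the colon-separated entries
# of a PATH-style string? Illustrates why a binary in /usr/local/bin is
# invisible under CentOS's default sudo secure_path.
path_contains_dir() {
    case ":$1:" in
        *":$2:"*) return 0 ;;
        *) return 1 ;;
    esac
}
```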
Force-pushed from ca28bcc to 4e7fefc
deploy_minikube.sh: - Adding support for Centos8 - Moving minikube and kubectl from /usr/local/bin to /usr/bin
Force-pushed from 4e7fefc to a8a792e
Thanks for the corrections, it works for me in a Vagrant VM 👍
Explain the changes
deploy_minikube.sh: