# Install a single-node k8s cluster with kubeadm (following the official example)

@see kubeadm install

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

Configure the Kubernetes yum repository using the Aliyun mirror, e.g. as /etc/yum.repos.d/kubernetes.repo:

```ini
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
```

Install kubelet, kubeadm, and kubectl:

```bash
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
```
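The same official install page also has you switch SELinux to permissive mode and enable the kubelet service right after installing the packages; roughly:

```bash
# Set SELinux to permissive mode (per the official kubeadm install guide)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Start kubelet now and on every boot; it will crash-loop
# until kubeadm init runs, which is expected at this point
sudo systemctl enable --now kubelet
```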

@see kubeadm init (create a k8s cluster)

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Run kubeadm init:

```bash
# 10.244.0.0/16 is the default CIDR used by the flannel network plugin
kubeadm init --pod-network-cidr 10.244.0.0/16
```

You will almost certainly hit the problem that k8s.gcr.io is unreachable.

1. Manually download the images below. Confirm the exact versions with `kubeadm config images list`; newer kubeadm releases depend on newer image versions, so adjust the tags accordingly (a helper loop is sketched after this list).

   ```
   k8s.gcr.io/kube-apiserver:v1.21.2
   k8s.gcr.io/kube-controller-manager:v1.21.2
   k8s.gcr.io/kube-scheduler:v1.21.2
   k8s.gcr.io/kube-proxy:v1.21.2
   k8s.gcr.io/pause:3.4.1
   k8s.gcr.io/etcd:3.4.13-0
   k8s.gcr.io/coredns/coredns:v1.8.0
   ```

2. Because k8s.gcr.io is unreachable, pull the images from registry.aliyuncs.com/google_containers instead:

   ```bash
   docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2
   docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2
   docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2
   docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2
   docker pull registry.aliyuncs.com/google_containers/pause:3.4.1
   docker pull registry.aliyuncs.com/google_containers/etcd:3.4.13-0
   docker pull registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
   ```

3. Re-tag the pulled images back to their k8s.gcr.io names:

   ```bash
   docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.21.2 k8s.gcr.io/kube-apiserver:v1.21.2
   docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.21.2 k8s.gcr.io/kube-controller-manager:v1.21.2
   docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.21.2 k8s.gcr.io/kube-scheduler:v1.21.2
   docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.21.2 k8s.gcr.io/kube-proxy:v1.21.2
   docker tag registry.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
   docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
   docker tag registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
   ```

4. (Optional) If coredns/coredns:v1.8.0 does not exist on the mirror, just search Docker Hub for it directly; note that the tag there has no leading 'v':

   ```bash
   docker pull coredns/coredns:1.8.0
   docker tag coredns/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
   ```
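Instead of copying each pull/tag pair by hand, the same work can be scripted. This is a minimal sketch, assuming the Aliyun mirror keeps the same image names as k8s.gcr.io (the coredns path may differ, as step 4 above notes):

```bash
#!/usr/bin/env bash
# Pull every image kubeadm needs from the Aliyun mirror and re-tag it
# so that `kubeadm init` finds it in the local docker cache.
set -euo pipefail

for image in $(kubeadm config images list 2>/dev/null); do
    name=${image#k8s.gcr.io/}                                  # e.g. kube-apiserver:v1.21.2
    mirror="registry.aliyuncs.com/google_containers/${name}"
    docker pull "${mirror}"
    docker tag "${mirror}" "${image}"
done
```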

Then continue following the official instructions:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#more-information
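The first of those instructions, also printed in the `kubeadm init` output, is to copy the admin kubeconfig so kubectl can talk to the new cluster as a regular user; typically:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```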

Install the flannel network plugin:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

- If the file cannot be fetched directly that way, wget it first and then apply it:

  ```bash
  wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  kubectl apply -f kube-flannel.yml
  ```

- For a single-machine Kubernetes cluster for development, allow the master node to also act as a worker and schedule pods by removing its taint:

  ```bash
  kubectl taint nodes --all node-role.kubernetes.io/master-
  ```

- All done. Take a look:

  ```bash
  kubectl get nodes
  kubectl get pods --all-namespaces
  ```