
Commit 7e56c14
remove ABAC and set default RBAC, also updated k8s version 1.11.3, include keepalived and haproxy for API HA
pawankkamboj committed Oct 5, 2018
1 parent a73b6dc commit 7e56c14
Showing 50 changed files with 1,376 additions and 518 deletions.
45 changes: 23 additions & 22 deletions README.md
@@ -1,40 +1,41 @@
# HA-kubernetes-ansible
Ansible playbook to create a Highly Available kubernetes cluster using latest release 1.10.3 on Bare metal systems (CentOS-7.x).
# kubernetes-ansible
Ansible playbook to create a Highly Available kubernetes cluster using the latest release 1.11.3 on Bare metal systems (CentOS-7.x).
Ansible version "2.4" is required to use this playbook.

The following roles are defined in this ansible playbook.
- addon - to create addon services: kube-proxy, kube-dns, kube-dashboard and cluster-monitoring using heapster and grafana/influxdb
- docker - install latest docker release on all cluster nodes
- etcd - setup etcd cluster
- haproxy - setup haproxy for API service HA, ignore it if an LB is already available.
- master - setup kubernetes master services - kube-apiserver, kube-controller, kube-scheduler, kubectl client
- node - setup kubernetes node service - kubelet
- sslcert - create all ssl certificates required to run a secure K8S cluster
- yum-repo - create epel and kubernetes package repo
- containerd - if you want to use the containerd runtime instead of Docker, use this role and enable it in the group variable file

Requirements:
- Ansible
- CentOS-7 installed systems

Follow the below steps to create a Kubernetes HA setup on CentOS-7.
- Prerequisite
- Ansible
- All kubernetes master/node hosts should have password-less SSH access from the Ansible host (a quick way to set this up is sketched below)
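
One common way to provide that password-less access is to push the Ansible host's SSH public key to every machine; the hosts below are only the sample inventory addresses from this repository, so adjust the user and addresses to your environment.
```
# copy the Ansible host's public key to each cluster machine (example hosts/user only)
for host in 10.1.0.5 10.1.0.6 10.1.0.7 10.1.0.8 10.1.0.9; do
  ssh-copy-id root@"$host"
done
```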

Download the kubernetes-ansible playbook and set the variables according to your needs in the group variable file
all.yml. Please read this file carefully and modify it according to your needs.

Note - Addon roles should be run after the cluster is fully operational. Addons are in the addon.yml playbook.
Here you go:
```
git clone https://github.com/mhmxs/HA-kubernetes-ansible.git
cd HA-kubernetes-ansible
git clone https://github.com/pawankkamboj/kubernetes-ansible.git
cd kubernetes-ansible
ansible-playbook -i inventory cluster.yml
```

Ansible roles (see the tag example after this list for running a single role)
- yum-repo - install epel repo
- sslcert - create all ssl certificates required to run a secure K8S cluster
- docker - install latest docker release on all cluster nodes
- containerd - if you want to use the containerd runtime instead of Docker, use this role and enable it in the group variable file
- etcd - setup etcd cluster, running as a container.
- haproxy - setup haproxy for API service HA, ignore it if an LB is already available.
- keepalived - use keepalived to provide a highly available IP address for the kube-apiserver.
- master - setup kubernetes master services - kube-apiserver, kube-controller, kube-scheduler, kubectl client
- node - setup kubernetes node service - kubelet
- addon - to create addon services: kube-proxy, kube-dns, kube-dashboard and cluster-monitoring using heapster and grafana/influxdb
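
Because the plays in cluster.yml carry tags (see the cluster.yml changes further down), an individual role can be re-run without replaying the whole cluster; a small example, assuming the sample inventory file:
```
# re-run only the etcd and keepalived roles
ansible-playbook -i inventory cluster.yml --tags "etcd,keepalived"
```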

Note - Addon roles should be run after the cluster is fully operational. Addons are in the addon.yml playbook.
```
# after the cluster is up and running, run addon.yml to deploy the add-ons
# included add-ons are: flannel network, kube-proxy, kube-dns and kube-dashboard
ansible-playbook -i inventory addon.yml
```
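
Once addon.yml completes, the add-on pods should appear in the kube-system namespace. A quick check, assuming kubectl on a master is pointed at the generated admin kubeconfig (path taken from the kubeadminconfig variable in group_vars/all.yml):
```
# list the add-on pods deployed by addon.yml
kubectl --kubeconfig /etc/kubernetes/kubeadminconfig get pods -n kube-system
```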



## kubernetes HA architecture
Below is a sample Kubernetes cluster architecture after a successful build using this playbook. It is just a sample, so the number of servers/nodes may vary according to your setup.

15 changes: 13 additions & 2 deletions Vagrantfile
@@ -19,7 +19,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
node.vm.provision :shell, inline: "cat /vagrant/ssh-key.pub >> .ssh/authorized_keys"

config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", "1024"]
vb.customize ["modifyvm", :id, "--memory", "2024"]
end
end

@@ -30,10 +30,21 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
node.vm.provision :shell, inline: "cat /vagrant/ssh-key.pub >> .ssh/authorized_keys"

config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", "1024"]
vb.customize ["modifyvm", :id, "--memory", "2024"]
end
end

config.vm.define "master3" do |node|
node.vm.network "private_network", ip: "192.168.50.13"
node.vm.hostname = "master3"

node.vm.provision :shell, inline: "cat /vagrant/ssh-key.pub >> .ssh/authorized_keys"

config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--memory", "2024"]
end
end

config.vm.define "node1" do |node|
node.vm.network "private_network", ip: "192.168.50.21"
node.vm.hostname = "node1"
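The Vagrant provisioner shown above appends /vagrant/ssh-key.pub to each VM's authorized_keys, so a key pair named ssh-key is expected in the repository root (an assumption based only on that inline shell line). A minimal sketch to prepare it and bring the VMs up:
```
# generate the key pair the Vagrant provisioner expects (assumed file name: ssh-key)
ssh-keygen -t rsa -b 2048 -f ssh-key -N ""
# create the master and node VMs defined in the Vagrantfile
vagrant up
```
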
1 change: 0 additions & 1 deletion addon.yml
@@ -10,4 +10,3 @@
- addon
- dns
- proxy
- dashboard
18 changes: 13 additions & 5 deletions cluster.yml
@@ -1,23 +1,24 @@
---
# This playbook deploys a kubernetes cluster with the default addons.

# Install yum-repo
- hosts: all
roles:
- yum-repo
tags:
- yum-repo

# install ssl cert
- hosts: sslhost
# Install sslcert
- hosts: masters
roles:
- sslcert
- sslcert
tags:
- sslcert
- sslcert

# Install etcd
- hosts: etcd
roles:
- etcd
- etcd
tags:
- etcd

@@ -34,6 +34,13 @@
tags:
- haproxy

# install keepalived
- hosts: masters
roles:
- keepalived
tags:
- keepalived

# install kubernetes master services
- hosts: masters
roles:
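Both haproxy and keepalived are plain tagged plays, so if a physical load balancer already fronts the kube-apiservers they can be skipped at run time (and disabled via the haproxy/keepalived switches in group_vars/all.yml shown below):
```
# skip the load-balancer plays when an external LB is already available
ansible-playbook -i inventory cluster.yml --skip-tags "haproxy,keepalived"
```
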
90 changes: 41 additions & 49 deletions group_vars/all.yml
@@ -8,55 +8,52 @@ kube_addon_dir: /etc/kubernetes/addon
flannel_dir: /etc/sysconfig

#- get k8s version
k8s_version: "v1.10.3"
k8s_version: "v1.11.3"

#- image and other variable
api_image: gcr.io/google_containers/kube-apiserver-amd64:v1.10.3
controller_image: gcr.io/google_containers/kube-controller-manager-amd64:v1.10.3
scheduler_image: gcr.io/google_containers/kube-scheduler-amd64:v1.10.3
kube_proxy_image: gcr.io/google_containers/kube-proxy-amd64:v1.10.3
api_image: gcr.io/google_containers/kube-apiserver-amd64:v1.11.3
controller_image: gcr.io/google_containers/kube-controller-manager-amd64:v1.11.3
scheduler_image: gcr.io/google_containers/kube-scheduler-amd64:v1.11.3
kube_proxy_image: gcr.io/google_containers/kube-proxy-amd64:v1.11.3
etcd_image: k8s.gcr.io/etcd:3.2.24

#- cluster service ip range
service_ip_range: 10.96.0.0/24
kubernetes_service_ip: 10.96.0.1 #- IP of default kubernetes service, it is the first IP of network CIDR range

#- all certificates for cluster
account_key: /etc/kubernetes/pki/apiserver-key.pem
ca_cert: /etc/kubernetes/pki/ca.pem
ca_key: /etc/kubernetes/pki/ca-key.pem
api_cert: /etc/kubernetes/pki/apiserver.pem
api_key: /etc/kubernetes/pki/apiserver-key.pem
admin_key: /etc/kubernetes/pki/admin-key.pem
admin_cert: /etc/kubernetes/pki/admin.pem
node_cert: /etc/kubernetes/pki/node.pem
node_key: /etc/kubernetes/pki/node-key.pem
controller_cert: /etc/kubernetes/pki/controller.pem
controller_key: /etc/kubernetes/pki/controller-key.pem
scheduler_cert: /etc/kubernetes/pki/scheduler.pem
scheduler_key: /etc/kubernetes/pki/scheduler-key.pem

#- api secure port and api loadbalancer IP
api_secure_port: 5443
api_lb_ip: https://192.168.50.11 # it should be the haproxy host IP or a network load balancer IP # if using only one api server then set it to that server's IP
lb_ip: 192.168.50.11
api_secure_port: 6443
api_lb_ip: https://10.1.0.100 # it should be the haproxy host IP or a network load balancer IP # if using only one api server then set it to that server's IP
lb_ip: 10.1.0.100

#- use keepalived for API HA; if there is only one master node then set it to false
keepalived: true
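
With keepalived enabled, lb_ip above is presumably the floating virtual IP it manages on the shared interface; a quick, hedged check to see whether the current master holds it (values taken from the sample settings above):
```
# check whether this master currently owns the API virtual IP
# assumes lb_ip: 10.1.0.100 and keepalived_shared_iface: eth0 as in the samples above
ip addr show eth0 | grep 10.1.0.100
```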

#- kubeconfig file
kubeletconfig: /etc/kubernetes/kubeletconfig
kubeadminconfig: /etc/kubernetes/kubeadminconfig

# all master fqdn names - required to create the ssl certificate
# set it to only the available api servers
masters_fqdn: ['master1', 'master2']
masters_fqdn: ['tkube1.test.com', 'tkube2.test.com', 'tkube3.test.com']
domain_name: test.com #- used to create wildcard ssl certificate for api and etcd

#- cluster dns name and IP
cluster_name: cluster.local

#- api authorization ABAC and RBAC
auth_mode: RBAC,ABAC
auth_file: /etc/kubernetes/auth.csv
policy_file: /etc/kubernetes/policy.json

#- admin/readonly user password
admin_user: kube-admin
admin_pass: AdminPass
readonly_user: readonly
readonly_pass: readonly
#- api authorization RBAC
auth_mode: Node,RBAC
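
Since the ABAC policy file and the static admin/readonly users have been removed, additional users are now authorized through RBAC objects instead. An illustrative example (the user name here is hypothetical) of granting cluster-wide read-only access:
```
# hypothetical example: bind a user named "readonly" to the built-in view ClusterRole
kubectl create clusterrolebinding readonly-view --clusterrole=view --user=readonly
```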

# kube-proxy addon
kube_proxy: true # set true only if cluster is fully operational and running
@@ -69,29 +66,10 @@ dns_replicas: 1

#flannel network # only one network plugin should be enable either weave or flannel
flannel: true
flannel_network: "10.244.0.0/18"
flannel_key: "/atomic.io/network"
flannel_subnet_len: 24
flannel_backend: vxlan

# A list of insecure registrys you might need to define
insecure_registrys:
flannel_network: "10.244.0.0/16"

# Turn to false to disable cluster monitoring with heapster and influxdb
cluster_monitoring: false # set true only if cluster is fully operational and running
#- if true then set following
heapster_ip: 10.96.0.11 # it should be from cluster service_ip_range
heapster_port: 80
grafana_ip: 192.168.50.11 # it should be one of cluster node IP
grafana_port: 100
influx_ip: 192.168.50.11 # it should be one of cluster node IP
influx_port: 8086

# Turn to false to disable the kube-dash addon for this cluster
kube_dash: true # set true only if cluster is fully operational and running
#- if true then set following
kube_dash_ip: 192.168.50.11 # it should be one of cluster node IP
kube_dash_port: 80
#-metrics-server
metrics_server: true

#- setup haproxy for loadbalancing
haproxy: true # set false when already physical loadbalancer available
@@ -100,9 +78,23 @@ haproxy_monitor_port: 9090
haadmin_user: admin
haadmin_password: admin123

#- the variables below require an interface name; on most systems it is eth0, but with bonding it is the bond interface (e.g. bond0), and when using vagrant behind NAT it is eth1 - set accordingly
#- for keepalived
keepalived_shared_iface: eth0
#- for etcd
etcd_interface: eth0
#- for kube-api server
bind_interface: eth0
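
If the right interface name is not obvious, listing IPv4 addresses per interface is usually enough to pick the correct value (plain iproute2 commands, no further assumptions):
```
# show interface names with their IPv4 addresses (pick eth0, eth1, bond0, ... accordingly)
ip -o -4 addr show | awk '{print $2, $4}'
```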

#- etcd config
domain_name: test.com #- used to create wildcard ssl certificate for api and etcd
etcd_peer_url_scheme: http #- for http or https
etcd_client_port: 2379
etcd_peer_port: 2380
etcd_peers_group: etcd
etcd_conf_dir: /etc/etcd
etcd_initial_cluster_state: new
etcd_initial_cluster_token: etcd-k8-cluster
etcd_data_dir: /var/lib/etcd
etcd_peer_url_scheme: https
etcd_ca_file: "/etc/kubernetes/pki/ca.pem"
etcd_cert_file: "/etc/kubernetes/pki/etcd.pem"
etcd_key_file: "/etc/kubernetes/pki/etcd-key.pem"
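
After the etcd role has run, cluster health can be checked with etcdctl against the certificates and client port configured here; a sketch assuming etcdctl is available on (or exec'd into) one of the etcd hosts from the sample inventory:
```
# etcd 3.2 etcdctl (v2 API) health check using the cluster certificates
etcdctl --endpoints https://10.1.0.5:2379 \
  --ca-file /etc/kubernetes/pki/ca.pem \
  --cert-file /etc/kubernetes/pki/etcd.pem \
  --key-file /etc/kubernetes/pki/etcd-key.pem \
  cluster-health
```
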
@@ -115,8 +107,8 @@ etcd_peer_key_file: "/etc/kubernetes/pki/etcd-key.pem"
#yum_proxy: "http://192.168.50.99:3128"
yum_proxy: ""

##- use containerd instead of docker
use_containerd: "true"
#- use containerd instead of docker
use_containerd: "false"
containerd_release_version: 1.1.0-rc.0
cni_bin_dir: /opt/cni/bin/
cni_conf_dir: /etc/cni/net.d/
19 changes: 9 additions & 10 deletions inventory
@@ -1,14 +1,13 @@
[etcd]
192.168.61.71
192.168.61.72
192.168.61.73
10.1.0.5
10.1.0.6
10.1.0.7
[masters]
192.168.61.71
192.168.61.72
192.168.61.73
10.1.0.5
10.1.0.6
10.1.0.7
[sslhost]
192.168.61.71 # should be ansible host
10.1.0.5
[node]
192.168.61.75
192.168.61.76
192.168.61.77
10.1.0.8
10.1.0.9
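
Before running cluster.yml it is worth confirming that Ansible can actually reach every host listed in this inventory:
```
# verify password-less SSH connectivity to all inventory hosts
ansible -i inventory all -m ping
```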
