This repository has been archived by the owner on Jan 5, 2022. It is now read-only.

aws-eks deployment readiness #82

Open · wants to merge 8 commits into base: master
1 change: 1 addition & 0 deletions .gitignore
@@ -1,2 +1,3 @@
*.tgz
repository
.idea/
52 changes: 49 additions & 3 deletions README.md
@@ -8,7 +8,7 @@ This is a collection of [Helm](https://github.com/kubernetes/helm) [Charts](http
- [telegraf-s](/telegraf-s/README.md)
- [telegraf-ds](/telegraf-ds/README.md)

### Deploy the whole stack!
### Manual deploy of the whole stack

- Have your `kubectl` tool configured for the cluster where you would like to deploy the stack.
- Have `helm` and `tiller` installed and configured
@@ -32,9 +32,55 @@ $ kubectl get svc -w --namespace tick -l app=dash-chronograf
- Open chronograf in your browser and configure it
- InfluxDB URL: `http://data-influxdb.tick:8086`
- Kapacitor URL: `http://alerts-kapacitor.tick:9092`

Or, just run `./create.sh` and let the shell script do it for you! You can also tear down the installation with `./destroy.sh`

### Automated deploy of the whole stack

#### Minikube

Minikube runs a local single-node Kubernetes distribution inside a VM. If you want to change the VM driver, add the appropriate
`--vm-driver=xxx` flag to `minikube start`. The corresponding script sets the service type to `NodePort` and also handles `helm init` and the Tiller deployment.

##### Requirements:
- `helm` binary installed and in your `PATH`
- To enable API bearer tokens (including service account tokens) to be used to authenticate to the kubelet's HTTPS endpoint, start minikube with:
  `minikube start --extra-config=kubelet.authentication-token-webhook=true`
- `kubectl` in your `PATH` with a working configuration
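On minikube, `run.sh` later builds endpoint URLs from `$(minikube ip)` plus each service's NodePort, which it scrapes from `kubectl describe svc` output. A minimal sketch of that parsing against a canned line (the service name, port, and IP below are illustrative):

```shell
# Canned line in the shape `kubectl describe svc data-influxdb` prints
line='NodePort:                 api  30086/TCP'

# Same parsing the script uses: take the third field, strip the "/TCP" suffix
port=$(echo "$line" | grep "NodePort:" | awk '{print $3}' | tr -d /TCP)

# On a real cluster this would be "$(minikube ip):${port}"
echo "192.168.99.100:${port}"
```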

#### AWS EKS:

EKS requires external load balancers to expose your services, so the corresponding script sets the service type to `LoadBalancer`
and also handles `helm init` and the Tiller deployment.
As a result, the EKS control plane creates external load balancer(s) in a public subnet, and additional costs
to your account may be incurred.
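The type switch itself is a one-line `sed` over each chart's `values.yaml`; the script runs it in place with `-i`. A sketch on a sample line (the input line is illustrative):

```shell
# Sample service line from a chart's values.yaml
line='  type: ClusterIP'

# Replace whatever follows "type: " with the provider's service type
SERVICE_TYPE="LoadBalancer"
switched=$(echo "$line" | sed "s/\(type: \)\(.*\)/\1${SERVICE_TYPE}/")

echo "$switched"
```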

##### Requirements:
- `helm` binary installed and in your `PATH`
- An EKS cluster with available worker nodes
- `kubectl` in your `PATH` with a working configuration
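On EKS the endpoint is the ELB hostname that `kubectl describe svc` reports on its `LoadBalancer Ingress` line; `run.sh` extracts it as sketched below (the hostname is illustrative):

```shell
# Canned line in the shape `kubectl describe svc data-influxdb` prints on EKS
line='LoadBalancer Ingress:     a1b2c3d4e5f6-1234567890.us-east-1.elb.amazonaws.com'

# Same parsing the script uses: grep the Ingress line, take the third field
url=$(echo "$line" | grep "Ingress" | awk '{print $3}')

echo "$url"
```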

#### Execution
Just run `./run.sh` and let the shell script do it for you!

`./run.sh -s $services -a $action -p $provider`
- Options:
  - `-s services`: The name of the component. Valid options are `influxdb`, `kapacitor`, `telegraf-s`, `telegraf-ds`, `chronograf` and `all`. Default is `all`.
  - `-a action`: Valid options are `create`, `destroy` and `prune_resources`. Default is `create`.
  - `-p provider`: Valid options are `minikube` and `aws-eks`. Default is `minikube`.

##### Examples:
- To run all components with a single command:

./run.sh (by default it runs with: -s all -a create -p minikube)
./run.sh -s all -a create -p aws-eks
./run.sh -s all -a destroy -p aws-eks
./run.sh -a prune_resources -p aws-eks

- To run individual components:

./run.sh -s influxdb -s kapacitor -s ... -a create -p aws-eks
./run.sh -s influxdb -s kapacitor -s ... -a destroy -p aws-eks
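
Repeating `-s` accumulates components, `all` is stripped, and an empty selection falls back to the full list. The expansion logic inside `run.sh` can be sketched as:

```shell
# Simulate: ./run.sh -s influxdb -s kapacitor
USER_SERVICES=(influxdb kapacitor)

# Drop any literal "all" entries, then default to every component when empty
USER_SERVICES=(${USER_SERVICES[@]/all/})
SERVICES=(${USER_SERVICES[@]:-influxdb telegraf-s telegraf-ds kapacitor chronograf})

echo "${SERVICES[@]}"
```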

### Usage

To package any of the charts for deployment:
8 changes: 0 additions & 8 deletions create.sh

This file was deleted.

4 changes: 0 additions & 4 deletions destroy.sh

This file was deleted.

218 changes: 218 additions & 0 deletions run.sh
@@ -0,0 +1,218 @@
#!/bin/bash
## in case your custom PATH (to helm) is defined in .rc files
# source ~/.bashrc
# source ~/.zshrc

function usage {
  cat <<EOF

Usage: run.sh [-a ACTION] [-s SERVICE] [-p PROVIDER]
  -p PROVIDER: Valid options are minikube, aws-eks
  -a ACTION: Valid options are create, destroy, prune_resources
  -s SERVICE: The name of the component. Valid options are influxdb, kapacitor, telegraf-s, telegraf-ds, chronograf or all
Examples:
  ./run.sh                               Run with defaults (-a create -s all -p minikube)
  ./run.sh -s influxdb -a create         Deploy the InfluxDB helm chart to the default k8s provider (minikube)
  ./run.sh -s influxdb -a destroy        Remove the InfluxDB deployment from the default k8s provider (minikube)
  ./run.sh -a prune_resources            Remove all metadata resources from the default k8s provider (minikube)
  ./run.sh -s all -a create -p aws-eks   Deploy all components to AWS EKS
  ./run.sh -s all -a destroy -p aws-eks  Remove all components from AWS EKS
EOF
}

function initScript {
  PROVIDER="minikube"
  ACTION="create"
  NAMESPACE="tick"
  DIR="$(dirname "$(readlink -f "$0")")"

  while getopts ha:p:s: opt; do
    case "$opt" in
      p) PROVIDER=$OPTARG
         ;;
      a) ACTION=$OPTARG
         ;;
      s) USER_SERVICES+=($OPTARG)
         ;;
      h|*)
         usage
         exit 1
         ;;
    esac
  done

  # Drop "all", then fall back to the full service list when nothing was selected
  USER_SERVICES=(${USER_SERVICES[@]/all/})
  SERVICES=(${USER_SERVICES[@]:-influxdb telegraf-s telegraf-ds kapacitor chronograf})
}

function main {
  initScript "$@"

  echo "Services:" ${SERVICES[@]}
  echo "Action:" ${ACTION}
  echo "Provider:" ${PROVIDER}

  case ${PROVIDER} in
    minikube)
      SERVICE_TYPE="NodePort"
      ;;
    aws-eks)
      SERVICE_TYPE="LoadBalancer"
      ;;
    *)
      echo "Provider ${PROVIDER} is not valid !!!"
      exit 1
      ;;
  esac

  case ${ACTION} in
    create)
      # Create the namespace and RBAC resources, and switch kubectl context
      kubectl create namespace "${NAMESPACE}"
      kubectl apply -f "${DIR}/scripts/resources/default.yaml"
      kubectl config set-context $(kubectl config current-context) --namespace="${NAMESPACE}"
      # Initialize helm (tiller) in the cluster
      helm init --wait --service-account tiller --kube-context $(kubectl config current-context)
      # Create charts
      for s in ${SERVICES[@]}; do
        create_chart ${s}
      done
      ;;
    destroy)
      # Destroy charts
      for s in ${SERVICES[@]}; do
        destroy_chart ${s}
      done
      ;;
    prune_resources)
      helm reset --kube-context $(kubectl config current-context)
      kubectl delete -f "${DIR}/scripts/resources/default.yaml"
      kubectl delete namespaces "${NAMESPACE}"
      kubectl config set-context $(kubectl config current-context) --namespace=kube-system
      ;;
    *)
      echo "Action ${ACTION} is not valid !!!"
      exit 1
      ;;
  esac
}

function print_service_url {
  local service=$1
  local service_alias=$2
  local service_ports=()
  local service_ip=""
  local service_urls=()

  case ${PROVIDER} in
    minikube)
      service_ports+=($(kubectl describe svc ${service_alias}-${service} | grep "NodePort:" | awk '{print $3}' | tr -d /TCP))
      service_ip=$(minikube ip)
      for port in ${service_ports[@]}; do
        service_urls+=("${service_ip}:${port}")
      done
      ;;
    aws-eks)
      service_urls+=($(kubectl describe svc ${service_alias}-${service} | grep "Ingress" | awk '{print $3}'))
      ;;
  esac

  printf "\n\n=======================================================================\n"
  for url in ${service_urls[@]}; do
    echo "${service} Endpoint URL:" ${url}
  done
  printf "=======================================================================\n\n"
}

function replace_service_type {
  local service=$1

  sed -i "s/\(type:\s\)\(.*\)/\1${SERVICE_TYPE}/" "${DIR}/${service}/values.yaml"
}

function create_chart {
  local service="$1"
  local service_alias=""

  echo "Creating chart for" "${service}"
  case ${service} in
    influxdb)
      service_alias="data"
      replace_service_type ${service}
      deploy_service ${service_alias} ${service}
      sleep 120
      print_service_url ${service} ${service_alias}
      ;;
    kapacitor)
      service_alias="alerts"
      replace_service_type ${service}
      deploy_service ${service_alias} ${service}
      sleep 120
      print_service_url ${service} ${service_alias}
      ;;
    chronograf)
      service_alias="dash"
      replace_service_type ${service}
      deploy_service ${service_alias} ${service}
      sleep 120
      print_service_url ${service} ${service_alias}
      ;;
    telegraf-s)
      service_alias="polling"
      deploy_service ${service_alias} ${service}
      ;;
    telegraf-ds)
      service_alias="hosts"
      deploy_service ${service_alias} ${service}
      ;;
    *)
      echo "Service ${service} is not valid !!!"
      exit 1
      ;;
  esac
}

function deploy_service {
  local service_alias="$1"
  local service="$2"

  echo "Deploying ${service} ....."
  helm install --name "${service_alias}" --namespace "${NAMESPACE}" "${DIR}/${service}"
}

function destroy_chart {
  local service="$1"
  local service_alias=""

  echo "Destroying chart of" "${service}"
  case ${service} in
    influxdb)
      service_alias="data"
      ;;
    kapacitor)
      service_alias="alerts"
      ;;
    chronograf)
      service_alias="dash"
      ;;
    telegraf-s)
      service_alias="polling"
      ;;
    telegraf-ds)
      service_alias="hosts"
      ;;
    *)
      echo "Service ${service} is not valid !!!"
      exit 1
      ;;
  esac
  helm delete ${service_alias} --purge
  sleep 60
}

main "$@"
32 changes: 32 additions & 0 deletions scripts/resources/default.yaml
@@ -0,0 +1,32 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-to-tick
subjects:
  - kind: ServiceAccount
    name: default
    namespace: tick
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
18 changes: 18 additions & 0 deletions telegraf-ds/values.yaml
@@ -62,3 +62,21 @@ config:
        total: false
        docker_label_exclude:
          - "annotation.kubernetes.io/*"
    - cpu:
        percpu: false
        totalcpu: true
    - disk:
        ignore_fs:
          - tmpfs
          - devtmpfs
          - devfs
          - overlay
          - aufs
          - squashfs
    - diskio:
    - kernel:
    - mem:
    - processes:
    - swap:
    - system:
    - net:
13 changes: 13 additions & 0 deletions telegraf-s/templates/deployment.yaml
@@ -21,6 +21,19 @@ spec:
      - name: {{ template "fullname" . }}
        image: "{{ .Values.image.repo }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ default "" .Values.image.pullPolicy | quote }}
        env:
          # This pulls HOSTNAME from the node, not the pod.
          - name: HOSTNAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          # In test clusters where hostnames are resolved in /etc/hosts on each node,
          # the HOSTNAME is not resolvable from inside containers,
          # so inject the host IP as well.
          - name: HOSTIP
            valueFrom:
              fieldRef:
                fieldPath: status.hostIP
        resources:
{{ toYaml .Values.resources | indent 10 }}
        volumeMounts: