This guide is based on the official AWS EKS documentation and assumes the use of the `eksctl` tool. For other methods of provisioning an EKS cluster, refer to the official documentation.
This guide assumes you have the following tools installed:

- `aws` CLI
- `eksctl`
- `kubectl`
- `jq`

- Make sure you have your credentials set up in the `~/.aws/credentials` file. You can use `aws configure` to set them up.
- Define the cluster name, region, and IAM role name in environment variables:

  ```bash
  export NAME="${USER}-promscale-$(date +%F-%H-%M)"
  export REGION="us-east-1"
  export ROLE="${NAME}-eks-ebs-csi"
  export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
  ```
- Start a cluster with `eksctl`:

  ```bash
  eksctl create cluster --name "$NAME" --region "$REGION" --without-nodegroup
  ```
- Wait until the cluster is up and running. This can be checked with `eksctl`:

  ```bash
  eksctl get cluster --name "$NAME" --region "$REGION"
  ```
- Verify cluster access with `kubectl`:

  ```bash
  kubectl cluster-info
  ```
- Associate an OIDC provider with the cluster:

  ```bash
  eksctl utils associate-iam-oidc-provider --region "$REGION" --cluster "$NAME" --approve
  ```
- Create the AWS IAM service account used to create volumes (PVCs) on the cluster:

  ```bash
  eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster "$NAME" --region "$REGION" --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy --approve --role-only --role-name "$ROLE"
  ```
- Create a nodegroup:

  ```bash
  eksctl create nodegroup --cluster "$NAME" --region "$REGION" --node-type m5.xlarge --nodes 3 --nodes-min 1 --nodes-max 3 --managed
  ```
- Install the EBS CSI driver add-on needed to create PVCs:

  ```bash
  eksctl create addon --name aws-ebs-csi-driver --cluster "$NAME" --region "$REGION" --service-account-role-arn "arn:aws:iam::${AWS_ACCOUNT_ID}:role/$ROLE" --force
  ```

Note: You can obtain the current list of node types with:

```bash
aws ec2 describe-instance-types --region "$REGION" | jq '.InstanceTypes[].InstanceType'
```

or here.
By default, this guide deploys the EKS cluster with a fairly small setup: 3 nodes of type `m5.xlarge`. If you want to change the number of nodes or their type, you can do so as follows:
- Get the nodegroup name:

  ```bash
  eksctl get nodegroup --cluster "$NAME" --region "$REGION"
  ```
- Scale the nodegroup:

  ```bash
  eksctl scale nodegroup --cluster "$NAME" --region "$REGION" --nodes 5 --nodes-min 1 --nodes-max 5 --name <nodegroup-name>
  ```
- Wait until the nodes are up and running:

  ```bash
  kubectl get nodes
  ```
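Instead of polling `kubectl get nodes` manually, you can block until every node reports `Ready`; a sketch, where the five-minute timeout is an assumption you may want to adjust:

```bash
# Wait for all nodes to reach the Ready condition, failing after
# five minutes. The timeout value is an arbitrary choice.
kubectl wait --for=condition=Ready node --all --timeout=300s
```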
To use the gp3 storage class, follow the guide here. Make sure the storageClass is created before starting the tobs stack.
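As a sketch, a minimal gp3 StorageClass backed by the EBS CSI driver installed above might look like the following. The class name and parameter values are assumptions for illustration; follow the linked guide for the authoritative configuration:

```bash
# Hypothetical example: create a gp3 StorageClass using the EBS CSI
# driver. The name "gp3" and the parameters shown are assumptions;
# adjust them to your environment.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
EOF
```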
Deleting the cluster removes all associated resources. To delete it, execute the following command:

```bash
eksctl delete cluster --name "$NAME" --region "$REGION"
```
If you have `Service` objects of type `LoadBalancer`, you will need to delete them before deleting the cluster; otherwise they leave stale VPC resources behind. This should be done for you when the stack is uninstalled via `make uninstall`. If you have not done this, you can remove all `Service` objects with:

```bash
kubectl delete svc -A --all
```