If you use WSL2 and get the following error when creating a cluster with `kind create cluster`:

```
 ✗ Starting control-plane 🕹️
ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged kind-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
```

the solution is to specify the node image explicitly:

```bash
kind create cluster --image kindest/node:v1.23.0
```
In this section, we'll deploy a simple web application to a Kubernetes cluster. For that, we'll implement the following steps:
- Create a simple ping application in Flask
  - For this, we'll create a directory `ping`, and for the `pipenv` environment we'll also create a separate `Pipfile` to avoid conflicts. Then we need to install `flask` and `gunicorn`, as sketched below.
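    A minimal sketch of the environment setup (assuming `pipenv` is already installed; `pipenv install` creates a `Pipfile` if one doesn't exist yet):

    ```bash
    mkdir ping && cd ping

    # Creates Pipfile/Pipfile.lock in this directory and installs the dependencies
    pipenv install flask gunicorn
    ```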
  - We'll use the app that we built in session 5 by copying `ping.py` and `Dockerfile` with slight changes, and then build the image.
    ```python
    # ping.py
    from flask import Flask

    app = Flask('ping-app')

    @app.route('/ping', methods=['GET'])
    def ping():
        return 'PONG'

    if __name__ == "__main__":
        app.run(debug=True, host='0.0.0.0', port=9696)
    ```
    ```dockerfile
    # Dockerfile
    FROM python:3.9-slim

    RUN pip install pipenv

    WORKDIR /app

    COPY ["Pipfile", "Pipfile.lock", "./"]

    RUN pipenv install --system --deploy

    COPY "ping.py" .

    EXPOSE 9696

    ENTRYPOINT ["gunicorn", "--bind=0.0.0.0:9696", "ping:app"]
    ```
  - To build the image, we need to specify an app name along with a tag, otherwise the local Kubernetes setup `kind` will cause problems: `docker build -t ping:v001 .`. Now we can run the docker container and, in a separate terminal, use the command `curl localhost:9696/ping` to test the application, as shown below.
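    A sketch of the build-and-test loop (the `docker run` flags are an assumption; any mapping of port 9696 works):

    ```bash
    # Build the image with an explicit tag (an implicit 'latest' tag causes image-pull problems with kind)
    docker build -t ping:v001 .

    # Run the container, publishing port 9696 (keep this running in one terminal)
    docker run -it --rm -p 9696:9696 ping:v001

    # In a separate terminal, test the app; expected response: PONG
    curl localhost:9696/ping
    ```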
- Install `kubectl` and `kind` to build and test a cluster locally
  - We'll install `kubectl` from AWS because later we'll deploy our application on AWS:
    ```bash
    curl -o kubectl https://s3.us-west-2.amazonaws.com/amazon-eks/1.24.7/2022-10-31/bin/linux/amd64/kubectl
    ```
  - To install `kind` for the local Kubernetes setup, download the executable binary and make it executable:
    ```bash
    wget https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64 -O kind
    chmod +x ./kind
    ```
    Once the utility is installed, we need to place it into our `$PATH` at our preferred binary installation directory, as sketched below.
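    A sketch of the `$PATH` step (`/usr/local/bin` is an assumption; any directory on your `$PATH` works):

    ```bash
    # kubectl also needs the executable bit before being moved
    chmod +x ./kubectl

    # Move both binaries into a directory that is on $PATH
    sudo mv ./kubectl ./kind /usr/local/bin/

    # Verify the installs
    kubectl version --client
    kind version
    ```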
- Set up a Kubernetes cluster and test it
  - First thing we need to do is create a cluster: `kind create cluster` (the default cluster name is `kind`)
  - Configure `kubectl` to interact with `kind`: `kubectl cluster-info --context kind-kind`
  - Check the running services to make sure it works: `kubectl get service` (the full sequence is sketched below)
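  Put together, the bootstrap looks like this (a fresh cluster should only list the default `kubernetes` ClusterIP service at this point):

  ```bash
  # Create a local cluster (default name: kind)
  kind create cluster

  # Point kubectl at the new cluster's context
  kubectl cluster-info --context kind-kind

  # Sanity check: lists the default `kubernetes` ClusterIP service
  kubectl get service
  ```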
- Create a deployment
  - Kubernetes requires a lot of configuration, and VS Code has a handy extension that can take a lot of the hassle away.
  - Create `deployment.yaml`:
    ```yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ping-deployment  # name of the deployment
    spec:
      replicas: 1  # number of pods to create
      selector:
        matchLabels:  # all pods that have the app label 'ping' belong to 'ping-deployment'
          app: ping
      template:  # template of pods (all pods have the same configuration)
        metadata:
          labels:  # each pod gets the same label (i.e., ping in our case)
            app: ping
        spec:
          containers:
          - name: ping-pod  # name of the container
            image: ping:v001  # docker image with tag
            resources:
              limits:
                memory: "128Mi"
                cpu: "500m"
            ports:
            - containerPort: 9696  # port to expose
    ```
  - We can now apply the `deployment.yaml` to our Kubernetes cluster: `kubectl apply -f deployment.yaml`
  - Next we need to load the docker image into our cluster: `kind load docker-image ping:v001`
  - Executing the command `kubectl get pod` should show the pod status as `Running`.
  - To test the pod, forward its port: `kubectl port-forward <pod-name> 9696:9696`, then execute `curl localhost:9696/ping` to get the response (the full flow is sketched below).
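  The whole deploy-and-test flow in one place (`<pod-name>` is whatever name `kubectl get pod` prints, e.g. `ping-deployment-xxxxxxxx-xxxxx`):

  ```bash
  kubectl apply -f deployment.yaml

  # kind nodes can't see the host's local images, so load the image into the cluster
  kind load docker-image ping:v001

  # Wait for STATUS to become Running
  kubectl get pod

  # Forward a local port to the pod (blocks this terminal)
  kubectl port-forward <pod-name> 9696:9696

  # In another terminal; expected response: PONG
  curl localhost:9696/ping
  ```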
- Create a service for the deployment
  - Create `service.yaml`:
    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: ping  # name of the service ('ping')
    spec:
      type: LoadBalancer  # type of the service (external in this case)
      selector:  # which pods qualify for forwarding requests
        app: ping
      ports:
      - port: 80  # port of the service
        targetPort: 9696  # port of the pod
    ```
  - Apply `service.yaml`: `kubectl apply -f service.yaml`
  - Running `kubectl get service` will give us the list of external and internal services, along with their service type and other information.
  - Test the service by port forwarding and specifying the ports: `kubectl port-forward service/ping 8080:80` (using local port 8080 to avoid the permission requirement for ports below 1024); executing `curl localhost:8080/ping` should give us the output `PONG`, as in the sketch below.
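  The same test, end to end:

  ```bash
  kubectl apply -f service.yaml

  # EXTERNAL-IP will show <pending> until a load balancer (MetalLB, below) is set up
  kubectl get service

  # Forward local port 8080 to the service's port 80 (blocks this terminal)
  kubectl port-forward service/ping 8080:80

  # In another terminal; expected response: PONG
  curl localhost:8080/ping
  ```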
- Set up and use `MetalLB` as an external load-balancer
  - Apply the MetalLB manifest:
    ```bash
    kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
    ```
  - Wait until the MetalLB pods (controller and speakers) are ready:
    ```bash
    kubectl wait --namespace metallb-system \
      --for=condition=ready pod \
      --selector=app=metallb \
      --timeout=90s
    ```
  - Set up the address pool used by load balancers:
    - Get the range of IP addresses on the docker kind network:
      ```bash
      docker network inspect -f '{{.IPAM.Config}}' kind
      ```
    - Create the IP address pool using `metallb-config.yaml` (the address range must lie within the kind network subnet found above) and apply it with `kubectl apply -f metallb-config.yaml`:
      ```yaml
      apiVersion: metallb.io/v1beta1
      kind: IPAddressPool
      metadata:
        name: example
        namespace: metallb-system
      spec:
        addresses:
        - 172.20.255.200-172.20.255.250
      ---
      apiVersion: metallb.io/v1beta1
      kind: L2Advertisement
      metadata:
        name: empty
        namespace: metallb-system
      ```
  - Apply the deployment and service for updates:
    ```bash
    kubectl apply -f deployment.yaml
    kubectl apply -f service.yaml
    ```
  - Get the external LB_IP: `kubectl get service`
  - Test using the load-balancer IP address: `curl <LB_IP>:80/ping` (see the sketch below)
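  A sketch for grabbing the external IP without copying it by hand (the jsonpath query is one way to do it; the service name `ping` comes from `service.yaml`):

  ```bash
  # Extract the EXTERNAL-IP that MetalLB assigned to the service
  LB_IP=$(kubectl get service ping -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

  # Expected response: PONG
  curl $LB_IP:80/ping
  ```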
Add notes from the video (PRs are welcome)
- kind = local kubernetes cluster https://github.com/kubernetes-sigs/kind
- kubectl = tool for interacting with kubernetes cluster https://kubernetes.io/docs/reference/kubectl/
- yaml kubernetes configuration: allocating resources (RAM, CPU), templates, port labels
- kubernetes ports/pods: requests, responses, forwarding, connection refusal
The notes are written by the community. If you see an error here, please create a PR with a fix.