In this section:
- Service - A load balancer for Pods
- Ingress - Expose Kubernetes Services outside a Kubernetes cluster
- Network Policy - Software based firewall around Kubernetes Pods
Kubernetes Namespace (ns) - Logical isolation for your application
kubernetes.io bookmark: Namespaces
kubectl create namespace ns-bootcamp-networking
kubectl config set-context --current --namespace=ns-bootcamp-networking
Kubernetes Service (svc) - A load balancer for Pods
Problem Statement: I want a stable network entry point into my application
tl;dr – Think Load balancer for individual microservices
kubernetes.io bookmark: Service
Notes
- The default kube-proxy mode for rule-based IP management is iptables
- The iptables mode's native method for load distribution is random selection
- In English: there is no round-robin load balancing for a Kubernetes Service; backends are picked at random
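If you want to confirm which proxy mode your own cluster uses, one way is to look at the kube-proxy configuration. The command below assumes a kubeadm-style cluster where kube-proxy reads its configuration from the kube-proxy ConfigMap in kube-system; other distributions may store it elsewhere.
# Optional: an empty mode value means the platform default, which is iptables on Linux
kubectl -n kube-system get configmap kube-proxy -o yaml | grep mode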
Create a Pod
kubectl run service-pod --image=nginx --port=80 --labels="tier=web"
Create the Service
kubectl expose pod service-pod --port=8080 --target-port=80 --name=my-service --type=ClusterIP
clear
# Check your work - run a diagnostics pod
kubectl run remote-run --image=busybox --restart=Never --rm -it
# Repeat this command to see different responses
wget -qO- my-service:8080
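Optionally, confirm the Service actually selected the Pod via its label selector (an extra check, not part of the original steps; the names match the commands above):
# Check your work - the Endpoints object should list the Pod IP on port 80
kubectl get svc my-service
kubectl get endpoints my-service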
Kubernetes Service (svc) - Types of Service
tl;dr – Kubernetes always respects the Law of Three, sometimes Four
There are four types of Kubernetes service:
Kubernetes Service (svc) - ClusterIP (default)
- ClusterIP: #👈👈👈 Part of CKAD exam
- Exposes the Service on a cluster-internal IP
- Choosing this value makes the Service only reachable from within the cluster
- This is the default ServiceType
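For reference, a declarative sketch that is roughly equivalent to the kubectl expose command used earlier (the name, selector, and ports are taken from that command):
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: ns-bootcamp-networking
spec:
  type: ClusterIP #👈👈👈 Default type
  selector:
    tier: web
  ports:
    - port: 8080
      targetPort: 80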
Kubernetes Service (svc) - NodePort (insecure)
- NodePort: #👈👈👈 Part of CKAD exam
- A NodePort is an open port on every node of your cluster
- When traffic is received on that open port, it is directed to a specific port on the ClusterIP of the Service it represents
- You can reach a NodePort Service from outside the cluster by requesting NodeIP:NodePort
- Do NOT do this in Production
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
  namespace: ns-bootcamp-networking
spec:
  type: NodePort #👈👈👈
  selector:
    tier: web
  ports:
    - port: 8080
      targetPort: 80
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007 #👈👈👈
EOF
# NodeIP:NodePort
# NodeIP = kubectl get nodes -o wide
# NodePort = nodePort: 30007
wget -qO- localhost:30007
Kubernetes Service (svc) - LoadBalancer (expensive)
- LoadBalancer:
- Expensive if you deploy a Cloud Load Balancer for each Service
- Exposes the Service externally using a cloud provider's load balancer
- This approach quickly went out of fashion; the cost problem was addressed by Ingress, which consolidates many Services and routes behind a single Cloud Load Balancer
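A minimal sketch of what a LoadBalancer Service would look like (illustrative only; the name is made up, and without a cloud provider or something like MetalLB the EXTERNAL-IP stays pending):
apiVersion: v1
kind: Service
metadata:
  name: my-loadbalancer-service
spec:
  type: LoadBalancer #👈👈👈 Asks the cloud provider to provision an external load balancer
  selector:
    tier: web
  ports:
    - port: 8080
      targetPort: 80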
Kubernetes Service (svc) - ExternalName (DNS)
- ExternalName:
- Services of type ExternalName map a Service to a DNS name
- Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value
- No proxying of any kind is set up
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: my-externalname-service
  namespace: ns-bootcamp-networking
spec:
  type: ExternalName #👈👈👈
  externalName: www.google.com
EOF
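To check your work, resolve the Service name from a throwaway Pod; the answer should come back as a CNAME pointing at www.google.com (nslookup output in the busybox image can vary slightly between versions):
# Check your work - resolve the ExternalName Service from inside the cluster
kubectl run remote-run --image=busybox --restart=Never --rm -it
nslookup my-externalname-service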
Kubernetes Ingress (ing) - Expose Kubernetes Services outside a Kubernetes cluster
Problem Statement: I want a way to expose my application outside the Kubernetes cluster
tl;dr – Think Layer 7 Load balancer for individual microservices
kubernetes.io bookmark: Ingress
- Ingress operates using three constructs:
  - Ingress Controller
    - Control Plane for Ingress
  - Ingress Resources
    - Ingress Traffic Rules #👈👈👈 These are the YAML files that you will work with
  - Ingress DaemonSet
    - Execution Plane for Ingress
    - Cluster-wide pods that apply the traffic rules
Ingress Controller
Prerequisite Software for this example to work:
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
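Before applying the Ingress, it is worth confirming the controller came up (the Contour quickstart installs into the projectcontour namespace; adjust if you use a different controller):
# Check your work - the Ingress Controller pods should be Running
kubectl get pods -n projectcontour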
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress #👈👈👈 Ingress Name
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: / #👈👈👈 Change
            pathType: Prefix
            backend:
              service:
                name: my-service #👈👈👈 Service Name
                port:
                  number: 8080 #👈👈👈 Change: --port=8080
EOF
curl localhost
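To see how the cluster interpreted the traffic rule (optional check; the names match the manifest above):
# Check your work - inspect the Ingress and its backend mapping
kubectl get ingress my-ingress
kubectl describe ingress my-ingress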
Notes on rewrite-target
- The nginx.ingress.kubernetes.io/rewrite-target annotation is specific to the NGINX Ingress Controller; it rewrites the matched path to the given value (here /) before the request is forwarded to the backend Service
- Other controllers, such as Contour (used above), simply ignore this annotation
Future Direction:
- It is expected that Ingress will evolve into the Gateway API
Kubernetes NetworkPolicy (netpol) - Software based firewall around Kubernetes Pods
Problem Statement: I want a way to deny all network traffic around pods unless explicitly allowed
tl;dr – Trust no one, explicitly define who talks to who with my software based firewall
GUI for explaining and generating Network Policies: editor.cilium.io
kubernetes.io bookmark: Declare Network Policy
Notes
- Network policies do not conflict; they are additive
- If any policy or policies select a pod, the pod is restricted to what is allowed by the union of those policies' ingress/egress rules
- Thus, order of evaluation does not affect the policy result
Please NOTE:
- Docker Desktop does not ship a CNI (Container Network Interface) plugin that enforces NetworkPolicy, so any NetworkPolicies you define are ignored
- The commands work, but the NetworkPolicies are not enforced
- Perform this on any cluster that enforces Network Policies
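A minimal default-deny sketch that matches the problem statement above: an empty podSelector selects every Pod in the namespace, and listing both policyTypes with no rules blocks all ingress and egress (the name and namespace here are illustrative):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: ns-bootcamp-networking
spec:
  podSelector: {} #👈👈👈 Empty selector = every Pod in the namespace
  policyTypes:
    - Ingress
    - Egress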
Kubernetes NetworkPolicy (netpol) - Types of Selector
tl;dr – Kubernetes always respects the Law of Three
- podSelector
- This selects particular Pods in the same namespace as the NetworkPolicy which should be allowed as ingress sources or egress destinations
- namespaceSelector
- This selects particular namespaces for which all Pods should be allowed as ingress sources or egress destinations
- ipBlock
- This selects particular IP CIDR ranges to allow as ingress sources or egress destinations
- These should be cluster-external IPs, since Pod IPs are ephemeral and unpredictable.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector: #👈👈👈 To which Pods does this Network Policy apply: label = role: db
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock: #👈👈👈 This selects particular IP CIDR ranges
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector: #👈👈👈 This selects particular namespaces
            matchLabels:
              project: myproject
        - podSelector: #👈👈👈 This selects particular Pods
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978
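If you apply this on a cluster that enforces NetworkPolicy, describe it to see how the rules were interpreted:
kubectl describe networkpolicy test-network-policy -n default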
Kubernetes NetworkPolicy (netpol) - AND & OR Rules
OR Rule
ingress:
  - from:
      - ipBlock: #👈👈👈 "-" = first element (OR)
          cidr: 172.17.0.0/16
          except:
            - 172.17.1.0/24
      - namespaceSelector: #👈👈👈 "-" = second element (OR)
          matchLabels:
            project: myproject
      - podSelector: #👈👈👈 "-" = third element (OR)
          matchLabels:
            role: frontend
AND Rule
ingress:
  - from:
      - ipBlock: #👈👈👈 "-" = first element (ipBlock cannot be combined with the selectors)
          cidr: 172.17.0.0/16
          except:
            - 172.17.1.0/24
      - namespaceSelector: #👈👈👈 "-" = second element, first condition AND
          matchLabels:
            project: myproject
        podSelector: #👈👈👈 No "-" = same element, second condition: namespace AND pod labels must both match
          matchLabels:
            role: frontend
DNS Rule
- Without an egress rule allowing port 53, the selected Pod could reach mysql and payroll by IP but could not resolve their Service names via cluster DNS, so DNS egress is allowed explicitly (third rule below)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: internal-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      name: internal #👈👈👈 To which Pod does this Policy apply
  policyTypes:
    - Egress
    - Ingress
  ingress:
    - {}
  egress:
    - to: #👈👈👈 First Rule: egress to mysql on port 3306
        - podSelector:
            matchLabels:
              name: mysql
      ports:
        - protocol: TCP
          port: 3306
    - to: #👈👈👈 Second Rule: egress to payroll on port 8080
        - podSelector:
            matchLabels:
              name: payroll
      ports:
        - protocol: TCP
          port: 8080
    - ports: #👈👈👈 Third Rule: egress to DNS on port 53
        - port: 53
          protocol: UDP
        - port: 53
          protocol: TCP
Clean Up
cd
yes | rm -R ~/ckad/
kubectl delete ns ns-bootcamp-networking --now
Next Kubernetes Tutorial - Kubernetes Storage
End of Section