
247 add dev env #45

Merged
merged 12 commits into from
Dec 4, 2020
3 changes: 3 additions & 0 deletions templates/.circleci/config.yml
Original file line number Diff line number Diff line change
@@ -248,6 +248,9 @@ jobs:
kubectl -n $NAMESPACE describe pod -l app=$DEPLOYMENT
exit 1
fi
MANIFEST=$(aws ecr batch-get-image --region << parameters.region >> --repository-name << parameters.repo >> --image-ids imageTag=latest --query 'images[].imageManifest' --output text)
aws ecr put-image --region << parameters.region >> --repository-name << parameters.repo >> --image-tag last-deployed --image-manifest "$MANIFEST"

workflows:
version: 2
# The main workflow. Check out the code, build it, push it, deploy to staging, test, deploy to production
16 changes: 16 additions & 0 deletions templates/README.md
@@ -12,6 +12,22 @@ kubectl -n <% .Name %> get pods
### Configuring
You can update the resource limits in [kubernetes/base/deployment.yml][base-deployment], and control fine-grained, per-environment customizations, such as scaling out your production replicas, from the [overlays configurations][env-prod].

### Dev Environment
This project is set up with a local/cloud hybrid dev environment. This means you can do fast local development of a single service, even if that service depends on other resources in your cluster.
Make a change to your service, run it, and you can immediately see the new service in action in a real environment. You can also use any tools like your local IDE, debugger, etc. to test/debug/edit/run your service.

Usually when developing, you would run the service locally with a local database, and run any other dependencies either locally or in containers using `docker-compose`, `minikube`, etc.
With this setup, your service instead has access to any dependencies running within a namespace in the EKS cluster, with access to the resources there.
[Telepresence](https://telepresence.io) is used to provide this functionality.

Development workflow:

1. Run `start-dev-env.sh` - You will be dropped into a shell that is the same as your local machine, but works as if it were running inside a pod in your k8s cluster
2. Change code and run the server - As you run your local server, using local code, it will have access to remote dependencies, and will be sent traffic by the load balancer
3. Test on your cloud environment with real dependencies - `https://<your name>-<% index .Params `stagingBackendSubdomain` %><% index .Params `stagingHostRoot` %>`
4. git commit & auto-deploy to Staging through the build pipeline

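The workflow above can be sketched as a minimal shell session. The username, project id, and hostname below are hypothetical placeholders (the real values are derived by `start-dev-env.sh` from your AWS identity and project template parameters), and the final commented line mirrors the telepresence invocation the script performs:

```shell
#!/bin/bash
# Hypothetical values; start-dev-env.sh derives these at runtime.
MY_USERNAME=jdoe                        # from `aws sts get-caller-identity`
DEV_PROJECT_ID=001                      # optional project id argument (001, 002, ...)
DEV_NAMESPACE="${MY_USERNAME}${DEV_PROJECT_ID}"
EXT_HOSTNAME=api.staging.example.com    # placeholder for the templated staging hostname

# Step 3: your personal endpoint in the staging cluster
MY_EXT_HOSTNAME="${DEV_NAMESPACE}-${EXT_HOSTNAME}"
echo "https://${MY_EXT_HOSTNAME}"

# Step 1 ultimately swaps the in-cluster deployment for your local shell:
#   telepresence --swap-deployment <service> --namespace "${DEV_NAMESPACE}" --expose 80 --run-shell
```

Because the dev namespace and hostname are prefixed with your username, multiple developers can run isolated environments against the same staging cluster.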

## Circle CI
Your repository comes with an end-to-end CI/CD pipeline, which includes the following steps:
1. Checkout
17 changes: 17 additions & 0 deletions templates/kubernetes/overlays/dev/deployment.yml
@@ -0,0 +1,17 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: <% .Name %>
spec:
template:
spec:
containers:
- name: <% .Name %>
image: <% index .Params `accountId` %>.dkr.ecr.<% index .Params `region` %>.amazonaws.com/<% .Name %>:last-deployed
resources:
requests:
memory: 64Mi
cpu: 0.1
limits:
memory: 256Mi
cpu: 1.0
53 changes: 53 additions & 0 deletions templates/kubernetes/overlays/dev/ingress.yml
@@ -0,0 +1,53 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: <% .Name %>
annotations:
# nginx ingress
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$1
# cert-manager
ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: clusterissuer-letsencrypt-production
# CORS
nginx.ingress.kubernetes.io/enable-cors: "true"
## To support both the frontend origin and 'localhost', we need this 'configuration-snippet' implementation, because the 'cors-allow-origin' field doesn't support multiple origins yet.
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($http_origin ~* "^https?://((?:<% index .Params `stagingFrontendSubdomain` %><% index .Params `stagingHostRoot` %>)|(?:localhost))") {
set $cors "true";
}
if ($request_method = 'OPTIONS') {
set $cors "${cors}options";
}

if ($cors = "true") {
add_header 'Access-Control-Allow-Origin' "$http_origin" always;
add_header 'Access-Control-Allow-Methods' 'GET, PUT, POST, DELETE, PATCH, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization' always;
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
}

if ($cors = "trueoptions") {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Methods' 'GET, PUT, POST, DELETE, PATCH, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}

spec:
rules:
- host: <% index .Params `stagingBackendSubdomain` %><% index .Params `stagingHostRoot` %>
http:
paths:
- path: /(.*)
backend:
serviceName: <% .Name %>
servicePort: http
tls:
- hosts:
- <% index .Params `stagingBackendSubdomain` %><% index .Params `stagingHostRoot` %>
secretName: <% .Name %>-tls-secret
15 changes: 15 additions & 0 deletions templates/kubernetes/overlays/dev/kustomization.yml
@@ -0,0 +1,15 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

patchesStrategicMerge:
- deployment.yml

resources:
- ../../base
- ingress.yml

configMapGenerator:
- name: <% .Name %>-config
behavior: merge
literals:
- ENVIRONMENT=staging
2 changes: 0 additions & 2 deletions templates/kubernetes/overlays/production/ingress.yml
@@ -9,8 +9,6 @@ metadata:
# cert-manager
ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: clusterissuer-letsencrypt-production
# external-dns
external-dns.alpha.kubernetes.io/hostname: <% index .Params `productionBackendSubdomain` %><% index .Params `productionHostRoot` %>
# CORS
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://<% index .Params `productionFrontendSubdomain` %><% index .Params `productionHostRoot` %>/"
2 changes: 0 additions & 2 deletions templates/kubernetes/overlays/staging/ingress.yml
@@ -9,8 +9,6 @@ metadata:
# cert-manager
ingress.kubernetes.io/ssl-redirect: "true"
cert-manager.io/cluster-issuer: clusterissuer-letsencrypt-production
# external-dns
external-dns.alpha.kubernetes.io/hostname: <% index .Params `stagingBackendSubdomain` %><% index .Params `stagingHostRoot` %>
# CORS
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://<% index .Params `stagingFrontendSubdomain` %><% index .Params `stagingHostRoot` %>/"
1 change: 1 addition & 0 deletions templates/main.go
@@ -41,6 +41,7 @@ func main() {

r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path))
log.Printf("Hello, %q", html.EscapeString(r.URL.Path))
})

serverAddress := fmt.Sprintf("0.0.0.0:%s", os.Getenv("SERVER_PORT"))
148 changes: 148 additions & 0 deletions templates/start-dev-env.sh
@@ -0,0 +1,148 @@
#!/bin/bash

#
# This script creates a dev namespace in the Staging environment
#
PROJECT_NAME=<% .Name %>
ENVIRONMENT=stage
ACCOUNT_ID=<% index .Params `accountId` %>
REGION=<% index .Params `region` %>

# common functions
function usage() {
echo
echo "Usage:"
echo " $0 <project id>"
echo " - project id: can be 001, 002, or any other id without spaces"
exit 1
}

function command_exist() {
command -v ${1} >& /dev/null
}

function error_exit() {
echo "ERROR : $1"
exit 2
}

function can_i() {
commands=$1
IFS=',' read -r -a array <<< "$commands"
err=0
for command in "${array[@]}"
do
# Use a brace group, not a subshell, so the err counter is updated in the current shell
kubectl --context ${CLUSTER_CONTEXT} auth can-i $command >& /dev/null || { echo "No permission to '$command'"; err=$((err+1)); }
done

[[ $err -gt 0 ]] && error_exit "Found $err permission errors. Please check with your administrator."

echo "Permission checks: passed"
return 0
}

# Start
# Validate current iam user
MY_USERNAME=$(aws sts get-caller-identity --output json | jq -r .Arn | cut -d/ -f2)
DEV_USERS=$(aws iam get-group --group-name ${PROJECT_NAME}-developer-${ENVIRONMENT} | jq -r .Users[].UserName)
[[ "${DEV_USERS[@]}" =~ "${MY_USERNAME}" ]] || error_exit "You (${MY_USERNAME}) are not in the ${PROJECT_NAME}-developer-${ENVIRONMENT} IAM group."

DEV_PROJECT_ID=${1:-""}

echo '[Dev Environment]'

# Validate cluster
CLUSTER_CONTEXT=${PROJECT_NAME}-${ENVIRONMENT}-${REGION}
echo " Cluster context: ${CLUSTER_CONTEXT}"

# Validate secret
NAMESPACE=${PROJECT_NAME}
SECRET_NAME=${PROJECT_NAME}
DEV_SECRET_NAME=devenv${PROJECT_NAME}
DEV_SECRET_JSON=$(kubectl --context ${CLUSTER_CONTEXT} get secret ${DEV_SECRET_NAME} -n ${NAMESPACE} -o json)
[[ -z "${DEV_SECRET_JSON}" ]] && error_exit "The secret ${DEV_SECRET_NAME} does not exist in namespace '${NAMESPACE}'."

# Check installations
if ! command_exist kustomize || ! command_exist telepresence; then
if ! command_exist kustomize; then
error_exit "command 'kustomize' not found: please visit https://kubectl.docs.kubernetes.io/installation/kustomize/"
fi
if ! command_exist telepresence; then
error_exit "command 'telepresence' not found. You can download it at https://www.telepresence.io/reference/install"
fi
fi

# Setup dev namespace
DEV_NAMESPACE=${MY_USERNAME}${DEV_PROJECT_ID}
kubectl --context ${CLUSTER_CONTEXT} get namespace ${DEV_NAMESPACE} >& /dev/null || \
(can_i "create namespace,create deployment,create ingress,create service,create secret,create configmap" && \
kubectl --context ${CLUSTER_CONTEXT} create namespace ${DEV_NAMESPACE})
echo " Namespace: ${DEV_NAMESPACE}"

# Setup dev secret from the pre-configured one
kubectl --context ${CLUSTER_CONTEXT} get secret ${SECRET_NAME} -n ${DEV_NAMESPACE} >& /dev/null || \
echo ${DEV_SECRET_JSON} | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' | sed "s/${DEV_SECRET_NAME}/${SECRET_NAME}/g" | kubectl --context ${CLUSTER_CONTEXT} apply -n ${DEV_NAMESPACE} -f -
echo " Secret: ${SECRET_NAME}"

# Setup dev service account from pre-configured one
SERVICE_ACCOUNT=backend-service
kubectl --context ${CLUSTER_CONTEXT} get sa ${SERVICE_ACCOUNT} -n ${DEV_NAMESPACE} >& /dev/null || \
kubectl --context ${CLUSTER_CONTEXT} get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o json | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' | kubectl --context ${CLUSTER_CONTEXT} apply -n ${DEV_NAMESPACE} -f -

# Setup dev k8s manifests, configuration, docker login etc
CONFIG_ENVIRONMENT="dev"
EXT_HOSTNAME=<% index .Params `stagingBackendSubdomain` %><% index .Params `stagingHostRoot` %>
MY_EXT_HOSTNAME=${DEV_NAMESPACE}-${EXT_HOSTNAME}
ECR_REPO=${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${PROJECT_NAME}
VERSION_TAG=latest
DATABASE_NAME=<% index .Params `databaseName` %>
DEV_DATABASE_NAME=$(echo "dev${MY_USERNAME}" | tr -dc 'A-Za-z0-9')
echo " Domain: ${MY_EXT_HOSTNAME}"
echo " Database Name: ${DEV_DATABASE_NAME}"

# Apply manifests
(cd kubernetes/overlays/${CONFIG_ENVIRONMENT} && \
kustomize build . | \
sed "s|${EXT_HOSTNAME}|${MY_EXT_HOSTNAME}|g" | \
sed "s|DATABASE_NAME: ${DATABASE_NAME}|DATABASE_NAME: ${DEV_DATABASE_NAME}|g" | \
kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} apply -f - ) || error_exit "Failed to apply kubernetes manifests"

# Confirm deployment
if ! kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} rollout status deployment/${PROJECT_NAME} -w --timeout=180s ; then
echo "${PROJECT_NAME} rollout check failed:"
echo "${PROJECT_NAME} deployment:"
kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} describe deployment ${PROJECT_NAME}
echo "${PROJECT_NAME} replicaset:"
kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} describe rs -l app=${PROJECT_NAME}
echo "${PROJECT_NAME} pods:"
kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} describe pod -l app=${PROJECT_NAME}
error_exit "Failed deployment. Leaving namespace ${DEV_NAMESPACE} for debugging"
fi

# Verify until the ingress DNS gets ready
echo
if nslookup ${MY_EXT_HOSTNAME} >& /dev/null; then
echo " Notice: your domain is ready to use."
else
echo " Notice: the first time you use this environment it may take up to 5 minutes for DNS to propagate before the hostname is available."
bash -c "while ! nslookup ${MY_EXT_HOSTNAME} >& /dev/null; do sleep 30; done; echo && echo \" Notice: your domain ${MY_EXT_HOSTNAME} is ready to use.\";" &
fi

# Starting telepresence shell
echo
echo "Now you are ready to access your service at:"
echo
echo " https://${MY_EXT_HOSTNAME}"
echo
echo -e "Your telepresence dev environment is now loading; it will proxy all requests and environment variables from the cloud EKS cluster to this local shell.\nNote that the URL above will return a \"502 Bad Gateway\" error until you launch the service in the shell, at which point it will start receiving traffic."
echo

# Starting dev environment with telepresence shell
echo
telepresence --context ${CLUSTER_CONTEXT} --swap-deployment ${PROJECT_NAME} --namespace ${DEV_NAMESPACE} --expose 80 --run-shell

# Ending dev environment
echo
kubectl --context ${CLUSTER_CONTEXT} delete namespaces/${DEV_NAMESPACE}
echo "Your dev environment on Staging has been completely deleted"
echo