
247 add dev env #45

Merged
12 commits merged on Dec 4, 2020
16 changes: 16 additions & 0 deletions templates/README.md
@@ -12,6 +12,22 @@ kubectl -n <% .Name %> get pods
### Configuring
You can update the resource limits in the [kubernetes/base/deployment.yml][base-deployment], and control fine-grain customizations based on environment and specific deployments such as Scaling out your production replicas from the [overlays configurations][env-prod]

### Dev Environment
You can do fast local development of a single service, even if that service depends on other services in your cluster. Make a change to your service, save it, and immediately see the new version in action. You can also use any locally installed tools, such as your IDE, to test, debug, and edit the service.

Suppose your backend service (`<% .Name %>`, a web service that outputs `Hello`) is running on the Staging cluster. You can `curl https://<% index .Params `stagingBackendSubdomain` %><% index .Params `stagingHostRoot` %>` and see `Hello`. Now you want to change the service and verify the change locally.

Usually, you would test the service locally against a local database. Today, however, you need to access data in the Staging database, and you are not allowed to reach that database directly from your local machine.

- Regular development workflow:
  a. change code --> b. light testing locally --> c. git commit & auto-deploy to Staging --> d. verify the changes on Staging --> e. repeat a-d until done

- New development workflow:
  a. run `start-dev-env.sh` --> b. change code --> c. test on Staging against the cloud DB --> d. repeat b-c until done --> e. git commit & auto-deploy to Staging

Note: this script is powered by Telepresence (http://telepresence.io) and Kustomize. You may customize the script.
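Each developer gets an isolated copy of the service in the cluster, under a namespace and hostname derived from their IAM username and a project id. A minimal sketch of that derivation, with hypothetical values (`jane`, `api.example.com`) standing in for the real username and template parameters:

```shell
MY_USERNAME="jane"               # hypothetical IAM username
DEV_PROJECT_ID="001"             # optional script argument, defaults to 001
EXT_HOSTNAME="api.example.com"   # stand-in for the staging backend hostname

# Same derivation as in start-dev-env.sh
DEV_NAMESPACE="${MY_USERNAME}-${DEV_PROJECT_ID}"
MY_EXT_HOSTNAME="${DEV_NAMESPACE}-${EXT_HOSTNAME}"

echo "${DEV_NAMESPACE}"      # jane-001
echo "${MY_EXT_HOSTNAME}"    # jane-001-api.example.com
```

Because the namespace and external hostname both embed the username and project id, two developers (or two project ids) never collide on Staging.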


## Circle CI
Your repository comes with an end-to-end CI/CD pipeline, which includes the following steps:
1. Checkout
26 changes: 25 additions & 1 deletion templates/kubernetes/overlays/staging/ingress.yml
@@ -13,7 +13,31 @@ metadata:
external-dns.alpha.kubernetes.io/hostname: <% index .Params `stagingBackendSubdomain` %><% index .Params `stagingHostRoot` %>
# CORS
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/cors-allow-origin: "https://<% index .Params `stagingFrontendSubdomain` %><% index .Params `stagingHostRoot` %>/"
nginx.ingress.kubernetes.io/configuration-snippet: |
if ($http_origin ~* "^https?://((?:<% index .Params `stagingFrontendSubdomain` %><% index .Params `stagingHostRoot` %>)|(?:localhost))") {
set $cors "true";
}
if ($request_method = 'OPTIONS') {
set $cors "${cors}options";
}

if ($cors = "true") {
add_header 'Access-Control-Allow-Origin' "$http_origin" always;
add_header 'Access-Control-Allow-Methods' 'GET, PUT, POST, DELETE, PATCH, OPTIONS' always;
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization' always;
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
}

if ($cors = "trueoptions") {
add_header 'Access-Control-Allow-Origin' "$http_origin";
add_header 'Access-Control-Allow-Methods' 'GET, PUT, POST, DELETE, PATCH, OPTIONS';
add_header 'Access-Control-Allow-Headers' 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization';
add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
add_header 'Access-Control-Max-Age' 1728000;
add_header 'Content-Type' 'text/plain charset=UTF-8';
add_header 'Content-Length' 0;
return 204;
}

spec:
rules:
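The `configuration-snippet` above gates CORS headers on an origin regex. The matching logic can be checked locally with the same pattern, using a hypothetical hostname `api.example.com` in place of the template parameters:

```shell
# Same origin check as the nginx snippet, with a stand-in hostname.
pattern='^https?://((api\.example\.com)|(localhost))'

for origin in "https://api.example.com" "http://localhost:3000" "https://evil.com"; do
  if echo "$origin" | grep -qE "$pattern"; then
    echo "$origin -> allowed"
  else
    echo "$origin -> blocked"
  fi
done
# https://api.example.com -> allowed
# http://localhost:3000 -> allowed
# https://evil.com -> blocked
```

Note the pattern is anchored only at the start, so an origin such as `https://localhost.evil.com` would also be allowed; appending something like `(:[0-9]+)?$` would tighten the check.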
1 change: 1 addition & 0 deletions templates/main.go
@@ -41,6 +41,7 @@ func main() {

r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Hello, %q", html.EscapeString(r.URL.Path))
log.Printf("Hello, %q", html.EscapeString(r.URL.Path))
})

serverAddress := fmt.Sprintf("0.0.0.0:%s", os.Getenv("SERVER_PORT"))
131 changes: 131 additions & 0 deletions templates/start-dev-env.sh
@@ -0,0 +1,131 @@
#!/bin/bash

#
# This script creates a dev namespace in the Staging environment
#
PROJECT_NAME=<% .Name %>
ENVIRONMENT=stage
ACCOUNT_ID=<% index .Params `accountId` %>
REGION=<% index .Params `region` %>

# common functions
function usage() {
echo
echo "Usage:"
echo " $0 <project id>"
echo " - project id: can be 001, 002, or any id without spaces"
exit 1
}

function command_exist() {
command -v ${1} >& /dev/null
}

function error_exit() {
echo "ERROR : $1"
exit 2
}

function can_i() {
commands=$1
IFS=',' read -r -a array <<< "$commands"
err=0
for command in "${array[@]}"
do
kubectl --context ${CLUSTER_CONTEXT} auth can-i $command >& /dev/null || { echo "No permission to '$command'"; err=$((err+1)); }
done

[[ $err -gt 0 ]] && error_exit "Found $err permission errors. Please check with your administrator."

echo "Permission checks: passed"
return 0
}

# Start
# Validate current iam user
MY_USERNAME=$(aws sts get-caller-identity --output json | jq -r .Arn | cut -d/ -f2)
DEV_USERS=$(aws iam get-group --group-name ${PROJECT_NAME}-developer-${ENVIRONMENT} | jq -r .Users[].UserName)
echo "${DEV_USERS}" | grep -qx "${MY_USERNAME}" || error_exit "You (${MY_USERNAME}) are not in the ${PROJECT_NAME}-developer-${ENVIRONMENT} IAM group."

DEV_PROJECT_ID=${1:-"001"}

echo '[Dev Environment]'

# Validate cluster
CLUSTER_CONTEXT=${PROJECT_NAME}-${ENVIRONMENT}-${REGION}
echo " Cluster context: ${CLUSTER_CONTEXT}"

# Validate secret
NAMESPACE=${PROJECT_NAME}
SECRET_NAME=${PROJECT_NAME}
DEV_SECRET_NAME=devenv${PROJECT_NAME}
DEV_SECRET_JSON=$(kubectl --context ${CLUSTER_CONTEXT} get secret ${DEV_SECRET_NAME} -n ${NAMESPACE} -o json)
[[ -z "${DEV_SECRET_JSON}" ]] && error_exit "The secret ${DEV_SECRET_NAME} does not exist in namespace '${NAMESPACE}'."

# Check installations
if ! command_exist kustomize; then
  error_exit "command 'kustomize' not found: please visit https://kubectl.docs.kubernetes.io/installation/kustomize/"
fi
if ! command_exist telepresence; then
  error_exit "command 'telepresence' not found. You can download it at https://www.telepresence.io/reference/install"
fi

# Set up dev namespace
DEV_NAMESPACE=${MY_USERNAME}-${DEV_PROJECT_ID}
kubectl --context ${CLUSTER_CONTEXT} get namespace ${DEV_NAMESPACE} >& /dev/null || \
(can_i "create namespace,create deployment,create ingress,create service,create secret,create configmap" && \
kubectl --context ${CLUSTER_CONTEXT} create namespace ${DEV_NAMESPACE})
echo " Namespace: ${DEV_NAMESPACE}"

# Set up dev secret from the pre-configured one
kubectl --context ${CLUSTER_CONTEXT} get secret ${SECRET_NAME} -n ${DEV_NAMESPACE} >& /dev/null || \
echo "${DEV_SECRET_JSON}" | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' | sed "s/${DEV_SECRET_NAME}/${SECRET_NAME}/g" | kubectl --context ${CLUSTER_CONTEXT} apply -n ${DEV_NAMESPACE} -f -
echo " Secret: ${SECRET_NAME}"

# Set up dev service account from the pre-configured one
SERVICE_ACCOUNT=backend-service
kubectl --context ${CLUSTER_CONTEXT} get sa ${SERVICE_ACCOUNT} -n ${DEV_NAMESPACE} >& /dev/null || \
kubectl --context ${CLUSTER_CONTEXT} get sa ${SERVICE_ACCOUNT} -n ${NAMESPACE} -o json | jq 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])' | kubectl --context ${CLUSTER_CONTEXT} apply -n ${DEV_NAMESPACE} -f -

# Set up dev k8s manifests, configuration, docker login, etc.
CONFIG_ENVIRONMENT="staging"
EXT_HOSTNAME=<% index .Params `stagingBackendSubdomain` %><% index .Params `stagingHostRoot` %>
MY_EXT_HOSTNAME=${DEV_NAMESPACE}-${EXT_HOSTNAME}
ECR_REPO=${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${PROJECT_NAME}
VERSION_TAG=latest
DATABASE_NAME=<% index .Params `databaseName` %>
DEV_DATABASE_NAME=$(echo "dev${MY_USERNAME}" | tr -dc 'A-Za-z0-9')
echo " Domain: ${MY_EXT_HOSTNAME}"
echo " Database Name: ${DEV_DATABASE_NAME}"

# Apply manifests
(cd kubernetes/overlays/${CONFIG_ENVIRONMENT} && \
kustomize build . | \
sed "s|image: fake-image|image: ${ECR_REPO}:${VERSION_TAG}|g" | \
sed "s|${EXT_HOSTNAME}|${MY_EXT_HOSTNAME}|g" | \
sed "s|DATABASE_NAME: ${DATABASE_NAME}|DATABASE_NAME: ${DEV_DATABASE_NAME}|g" | \
kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} apply -f - ) || error_exit "Failed to apply kubernetes manifests"

# Confirm deployment
if ! kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} rollout status deployment/${PROJECT_NAME} -w --timeout=180s ; then
echo "${PROJECT_NAME} rollout check failed:"
echo "${PROJECT_NAME} deployment:"
kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} describe deployment ${PROJECT_NAME}
echo "${PROJECT_NAME} replicaset:"
kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} describe rs -l app=${PROJECT_NAME}
echo "${PROJECT_NAME} pods:"
kubectl --context ${CLUSTER_CONTEXT} -n ${DEV_NAMESPACE} describe pod -l app=${PROJECT_NAME}
error_exit "Failed deployment. Leaving namespace ${DEV_NAMESPACE} for debugging"
fi

# Starting dev environment with telepresence shell
echo
telepresence --context ${CLUSTER_CONTEXT} --swap-deployment ${PROJECT_NAME} --namespace ${DEV_NAMESPACE} --expose 80 --run-shell

# Ending dev environment
echo
kubectl --context ${CLUSTER_CONTEXT} delete namespaces/${DEV_NAMESPACE}
echo "Your dev environment on Staging has been deleted completely"
echo
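For reference, the secret-cloning step in `start-dev-env.sh` strips namespace-bound metadata before re-applying the secret in the dev namespace. The `jq` filter can be tried locally on a toy secret (assuming `jq` is installed; the names and values here are made up):

```shell
# Minimal stand-in for the JSON returned by `kubectl get secret -o json`.
secret='{"metadata":{"name":"devenvmyapp","namespace":"myapp","resourceVersion":"123","uid":"abc"},"data":{"KEY":"dmFsdWU="}}'

# Same jq filter as in the script: drop fields bound to the source namespace.
echo "$secret" | jq -c 'del(.metadata["namespace","creationTimestamp","resourceVersion","selfLink","uid"])'
# {"metadata":{"name":"devenvmyapp"},"data":{"KEY":"dmFsdWU="}}
```

With those fields removed, `kubectl apply -n ${DEV_NAMESPACE} -f -` can recreate the secret in the dev namespace without conflicting resource versions or UIDs.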