- One node is enough, with a minimum of 2 CPUs and 6 GB of memory
- You will need more capacity if you plan to run a delegate in the cluster (a cluster-creation sketch follows below)
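If you still need a cluster, here is a minimal sketch on GCP; the cluster name, zone, and machine type are placeholder assumptions, and any managed Kubernetes service works:

```bash
# Create a single-node cluster sized for the demo
# (e2-standard-2 = 2 vCPUs / 8 GB memory, above the 2 CPU / 6 GB minimum)
gcloud container clusters create cv-demo \
  --zone us-central1-a \
  --num-nodes 1 \
  --machine-type e2-standard-2

# Point kubectl at the new cluster
gcloud container clusters get-credentials cv-demo --zone us-central1-a
```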
Add Kubernetes cloud provider in Harness (docs)
- In Harness, add a Kubernetes cloud provider for the cluster, called `cv-demo`
- This can use service account credentials and the master endpoint if you want to use a delegate running outside the cluster, or
- Otherwise, install a delegate into the cluster first, then create the cloud provider (see the sketch below)
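A minimal sketch of the in-cluster route, assuming you downloaded the delegate manifest (`harness-delegate.yaml`) from the Harness UI; the namespace below is the default used by that manifest:

```bash
# Install the Harness Kubernetes delegate from the downloaded manifest
kubectl apply -f harness-delegate.yaml

# Confirm the delegate pod is running before creating the cloud provider
kubectl get pods -n harness-delegate
```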
Add Docker Hub artifact repo connector (docs)
- Add a Docker Hub artifact repo connector called `public-docker`
- Use URL `https://index.docker.io/v2/`
- The images are public, but Docker Hub credentials are still required by the connector (see the sanity check after this list)
- Note: if you don't name the connector exactly `public-docker`, the artifact sources won't sync. If that happens, add the artifact sources yourself:
  - cv-demo service: `harness/cv-demo`
  - cv-demo-ui service: `harness/cv-demo-ui`
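An optional sanity check: Docker Hub's v2 endpoint answers unauthenticated requests with a 401 and a Www-Authenticate header, which confirms the connector URL is reachable:

```bash
# Expect HTTP 401 Unauthorized with a Www-Authenticate header
curl -i https://index.docker.io/v2/
```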
Clone app from git (docs)
- Create a new app called `cv-demo`
- Set up bi-directional Git sync while the app is still empty
- Add a Git connector and create a webhook for bi-directional sync
- In Git, add the webhook and change the content type to `application/json`
- Clone our cv-demo repo:

```bash
git clone https://github.com/wings-software/cv-demo.git
```
- Copy the cv-demo configuration into your own sync repo (here `my-cv-demo` is a local clone of your sync repo):

```bash
cp -R cv-demo/Setup/Applications/cv-demo/* my-cv-demo/Setup/Applications/cv-demo/
cd my-cv-demo
```
- Remove the Service Guard config for now (we'll add it back later), then commit and push:

```bash
rm -rf Setup/Applications/cv-demo/Environments/cv-demo/Service\ Verification/
git add -A
git commit -m "Import cv-demo application"  # any commit message works
git push
```
- Check that your app was synced in the Harness UI
- Check for errors in Configuration as Code
- Reserve static IPs (see instructions for AWS, GCP, or Azure; a GCP example follows this list) for:
- Ingress controller
- Prometheus
- Elasticsearch
- Optionally, reserve a DNS name for the ingress controller's static IP
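A GCP sketch (the address names and region are placeholder assumptions; use the equivalent on AWS or Azure):

```bash
# Reserve one regional static IP per service
for name in cv-demo-ingress cv-demo-prometheus cv-demo-elk; do
  gcloud compute addresses create "$name" --region us-central1
done

# Print the reserved addresses to plug into the overrides below
gcloud compute addresses list --filter="region:us-central1"
```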
- Edit the values YAML override for the `nginx-ingress-controller` service and set:

```yaml
loadBalancerIP: <ingress-static-ip>
```
- Edit the values YAML override for the cv-demo service and set:

```yaml
env:
  config:
    ALLOWED_ORIGINS: http://<ingress-static-ip or dns>
```
- Edit the service variable override for the cv-demo-ui service and set `baseUrl: http://<ingress-static-ip or dns>`
- Edit the values YAML override for the prometheus service and set:

```yaml
loadBalancerIP: <prometheus-static-ip>
```

- Edit the values YAML override for the elastic-search service and set:

```yaml
loadBalancerIP: <elk-static-ip>
```
- Execute the pipeline `CV Demo - Cluster Setup`
- Select the `stable` artifact for the cv-demo service and `latest` for cv-demo-ui
- This deploys the NGINX ingress controller, Prometheus, and Elasticsearch, as well as a baseline cv-demo backend and the cv-demo-ui for controlling it (a quick verification sketch follows)
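Once the pipeline finishes, you can optionally confirm each service picked up its reserved address (the grep patterns below are assumptions; adjust them to your actual service names):

```bash
# EXTERNAL-IP should match the static IPs reserved earlier
kubectl get svc --all-namespaces -o wide | grep -i -e nginx -e prometheus -e elastic
```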
Create verification connectors (docs)
- Create `ELK-cv-demo` with URL `http://<elastic-search-ip>`
- Create `Prometheus-cv-demo` with URL `http://<prometheus-ip>:8080/` (a reachability check follows this list)
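A quick reachability check before wiring the connectors can save debugging time (this assumes the load balancers expose the same ports as the connector URLs):

```bash
# Prometheus exposes a health endpoint; expect a healthy status message
curl http://<prometheus-ip>:8080/-/healthy

# Elasticsearch answers on its root path with cluster metadata JSON
curl http://<elastic-search-ip>/
```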
- In the workflow's Prometheus step, select your Prometheus connector
- In the ELK step, select your ELK connector
- Execute the workflow `cv-demo-canary`
- Select `verify_canary = yes`
- You can either select the unstable image tag, or:
  - In a browser, visit `http://<ingress-static-ip or dns>`
  - Adjust the canary log and metric error rates and values via the UI
- Observe Verification steps detecting the anomalies
Set up 24/7 Service Guard (docs: 24/7, Prometheus, Elasticsearch)
- Copy the Service Guard configuration into your own sync repo, then commit and push:

```bash
# Run from the directory containing both clones
cp -R cv-demo/Setup/Applications/cv-demo/Environments/cv-demo/Service\ Verification my-cv-demo/Setup/Applications/cv-demo/Environments/cv-demo
cd my-cv-demo
git add -A
git commit -m "Add 24/7 Service Guard config"
git push
```
- Check that 24/7 Service Guard is now configured in Continuous Verification
- Check for errors in Configuration as Code
- Open the ELK Service Guard configuration
- Set the baseline to the last 30 minutes
- Submit
- In a browser, visit `http://<ingress-static-ip or dns>`
- Adjust the primary log and metric error rates and values via the UI
- Wait some time and observe 24/7 Service Guard detecting anomalies