snapshotEngine: Switch chart from AWS to Digitalocean (#576)
* add value to skip snap web generation

* add configurable value for s3 bucket

* web build condition on domain name instead

* add secret and configurable s3 bucket override

* switch name and mountpath to match format

* update secret name and use in zip and upload job

* use export instead of temp var

* secret name change

* expect correct names on secret volume mount

* correct path to secret mount

* rework credential override to provide logs and error messages

* use double quotes for early expansion

* remove variable checking since we are feeding in files

* bug: container is gone so we can't delete a volume

* show commands for debug

* wrong default s3 bucket var

* turn off tar output for debug

* undo command verbosity

* Verbose variables

* Enable interactive for alias to work

* More useful alias message and rm debug messages

* Need space after !

* expand aliases instead of interactive

* add public-read and move index.html

* Website redirects stay in AWS

* Set alias only for filesystem artifact upload

* rolling redirects working

* fix volume indexing

* helpful messages

* Useful comments for new indexing format

* Omit alias functionality in lieu of variable parameters

* Fix rolling tarball filename

* configmap needs fqdn

* CDN isn't working so we're using the bucket URL

* unsilence lz4 logs

* wrong aws bucket name

* get all snapshot metadata from do spaces

* upload metadatas to alt s3 bucket

* fix metadata related to website build

* redirect is not on DO for expected size
orcutt989 authored Jul 6, 2023
1 parent 39ef10c commit 136167a
Showing 6 changed files with 186 additions and 127 deletions.
3 changes: 3 additions & 0 deletions charts/snapshotEngine/templates/configmap.yaml
@@ -13,6 +13,9 @@ data:
   ARCHIVE_SLEEP_DELAY: {{ $.Values.artifactDelay.archive }}
   ROLLING_SLEEP_DELAY: {{ $.Values.artifactDelay.rolling }}
   SCHEMA_URL: {{ $.Values.schemaUrl }}
+  S3_BUCKET: {{ $.Values.s3BucketOverride }}
+  CLOUD_PROVIDER: {{ $.Values.cloudProvider }}
+  FQDN: {{ $.Values.fqdn }}
 kind: ConfigMap
 metadata:
   name: snapshot-configmap
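The three new ConfigMap keys surface in the snapshot jobs as environment variables. A minimal sketch of how a consumer might resolve the bucket name, assuming an empty `S3_BUCKET` means "no override" (the FQDN-derived fallback rule below is illustrative, not taken from the chart):

```python
# Hypothetical consumer of the new ConfigMap keys. An empty S3_BUCKET
# (the default for s3BucketOverride) falls back to a name derived from
# FQDN; the derivation rule here is illustrative only.
def resolve_bucket(env: dict) -> str:
    bucket = env.get("S3_BUCKET", "")
    if not bucket:
        fqdn = env.get("FQDN", "example.com")
        bucket = fqdn.split(".")[0] + "-snapshots"
    return bucket

print(resolve_bucket({"S3_BUCKET": "my-override-bucket"}))  # my-override-bucket
print(resolve_bucket({"FQDN": "example.com"}))              # example-snapshots
```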
2 changes: 2 additions & 0 deletions charts/snapshotEngine/values.yaml
@@ -99,3 +99,5 @@ artifactDelay:
 
 # URL to schema.json file to validate generated metadata against
 schemaUrl: "https://oxheadalpha.com/tezos-snapshot-metadata.schema.1.0.json"
+
+s3BucketOverride: ""
4 changes: 2 additions & 2 deletions snapshotEngine/getAllSnapshotMetadata.py
Expand Up @@ -6,7 +6,7 @@

schemaURL = os.environ["SCHEMA_URL"]
allSubDomains = os.environ["ALL_SUBDOMAINS"].split(",")
snapshotWebsiteBaseDomain = os.environ["SNAPSHOT_WEBSITE_DOMAIN_NAME"]
s3Endpoint = "nyc3.digitaloceanspaces.com"
filename = "tezos-snapshots.json"

# Write empty top-level array to initialize json
@@ -20,7 +20,7 @@
 # Get each subdomain's base.json and combine all artifacts into 1 metadata file
 for subDomain in allSubDomains:
     baseJsonUrl = (
-        "https://" + subDomain + "." + snapshotWebsiteBaseDomain + "/base.json"
+        "https://" + subDomain + "-shots" + "." + s3Endpoint + "/base.json"
     )
     try:
         with urllib.request.urlopen(baseJsonUrl) as url:
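The change above swaps the CDN-backed snapshot website domain for a direct DigitalOcean Spaces endpoint, with each network's bucket named `<subdomain>-shots`. The resulting URL shape:

```python
# URL construction after the switch: metadata is fetched straight from
# the "<subdomain>-shots" Spaces bucket on the nyc3 endpoint.
s3Endpoint = "nyc3.digitaloceanspaces.com"

def base_json_url(subDomain: str) -> str:
    return "https://" + subDomain + "-shots" + "." + s3Endpoint + "/base.json"

print(base_json_url("mainnet"))
# https://mainnet-shots.nyc3.digitaloceanspaces.com/base.json
```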
6 changes: 6 additions & 0 deletions snapshotEngine/mainJob.yaml
@@ -225,6 +225,8 @@ spec:
   name: snapshot-cache-volume
 - mountPath: /rolling-tarball-restore
   name: rolling-tarball-restore
+- mountPath: /cloud-provider
+  name: cloud-provider
 env:
 - name: HISTORY_MODE
   value: ""
@@ -242,4 +244,8 @@ spec:
 - name: rolling-tarball-restore
   persistentVolumeClaim:
     claimName: rolling-tarball-restore
+- name: cloud-provider
+  secret:
+    secretName: cloud-provider
+    optional: true
 backoffLimit: 0
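Because the secret volume is marked `optional: true`, the job starts even when no cloud-provider secret exists. A sketch of how the zip-and-upload step might read credentials from the `/cloud-provider` mount, where each secret key is projected as a file (the helper and file names below are illustrative, not from the chart):

```python
import os
import tempfile

def load_credentials(mount_path: str) -> dict:
    """Read every file under the secret mount into a dict of credentials."""
    creds = {}
    if not os.path.isdir(mount_path):
        return creds  # secret is optional, so a missing mount is not an error
    for name in os.listdir(mount_path):
        with open(os.path.join(mount_path, name)) as f:
            creds[name] = f.read().strip()
    return creds

# Simulate the mounted secret with a temporary directory
with tempfile.TemporaryDirectory() as mount:
    with open(os.path.join(mount, "ACCESS_KEY_ID"), "w") as f:
        f.write("EXAMPLEKEY")
    print(load_credentials(mount))  # {'ACCESS_KEY_ID': 'EXAMPLEKEY'}
```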
26 changes: 23 additions & 3 deletions snapshotEngine/snapshot-maker.sh
@@ -28,7 +28,7 @@ if [ "${HISTORY_MODE}" = rolling ]; then
printf "%s PVC Exists.\n" "$(date "+%Y-%m-%d %H:%M:%S" "$@")"
kubectl delete pvc rolling-tarball-restore
sleep 5
fi
fi
fi

if [ "$(kubectl get pvc "${HISTORY_MODE}"-snapshot-cache-volume)" ]; then
@@ -165,11 +165,31 @@
VOLUME_NAME="${VOLUME_NAME}" yq e -i '.spec.template.spec.volumes[0].persistentVolumeClaim.claimName=strenv(VOLUME_NAME)' mainJob.yaml
PVC="${PVC}" yq e -i '.spec.template.spec.volumes[1].persistentVolumeClaim.claimName=strenv(PVC)' mainJob.yaml
PVC="${PVC}" yq e -i '.spec.template.spec.volumes[1].name=strenv(PVC)' mainJob.yaml

# get rid of rolling container if this is an archive job
# Gets rid of rolling job-related containers and volume/mounts.
if [ "${HISTORY_MODE}" = archive ]; then
# Removes create-tezos-rolling-snapshot container from entire job
yq eval -i 'del(.spec.template.spec.containers[0])' mainJob.yaml
yq eval -i 'del(.spec.template.spec.containers[0].volumeMounts[2])' mainJob.yaml
# Removes rolling-tarball-restore volume from entire job (second to last volume)
yq eval -i 'del(.spec.template.spec.volumes[2])' mainJob.yaml
# Removes rolling-tarball-restore volumeMount from zip-and-upload container (second to last volume mount)
yq eval -i "del(.spec.template.spec.containers[0].volumeMounts[2])" mainJob.yaml
fi

# Switch alternate cloud provider secret name based on actual cloud provider
if [[ -n "${CLOUD_PROVIDER}" ]]; then
# Need to account for dynamic volumes removed above. For example if not rolling node then rolling volume is deleted.
SECRET_NAME="${NAMESPACE}-secret"
# Index of zip-and-upload container changes depending on if rolling job or archive job
NUM_CONTAINERS=$(yq e '.spec.template.spec.containers | length' mainJob.yaml)
# Index of mounts also changes depending on history mode
NUM_CONTAINER_MOUNTS=$(yq e ".spec.template.spec.containers[$(( NUM_CONTAINERS - 1 ))].volumeMounts | length" mainJob.yaml )
# Secret volume mount is last item in list of volumeMounts for the zip and upload container
SECRET_NAME="${SECRET_NAME}" yq e -i ".spec.template.spec.containers[$(( NUM_CONTAINERS - 1 ))].volumeMounts[$(( NUM_CONTAINER_MOUNTS - 1 ))].name=strenv(SECRET_NAME)" mainJob.yaml
# Index of job volumes change depending on history mode
NUM_JOB_VOLUMES=$(yq e '.spec.template.spec.volumes | length' mainJob.yaml )
# Setting job secret volume to value set by workflow
SECRET_NAME="${SECRET_NAME}" yq e -i ".spec.template.spec.volumes[$(( NUM_JOB_VOLUMES - 1 ))].name=strenv(SECRET_NAME)" mainJob.yaml
SECRET_NAME="${SECRET_NAME}" yq e -i ".spec.template.spec.volumes[$(( NUM_JOB_VOLUMES - 1 ))].secret.secretName=strenv(SECRET_NAME)" mainJob.yaml
fi

# Service account to be used by entire zip-and-upload job.
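The yq edits above rely on list-index arithmetic: deleting an element shifts every later index down by one, so fixed indices break once the archive branch removes the rolling container and volume. The script therefore computes list lengths at runtime and addresses the secret volume and mount as the last element. A minimal Python sketch of the same logic (container and volume names are illustrative):

```python
# Deleting list elements shifts later indices down, so fixed indices
# become wrong; computing "length - 1" at runtime addresses the last
# element correctly for both archive and rolling jobs.
containers = ["create-tezos-rolling-snapshot", "zip-and-upload"]
volumes = ["snapshot-cache-volume", "rolling-tarball-restore", "cloud-provider"]

history_mode = "archive"
if history_mode == "archive":
    del containers[0]  # zip-and-upload shifts from index 1 to index 0
    del volumes[1]     # rolling-tarball-restore is no longer needed

secret_name = "mainnet-secret"        # "${NAMESPACE}-secret" in the script
last_container = len(containers) - 1  # NUM_CONTAINERS - 1
last_volume = len(volumes) - 1        # NUM_JOB_VOLUMES - 1
volumes[last_volume] = secret_name    # rename the cloud-provider volume

print(containers[last_container], volumes[last_volume])
# zip-and-upload mainnet-secret
```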