jobs/garbage-collection: add containers #1029

Open · wants to merge 2 commits into main
gc-policy.yaml: 12 additions & 0 deletions
Member commented:
We should probably match the policy used for our OSTree repo for now.

For our production streams we don't prune at all (maybe we should, but this would affect our extended upgrade testing):

https://github.com/coreos/fedora-coreos-releng-automation/blob/d18a30c23ac1853cec7ce60c26574be508760666/fedora-ostree-pruner/fedora-ostree-pruner#L81-L84

For our non-production streams we prune at 90 days:

https://github.com/coreos/fedora-coreos-releng-automation/blob/d18a30c23ac1853cec7ce60c26574be508760666/fedora-ostree-pruner/fedora-ostree-pruner#L42-L43
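
For concreteness, matching that pruner's behavior in gc-policy.yaml terms might look like the sketch below. This is only an illustration of the suggestion, not part of this PR; it uses 3m as an approximation of the pruner's 90 days and assumes the same duration shorthand as the file above.

```yaml
# Hypothetical sketch mirroring the fedora-ostree-pruner policy:
# production streams keep container tags indefinitely; non-production
# streams prune at roughly 90 days (3m here).
testing-devel:
  containers: 3m
next-devel:
  containers: 3m
rawhide:
  containers: 3m
branched:
  containers: 3m
# stable, testing, next: no `containers` entry, i.e. tags are never pruned
```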

Member commented:

> For our production streams we don't prune at all (maybe we should, but this would affect our extended upgrade testing)

Ahh, I see you have special handling in the code for barrier releases.
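
For context on that remark: barrier releases are builds that every upgrading machine must pass through, so a pruner has to exempt them no matter how old they are. A rough illustration of the exemption follows; the names and data are invented for the sketch, and the real handling lives in cosa cloud-prune, not in this repo.

```groovy
// Invented sample data; shows the exemption, not cosa's implementation.
def builds   = ['build-a', 'build-b', 'build-c', 'build-d']
def barriers = ['build-b'] as Set          // must never be pruned
def tooOld   = ['build-a', 'build-b', 'build-c'] as Set
def pruned = builds.findAll { it in tooOld && !(it in barriers) }
assert pruned == ['build-a', 'build-c']    // the barrier survives pruning
```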

```diff
@@ -1,9 +1,21 @@
 branched:
+  containers: 2w
   cloud-uploads: 1y
 rawhide:
+  containers: 2w
   cloud-uploads: 1y
 bodhi-updates:
   cloud-uploads: 1y
   images: 58m
   images-keep: [qemu, live-iso]
   build: 62m
+testing-devel:
+  containers: 2w
+next-devel:
+  containers: 2w
+next:
+  containers: 2m
+testing:
+  containers: 2m
+stable:
+  containers: 2m
```
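
For readers unfamiliar with the duration shorthand in this policy file, the sketch below illustrates one plausible reading of it. This is an illustration of the format's apparent semantics as suggested by the values above, not cosa's actual parser.

```groovy
// Illustrative decoding of the "2w"-style durations in gc-policy.yaml.
// Assumption: d/w/m/y mean days/weeks/months/years; the 30- and 365-day
// conversions are approximations, not necessarily what cosa does internally.
def durationToDays(String spec) {
    def match = (spec =~ /^(\d+)([dwmy])$/)
    assert match.matches() : "unrecognized duration: ${spec}"
    def n = match.group(1) as int
    [d: n, w: n * 7, m: n * 30, y: n * 365][match.group(2)]
}
assert durationToDays('2w') == 14    // devel-stream container tags
assert durationToDays('2m') == 60    // production-stream container tags
assert durationToDays('1y') == 365   // cloud-uploads
```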
jobs/garbage-collection.Jenkinsfile: 44 additions & 24 deletions
```diff
@@ -56,38 +56,58 @@ lock(resource: "gc-${params.STREAM}") {
         def originalTimestamp = originalBuildsJson.timestamp
         def acl = pipecfg.s3.acl ?: 'public-read'
 
-        withCredentials([file(variable: 'GCP_KOLA_TESTS_CONFIG', credentialsId: 'gcp-image-upload-config')]) {
-            stage('Garbage Collection') {
-                pipeutils.shwrapWithAWSBuildUploadCredentials("""
-                cosa cloud-prune --policy ${new_gc_policy_path} \
-                    --stream ${params.STREAM} ${dry_run} \
-                    --gcp-json-key=\${GCP_KOLA_TESTS_CONFIG} \
-                    --acl=${acl} \
-                    --aws-config-file \${AWS_BUILD_UPLOAD_CONFIG}
-                """)
-            }
-        }
-
-        def currentBuildsJson = readJSON file: 'builds/builds.json'
-        def currentTimestamp = currentBuildsJson.timestamp
-
-        // If the timestamp on builds.json after the 'Garbage Collection' step
-        // is the same as before, that means, there were no resources to be pruned
-        // and hence, no need to update the builds.json.
-        if (originalTimestamp != currentTimestamp) {
-            // Nested lock for the Upload Builds JSON step
-            lock(resource: "builds-json-${params.STREAM}") {
-                stage('Upload Builds JSON') {
+        // containers tags and cloud artifacts can be GCed in parallel
+        def parallelruns = [:]
```
Member commented:
I'm with @jlebon here. I think we should run things serially since the job is 100% not time critical. If things fail in either stage of this pipeline I'm hoping it is very clear what went wrong when and what did and did not happen.
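
Made concrete, the serial arrangement suggested here could look like the sketch below, reusing the exact commands from this diff. The stage names and ordering are assumptions, since the PR as written keeps the parallel form.

```groovy
// Sketch of the serial alternative (assumed shape, not part of this PR):
// each stage runs in order, so a failure pinpoints what did not run.
withCredentials([file(variable: 'GCP_KOLA_TESTS_CONFIG', credentialsId: 'gcp-image-upload-config')]) {
    stage('Cloud artifacts GC') {
        pipeutils.shwrapWithAWSBuildUploadCredentials("""
        cosa cloud-prune --policy ${new_gc_policy_path} \
            --stream ${params.STREAM} ${dry_run} \
            --gcp-json-key=\${GCP_KOLA_TESTS_CONFIG} \
            --acl=${acl} \
            --aws-config-file \${AWS_BUILD_UPLOAD_CONFIG}
        """)
    }
}
stage('Container tags GC') {
    def registry = pipecfg.registry_repos.oscontainer.repo
    withCredentials([file(variable: 'REGISTRY_SECRET', credentialsId: 'oscontainer-push-registry-secret')]) {
        pipeutils.shwrap("""
        cosa container-prune --policy ${new_gc_policy_path} \
            --registry-auth-file=\${REGISTRY_SECRET} \
            --stream ${params.STREAM} ${dry_run} \
            ${registry}
        """)
    }
}
```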

```diff
+        parallelruns['Cloud artifacts'] = {
+            withCredentials([file(variable: 'GCP_KOLA_TESTS_CONFIG', credentialsId: 'gcp-image-upload-config')]) {
+                stage('Cloud artifacts GC') {
                     pipeutils.shwrapWithAWSBuildUploadCredentials("""
                     cosa cloud-prune --policy ${new_gc_policy_path} \
-                        --stream ${params.STREAM} \
-                        --upload-builds-json ${dry_run} \
+                        --stream ${params.STREAM} ${dry_run} \
+                        --gcp-json-key=\${GCP_KOLA_TESTS_CONFIG} \
                         --acl=${acl} \
                         --aws-config-file \${AWS_BUILD_UPLOAD_CONFIG}
                     """)
                 }
             }
+
+            def currentBuildsJson = readJSON file: 'builds/builds.json'
+            def currentTimestamp = currentBuildsJson.timestamp
+
+            // If the timestamp on builds.json after the 'Garbage Collection' step
+            // is the same as before, that means, there were no resources to be pruned
+            // and hence, no need to update the builds.json.
+            if (originalTimestamp != currentTimestamp) {
+                // Nested lock for the Upload Builds JSON step
+                lock(resource: "builds-json-${params.STREAM}") {
+                    stage('Upload Builds JSON') {
+                        pipeutils.shwrapWithAWSBuildUploadCredentials("""
+                        cosa cloud-prune --policy ${new_gc_policy_path} \
+                            --stream ${params.STREAM} \
+                            --upload-builds-json ${dry_run} \
+                            --acl=${acl} \
+                            --aws-config-file \${AWS_BUILD_UPLOAD_CONFIG}
+                        """)
+                    }
+                }
+            }
         }
+        parallelruns['Container tags'] = {
+            // get repo url from pipecfg
+            def registry = pipecfg.registry_repos.oscontainer.repo
+            withCredentials([file(variable: 'REGISTRY_SECRET',
+                                  credentialsId: 'oscontainer-push-registry-secret')]) {
+                pipeutils.shwrap("""
+                cosa container-prune --policy ${new_gc_policy_path} \
+                    --registry-auth-file=\${REGISTRY_SECRET} \
+                    --stream ${params.STREAM} ${dry_run} \
+                    ${registry}
+                """)
+            }
+        }
+
+        parallel parallelruns
 
         currentBuild.result = 'SUCCESS'
         currentBuild.description = "${build_description} ✓"
```
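
For readers unfamiliar with the Jenkins `parallel` step used above: it takes a map from branch names to closures and runs the closures concurrently. A minimal plain-Groovy stand-in follows; the bodies are placeholders, and the sequential loop only mimics the shape, not the concurrency.

```groovy
// Stand-in for the pattern above; not Jenkins code.
def parallelruns = [:]
parallelruns['Cloud artifacts'] = { println 'would run: cosa cloud-prune ...' }
parallelruns['Container tags']  = { println 'would run: cosa container-prune ...' }
// In the Jenkinsfile this is: parallel parallelruns
parallelruns.each { name, body ->
    println "branch: ${name}"
    body()   // Jenkins would run these branches concurrently
}
```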