Merged resources are stored in the wrong CR object #276
Comments
The global memory limit property was accidentally applied as the CPU limit. Fixes #276.
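For illustration, a minimal Go sketch of the bug class described in the fix; the function and parameter names are hypothetical, not the actual k8up code:

```go
package resources

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// globalDefaultResources builds the operator-wide default limits. The bug
// class described above is writing the memory quantity under the CPU key,
// e.g. limits[corev1.ResourceCPU] = *memLimit. The fixed version assigns
// each quantity to its own key.
func globalDefaultResources(cpuLimit, memLimit *resource.Quantity) corev1.ResourceRequirements {
	limits := corev1.ResourceList{}
	if cpuLimit != nil {
		limits[corev1.ResourceCPU] = *cpuLimit
	}
	if memLimit != nil {
		limits[corev1.ResourceMemory] = *memLimit
	}
	return corev1.ResourceRequirements{Limits: limits}
}
```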
Preventing the modification of the Schedule resource during reconciliation was already implemented in #260.
I was just able to verify, based on this PR branch, that the schedule gets the memory from the global default (76M):

```yaml
spec:
  backend:
    repoPasswordSecretRef:
      key: password
      name: backup-repo
    s3:
      accessKeyIDSecretRef:
        key: username
        name: backup-credentials
      bucket: k8up
      endpoint: 'http://10.144.1.224:9000'
      secretAccessKeySecretRef:
        key: password
        name: backup-credentials
  prune:
    resources:
      limits:
        cpu: 10m
    retention:
      keepDaily: 14
      keepLast: 5
    schedule: '*/1 * * * *'
  resourceRequirementsTemplate:
    requests:
      cpu: 250m
      memory: 76M
```
OK, restarting from a clean state with this schedule:

```yaml
---
apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test
spec:
  resourceRequirementsTemplate:
    requests:
      cpu: "250m"
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  prune:
    schedule: '*/1 * * * *'
    retention:
      keepLast: 5
      keepDaily: 14
    resources:
      limits:
        cpu: "500m"
        memory: 128M
```

I get the following properties in the prune object:

```yaml
spec:
  backend:
    repoPasswordSecretRef:
      key: password
      name: backup-repo
    s3:
      accessKeyIDSecretRef:
        key: username
        name: backup-credentials
      bucket: k8up
      endpoint: 'http://10.144.1.224:9000'
      secretAccessKeySecretRef:
        key: password
        name: backup-credentials
  resources:
    limits:
      cpu: 500m
      memory: 128M
    requests:
      cpu: 250m
      memory: 76M
```

The 76M should not appear at all if it is set via env var. The current commit doesn't fully resolve the bug, and somehow the Schedule object still gets it assigned in the resourceRequirementsTemplate.
Describe the bug
With #175 we added a feature that allows setting default resources. However, the resources that are merged from the Schedule object's template and those specified in the job spec end up being stored in the Schedule CR, instead of being updated in the job CR.
This removes the possibility for cluster admins to alter the global default resources after the first reconciliation.
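A hedged sketch of the pattern that would avoid this, using a hypothetical mergeResources helper rather than the actual k8up implementation: merge into a deep copy that is only ever written into the job CR, never back into the Schedule:

```go
package resources

import corev1 "k8s.io/api/core/v1"

// mergeResources merges the Schedule's resourceRequirementsTemplate with the
// job-specific resources. The template is deep-copied first, so the merged
// result can be written into the Job spec without mutating the Schedule CR.
func mergeResources(template, jobSpecific corev1.ResourceRequirements) corev1.ResourceRequirements {
	merged := *template.DeepCopy()
	if merged.Limits == nil {
		merged.Limits = corev1.ResourceList{}
	}
	if merged.Requests == nil {
		merged.Requests = corev1.ResourceList{}
	}
	// Job-level values win over the template defaults.
	for name, q := range jobSpecific.Limits {
		merged.Limits[name] = q
	}
	for name, q := range jobSpecific.Requests {
		merged.Requests[name] = q
	}
	return merged
}
```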
Additional context
See the discussion in #260.
Logs
The schedule gets updated with the following spec after reconcile:
and prune:
Expected behavior
`.spec.prune.resources` should be empty in the Schedule, and `.spec.resourceRequirementsTemplate` should not be touched.
To Reproduce
Steps to reproduce the behavior:
```sh
kubectl set env deployment/k8up-operator BACKUP_GLOBALMEMORY_LIMIT=76M
```
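For context, a minimal sketch of how an operator could parse such an env var into a Kubernetes resource quantity; the variable name comes from the step above, while the surrounding code is purely illustrative and not k8up's actual implementation:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// BACKUP_GLOBALMEMORY_LIMIT is the env var from the reproduction step.
	raw, ok := os.LookupEnv("BACKUP_GLOBALMEMORY_LIMIT")
	if !ok {
		return
	}
	// resource.ParseQuantity understands values such as "76M" or "128Mi".
	q, err := resource.ParseQuantity(raw)
	if err != nil {
		fmt.Fprintf(os.Stderr, "invalid quantity %q: %v\n", raw, err)
		os.Exit(1)
	}
	fmt.Printf("global memory limit: %s\n", q.String())
}
```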