Merged resources are stored in the wrong CR object #276

Closed · ccremer opened this issue Jan 11, 2021 · 4 comments · Fixed by #297

Labels: bug (Something isn't working)
Milestone: v1.0.0-rc4

ccremer (Contributor) commented Jan 11, 2021

Describe the bug

With #175 we added a feature that allows setting default resources. However, the resources that are merged from the template in the Schedule object and from the job spec end up being stored in the Schedule CR instead of being updated in the job CR.

This removes the possibility for cluster admins to alter the global default resources after the first reconciliation.
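
A minimal Go sketch of the kind of in-place merge that would produce this symptom. All type and helper names below are assumptions for illustration, not the actual K8up code:

import corev1 "k8s.io/api/core/v1"

// Hypothetical, simplified CR types for illustration only.
type PruneSpec struct {
	Resources corev1.ResourceRequirements
}

type ScheduleSpec struct {
	ResourceRequirementsTemplate corev1.ResourceRequirements
	Prune                        PruneSpec
}

type Schedule struct {
	Spec ScheduleSpec
}

// mergeResources fills empty fields of dst from src.
func mergeResources(dst, src *corev1.ResourceRequirements) {
	if dst.Requests == nil {
		dst.Requests = src.Requests
	}
	if dst.Limits == nil {
		dst.Limits = src.Limits
	}
}

// Buggy shape: merging into the Schedule's own spec mutates the CR in
// memory, so the merged values get persisted on the next client update.
func buildPruneResourcesBuggy(schedule *Schedule) corev1.ResourceRequirements {
	mergeResources(&schedule.Spec.Prune.Resources, &schedule.Spec.ResourceRequirementsTemplate)
	return schedule.Spec.Prune.Resources
}

// Expected shape: merge into a deep copy, so only the job CR receives
// the effective resources and the Schedule stays untouched.
func buildPruneResourcesFixed(schedule *Schedule) corev1.ResourceRequirements {
	effective := schedule.Spec.Prune.Resources.DeepCopy()
	mergeResources(effective, &schedule.Spec.ResourceRequirementsTemplate)
	return *effective
}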

Additional context

See the discussion in #260.

Logs

The schedule gets updated with the following spec after reconcile:

apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test
  namespace: default
spec:
...
  prune:
    resources:
      requests:
        cpu: 250m
    retention:
      keepDaily: 14
      keepLast: 5
    schedule: '*/1 * * * *'
  resourceRequirementsTemplate:
    requests:
      cpu: 250m

and the Prune object gets:

apiVersion: backup.appuio.ch/v1alpha1
kind: Prune
metadata:
  name: prunejob-1609946760
  namespace: default
spec:
  resources:
    requests:
      cpu: 250m

Expected behavior

  • .spec.prune.resources should be empty in the Schedule, and .spec.resourceRequirementsTemplate should not be touched either

To Reproduce

Steps to reproduce the behavior:

  1. Apply the following Schedule spec:
---
apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test
spec:
  resourceRequirementsTemplate:
    requests:
      cpu: "250m"
  prune:
    schedule: '*/1 * * * *'
  2. Set the global default memory limit:
kubectl set env deployment/k8up-operator BACKUP_GLOBALMEMORY_LIMIT=76M

Environment (please complete the following information):

  • Image Version: v1.0.0-rc2
  • K8s Version: v1.18
  • K8s Distribution: KIND
ccremer added the bug label Jan 11, 2021
ccremer added this to the v1.0.0-rc4 milestone Jan 11, 2021
cimnine added a commit that referenced this issue Jan 12, 2021
The global memory limit property was accidentally applied as the CPU limit. Fixes #276.
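
The mix-up the commit message describes presumably had roughly this shape; the helper name and its placement are assumptions for illustration, not the actual K8up code:

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// applyGlobalMemoryLimit applies the operator-wide default memory limit
// from the environment (hypothetical helper).
func applyGlobalMemoryLimit(limits corev1.ResourceList) {
	if v := os.Getenv("BACKUP_GLOBALMEMORY_LIMIT"); v != "" {
		// Bug: the memory value was written under the CPU key:
		//   limits[corev1.ResourceCPU] = resource.MustParse(v)
		// Fix: write it under the memory key instead.
		limits[corev1.ResourceMemory] = resource.MustParse(v)
	}
}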
cimnine (Contributor) commented Jan 12, 2021

Preventing the modification of the Schedule resource during reconciliation was already implemented in #260.

ccremer (Contributor, Author) commented Jan 12, 2021

I was just able to verify, based on this PR branch, that the schedule gets the memory from the global default (76M):

spec:
  backend:
    repoPasswordSecretRef:
      key: password
      name: backup-repo
    s3:
      accessKeyIDSecretRef:
        key: username
        name: backup-credentials
      bucket: k8up
      endpoint: 'http://10.144.1.224:9000'
      secretAccessKeySecretRef:
        key: password
        name: backup-credentials
  prune:
    resources:
      limits:
        cpu: 10m
    retention:
      keepDaily: 14
      keepLast: 5
    schedule: '*/1 * * * *'
  resourceRequirementsTemplate:
    requests:
      cpu: 250m
      memory: 76M

ccremer (Contributor, Author) commented Jan 12, 2021

OK, restarting from a clean state:

make clean e2e-test
kubectl -n k8up-system set env deployment/k8up-operator BACKUP_GLOBALMEMORY_LIMIT=76M
k apply -f <the following file>

with this schedule:

---
apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test
spec:
  resourceRequirementsTemplate:
    requests:
      cpu: "250m"
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  prune:
    schedule: '*/1 * * * *'
    retention:
      keepLast: 5
      keepDaily: 14
    resources:
      limits:
        cpu: "500m"
        memory: 128M

I get the following properties in the Prune object:

spec:
  backend:
    repoPasswordSecretRef:
      key: password
      name: backup-repo
    s3:
      accessKeyIDSecretRef:
        key: username
        name: backup-credentials
      bucket: k8up
      endpoint: 'http://10.144.1.224:9000'
      secretAccessKeySecretRef:
        key: password
        name: backup-credentials
  resources:
    limits:
      cpu: 500m
      memory: 128M
    requests:
      cpu: 250m
      memory: 76M

The 76M should not appear at all when it is set via the env var. The current commit doesn't fully resolve the bug, and somehow the Schedule object still gets the value assigned in its resourceRequirementsTemplate.
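
In other words, global env-var defaults should be resolved when the Pod spec is built, not persisted into any CR. A minimal sketch of that precedence, reusing the hypothetical mergeResources helper from the sketch above (assumed names, not the actual K8up code):

// podResources computes the effective resources for the job Pod.
// Precedence: job spec > Schedule template > global env-var default.
// Nothing is written back to the Prune or Schedule CRs.
func podResources(job, tmpl corev1.ResourceRequirements) corev1.ResourceRequirements {
	res := *job.DeepCopy()
	mergeResources(&res, &tmpl)
	if v := os.Getenv("BACKUP_GLOBALMEMORY_LIMIT"); v != "" {
		if res.Limits == nil {
			res.Limits = corev1.ResourceList{}
		}
		if _, ok := res.Limits[corev1.ResourceMemory]; !ok {
			res.Limits[corev1.ResourceMemory] = resource.MustParse(v)
		}
	}
	return res
}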

cimnine (Contributor) commented Jan 12, 2021

Thank you @ccremer! I'm now able to reproduce the issue. In the meantime, I believe #280 can be reviewed/merged as an independent fix.

ccremer changed the title from "Merged resources are stored in the wrong CR object and not applied to Pods" to "Merged resources are stored in the wrong CR object" Jan 12, 2021
ccremer reopened this Jan 12, 2021