
Stash does not work with Flux #1334

Closed
Legion2 opened this issue Mar 28, 2021 · 26 comments

Comments

@Legion2
Contributor

Legion2 commented Mar 28, 2021

I manually deleted the CronJob of a BackupConfiguration because, for some reason, the ServiceAccount for the Job did not exist and the Job failed to start pods. It is best practice for Kubernetes operators to periodically reconcile the actual state of the cluster to remove inconsistencies, but even after 30 minutes the Stash operator had not recreated the CronJob. The status of the BackupConfiguration was not updated either; it still indicated that the CronJob existed, although it did not.
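For reference, a quick way to compare the reported status with what actually exists in the cluster (a hedged sketch; the resource name and namespace are the ones that appear later in this thread, and the CronJob name follows the stash-backup-<BackupConfiguration> pattern shown further below):

# Status condition the operator recorded for the CronJob
kubectl get backupconfiguration mongodb-backup -n app \
  -o jsonpath='{.status.conditions[?(@.type=="CronJobCreated")].status}'

# The CronJob that should exist for this BackupConfiguration
kubectl get cronjob stash-backup-mongodb-backup -n app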

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

OK, after some investigation I found that the Stash deployment no longer exists after the latest Helm upgrade. Then I discovered that the Stash chart version 2021.3.17 is empty; it contains only the _helpers.tpl. So I think something is broken in the Helm release process. Version 0.11.9 contains the deployment and the rest of Stash.

@Legion2 Legion2 changed the title Reconciliation of BackupConfiguration does not work Helm Chart 2021.3.11 is empty Mar 28, 2021
@Legion2 Legion2 changed the title Helm Chart 2021.3.11 is empty Helm Chart 2021.3.11 and 2021.3.17 are empty Mar 28, 2021
@hossainemruz
Contributor

@Legion2 In version 2021.3.17, we have moved to a single Helm chart for all the components. We use Helm dependencies to handle the component charts: the parent chart appscode/stash is a wrapper chart that wraps the stash-community, stash-enterprise, and stash-catalogs charts via Helm dependencies.

You can learn more about the changes here: https://blog.byte.builders/post/stash-v2021.03.17/

Please follow the setup guide here: https://stash.run/docs/v2021.03.17/setup/
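To see the wrapper structure described above for yourself, you can inspect the chart metadata; a minimal sketch, assuming the AppsCode Helm repository from the setup guide:

# The wrapper chart's Chart.yaml declares the component charts in its
# "dependencies" section, which is why the wrapper's own templates contain
# little besides _helpers.tpl.
helm repo add appscode https://charts.appscode.com/stable/
helm repo update
helm show chart appscode/stash --version v2021.03.17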

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

OK, I see. Is there an easy way to disable the webhooks registered by Stash? I currently cannot deploy to my cluster because Kubernetes tries to call the validating webhook, but the Stash deployment does not exist.

@hossainemruz
Contributor

Can you show the output of the following commands?

kubectl get validatingwebhookconfiguration
kubectl get mutatingwebhookconfiguration
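If those configurations are still registered while the operator Deployment is gone, one possible workaround is to delete them so the API server stops calling the missing webhook; a hedged sketch (the exact configuration names depend on the installation, so list them first):

# List the Stash-related webhook configurations, then delete the stale ones.
kubectl get validatingwebhookconfiguration,mutatingwebhookconfiguration | grep -i stash

# Example only -- substitute the names printed above:
kubectl delete validatingwebhookconfiguration <name-from-above>
kubectl delete mutatingwebhookconfiguration <name-from-above>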

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

I used the helm uninstall stash command to delete them and then reinstalled Stash. The Stash deployment is now running, but the CronJob is still not recreated.

@Legion2 Legion2 changed the title Helm Chart 2021.3.11 and 2021.3.17 are empty Reconciliation of BackupConfiguration does not work Mar 28, 2021
@hossainemruz
Contributor

Can you describe the respective BackupConfiguration?

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

Name:         mongodb-backup
Namespace:    app
Labels:       kustomize.toolkit.fluxcd.io/checksum=db8a28a33d3fa7350b82a59d5a69e42d6a6cdddf
              kustomize.toolkit.fluxcd.io/name=flux-system
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  <none>
API Version:  stash.appscode.com/v1beta1
Kind:         BackupConfiguration
Metadata:
  Creation Timestamp:  2021-03-25T22:09:11Z
  Finalizers:
    stash.appscode.com
  Generation:  1
  Managed Fields:
    API Version:  stash.appscode.com/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
      f:spec:
        f:runtimeSettings:
        f:task:
          .:
          f:name:
        f:tempDir:
      f:status:
        .:
        f:conditions:
        f:observedGeneration:
    Manager:      stash
    Operation:    Update
    Time:         2021-03-25T22:09:11Z
    API Version:  stash.appscode.com/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
        f:labels:
          .:
          f:kustomize.toolkit.fluxcd.io/checksum:
          f:kustomize.toolkit.fluxcd.io/name:
          f:kustomize.toolkit.fluxcd.io/namespace:
      f:spec:
        .:
        f:driver:
        f:hooks:
          .:
          f:postBackup:
            .:
            f:containerName:
            f:exec:
              .:
              f:command:
          f:preBackup:
            .:
            f:containerName:
            f:exec:
              .:
              f:command:
        f:repository:
          .:
          f:name:
        f:retentionPolicy:
          .:
          f:keepDaily:
          f:keepLast:
          f:name:
          f:prune:
        f:schedule:
        f:target:
          .:
          f:paths:
          f:ref:
            .:
            f:apiVersion:
            f:kind:
            f:name:
          f:volumeMounts:
    Manager:         kustomize-controller
    Operation:       Update
    Time:            2021-03-28T15:56:30Z
  Resource Version:  109847583
  Self Link:         /apis/stash.appscode.com/v1beta1/namespaces/voize-ml-controller/backupconfigurations/mongodb-backup
  UID:               5017f3e8-ceea-4877-8e2d-9d865c14638b
Spec:
  Driver:  Restic
  Hooks:
    Post Backup:
      Container Name:  mongodb
      Exec:
        Command:
          /bin/sh
          -c
          rm /data/db/backup/mongodb.tar.gz
    Pre Backup:
      Container Name:  mongodb
      Exec:
        Command:
          /bin/sh
          -c
          mkdir -p /data/db/backup && mongodump -u="`cat $MONGO_INITDB_ROOT_USERNAME_FILE`" -p="`cat $MONGO_INITDB_ROOT_PASSWORD_FILE`" --authenticationDatabase=admin --gzip --archive=/data/db/backup/mongodb.tar.gz
  Repository:
    Name:  mongodb-s3-backup-repo
  Retention Policy:
    Keep Daily:  7
    Keep Last:   5
    Name:        keep-last-5
    Prune:       true
  Schedule:      */60 * * * *
  Target:
    Paths:
      /data/db/backup
    Ref:
      API Version:  apps/v1
      Kind:         StatefulSet
      Name:         mongodb
    Volume Mounts:
      Mount Path:  /data/db
      Name:        mongodb-data
Status:
  Conditions:
    Last Transition Time:  2021-03-25T22:09:11Z
    Message:               Repository voize-ml-controller/mongodb-s3-backup-repo exist.
    Reason:                RepositoryAvailable
    Status:                True
    Type:                  RepositoryFound
    Last Transition Time:  2021-03-25T22:09:11Z
    Message:               Backend Secret voize-ml-controller/mongodb-s3-backup-secret-c65b5tk9bf exist.
    Reason:                BackendSecretAvailable
    Status:                True
    Type:                  BackendSecretFound
    Last Transition Time:  2021-03-25T22:09:11Z
    Message:               Backup target apps/v1 statefulset/mongodb found.
    Reason:                TargetAvailable
    Status:                True
    Type:                  BackupTargetFound
    Last Transition Time:  2021-03-25T22:09:11Z
    Message:               Successfully injected stash sidecar into apps/v1 statefulset/mongodb
    Reason:                SidecarInjectionSucceeded
    Status:                True
    Type:                  StashSidecarInjected
    Last Transition Time:  2021-03-25T22:09:11Z
    Message:               Successfully created backup triggering CronJob.
    Reason:                CronJobCreationSucceeded
    Status:                True
    Type:                  CronJobCreated
  Observed Generation:     1
Events:

@hossainemruz
Contributor

Interesting. After you re-install the operator, it should re-sync everything. I am wondering why it is not working for you. Can you share the log from the Stash operator pod?

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

Also, all the other BackupConfigurations cannot push their metrics, because the Helm chart upgrade changed the name of the service (I use the community edition) and the sidecars were not updated.

  - lastTransitionTime: "2021-03-28T17:00:16Z"
    message: 'Failed to push repository metrics. Reason: Post "http://stash.stash.svc:56789/metrics/job/backupconfiguration-monitoring-grafana-backup":
      dial tcp: lookup stash.stash.svc on 10.100.0.10:53: no such host'
    reason: FailedToPushRepositoryMetrics
    status: "False"
    type: RepositoryMetricsPushed

I removed namespace and name from the logs:

I0328 17:36:10.696719       1 log.go:184] FLAG: --alsologtostderr="false"
I0328 17:36:10.696767       1 log.go:184] FLAG: --audit-dynamic-configuration="false"
I0328 17:36:10.696775       1 log.go:184] FLAG: --audit-log-batch-buffer-size="10000"
I0328 17:36:10.696781       1 log.go:184] FLAG: --audit-log-batch-max-size="1"
I0328 17:36:10.696787       1 log.go:184] FLAG: --audit-log-batch-max-wait="0s"
I0328 17:36:10.696794       1 log.go:184] FLAG: --audit-log-batch-throttle-burst="0"
I0328 17:36:10.696801       1 log.go:184] FLAG: --audit-log-batch-throttle-enable="false"
I0328 17:36:10.696808       1 log.go:184] FLAG: --audit-log-batch-throttle-qps="0"
I0328 17:36:10.696815       1 log.go:184] FLAG: --audit-log-format="json"
I0328 17:36:10.696822       1 log.go:184] FLAG: --audit-log-maxage="0"
I0328 17:36:10.696828       1 log.go:184] FLAG: --audit-log-maxbackup="0"
I0328 17:36:10.696835       1 log.go:184] FLAG: --audit-log-maxsize="0"
I0328 17:36:10.696842       1 log.go:184] FLAG: --audit-log-mode="blocking"
I0328 17:36:10.696848       1 log.go:184] FLAG: --audit-log-path="-"
I0328 17:36:10.696854       1 log.go:184] FLAG: --audit-log-truncate-enabled="false"
I0328 17:36:10.696864       1 log.go:184] FLAG: --audit-log-truncate-max-batch-size="10485760"
I0328 17:36:10.696870       1 log.go:184] FLAG: --audit-log-truncate-max-event-size="102400"
I0328 17:36:10.696877       1 log.go:184] FLAG: --audit-log-version="audit.k8s.io/v1"
I0328 17:36:10.696883       1 log.go:184] FLAG: --audit-policy-file=""
I0328 17:36:10.696889       1 log.go:184] FLAG: --audit-webhook-batch-buffer-size="10000"
I0328 17:36:10.696895       1 log.go:184] FLAG: --audit-webhook-batch-initial-backoff="10s"
I0328 17:36:10.696901       1 log.go:184] FLAG: --audit-webhook-batch-max-size="400"
I0328 17:36:10.696906       1 log.go:184] FLAG: --audit-webhook-batch-max-wait="30s"
I0328 17:36:10.696912       1 log.go:184] FLAG: --audit-webhook-batch-throttle-burst="15"
I0328 17:36:10.696917       1 log.go:184] FLAG: --audit-webhook-batch-throttle-enable="true"
I0328 17:36:10.696923       1 log.go:184] FLAG: --audit-webhook-batch-throttle-qps="10"
I0328 17:36:10.696928       1 log.go:184] FLAG: --audit-webhook-config-file=""
I0328 17:36:10.696934       1 log.go:184] FLAG: --audit-webhook-initial-backoff="10s"
I0328 17:36:10.696939       1 log.go:184] FLAG: --audit-webhook-mode="batch"
I0328 17:36:10.696945       1 log.go:184] FLAG: --audit-webhook-truncate-enabled="false"
I0328 17:36:10.696951       1 log.go:184] FLAG: --audit-webhook-truncate-max-batch-size="10485760"
I0328 17:36:10.696958       1 log.go:184] FLAG: --audit-webhook-truncate-max-event-size="102400"
I0328 17:36:10.696964       1 log.go:184] FLAG: --audit-webhook-version="audit.k8s.io/v1"
I0328 17:36:10.696969       1 log.go:184] FLAG: --authentication-kubeconfig=""
I0328 17:36:10.696975       1 log.go:184] FLAG: --authentication-skip-lookup="false"
I0328 17:36:10.696984       1 log.go:184] FLAG: --authentication-token-webhook-cache-ttl="***REDACTED***"
I0328 17:36:10.696993       1 log.go:184] FLAG: --authentication-tolerate-lookup-failure="false"
I0328 17:36:10.697005       1 log.go:184] FLAG: --authorization-always-allow-paths="[]"
I0328 17:36:10.697011       1 log.go:184] FLAG: --authorization-kubeconfig=""
I0328 17:36:10.697018       1 log.go:184] FLAG: --authorization-webhook-cache-authorized-ttl="10s"
I0328 17:36:10.697024       1 log.go:184] FLAG: --authorization-webhook-cache-unauthorized-ttl="10s"
I0328 17:36:10.697034       1 log.go:184] FLAG: --backup-job-psp="[baseline]"
I0328 17:36:10.697047       1 log.go:184] FLAG: --bind-address="0.0.0.0"
I0328 17:36:10.697053       1 log.go:184] FLAG: --burst="100"
I0328 17:36:10.697077       1 log.go:184] FLAG: --bypass-validating-webhook-xray="false"
I0328 17:36:10.697083       1 log.go:184] FLAG: --cert-dir="apiserver.local.config/certificates"
I0328 17:36:10.697089       1 log.go:184] FLAG: --client-ca-file=""
I0328 17:36:10.697121       1 log.go:184] FLAG: --contention-profiling="false"
I0328 17:36:10.697133       1 log.go:184] FLAG: --cron-job-psp="[baseline]"
I0328 17:36:10.697140       1 log.go:184] FLAG: --docker-registry="appscode"
I0328 17:36:10.697145       1 log.go:184] FLAG: --egress-selector-config-file=""
I0328 17:36:10.697152       1 log.go:184] FLAG: --enable-analytics="true"
I0328 17:36:10.697158       1 log.go:184] FLAG: --enable-mutating-webhook="true"
I0328 17:36:10.697164       1 log.go:184] FLAG: --enable-swagger-ui="false"
I0328 17:36:10.697174       1 log.go:184] FLAG: --enable-validating-webhook="true"
I0328 17:36:10.697180       1 log.go:184] FLAG: --help="false"
I0328 17:36:10.697192       1 log.go:184] FLAG: --http2-max-streams-per-connection="1000"
I0328 17:36:10.697197       1 log.go:184] FLAG: --image="stash"
I0328 17:36:10.697207       1 log.go:184] FLAG: --image-pull-secrets="***REDACTED***"
I0328 17:36:10.697213       1 log.go:184] FLAG: --image-tag="v0.12.0"
I0328 17:36:10.697219       1 log.go:184] FLAG: --kubeconfig=""
I0328 17:36:10.697226       1 log.go:184] FLAG: --license-apiservice="v1beta1.admission.stash.appscode.com"
I0328 17:36:10.697232       1 log.go:184] FLAG: --license-file="/var/run/secrets/appscode/license/key.txt"
I0328 17:36:10.697239       1 log.go:184] FLAG: --log-flush-frequency="5s"
I0328 17:36:10.697246       1 log.go:184] FLAG: --log_backtrace_at=":0"
I0328 17:36:10.697252       1 log.go:184] FLAG: --log_dir=""
I0328 17:36:10.697258       1 log.go:184] FLAG: --logtostderr="true"
I0328 17:36:10.697264       1 log.go:184] FLAG: --profiling="true"
I0328 17:36:10.697271       1 log.go:184] FLAG: --qps="100"
I0328 17:36:10.697280       1 log.go:184] FLAG: --requestheader-allowed-names="[]"
I0328 17:36:10.697286       1 log.go:184] FLAG: --requestheader-client-ca-file=""
I0328 17:36:10.697295       1 log.go:184] FLAG: --requestheader-extra-headers-prefix="[x-remote-extra-]"
I0328 17:36:10.697312       1 log.go:184] FLAG: --requestheader-group-headers="[x-remote-group]"
I0328 17:36:10.697323       1 log.go:184] FLAG: --requestheader-username-headers="[x-remote-user]"
I0328 17:36:10.697337       1 log.go:184] FLAG: --restore-job-psp="[baseline]"
I0328 17:36:10.697343       1 log.go:184] FLAG: --resync-period="10m0s"
I0328 17:36:10.697349       1 log.go:184] FLAG: --scratch-dir="/tmp"
I0328 17:36:10.697353       1 log.go:184] FLAG: --secure-port="8443"
I0328 17:36:10.697360       1 log.go:184] FLAG: --service-name="stash-stash-community"
I0328 17:36:10.697365       1 log.go:184] FLAG: --stderrthreshold="0"
I0328 17:36:10.697371       1 log.go:184] FLAG: --tls-cert-file="/var/serving-cert/tls.crt"
I0328 17:36:10.697380       1 log.go:184] FLAG: --tls-cipher-suites="[]"
I0328 17:36:10.697386       1 log.go:184] FLAG: --tls-min-version=""
I0328 17:36:10.697393       1 log.go:184] FLAG: --tls-private-key-file="/var/serving-cert/tls.key"
I0328 17:36:10.697402       1 log.go:184] FLAG: --tls-sni-cert-key="[]"
I0328 17:36:10.697409       1 log.go:184] FLAG: --use-kubeapiserver-fqdn-for-aks="true"
I0328 17:36:10.697416       1 log.go:184] FLAG: --v="3"
I0328 17:36:10.697422       1 log.go:184] FLAG: --vmodule=""
I0328 17:36:10.791557       1 run.go:42] Starting operator version v0.12.0+b7700c299eac914edda2fe4ab008393284dfbec6 ...
I0328 17:36:21.278785       1 lib.go:171] Kubernetes version: &version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
W0328 17:36:30.647164       1 deploymentconfiguration.go:74] Skipping watching non-preferred GroupVersion:apps.openshift.io/v1 Kind:Deployment ***
I0328 17:36:30.672923       1 lib.go:247] Successfully verified license!
I0328 17:36:30.679037       1 controller.go:170] Starting Stash controller
I0328 17:36:30.700887       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0328 17:36:30.700911       1 shared_informer.go:242] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0328 17:36:30.700926       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 17:36:30.700930       1 shared_informer.go:242] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 17:36:30.701112       1 secure_serving.go:178] Serving securely on [::]:8443
I0328 17:36:30.701174       1 dynamic_serving_content.go:130] Starting serving-cert::/var/serving-cert/tls.crt::/var/serving-cert/tls.key
I0328 17:36:30.701195       1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0328 17:36:30.704456       1 log.go:184] http: TLS handshake error from 192.168.168.143:53130: EOF
I0328 17:36:30.715831       1 log.go:184] http: TLS handshake error from 192.168.168.143:53132: EOF
I0328 17:36:30.716835       1 log.go:184] http: TLS handshake error from 192.168.153.254:34700: EOF
I0328 17:36:30.719085       1 log.go:184] http: TLS handshake error from 192.168.168.143:53134: EOF
I0328 17:36:30.720339       1 log.go:184] http: TLS handshake error from 192.168.153.254:34702: EOF
I0328 17:36:30.722363       1 log.go:184] http: TLS handshake error from 192.168.153.254:34716: EOF
I0328 17:36:30.723698       1 log.go:184] http: TLS handshake error from 192.168.153.254:34704: EOF
I0328 17:36:30.725585       1 log.go:184] http: TLS handshake error from 192.168.153.254:34718: EOF
I0328 17:36:30.727068       1 log.go:184] http: TLS handshake error from 192.168.153.254:34706: EOF
I0328 17:36:30.728821       1 log.go:184] http: TLS handshake error from 192.168.168.143:53136: EOF
I0328 17:36:30.730401       1 log.go:184] http: TLS handshake error from 192.168.153.254:34708: EOF
I0328 17:36:30.732061       1 log.go:184] http: TLS handshake error from 192.168.168.143:53138: EOF
I0328 17:36:30.733712       1 log.go:184] http: TLS handshake error from 192.168.153.254:34710: EOF
I0328 17:36:30.735316       1 log.go:184] http: TLS handshake error from 192.168.153.254:34720: EOF
I0328 17:36:30.737068       1 log.go:184] http: TLS handshake error from 192.168.153.254:34712: EOF
I0328 17:36:30.738736       1 log.go:184] http: TLS handshake error from 192.168.168.143:53140: EOF
I0328 17:36:30.740457       1 log.go:184] http: TLS handshake error from 192.168.153.254:34714: EOF
I0328 17:36:30.741965       1 log.go:184] http: TLS handshake error from 192.168.168.143:53142: EOF
I0328 17:36:30.745273       1 log.go:184] http: TLS handshake error from 192.168.153.254:34722: EOF
I0328 17:36:30.751954       1 log.go:184] http: TLS handshake error from 192.168.153.254:34732: EOF
I0328 17:36:30.758195       1 log.go:184] http: TLS handshake error from 192.168.153.254:34780: EOF
I0328 17:36:30.775671       1 log.go:184] http: TLS handshake error from 192.168.168.143:53148: EOF
I0328 17:36:30.778941       1 log.go:184] http: TLS handshake error from 192.168.168.143:53154: EOF
I0328 17:36:30.782112       1 log.go:184] http: TLS handshake error from 192.168.168.143:53156: EOF
I0328 17:36:30.785341       1 log.go:184] http: TLS handshake error from 192.168.168.143:53164: EOF
I0328 17:36:30.788578       1 log.go:184] http: TLS handshake error from 192.168.168.143:53166: EOF
I0328 17:36:30.791856       1 log.go:184] http: TLS handshake error from 192.168.168.143:53168: EOF
I0328 17:36:30.795288       1 log.go:184] http: TLS handshake error from 192.168.168.143:53176: EOF
I0328 17:36:30.798611       1 log.go:184] http: TLS handshake error from 192.168.168.143:53186: EOF
I0328 17:36:30.801971       1 log.go:184] http: TLS handshake error from 192.168.168.143:53188: EOF
I0328 17:36:30.805288       1 log.go:184] http: TLS handshake error from 192.168.168.143:53194: EOF
I0328 17:36:30.808612       1 log.go:184] http: TLS handshake error from 192.168.168.143:53196: EOF
I0328 17:36:30.811987       1 log.go:184] http: TLS handshake error from 192.168.153.254:34734: EOF
I0328 17:36:30.818470       1 log.go:184] http: TLS handshake error from 192.168.153.254:34740: EOF
I0328 17:36:30.834265       1 log.go:184] http: TLS handshake error from 192.168.153.254:34782: EOF
I0328 17:36:30.834321       1 log.go:184] http: TLS handshake error from 192.168.168.143:53222: EOF
I0328 17:36:30.837518       1 log.go:184] http: TLS handshake error from 192.168.153.254:34800: EOF
I0328 17:36:30.837971       1 log.go:184] http: TLS handshake error from 192.168.168.143:53230: EOF
I0328 17:36:30.840819       1 log.go:184] http: TLS handshake error from 192.168.153.254:34802: EOF
I0328 17:36:30.844021       1 log.go:184] http: TLS handshake error from 192.168.153.254:34804: EOF
I0328 17:36:30.844766       1 log.go:184] http: TLS handshake error from 192.168.168.143:53232: EOF
I0328 17:36:30.847343       1 log.go:184] http: TLS handshake error from 192.168.153.254:34806: EOF
I0328 17:36:30.850405       1 log.go:184] http: TLS handshake error from 192.168.153.254:34808: EOF
I0328 17:36:30.851451       1 log.go:184] http: TLS handshake error from 192.168.168.143:53234: EOF
I0328 17:36:30.853697       1 log.go:184] http: TLS handshake error from 192.168.153.254:34810: EOF
I0328 17:36:30.855024       1 log.go:184] http: TLS handshake error from 192.168.168.143:53236: EOF
I0328 17:36:30.856938       1 log.go:184] http: TLS handshake error from 192.168.153.254:34812: EOF
I0328 17:36:30.858579       1 log.go:184] http: TLS handshake error from 192.168.168.143:53238: EOF
I0328 17:36:30.860168       1 log.go:184] http: TLS handshake error from 192.168.153.254:34814: EOF
I0328 17:36:30.862472       1 log.go:184] http: TLS handshake error from 192.168.168.143:53240: EOF
I0328 17:36:30.863379       1 log.go:184] http: TLS handshake error from 192.168.153.254:34816: EOF
I0328 17:36:30.866026       1 log.go:184] http: TLS handshake error from 192.168.168.143:53246: EOF
I0328 17:36:30.869438       1 log.go:184] http: TLS handshake error from 192.168.168.143:53248: EOF
I0328 17:36:30.872974       1 log.go:184] http: TLS handshake error from 192.168.168.143:53250: EOF
I0328 17:36:30.876433       1 log.go:184] http: TLS handshake error from 192.168.168.143:53254: EOF
I0328 17:36:30.879953       1 log.go:184] http: TLS handshake error from 192.168.168.143:53252: EOF
I0328 17:36:30.883377       1 log.go:184] http: TLS handshake error from 192.168.168.143:53256: EOF
I0328 17:36:30.886809       1 log.go:184] http: TLS handshake error from 192.168.168.143:53260: EOF
I0328 17:36:30.891596       1 log.go:184] http: TLS handshake error from 192.168.153.254:34728: EOF
I0328 17:36:30.911298       1 log.go:184] http: TLS handshake error from 192.168.168.143:53204: EOF
I0328 17:36:30.920667       1 log.go:184] http: TLS handshake error from 192.168.153.254:34770: EOF
I0328 17:36:30.924788       1 log.go:184] http: TLS handshake error from 192.168.153.254:34772: EOF
I0328 17:36:30.928243       1 log.go:184] http: TLS handshake error from 192.168.168.143:53218: EOF
I0328 17:36:31.012814       1 shared_informer.go:249] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0328 17:36:31.079285       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.079744       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.080160       1 daemonsets.go:118] Sync/Add/Update for DaemonSet ***
I0328 17:36:31.080507       1 daemonsets.go:118] Sync/Add/Update for DaemonSet ***
I0328 17:36:31.080782       1 statefulsets.go:120] Sync/Add/Update for StatefulSet ***
I0328 17:36:31.084844       1 statefulsets.go:120] Sync/Add/Update for StatefulSet ***
I0328 17:36:31.085470       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.085629       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.085830       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.085938       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.086059       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.086153       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.086228       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.086319       1 xray.go:250] testing ValidatingWebhook using an object with GVR = stash.appscode.com/v1alpha1, Resource=repositories
I0328 17:36:31.086389       1 repository.go:79] Sync/Add/Update for Repository ***
I0328 17:36:31.090087       1 repository.go:79] Sync/Add/Update for Repository ***
I0328 17:36:31.086330       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.090298       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.090373       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.090440       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.090495       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.090553       1 repository.go:79] Sync/Add/Update for Repository ***
I0328 17:36:31.090894       1 repository.go:79] Sync/Add/Update for Repository ***
I0328 17:36:31.091269       1 repository.go:79] Sync/Add/Update for Repository ***
I0328 17:36:31.090563       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.091679       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.091759       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.091836       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.091974       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092041       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092119       1 backup_session.go:118] Sync/Add/Update for BackupSession grafana-backup-1616950809
I0328 17:36:31.092174       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092226       1 backup_session.go:118] Sync/Add/Update for BackupSession mongodb-backup-1616950808
I0328 17:36:31.092274       1 backup_session.go:128] Skipping processing BackupSession ***/mongodb-backup-1616950808. Reason: phase is "Failed".
I0328 17:36:31.092286       1 backup_session.go:118] Sync/Add/Update for BackupSession influxdb-backup-1616950809
I0328 17:36:31.092292       1 backup_session.go:128] Skipping processing BackupSession ***/influxdb-backup-1616950809. Reason: phase is "Failed".
I0328 17:36:31.092297       1 backup_session.go:118] Sync/Add/Update for BackupSession mongodb-backup-1616950810
I0328 17:36:31.092303       1 backup_session.go:128] Skipping processing BackupSession ***/mongodb-backup-1616950810. Reason: phase is "Skipped".
I0328 17:36:31.086369       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092415       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092446       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092475       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092530       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092568       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092593       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092611       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092636       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092673       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092699       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092723       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092749       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092774       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092803       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092822       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092841       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092861       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092889       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092924       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092947       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.092970       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093002       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093045       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093062       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093087       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093133       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093179       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093210       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093227       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093254       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093304       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093340       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093363       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093384       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093412       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093437       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093459       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093524       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093551       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093576       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093604       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093636       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093665       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093689       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093716       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093749       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093772       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093796       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093817       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093857       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093883       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093908       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093937       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.093971       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094013       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094071       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094112       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094138       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094163       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094184       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094205       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094234       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094256       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094289       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094319       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094349       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094370       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094395       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094419       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094442       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094463       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094484       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094520       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094546       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094586       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094636       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094664       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094704       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094736       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094776       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094806       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094843       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094891       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094930       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.094966       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095031       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095110       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095190       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095245       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095326       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095402       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095450       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095517       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095570       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095626       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095693       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095764       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095826       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095879       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095934       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.095998       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.096122       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.096177       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.096240       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.096921       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.097049       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.097758       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.097883       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.098139       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.098255       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.098557       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.098693       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.099350       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.099463       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.099675       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.099795       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.100065       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.100196       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.100455       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.100563       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.100762       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.100887       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.101186       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.101472       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.101573       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.101800       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.101913       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.102172       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.102273       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.102508       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.102604       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.102685       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.102835       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.103075       1 shared_informer.go:249] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0328 17:36:31.103189       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.103292       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.103497       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.103608       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.103970       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.104109       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.104212       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.104333       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.104644       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.104777       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.104994       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.105115       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.105362       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.105492       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.105715       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.105818       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.105939       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.106189       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.106555       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.111804       1 statefulsets.go:120] Sync/Add/Update for StatefulSet ***
I0328 17:36:31.092175       1 backup_session.go:128] Skipping processing BackupSession ***/grafana-backup-1616950809. Reason: phase is "Failed".
I0328 17:36:31.092529       1 replicasets.go:108] Sync/Add/Update for ReplicaSet ***
I0328 17:36:31.113585       1 statefulsets.go:120] Sync/Add/Update for StatefulSet ***
I0328 17:36:31.126888       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.129049       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.132190       1 daemonsets.go:118] Sync/Add/Update for DaemonSet ***
I0328 17:36:31.132353       1 daemonsets.go:118] Sync/Add/Update for DaemonSet ***
I0328 17:36:31.150762       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.156211       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.158575       1 daemonsets.go:118] Sync/Add/Update for DaemonSet ***
I0328 17:36:31.161241       1 daemonsets.go:118] Sync/Add/Update for DaemonSet ***
I0328 17:36:31.167032       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.176553       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.190857       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.200980       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.217222       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.228347       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.252363       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.282838       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.353016       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.382913       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.452526       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.489052       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.552859       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.583911       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.653224       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.683374       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.752970       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.783028       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.852952       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.882940       1 deployment.go:101] Sync/Add/Update for Deployment ***
I0328 17:36:31.922554       1 deployment.go:101] Sync/Add/Update for Deployment ***

@hossainemruz
Contributor

How did you upgrade the operator?

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

I use the helm-controller (https://toolkit.fluxcd.io/components/helm/controller/), which applies the Helm charts to the cluster. After I uninstalled the Stash Helm chart manually, the helm-controller automatically reinstalled it.

@hossainemruz
Contributor

We have made some major changes to the installation process, so a simple helm upgrade won't work. You should follow this upgrade guide: https://stash.run/docs/v2021.03.17/setup/upgrade/

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

Where do I find the uninstall guide for v0.11.11 (v2021.03.11) or v0.11.10 (v2021.03.08)?
I will try to also uninstall the CRDs this time; the BackupConfigurations are managed in a Git repository, so recreating them is no problem. This will probably solve my problems, but I don't think a complete reinstallation should be the normal update procedure (this is not the first time I have had to fix Stash this way).

@hossainemruz
Contributor

Where do I find the uninstall guide for v0.11.11 (v2021.03.11) or v0.11.10 (v2021.03.08)?

You can just follow the uninstallation guide for v0.11.9 (v2021.01.21). It should work fine.

I think a complete reinstallation should not be the normal update procedure (this is not the first time I have to fix stash this way).

We are really sorry that this is happening again and again. Some of these issues are caused by how Helm handles CRDs.

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

The guide for v0.11.9 does not uninstall the CRDs, so I used the latest guide to uninstall the CRDs after I uninstalled the Helm chart. The problem is that all the Stash custom resources still have the finalizer stash.appscode.com and will not be garbage collected by Kubernetes, because the Stash operator has already been deleted and cannot remove the finalizer.
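For anyone hitting the same dead end, the finalizer can be cleared manually once the operator is gone; a minimal sketch (resource kind, name, and namespace are examples from this thread):

# Remove the stash.appscode.com finalizer so Kubernetes can garbage collect
# the orphaned object after the operator has been uninstalled.
kubectl patch backupconfiguration mongodb-backup -n app --type=merge \
  -p '{"metadata":{"finalizers":null}}'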

@hossainemruz
Contributor

Hmm. I see the issue. I don't think we can get rid of the finalizer. We use it for various reasons. We are probably going to provide a kubectl apply -f stash.crds.yaml command to update the CRDs just like Cert Manager did. This way, you don't have to uninstall the CRDs during upgrade.

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

I installed Stash again (while the CRDs were marked for deletion); Stash then cleaned up the resources and removed the finalizers, so the CRDs could be garbage collected. After the CRD deletion I uninstalled the Helm chart.

After the reinstallation, only 3 of 5 backups were successful. The other two were skipped, although all backups should run every hour.

BackupSession:

Name:         mongodb-backup-1616961611
...
Status:
  Phase:  Skipped
Events:
  Type     Reason                 Age    From                      Message
  ----     ------                 ----   ----                      -------
  Warning  BackupSession Skipped  4m18s  BackupSession Controller  Skipped taking new backup. Reason: Previous BackupSession: mongodb-backup-1616961610 is "Running".

But I can't find the BackupSession mongodb-backup-1616961610, and in Grafana I cannot see any metrics for this backup.

CronJob:

Name:                          stash-backup-mongodb-backup
...
Events:
  Type    Reason            Age   From                Message
  ----    ------            ----  ----                -------
  Normal  SuccessfulCreate  11m   cronjob-controller  Created job stash-backup-mongodb-backup-1616961600
  Normal  MissingJob        11m   cronjob-controller  Active job went missing: stash-backup-mongodb-backup-1616961600

Log of sidecar:

I0328 19:49:15.358015       1 log.go:184] FLAG: --alsologtostderr="false"
I0328 19:49:15.358075       1 log.go:184] FLAG: --bypass-validating-webhook-xray="false"
I0328 19:49:15.358081       1 log.go:184] FLAG: --enable-analytics="true"
I0328 19:49:15.358087       1 log.go:184] FLAG: --enable-cache="true"
I0328 19:49:15.358469       1 log.go:184] FLAG: --help="false"
I0328 19:49:15.358487       1 log.go:184] FLAG: --host=""
I0328 19:49:15.358494       1 log.go:184] FLAG: --invoker-kind="BackupConfiguration"
I0328 19:49:15.358500       1 log.go:184] FLAG: --invoker-name="mongodb-backup"
I0328 19:49:15.358506       1 log.go:184] FLAG: --kubeconfig=""
I0328 19:49:15.358513       1 log.go:184] FLAG: --log-flush-frequency="5s"
I0328 19:49:15.358520       1 log.go:184] FLAG: --log_backtrace_at=":0"
I0328 19:49:15.358526       1 log.go:184] FLAG: --log_dir=""
I0328 19:49:15.358557       1 log.go:184] FLAG: --logtostderr="true"
I0328 19:49:15.358565       1 log.go:184] FLAG: --master=""
I0328 19:49:15.358572       1 log.go:184] FLAG: --max-connections="0"
I0328 19:49:15.358579       1 log.go:184] FLAG: --metrics-enabled="true"
I0328 19:49:15.358587       1 log.go:184] FLAG: --pushgateway-url="http://stash-stash-community.stash.svc:56789"
I0328 19:49:15.358594       1 log.go:184] FLAG: --secret-dir="***REDACTED***"
I0328 19:49:15.358601       1 log.go:184] FLAG: --service-name="stash-operator"
I0328 19:49:15.358608       1 log.go:184] FLAG: --stderrthreshold="0"
I0328 19:49:15.358642       1 log.go:184] FLAG: --target-kind="StatefulSet"
I0328 19:49:15.358650       1 log.go:184] FLAG: --target-name="mongodb"
I0328 19:49:15.358657       1 log.go:184] FLAG: --use-kubeapiserver-fqdn-for-aks="true"
I0328 19:49:15.358664       1 log.go:184] FLAG: --v="3"
I0328 19:49:15.358671       1 log.go:184] FLAG: --vmodule=""
W0328 19:49:15.577900       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0328 19:49:15.879823       1 backupsession.go:450] BackupSession controller started successfully.
I0328 20:00:11.472297       1 backupsession.go:185] Sync/Add/Update for Backup Session mongodb-backup-1616961611
I0328 20:00:11.478588       1 backupsession.go:226] Skip processing BackupSession ***/mongodb-backup-1616961611. Reason: Backup process is not initiated by the operator
I0328 20:00:11.519709       1 backupsession.go:185] Sync/Add/Update for Backup Session mongodb-backup-1616961611
I0328 20:00:11.525117       1 backupsession.go:226] Skip processing BackupSession ***/mongodb-backup-1616961611. Reason: Backup process is not initiated by the operator

Skip processing BackupSession ***/mongodb-backup-1616961611. Reason: Backup process is not initiated by the operator

What does that mean?

@hossainemruz
Contributor

After reinstallation only 3 of 5 Backups were successful. The other two were skipped, but all backups should run every hour.

Are they backups of the same target or different targets? It seems the other backup has gotten stuck in the Running state.

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

They are backups in different namespaces, and both back up a MongoDB (there is a third namespace where the same BackupConfiguration successfully created a backup of a MongoDB). There is no other backup in the same namespace, so I don't know where this backup in the Running state should come from (I deleted the CRDs, so it cannot be some orphaned resource).

@Legion2
Contributor Author

Legion2 commented Mar 28, 2021

One hour later, 4 of 5 backups were successful; the 5th was skipped again. I think there is some kind of race condition where multiple BackupSessions are created and then deleted again.

@Legion2
Contributor Author

Legion2 commented Apr 4, 2021

@hossainemruz I found out why the Stash backup ServiceAccounts are deleted in my cluster: Stash copies the labels of the BackupConfiguration as-is to the ServiceAccount, including the labels generated by the Flux kustomize-controller for the BackupConfiguration.

labels: 
      kustomize.toolkit.fluxcd.io/checksum=ce1d06af37d4ce7096a58a2d48656d02b5721332
      kustomize.toolkit.fluxcd.io/name=flux-system
      kustomize.toolkit.fluxcd.io/namespace=flux-system

These labels indicate that a resource is managed by the Flux kustomize-controller, but the ServiceAccount is not (and should not be) managed by the kustomize-controller. Therefore the kustomize-controller deletes the ServiceAccount, because it is not part of any Kustomization managed by the controller.

Stash should not copy the labels of the BackupConfiguration to the ServiceAccount.
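To illustrate (a hedged sketch using the labels shown above): anything carrying the kustomize-controller's ownership labels is treated as part of that Kustomization and becomes a pruning candidate if it is not rendered from the Kustomization's sources.

# ServiceAccounts that carry the Flux ownership labels copied by Stash; any of
# these that are not defined in the flux-system Kustomization's manifests may
# be garbage collected on the next reconciliation.
kubectl get serviceaccounts --all-namespaces \
  -l kustomize.toolkit.fluxcd.io/name=flux-system,kustomize.toolkit.fluxcd.io/namespace=flux-system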

@Legion2
Contributor Author

Legion2 commented Apr 5, 2021

I looked through the code and found that the labels of owner objects are reused in many places for resources created by the reconciliation logic. Even for cluster-scoped resources such as ClusterRoleBindings, the labels of the first reconciled object are used when creating them. As a result, the stash-cron-job ClusterRoleBinding in my cluster has labels from objects that no longer exist in my cluster. (Also, the stash-cron-job ClusterRoleBinding was not deleted during the complete uninstallation of the Stash Helm chart.)

err := stash_rbac.EnsureCronJobRBAC(c.kubeClient, inv.OwnerRef, inv.ObjectMeta.Namespace, serviceAccountName, c.getBackupSessionCronJobPSPNames(), inv.Labels)

func EnsureCronJobRBAC(kubeClient kubernetes.Interface, owner *metav1.OwnerReference, namespace, sa string, psps []string, labels map[string]string) error {
	// ensure CronJob cluster role
	err := ensureCronJobClusterRole(kubeClient, psps, labels)

This indiscriminate copying of labels causes trouble in clusters where these labels are used for management purposes.

@Legion2 Legion2 changed the title Reconciliation of BackupConfiguration does not work Stash does not work with Flux Apr 7, 2021
@Legion2
Contributor Author

Legion2 commented Apr 7, 2021

I changed the title and created an issue in the Flux repo: fluxcd/kustomize-controller#315. However, the issue must be fixed in Stash. The problem is a race condition.

Flux uses a checksum label on all resources to manage garbage collection. Flux updates the checksum on all resources it manages, including the BackupConfiguration, but not on the ServiceAccounts created by Stash. Stash then updates the ServiceAccount labels with the new checksum while Flux is garbage collecting all resources that still carry the old checksum. Depending on which controller is faster, the ServiceAccount is deleted or not.

To fix this, Stash must not copy the labels of other resources. What I also found weird is that Stash does not recreate the ServiceAccounts after they are deleted. This means some part of the reconciliation logic of the BackupConfiguration does not work.

@hossainemruz
Contributor

To fix this, Stash must not copy the labels of other resources.

Yes, that's right. We should only pass the Stash labels. However, what will happen when a user wants to pass some custom labels to the resources created by Stash?

Also what I found weird, is that Stash does not recreate the SA after they were deleted. This means some part of the reconciliation logic of the BackupConfiguration does not work.

That's weird. We will have to investigate it. This is not the desired behavior.

@Legion2
Contributor Author

Legion2 commented Apr 8, 2021

However, what will happen when user want pass some custom label to the resources created by Stash?

I think it should be made explicit which custom labels Stash uses when creating resources. Maybe the BackupConfiguration could have a special field, spec.runtimeSettings.metadata, where we can specify the labels that are added to all objects created by that BackupConfiguration.
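Purely as a hypothetical sketch of what such a field could look like (nothing like this exists in the Stash API today; the label key and value are placeholders):

# Hypothetical BackupConfiguration fragment: spec.runtimeSettings.metadata
# would let the user declare the labels Stash applies to every object it
# creates for this backup, instead of copying the owner's labels.
cat <<'EOF'
spec:
  runtimeSettings:
    metadata:
      labels:
        backup-owner: platform-team
EOF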

@Legion2
Contributor Author

Legion2 commented Jun 15, 2021

Flux 0.15.0 moved the checksum into an annotation, so it is no longer copied by Stash.

@Legion2 Legion2 closed this as completed Jun 15, 2021