
v0.13.0 pod scheduling fails - didn't match pod affinity rules #2769

Closed
cameronbraid opened this issue Jun 6, 2020 · 4 comments · Fixed by #2768
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@cameronbraid

Expected Behavior

The PipelineRun's pod should be scheduled.

Actual Behavior

warning
0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity.

Pod YAML:

kind: Pod
apiVersion: v1
metadata:
  name: webapp-126-1-z-staging-latest-6e4dda-git-clone-trdmq-pod-vw64q
  namespace: drivenow-build
  selfLink: >-
    /api/v1/namespaces/drivenow-build/pods/webapp-126-1-z-staging-latest-6e4dda-git-clone-trdmq-pod-vw64q
  uid: b5ff5d4a-8146-4849-989c-39a9a31ca7c8
  resourceVersion: '3299892'
  creationTimestamp: '2020-06-06T01:37:38Z'
  labels:
    app.kubernetes.io/managed-by: tekton-pipelines
    kapp.k14s.io/app: '1590583405350417649'
    kapp.k14s.io/association: v1.efe02c43b5248ea387b099e1be029a06
    tekton.dev/pipeline: mvn-spring-boot
    tekton.dev/pipelineRun: webapp-126-1-z-staging-latest-6e4dda
    tekton.dev/pipelineTask: git-clone
    tekton.dev/task: git-clone
    tekton.dev/taskRun: webapp-126-1-z-staging-latest-6e4dda-git-clone-trdmq
  annotations:
    kapp.k14s.io/identity: v1;drivenow-build/tekton.dev/Task/git-clone;tekton.dev/v1alpha1
    kapp.k14s.io/original: >-
      {"apiVersion":"tekton.dev/v1alpha1","kind":"Task","metadata":{"labels":{"kapp.k14s.io/app":"1590583405350417649","kapp.k14s.io/association":"v1.efe02c43b5248ea387b099e1be029a06"},"name":"git-clone","namespace":"drivenow-build"},"spec":{"inputs":{"params":[{"description":"git
      url to
      clone","name":"url","type":"string"},{"default":"master","description":"git
      revision to checkout (branch, tag, sha,
      ref…)","name":"revision","type":"string"},{"default":"true","description":"defines
      if the resource should initialize and fetch the
      submodules","name":"submodules","type":"string"},{"default":"1","description":"performs
      a shallow clone where only the most recent commit(s) will be
      fetched","name":"depth","type":"string"},{"default":"true","description":"defines
      if http.sslVerify should be set to true or false in the global git
      config","name":"sslVerify","type":"string"},{"default":"src","description":"subdirectory
      inside the \"output\" workspace to clone the git repo
      into","name":"subdirectory","type":"string"},{"default":"false","description":"clean
      out the contents of the repo's destination directory (if it already
      exists) before trying to clone the repo
      there","name":"deleteExisting","type":"string"}]},"results":[{"description":"The
      precise commit SHA that was fetched by this
      Task","name":"commit"}],"steps":[{"image":"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.12.0","name":"clone","script":"CHECKOUT_DIR=\"$(workspaces.output.path)/$(inputs.params.subdirectory)\"\n\ncleandir()
      {\n  # Delete any existing contents of the repo directory if it exists.\n 
      #\n  # We don't just \"rm -rf $CHECKOUT_DIR\" because $CHECKOUT_DIR might
      be \"/\"\n  # or the root of a mounted volume.\n  if [[ -d
      \"$CHECKOUT_DIR\" ]] ; then\n    # Delete non-hidden files and
      directories\n    rm -rf \"$CHECKOUT_DIR\"/*\n    # Delete files and
      directories starting with . but excluding ..\n    rm -rf
      \"$CHECKOUT_DIR\"/.[!.]*\n    # Delete files and directories starting with
      .. plus any other character\n    rm -rf \"$CHECKOUT_DIR\"/..?*\n 
      fi\n}\n\nif [[ \"$(inputs.params.deleteExisting)\" == \"true\" ]] ;
      then\n  cleandir\nfi\n\n/ko-app/git-init \\\n  -url
      \"$(inputs.params.url)\" \\\n  -revision \"$(inputs.params.revision)\"
      \\\n  -path \"$CHECKOUT_DIR\" \\\n  -sslVerify
      \"$(inputs.params.sslVerify)\" \\\n  -submodules
      \"$(inputs.params.submodules)\" \\\n  -depth
      \"$(inputs.params.depth)\"\ncd \"$CHECKOUT_DIR\"\nRESULT_SHA=\"$(git
      rev-parse HEAD | tr -d '\\n')\"\nEXIT_CODE=\"$?\"\nif [ \"$EXIT_CODE\" !=
      0 ]\nthen\n  exit $EXIT_CODE\nfi\n# Make sure we don't add a trailing
      newline to the result!\necho -n \"$RESULT_SHA\" \u003e
      $(results.commit.path)\n"}],"workspaces":[{"description":"The git repo
      will be cloned onto the volume backing this workspace","name":"output"}]}}
    kapp.k14s.io/original-diff: |
      - type: test
        path: /metadata/annotations
        value: {}
      - type: remove
        path: /metadata/annotations
      - type: test
        path: /spec/inputs
        absent: true
      - type: replace
        path: /spec/inputs?
        value:
          params:
          - description: git url to clone
            name: url
            type: string
          - default: master
            description: git revision to checkout (branch, tag, sha, ref…)
            name: revision
            type: string
          - default: "true"
            description: defines if the resource should initialize and fetch the submodules
            name: submodules
            type: string
          - default: "1"
            description: performs a shallow clone where only the most recent commit(s) will
              be fetched
            name: depth
            type: string
          - default: "true"
            description: defines if http.sslVerify should be set to true or false in the
              global git config
            name: sslVerify
            type: string
          - default: src
            description: subdirectory inside the "output" workspace to clone the git repo
              into
            name: subdirectory
            type: string
          - default: "false"
            description: clean out the contents of the repo's destination directory (if
              it already exists) before trying to clone the repo there
            name: deleteExisting
            type: string
      - type: test
        path: /spec/params
        value:
        - description: git url to clone
          name: url
          type: string
        - default: master
          description: git revision to checkout (branch, tag, sha, ref…)
          name: revision
          type: string
        - default: "true"
          description: defines if the resource should initialize and fetch the submodules
          name: submodules
          type: string
        - default: "1"
          description: performs a shallow clone where only the most recent commit(s) will
            be fetched
          name: depth
          type: string
        - default: "true"
          description: defines if http.sslVerify should be set to true or false in the global
            git config
          name: sslVerify
          type: string
        - default: src
          description: subdirectory inside the "output" workspace to clone the git repo
            into
          name: subdirectory
          type: string
        - default: "false"
          description: clean out the contents of the repo's destination directory (if it
            already exists) before trying to clone the repo there
          name: deleteExisting
          type: string
      - type: remove
        path: /spec/params
      - type: test
        path: /spec/steps/0/resources
        value: {}
      - type: remove
        path: /spec/steps/0/resources
    kapp.k14s.io/original-diff-full: ''
    kapp.k14s.io/original-diff-md5: 21bbd83a7ac104b4ea1c98e8a5d65311
    pipeline.tekton.dev/affinity-assistant: src-webapp-126-1-z-staging-latest-6e4dda
    pipeline.tekton.dev/release: devel
  ownerReferences:
    - apiVersion: tekton.dev/v1beta1
      kind: TaskRun
      name: webapp-126-1-z-staging-latest-6e4dda-git-clone-trdmq
      uid: 23f0a344-13b7-441c-addd-af820596383f
      controller: true
      blockOwnerDeletion: true
  managedFields:
    - manager: controller
      operation: Update
      apiVersion: v1
      time: '2020-06-06T01:37:38Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:kapp.k14s.io/identity': {}
            'f:kapp.k14s.io/original': {}
            'f:kapp.k14s.io/original-diff': {}
            'f:kapp.k14s.io/original-diff-full': {}
            'f:kapp.k14s.io/original-diff-md5': {}
            'f:pipeline.tekton.dev/affinity-assistant': {}
            'f:pipeline.tekton.dev/release': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/managed-by': {}
            'f:kapp.k14s.io/app': {}
            'f:kapp.k14s.io/association': {}
            'f:tekton.dev/pipeline': {}
            'f:tekton.dev/pipelineRun': {}
            'f:tekton.dev/pipelineTask': {}
            'f:tekton.dev/task': {}
            'f:tekton.dev/taskRun': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"23f0a344-13b7-441c-addd-af820596383f"}':
              .: {}
              'f:apiVersion': {}
              'f:blockOwnerDeletion': {}
              'f:controller': {}
              'f:kind': {}
              'f:name': {}
              'f:uid': {}
        'f:spec':
          'f:affinity':
            .: {}
            'f:podAffinity':
              .: {}
              'f:requiredDuringSchedulingIgnoredDuringExecution': {}
          'f:containers':
            'k:{"name":"step-clone"}':
              .: {}
              'f:args': {}
              'f:command': {}
              'f:env':
                .: {}
                'k:{"name":"HOME"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
              'f:image': {}
              'f:imagePullPolicy': {}
              'f:name': {}
              'f:resources':
                .: {}
                'f:requests':
                  .: {}
                  'f:cpu': {}
                  'f:ephemeral-storage': {}
                  'f:memory': {}
              'f:terminationMessagePath': {}
              'f:terminationMessagePolicy': {}
              'f:volumeMounts':
                .: {}
                'k:{"mountPath":"/tekton/downward"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/tekton/home"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/tekton/results"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/tekton/scripts"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/tekton/tools"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/workspace"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/workspace/output"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
              'f:workingDir': {}
          'f:dnsPolicy': {}
          'f:enableServiceLinks': {}
          'f:initContainers':
            .: {}
            'k:{"name":"credential-initializer"}':
              .: {}
              'f:args': {}
              'f:command': {}
              'f:env':
                .: {}
                'k:{"name":"HOME"}':
                  .: {}
                  'f:name': {}
                  'f:value': {}
              'f:image': {}
              'f:imagePullPolicy': {}
              'f:name': {}
              'f:resources': {}
              'f:terminationMessagePath': {}
              'f:terminationMessagePolicy': {}
              'f:volumeMounts':
                .: {}
                'k:{"mountPath":"/tekton/creds-secrets/tekton-builder-harbor-dockerconfigjson"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/tekton/home"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/tekton/results"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
                'k:{"mountPath":"/workspace"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
            'k:{"name":"place-scripts"}':
              .: {}
              'f:args': {}
              'f:command': {}
              'f:image': {}
              'f:imagePullPolicy': {}
              'f:name': {}
              'f:resources': {}
              'f:terminationMessagePath': {}
              'f:terminationMessagePolicy': {}
              'f:tty': {}
              'f:volumeMounts':
                .: {}
                'k:{"mountPath":"/tekton/scripts"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
            'k:{"name":"place-tools"}':
              .: {}
              'f:command': {}
              'f:image': {}
              'f:imagePullPolicy': {}
              'f:name': {}
              'f:resources': {}
              'f:terminationMessagePath': {}
              'f:terminationMessagePolicy': {}
              'f:volumeMounts':
                .: {}
                'k:{"mountPath":"/tekton/tools"}':
                  .: {}
                  'f:mountPath': {}
                  'f:name': {}
          'f:restartPolicy': {}
          'f:schedulerName': {}
          'f:securityContext': {}
          'f:serviceAccount': {}
          'f:serviceAccountName': {}
          'f:terminationGracePeriodSeconds': {}
          'f:volumes':
            .: {}
            'k:{"name":"tekton-internal-downward"}':
              .: {}
              'f:downwardAPI':
                .: {}
                'f:defaultMode': {}
                'f:items': {}
              'f:name': {}
            'k:{"name":"tekton-internal-home"}':
              .: {}
              'f:emptyDir': {}
              'f:name': {}
            'k:{"name":"tekton-internal-results"}':
              .: {}
              'f:emptyDir': {}
              'f:name': {}
            'k:{"name":"tekton-internal-scripts"}':
              .: {}
              'f:emptyDir': {}
              'f:name': {}
            'k:{"name":"tekton-internal-secret-volume-tekton-builder-harbor-docke-8cgqx"}':
              .: {}
              'f:name': {}
              'f:secret':
                .: {}
                'f:defaultMode': {}
                'f:secretName': {}
            'k:{"name":"tekton-internal-tools"}':
              .: {}
              'f:emptyDir': {}
              'f:name': {}
            'k:{"name":"tekton-internal-workspace"}':
              .: {}
              'f:emptyDir': {}
              'f:name': {}
            'k:{"name":"ws-d68sq"}':
              .: {}
              'f:name': {}
              'f:persistentVolumeClaim':
                .: {}
                'f:claimName': {}
    - manager: kube-scheduler
      operation: Update
      apiVersion: v1
      time: '2020-06-06T01:37:38Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:conditions':
            .: {}
            'k:{"type":"PodScheduled"}':
              .: {}
              'f:lastProbeTime': {}
              'f:lastTransitionTime': {}
              'f:message': {}
              'f:reason': {}
              'f:status': {}
              'f:type': {}
spec:
  volumes:
    - name: tekton-internal-workspace
      emptyDir: {}
    - name: tekton-internal-home
      emptyDir: {}
    - name: tekton-internal-results
      emptyDir: {}
    - name: tekton-internal-secret-volume-tekton-builder-harbor-docke-8cgqx
      secret:
        secretName: tekton-builder-harbor-dockerconfigjson
        defaultMode: 420
    - name: tekton-internal-scripts
      emptyDir: {}
    - name: tekton-internal-tools
      emptyDir: {}
    - name: tekton-internal-downward
      downwardAPI:
        items:
          - path: ready
            fieldRef:
              apiVersion: v1
              fieldPath: 'metadata.annotations[''tekton.dev/ready'']'
        defaultMode: 420
    - name: ws-d68sq
      persistentVolumeClaim:
        claimName: src-webapp-126-1-z-staging-latest-6e4dda
    - name: tekton-builder-token-hqsk8
      secret:
        secretName: tekton-builder-token-hqsk8
        defaultMode: 420
  initContainers:
    - name: credential-initializer
      image: >-
        gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/creds-init:v0.13.0@sha256:5206d6880896935ffa53d2d8326a2a0b49be902a04bb92f235b22958473e83d7
      command:
        - /ko-app/creds-init
      args:
        - '-docker-config=tekton-builder-harbor-dockerconfigjson'
      env:
        - name: HOME
          value: /tekton/home
      resources: {}
      volumeMounts:
        - name: tekton-internal-workspace
          mountPath: /workspace
        - name: tekton-internal-home
          mountPath: /tekton/home
        - name: tekton-internal-results
          mountPath: /tekton/results
        - name: tekton-internal-secret-volume-tekton-builder-harbor-docke-8cgqx
          mountPath: /tekton/creds-secrets/tekton-builder-harbor-dockerconfigjson
        - name: tekton-builder-token-hqsk8
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
    - name: place-scripts
      image: >-
        gcr.io/distroless/base:debug@sha256:f79e093f9ba639c957ee857b1ad57ae5046c328998bf8f72b30081db4d8edbe4
      command:
        - sh
      args:
        - '-c'
        - |
          ...

          script-heredoc-randomly-generated-cqbzp
      resources: {}
      volumeMounts:
        - name: tekton-internal-scripts
          mountPath: /tekton/scripts
        - name: tekton-builder-token-hqsk8
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
      tty: true
    - name: place-tools
      image: >-
        gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/entrypoint:v0.13.0@sha256:0cbfbc4f4ed9cbf6060e12c12d36599b2d8ca3f13e3fd5432adf2c2f9001913d
      command:
        - cp
        - /ko-app/entrypoint
        - /tekton/tools/entrypoint
      resources: {}
      volumeMounts:
        - name: tekton-internal-tools
          mountPath: /tekton/tools
        - name: tekton-builder-token-hqsk8
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
  containers:
    - name: step-clone
      image: 'gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.12.0'
      command:
        - /tekton/tools/entrypoint
      args:
        - '-wait_file'
        - /tekton/downward/ready
        - '-wait_file_content'
        - '-post_file'
        - /tekton/tools/0
        - '-termination_path'
        - /tekton/termination
        - '-results'
        - commit
        - '-entrypoint'
        - /tekton/scripts/script-0-6htxb
        - '--'
      workingDir: /workspace
      env:
        - name: HOME
          value: /tekton/home
      resources:
        requests:
          cpu: '0'
          ephemeral-storage: '0'
          memory: '0'
      volumeMounts:
        - name: ws-d68sq
          mountPath: /workspace/output
        - name: tekton-internal-scripts
          mountPath: /tekton/scripts
        - name: tekton-internal-tools
          mountPath: /tekton/tools
        - name: tekton-internal-downward
          mountPath: /tekton/downward
        - name: tekton-internal-workspace
          mountPath: /workspace
        - name: tekton-internal-home
          mountPath: /tekton/home
        - name: tekton-internal-results
          mountPath: /tekton/results
        - name: tekton-builder-token-hqsk8
          readOnly: true
          mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      terminationMessagePath: /tekton/termination
      terminationMessagePolicy: File
      imagePullPolicy: IfNotPresent
  restartPolicy: Never
  terminationGracePeriodSeconds: 30
  dnsPolicy: ClusterFirst
  serviceAccountName: tekton-builder
  serviceAccount: tekton-builder
  securityContext: {}
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: affinity-assistant
              app.kubernetes.io/instance: src-webapp-126-1-z-staging-latest-6e4dda
          topologyKey: kubernetes.io/hostname
  schedulerName: default-scheduler
  tolerations:
    - key: node.kubernetes.io/not-ready
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
    - key: node.kubernetes.io/unreachable
      operator: Exists
      effect: NoExecute
      tolerationSeconds: 300
  priority: 0
  enableServiceLinks: true
status:
  phase: Pending
  conditions:
    - type: PodScheduled
      status: 'False'
      lastProbeTime: null
      lastTransitionTime: '2020-06-06T01:37:38Z'
      reason: Unschedulable
      message: >-
        0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1
        node(s) didn't match pod affinity/anti-affinity.
  qosClass: BestEffort

@cameronbraid cameronbraid changed the title v0.13.0 didn't match pod affinity rules v0.13.0 pod scheduling fails - didn't match pod affinity rules Jun 6, 2020
@jlpettersson
Member

warning
0/1 nodes are available: 1 node(s) didn't match pod affinity rules, 1 node(s) didn't match pod affinity/anti-affinity.

From the information given, I cannot tell why this happened. Do you have a pod from a StatefulSet running? E.g. `kubectl get statefulset` or `kubectl get po -l app.kubernetes.io/component=affinity-assistant` (these should only return results while a PipelineRun is running).

Did you install the new version, including applying the new RBAC YAML that adds permissions for creating StatefulSets?

To disable this newly introduced feature, you can set the feature flag disable-affinity-assistant to "true": https://github.com/tektoncd/pipeline/blob/master/config/config-feature-flags.yaml#L32
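A minimal sketch of that change, assuming the stock `feature-flags` ConfigMap in the default `tekton-pipelines` namespace (as in the file linked above):

```yaml
# Fragment of config-feature-flags.yaml: opts out of the Affinity Assistant.
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: tekton-pipelines
data:
  disable-affinity-assistant: "true"
```

Applying this (e.g. with `kubectl apply -f`) makes the controller stop creating the affinity-assistant StatefulSet for workspace-sharing PipelineRuns.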

@jlpettersson
Member

jlpettersson commented Jun 6, 2020

pipeline.tekton.dev/affinity-assistant: src-webapp-126-1-z-staging-latest-6e4dda

This may be related to #2768, where I note that these names appear to be limited to 34 characters (otherwise creation of the Affinity Assistant fails), and your Affinity Assistant name ended up at 40 characters.

If you want, you can try a shorter PipelineRun name (currently webapp-126-1-z-staging-latest-6e4dda in your case).

/kind bug

@tekton-robot tekton-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jun 6, 2020
@cameronbraid
Author

Yep, the affinity assistant can't create its pod:

warning
create Pod affinity-assistant-maven-repo-webapp-126-1-z-staging-latest-2e1b06-0 in StatefulSet affinity-assistant-maven-repo-webapp-126-1-z-staging-latest-2e1b06 failed error: Pod "affinity-assistant-maven-repo-webapp-126-1-z-staging-latest-2e1b06-0" is invalid: [metadata.labels: Invalid value: "affinity-assistant-maven-repo-webapp-126-1-z-staging-latest-2e1b06-655c5d48b9": must be no more than 63 characters, metadata.labels: Invalid value: "affinity-assistant-maven-repo-webapp-126-1-z-staging-latest-2e1b06-0": must be no more than 63 characters, spec.hostname: Invalid value: "affinity-assistant-maven-repo-webapp-126-1-z-staging-latest-2e1b06-0": must be no more than 63 characters]
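The failure above is a plain length check: Kubernetes caps label values and hostnames at 63 characters, and the StatefulSet controller appends a pod-template hash and an ordinal to the Affinity Assistant name. A sketch of the arithmetic (not Tekton's actual code; the hash `655c5d48b9` is taken from the error message above):

```python
LABEL_MAX = 63  # Kubernetes limit for label values and hostnames

def statefulset_pod_label(name: str, pod_template_hash: str = "655c5d48b9") -> str:
    # The StatefulSet controller derives the controller-revision-hash
    # label value by appending "-<pod-template-hash>" to the set's name.
    return f"{name}-{pod_template_hash}"

name = "affinity-assistant-maven-repo-webapp-126-1-z-staging-latest-2e1b06"
label = statefulset_pod_label(name)
hostname = f"{name}-0"  # pod 0 of the StatefulSet

# The base name alone already exceeds 63 chars, so every derived
# label and hostname is rejected by validation.
print(len(name), len(label), len(hostname))  # → 66 77 68
```

Every value in the error message fails the same `<= 63` check, which is why shortening the PipelineRun (and hence workspace/PVC) name avoids the problem.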

@bobcatfish
Collaborator

Looks like this is a duplicate of #2768, please re-open if not!
