panic: runtime error: invalid memory address or nil pointer dereference #13353

Closed
BartoszZawadzki opened this issue Mar 10, 2022 · 2 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

BartoszZawadzki commented Mar 10, 2022

/kind bug

1. What kops version are you running? The command kops version will display this information.
Version 1.22.2

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
v1.19.16

3. What cloud provider are you using? AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

kops edit cluster

added (under .spec):

serviceAccountIssuerDiscovery:
  discoveryStore: s3://{{ MY_PUBLIC_S3_BUCKET_NAME }}
  enableAWSOIDCProvider: true

kops update cluster --yes

5. What happened after the commands executed?

*********************************************************************************

A new kops version is available: 1.22.3
Upgrading is recommended
More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_kops.md#1.22.3

*********************************************************************************

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x43b5de8]

goroutine 1 [running]:
k8s.io/kops/pkg/model.(*IssuerDiscoveryModelBuilder).Build(0xc000cf8000, 0xc001955ac0, 0x0, 0x0)
	pkg/model/issuerdiscovery.go:70 +0x128
k8s.io/kops/upup/pkg/fi/cloudup.(*Loader).BuildTasks(0xc001a5d848, 0xc000e0b020, 0x10, 0x10, 0x11)
	upup/pkg/fi/cloudup/loader.go:45 +0xbe
k8s.io/kops/upup/pkg/fi/cloudup.(*ApplyClusterCmd).Run(0xc001a5dc38, 0x61a64d0, 0xc000136008, 0x0, 0x0)
	upup/pkg/fi/cloudup/apply_cluster.go:674 +0x1d3c
main.RunUpdateCluster(0x61a64d0, 0xc000136008, 0xc00000c030, 0x6150420, 0xc00018e008, 0xc00052ad10, 0xc000d1fd40, 0x0, 0x0)
	cmd/kops/update_cluster.go:296 +0x6a8
main.NewCmdUpdateCluster.func1(0xc000cbe280, 0xc000c4dcf0, 0x0, 0x1, 0x0, 0x0)
	cmd/kops/update_cluster.go:111 +0x5d
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).execute(0xc000cbe280, 0xc000c4dce0, 0x1, 0x1, 0xc000cbe280, 0xc000c4dce0)
	vendor/github.com/spf13/cobra/command.go:856 +0x472
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x8450bc0, 0x84a0430, 0x0, 0x0)
	vendor/github.com/spf13/cobra/command.go:974 +0x375
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	vendor/github.com/spf13/cobra/command.go:902
main.Execute()
	cmd/kops/root.go:95 +0x8f
main.main()
	cmd/kops/main.go:20 +0x25
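
For context, this class of panic in Go is what you get when a nil pointer is dereferenced without a guard. Below is a minimal, hypothetical sketch of that failure mode (not the actual issuerdiscovery.go code, just an illustration of the pattern):

package main

import "fmt"

// discoveryStore is a stand-in for whatever object the builder resolves
// from spec.serviceAccountIssuerDiscovery; it is not a real kops type.
type discoveryStore struct {
	bucket string
}

// lookupStore stands in for a lookup that can fail and return nil,
// for example when the configured store cannot be resolved.
func lookupStore(path string) *discoveryStore {
	if path == "" {
		return nil
	}
	return &discoveryStore{bucket: path}
}

func main() {
	store := lookupStore("") // nil is returned here
	// Missing nil check: the next line triggers
	// "panic: runtime error: invalid memory address or nil pointer dereference"
	// with [signal SIGSEGV: segmentation violation ... addr=0x0 ...].
	fmt.Println(store.bucket)
}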

6. What did you expect to happen? I expected the cluster to update without any issues.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  generation: 2
  name: XXX
spec:
  additionalPolicies:
    master: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": ["*"]
        }
      ]
  addons:
  - manifest: monitoring-standalone
  api:
    loadBalancer:
      class: Classic
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    Domain: XXX
    Environment: XXX
    Project: XXX
  cloudProvider: aws
  configBase: s3://XXX/XXX
  etcdClusters:
  - cpuRequest: 150m
    etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    - instanceGroup: master-eu-west-1b
      name: b
    - instanceGroup: master-eu-west-1c
      name: c
    manager:
      env:
      - name: ETCD_LISTEN_METRICS_URLS
        value: http://0.0.0.0:8081
      - name: ETCD_METRICS
        value: extensive
    memoryRequest: 300Mi
    name: main
  - cpuRequest: 250m
    etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: a
    - instanceGroup: master-eu-west-1b
      name: b
    - instanceGroup: master-eu-west-1c
      name: c
    memoryRequest: 900Mi
    name: events
  fileAssets:
  - content: |
      mkdir -p /tmp/workspace
      chown -R 1000:1000 /tmp/workspace
    name: ci-directories
    path: /usr/local/bin/ci-directories
    roles:
    - Node
  - content: |
      ---
      apiVersion: audit.k8s.io/v1
      kind: Policy
      rules:
        # Do not log from kube-system accounts
        - level: None
          userGroups:
            - system:serviceaccounts:kube-system
        - level: None
          users:
            - system:apiserver
            - system:kube-scheduler
            - system:volume-scheduler
            - system:kube-controller-manager
            - system:node
        # Do not log from collector
        - level: None
          users:
            - system:serviceaccount:collectorforkubernetes:collectorforkubernetes
        # Don't log nodes communications
        - level: None
          userGroups:
            - system:nodes
        # The following requests were manually identified as high-volume and low-risk,
        # so drop them.
        - level: None
          users: ["system:kube-proxy"]
          verbs: ["watch"]
          resources:
            - group: "" # core
              resources: ["endpoints", "services", "services/status"]
        - level: None
          # Ingress controller reads 'configmaps/ingress-uid' through the unsecured port.
          users: ["system:serviceaccount:ingress-nginx:ingress-nginx"]
          namespaces: ["kube-system"]
          verbs: ["get"]
          resources:
            - group: "" # core
              resources: ["configmaps"]
        - level: None
          users: ["kubelet"] # legacy kubelet identity
          verbs: ["get"]
          resources:
            - group: "" # core
              resources: ["nodes", "nodes/status"]
        - level: None
          userGroups: ["system:nodes"]
          verbs: ["get"]
          resources:
            - group: "" # core
              resources: ["nodes", "nodes/status"]
        - level: None
          users:
            - system:kube-controller-manager
            - system:kube-scheduler
            - system:serviceaccount:kube-system:endpoint-controller
          verbs: ["get", "update"]
          namespaces: ["kube-system"]
          resources:
            - group: "" # core
              resources: ["endpoints"]
        - level: None
          users: ["system:apiserver"]
          verbs: ["get"]
          resources:
            - group: "" # core
              resources: ["namespaces", "namespaces/status", "namespaces/finalize"]
        - level: None
          users: ["cluster-autoscaler"]
          verbs: ["get", "update"]
          namespaces: ["kube-system"]
          resources:
            - group: "" # core
              resources: ["configmaps", "endpoints"]
        # Don't log HPA fetching metrics.
        - level: None
          users:
            - system:kube-controller-manager
          verbs: ["get", "list"]
          resources:
            - group: "metrics.k8s.io"
        # Don't log these read-only URLs.
        - level: None
          nonResourceURLs:
            - /healthz*
            - /version
            - /swagger*
        # Don't log events requests because of performance impact.
        - level: None
          resources:
            - group: "" # core
              resources: ["events"]
        # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
        - level: None
          users: ["kubelet", "system:node-problem-detector", "system:serviceaccount:kube-system:node-problem-detector"]
          verbs: ["update","patch"]
          resources:
            - group: "" # core
              resources: ["nodes/status", "pods/status"]
          omitStages:
            - "RequestReceived"
        - level: None
          userGroups: ["system:nodes"]
          verbs: ["update","patch"]
          resources:
            - group: "" # core
              resources: ["nodes/status", "pods/status"]
          omitStages:
            - "RequestReceived"
        # deletecollection calls can be large, don't log responses for expected namespace deletions
        - level: Request
          users: ["system:serviceaccount:kube-system:namespace-controller"]
          verbs: ["deletecollection"]
          omitStages:
            - "RequestReceived"
        # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
        # so only log at the Metadata level.
        - level: Metadata
          resources:
            - group: "" # core
              resources: ["secrets", "configmaps"]
            - group: authentication.k8s.io
              resources: ["tokenreviews"]
          omitStages:
            - "RequestReceived"
        # Get responses can be large; skip them.
        - level: Request
          omitStages:
            - RequestReceived
          resources:
            - group: ""
            - group: admissionregistration.k8s.io
            - group: apiextensions.k8s.io
            - group: apiregistration.k8s.io
            - group: apps
            - group: authentication.k8s.io
            - group: authorization.k8s.io
            - group: autoscaling
            - group: batch
            - group: certificates.k8s.io
            - group: extensions
            - group: metrics.k8s.io
            - group: networking.k8s.io
            - group: policy
            - group: rbac.authorization.k8s.io
            - group: scheduling.k8s.io
            - group: settings.k8s.io
            - group: storage.k8s.io
          verbs:
            - get
            - list
            - watch
          # Default level for known APIs
        - level: RequestResponse
          omitStages:
            - RequestReceived
          resources:
            - group: ""
            - group: admissionregistration.k8s.io
            - group: apiextensions.k8s.io
            - group: apiregistration.k8s.io
            - group: apps
            - group: authentication.k8s.io
            - group: authorization.k8s.io
            - group: autoscaling
            - group: batch
            - group: certificates.k8s.io
            - group: extensions
            - group: metrics.k8s.io
            - group: networking.k8s.io
            - group: policy
            - group: rbac.authorization.k8s.io
            - group: scheduling.k8s.io
            - group: settings.k8s.io
            - group: storage.k8s.io
        # Default level for all other requests.
        - level: Metadata
          omitStages:
            - "RequestReceived"
    name: audit-policy-config
    path: /srv/kubernetes/kube-apiserver/audit-policy-config.yaml
    roles:
    - Master
  hooks:
  - before:
    - kubelet.service
    manifest: |
      Type=oneshot
      ExecStart=/bin/bash /usr/local/bin/ci-directories
    name: ci-directories.service
    roles:
    - Node
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    auditLogMaxAge: 30
    auditLogMaxBackups: 30
    auditLogMaxSize: 100
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditPolicyFile: /srv/kubernetes/kube-apiserver/audit-policy-config.yaml
    authenticationTokenWebhookCacheTtl: 5m0s
    authorizationMode: Node,RBAC
    oidcClientID: oidc-auth-client
    oidcGroupsClaim: groups
    oidcIssuerURL: https://auth.XXX/
    oidcUsernameClaim: name
  kubeControllerManager:
    horizontalPodAutoscalerUseRestClients: true
  kubeProxy:
    metricsBindAddress: 0.0.0.0
  kubeScheduler:
    enableProfiling: false
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - XXX/32
  - XXX/32
  - XXX/32
  kubernetesVersion: 1.19.16
  masterInternalName: api.internal.XXX
  masterPublicName: api.XXX
  networkCIDR: 172.20.0.0/16
  networkID: vpc-XXX
  networking:
    calico:
      majorVersion: v3
  nonMasqueradeCIDR: 100.64.0.0/10
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://kops-irsa.XXX
    enableAWSOIDCProvider: true
  sshAccess:
  - XXX/32
  - XXX/32
  - XXX/32
  - XXX/32
  subnets:
  - cidr: 172.20.32.0/19
    egress: nat-XXX
    id: subnet-XXX
    name: XXX-private-eu-west-1a
    type: Private
    zone: eu-west-1a
  - cidr: 172.20.64.0/19
    egress: nat-XXX
    id: subnet-XXX
    name: XXX-private-eu-west-1b
    type: Private
    zone: eu-west-1b
  - cidr: 172.20.96.0/19
    egress: nat-XXX
    id: subnet-XXX
    name: XXX-private-eu-west-1c
    type: Private
    zone: eu-west-1c
  - cidr: 172.20.0.0/22
    id: subnet-XXX
    name: XXX-public-eu-west-1a
    type: Utility
    zone: eu-west-1a
  - cidr: 172.20.4.0/22
    id: subnet-XXX
    name: XXX-public-eu-west-1b
    type: Utility
    zone: eu-west-1b
  - cidr: 172.20.8.0/22
    id: subnet-XXX
    name: XXX-public-eu-west-1c
    type: Utility
    zone: eu-west-1c
  topology:
    bastion:
      bastionPublicName: bastion.XXX
    dns:
      type: Public
    masters: private
    nodes: private

8. Please run the commands with the most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x43b5de8]

goroutine 1 [running]:
k8s.io/kops/pkg/model.(*IssuerDiscoveryModelBuilder).Build(0xc000f14000, 0xc001024470, 0x0, 0x0)
	pkg/model/issuerdiscovery.go:70 +0x128
k8s.io/kops/upup/pkg/fi/cloudup.(*Loader).BuildTasks(0xc001b17848, 0xc000f6e780, 0x10, 0x10, 0x11)
	upup/pkg/fi/cloudup/loader.go:45 +0xbe
k8s.io/kops/upup/pkg/fi/cloudup.(*ApplyClusterCmd).Run(0xc001b17c38, 0x61a64d0, 0xc0000560c8, 0x0, 0x0)
	upup/pkg/fi/cloudup/apply_cluster.go:674 +0x1d3c
main.RunUpdateCluster(0x61a64d0, 0xc0000560c8, 0xc0004fc408, 0x6150420, 0xc00000e018, 0xc00026e630, 0xc000cbbd40, 0x0, 0x0)
	cmd/kops/update_cluster.go:296 +0x6a8
main.NewCmdUpdateCluster.func1(0xc000d7ca00, 0xc000916800, 0x0, 0x2, 0x0, 0x0)
	cmd/kops/update_cluster.go:111 +0x5d
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).execute(0xc000d7ca00, 0xc0009167e0, 0x2, 0x2, 0xc000d7ca00, 0xc0009167e0)
	vendor/github.com/spf13/cobra/command.go:856 +0x472
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x8450bc0, 0x84a0430, 0x0, 0x0)
	vendor/github.com/spf13/cobra/command.go:974 +0x375
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	vendor/github.com/spf13/cobra/command.go:902
main.Execute()
	cmd/kops/root.go:95 +0x8f
main.main()
	cmd/kops/main.go:20 +0x25

9. Anything else we need to know?

k8s-ci-robot added the kind/bug label on Mar 10, 2022
olemarkus (Member) commented

Can you try kops 1.22.4?

BartoszZawadzki (Author) commented

Works great. Thanks!
