
Nil pointer panic in minio operator 2.0.2 when readiness probe is not supplied. #124

Closed
dmayle opened this issue May 25, 2020 · 0 comments · Fixed by #129

@dmayle
Contributor

dmayle commented May 25, 2020

I'm upgrading from 1.0.4 to 2.0.2 and trying to bring up a new MinIO cluster, but cluster creation never succeeds. Instead, I see a crash in the minio-operator logs. It appears to be caused by the absence of a readiness probe, which previous versions of the operator did not require for MinIO clusters.

Expected Behavior

No crash

Current Behavior

Logs:


E0525 16:52:27.665191       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 44 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x13c59c0, 0x21651a0)
	k8s.io/apimachinery@v0.18.0/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	k8s.io/apimachinery@v0.18.0/pkg/util/runtime/runtime.go:48 +0x82
panic(0x13c59c0, 0x21651a0)
	runtime/panic.go:679 +0x1b2
github.com/minio/minio-operator/pkg/resources/statefulsets.minioServerContainer(0xc000586780, 0xc000601b00, 0x13, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	github.com/minio/minio-operator@/pkg/resources/statefulsets/minio-statefulset.go:171 +0x332
github.com/minio/minio-operator/pkg/resources/statefulsets.NewForMinIO(0xc000586780, 0xc000601b00, 0x13, 0xc0000dc270)
	github.com/minio/minio-operator@/pkg/resources/statefulsets/minio-statefulset.go:311 +0x410
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).syncHandler(0xc000396b60, 0xc000446100, 0x14, 0xc000100900, 0xc0003e54a0)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:414 +0x5f6
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem.func1(0xc000396b60, 0x135cba0, 0xc000452310, 0x0, 0x0)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:289 +0x16c
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem(0xc000396b60, 0x41720e)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:297 +0x53
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).runWorker(0xc000396b60)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:250 +0x62
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005d8010)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:155 +0x5e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005d8010, 0x17650a0, 0xc00061c000, 0xc00044e001, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005d8010, 0x3b9aca00, 0x0, 0x1, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:133 +0xe2
k8s.io/apimachinery/pkg/util/wait.Until(0xc0005d8010, 0x3b9aca00, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:90 +0x4d
created by github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).Start
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:233 +0x1f2
E0525 16:52:27.665391       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 44 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x13c59c0, 0x21651a0)
	k8s.io/apimachinery@v0.18.0/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	k8s.io/apimachinery@v0.18.0/pkg/util/runtime/runtime.go:48 +0x82
panic(0x13c59c0, 0x21651a0)
	runtime/panic.go:679 +0x1b2
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	k8s.io/apimachinery@v0.18.0/pkg/util/runtime/runtime.go:55 +0x105
panic(0x13c59c0, 0x21651a0)
	runtime/panic.go:679 +0x1b2
github.com/minio/minio-operator/pkg/resources/statefulsets.minioServerContainer(0xc000586780, 0xc000601b00, 0x13, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	github.com/minio/minio-operator@/pkg/resources/statefulsets/minio-statefulset.go:171 +0x332
github.com/minio/minio-operator/pkg/resources/statefulsets.NewForMinIO(0xc000586780, 0xc000601b00, 0x13, 0xc0000dc270)
	github.com/minio/minio-operator@/pkg/resources/statefulsets/minio-statefulset.go:311 +0x410
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).syncHandler(0xc000396b60, 0xc000446100, 0x14, 0xc000100900, 0xc0003e54a0)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:414 +0x5f6
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem.func1(0xc000396b60, 0x135cba0, 0xc000452310, 0x0, 0x0)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:289 +0x16c
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem(0xc000396b60, 0x41720e)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:297 +0x53
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).runWorker(0xc000396b60)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:250 +0x62
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005d8010)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:155 +0x5e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005d8010, 0x17650a0, 0xc00061c000, 0xc00044e001, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005d8010, 0x3b9aca00, 0x0, 0x1, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:133 +0xe2
k8s.io/apimachinery/pkg/util/wait.Until(0xc0005d8010, 0x3b9aca00, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:90 +0x4d
created by github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).Start
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:233 +0x1f2
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x12156f2]
goroutine 44 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	k8s.io/apimachinery@v0.18.0/pkg/util/runtime/runtime.go:55 +0x105
panic(0x13c59c0, 0x21651a0)
	runtime/panic.go:679 +0x1b2
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
	k8s.io/apimachinery@v0.18.0/pkg/util/runtime/runtime.go:55 +0x105
panic(0x13c59c0, 0x21651a0)
	runtime/panic.go:679 +0x1b2
github.com/minio/minio-operator/pkg/resources/statefulsets.minioServerContainer(0xc000586780, 0xc000601b00, 0x13, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	github.com/minio/minio-operator@/pkg/resources/statefulsets/minio-statefulset.go:171 +0x332
github.com/minio/minio-operator/pkg/resources/statefulsets.NewForMinIO(0xc000586780, 0xc000601b00, 0x13, 0xc0000dc270)
	github.com/minio/minio-operator@/pkg/resources/statefulsets/minio-statefulset.go:311 +0x410
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).syncHandler(0xc000396b60, 0xc000446100, 0x14, 0xc000100900, 0xc0003e54a0)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:414 +0x5f6
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem.func1(0xc000396b60, 0x135cba0, 0xc000452310, 0x0, 0x0)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:289 +0x16c
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).processNextWorkItem(0xc000396b60, 0x41720e)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:297 +0x53
github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).runWorker(0xc000396b60)
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:250 +0x62
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0005d8010)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:155 +0x5e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0005d8010, 0x17650a0, 0xc00061c000, 0xc00044e001, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:156 +0xa3
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0005d8010, 0x3b9aca00, 0x0, 0x1, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:133 +0xe2
k8s.io/apimachinery/pkg/util/wait.Until(0xc0005d8010, 0x3b9aca00, 0xc00033e000)
	k8s.io/apimachinery@v0.18.0/pkg/util/wait/wait.go:90 +0x4d
created by github.com/minio/minio-operator/pkg/controller/cluster.(*Controller).Start
	github.com/minio/minio-operator@/pkg/controller/cluster/main-controller.go:233 +0x1f2
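The trace bottoms out in `minioServerContainer` (minio-statefulset.go:171), which suggests the container builder dereferences the probe spec without checking for nil. A minimal sketch of the missing guard, using hypothetical stand-in types rather than the operator's actual ones:

```go
package main

import "fmt"

// Probe is a stand-in for the operator's probe spec (hypothetical type).
type Probe struct {
	InitialDelaySeconds int32
	PeriodSeconds       int32
}

// MinIOInstanceSpec mimics the relevant part of the CRD spec.
type MinIOInstanceSpec struct {
	Liveness  *Probe // nil when the user omits the field, as in the CRD below
	Readiness *Probe
}

// containerProbe returns a copy of the user's probe, or nil when none was
// supplied — instead of dereferencing an absent field and panicking.
func containerProbe(p *Probe) *Probe {
	if p == nil {
		return nil // no probe configured; skip instead of crashing
	}
	return &Probe{
		InitialDelaySeconds: p.InitialDelaySeconds,
		PeriodSeconds:       p.PeriodSeconds,
	}
}

func main() {
	spec := MinIOInstanceSpec{
		Liveness: &Probe{InitialDelaySeconds: 120, PeriodSeconds: 20},
		// Readiness intentionally left nil, as in the CRD below.
	}
	fmt.Println(containerProbe(spec.Liveness) != nil)  // true
	fmt.Println(containerProbe(spec.Readiness) != nil) // false
}
```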

CRD:

apiVersion: operator.min.io/v1
kind: MinIOInstance
metadata:
  annotations:
    argocd.argoproj.io/sync-options: Validate=false
  name: airflow-logs
  namespace: airflow
## If specified, MinIOInstance pods will be dispatched by specified scheduler.
## If not specified, the pod will be dispatched by default scheduler.
# scheduler:
#  name: my-custom-scheduler
spec:
  selector:
    matchLabels:
      app: airflow-logs
  metadata:
    labels:
      app: airflow-logs
    annotations:
      prometheus.io/path: /minio/prometheus/metrics
      prometheus.io/port: "443"
      prometheus.io/scrape: "true"
  image: minio/minio:RELEASE.2020-05-01T22-19-14Z
  serviceName: airflow-logs-hl-svc
  ## Secret with credentials to be used by MinIO instance.
  credsSecret:
    name: airflow-logs-account-secret
  ## Supply number of replicas.
  ## For standalone mode, supply 1. For distributed mode, supply 4 or more (should be even).
  ## Note that the operator does not support upgrading from standalone to distributed mode.
  zones:
  - name: "eu-west-3"
    servers: 4
  ## PodManagement policy for pods created by StatefulSet. Can be "OrderedReady" or "Parallel"
  ## Refer https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
  ## for details. Defaults to "Parallel"
  podManagementPolicy: Parallel

  ## Used to specify a toleration for a pod
  #tolerations:
  #  - effect: NoSchedule
  #    key: dedicated
  #    operator: Equal
  #    value: storage
  ## Add environment variables to be set in MinIO container (https://github.com/minio/minio/tree/master/docs/config)
  env:
    - name: MINIO_BROWSER
      value: "on"
    - name: MINIO_REGION_NAME
      value: "eu-west-3"
    # - name: MINIO_STORAGE_CLASS_RRS
    #   value: "EC:2"
  ## Configure resource requests and limits for MinIO containers
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
  ## Liveness probe detects situations where MinIO server instance
  ## is not working properly and needs restart. Kubernetes automatically
  ## restarts the pods if liveness checks fail.
  liveness:
    httpGet:
      path: /minio/health/live
      port: 9000
      scheme: HTTPS
    initialDelaySeconds: 120
    periodSeconds: 20
  ## Readiness probe detects situations when MinIO server instance
  ## is not ready to accept traffic. Kubernetes doesn't forward
  ## traffic to the pod while readiness checks fail.
  ## Recommended to be used only for standalone MinIO Instances. (replicas = 1)
  # readiness:
  #   httpGet:
  #     path: /minio/health/ready
  #     port: 9000
  #   initialDelaySeconds: 120
  #   periodSeconds: 20
  ## nodeSelector parameters for MinIO Pods. It specifies a map of key-value pairs. For the pod to be
  ## eligible to run on a node, the node must have each of the
  ## indicated key-value pairs as labels.
  ## Read more here: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  # nodeSelector:
  #   disktype: ssd
  ## Affinity settings for MinIO pods. Read more about affinity
  ## here: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity.
  # affinity:
  ## Secret with certificates to configure TLS for MinIO certs. Create secrets as explained
  ## here: https://github.com/minio/minio/tree/master/docs/tls/kubernetes#2-create-kubernetes-secret
  externalCertSecret:
    name: airflow-logs-example-com-tls
    type: cert-manager.io/v1alpha2
  ## Mountpath where PV will be mounted inside container(s). Defaults to "/export".
  # mountPath: /export
  ## Subpath inside Mountpath where MinIO starts. Defaults to "".
  # subPath: /data
  volumeClaimTemplate:
    metadata:
      name: airflow-logs-cluster-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: minio
  mcs:
    image: minio/mcs:v0.0.5
    replicas: 2
    mcsSecret:
      name: airflow-logs-mcs-secret
    metadata:
      labels:
        app: airflow-logs-mcs
    selector:
      matchLabels:
        app: airflow-logs-mcs
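Until a fix lands, one workaround that sidesteps the nil dereference is to supply the readiness probe explicitly, e.g. by uncommenting the readiness block above (whether that is advisable for a 4-server distributed setup is a separate question, since the comments recommend readiness only for standalone instances):

```yaml
  readiness:
    httpGet:
      path: /minio/health/ready
      port: 9000
      scheme: HTTPS
    initialDelaySeconds: 120
    periodSeconds: 20
```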

Steps to Reproduce (for bugs)

  1. Apply the YAML provided above

Your Environment

  • Version used (minio-operator): 2.0.2
  • Environment name and version (e.g. kubernetes v1.17.2): Kubernetes 1.16.4
  • Server type and version: Off the shelf
  • Operating System and version (uname -a): Alpine Linux 3.11
  • Link to your deployment file:
@dmayle dmayle changed the title Crash in minio operator 2.0.2 Nil pointer panic in minio operator 2.0.2 when readiness probe is not supplied. May 25, 2020
@nitisht nitisht self-assigned this May 26, 2020
harshavardhana pushed a commit that referenced this issue May 26, 2020
This PR removes user-provided configuration for liveness and
readiness probes. Since most of these fields are expected to be
constant and are already known to us, we should not require users
to provide them. Instead, we create the probes from the known
values, expecting only the initial delay and probe period from
users. If those fields are not provided, we do not create the
probe.

Additionally, this PR also adds steps on how to create local PV
before creating a MinIOInstance.

Fixes #124
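The approach the commit describes can be sketched as follows (hypothetical stand-in types, not the actual PR code): the operator owns the constant fields and only takes the timing from the user, and a missing config simply means no probe.

```go
package main

import "fmt"

// ProbeConfig is the only probe input the user supplies after the fix
// (hypothetical stand-in for the operator's type).
type ProbeConfig struct {
	InitialDelaySeconds int32
	PeriodSeconds       int32
}

// Probe is a stand-in for corev1.Probe with the fields we care about.
type Probe struct {
	Path                string
	Port                int32
	InitialDelaySeconds int32
	PeriodSeconds       int32
}

// readinessProbe builds the probe from values the operator already knows
// (path, port), taking only the delay and period from the user. A nil
// config means no probe is created at all.
func readinessProbe(cfg *ProbeConfig) *Probe {
	if cfg == nil {
		return nil
	}
	return &Probe{
		Path:                "/minio/health/ready", // known constant
		Port:                9000,                  // known constant
		InitialDelaySeconds: cfg.InitialDelaySeconds,
		PeriodSeconds:       cfg.PeriodSeconds,
	}
}

func main() {
	fmt.Println(readinessProbe(nil))                        // <nil>
	fmt.Println(readinessProbe(&ProbeConfig{120, 20}).Path) // /minio/health/ready
}
```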