[bitnami/minio] Access Denied when Deploying Bitnami/MinIO 2021.3.26-debian-10-r1 #5951

Closed
ethernoy opened this issue Mar 30, 2021 · 22 comments

@ethernoy

Which chart:
MinIO (6.7.1)

Describe the bug
Encountered an "Access Denied" error when deploying the image Bitnami/MinIO 2021.3.26-debian-10-r1 in distributed mode. The MinIO container restarts after this error occurs.

To Reproduce
Steps to reproduce the behavior:

Deploy Bitnami/MinIO 2021.3.26-debian-10-r1 in distributed mode.

Expected behavior
MinIO works normally

Version of Helm and Kubernetes:

  • Output of helm version:
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
  • Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.5+vmware.1", GitCommit:"1abde2b816bac0da89c6c71360799c681094ca0e", GitTreeState:"clean", BuildDate:"2020-06-29T22:31:51Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Additional context
The kernel of the nodes running MinIO is 4.19.129-1.ph3-esx. The admission controller is enabled and MinIO is granted root permissions.

@marcosbc
Contributor

Hi @ethernoy, could you share more details on how you're deploying Minio? It works for me.

$ helm install myminio bitnami/minio --set mode=distributed
$ kubectl get pods
...
myminio-0                                  1/1     Running            0          8m27s
myminio-1                                  1/1     Running            0          8m27s
myminio-2                                  1/1     Running            0          8m27s
myminio-3                                  1/1     Running            0          8m27s

@marcosbc
Contributor

Also, make sure that there isn't any PVC left over from a previous deployment. Otherwise, the deployment may fail because the existing data was created with different credentials, causing errors like:

API: SYSTEM()
Time: 08:39:27 UTC 03/30/2021
Error: Marking http://myminio-1.myminio-headless.default.svc.cluster.local:9000/minio/storage/data/v29 temporary offline; caused by Post "http://myminio-1.myminio-headless.default.svc.cluster.local:9000/minio/storage/data/v29/readall?disk-id=&file-path=format.json&volume=.minio.sys": lookup myminio-1.myminio-headless.default.svc.cluster.local on 10.30.240.10:53: no such host (*fmt.wrapError)
       6: cmd/rest/client.go:138:rest.(*Client).Call()
       5: cmd/storage-rest-client.go:151:cmd.(*storageRESTClient).call()
       4: cmd/storage-rest-client.go:471:cmd.(*storageRESTClient).ReadAll()
       3: cmd/format-erasure.go:405:cmd.loadFormatErasure()
       2: cmd/format-erasure.go:325:cmd.loadFormatErasureAll.func1()
       1: pkg/sync/errgroup/errgroup.go:122:errgroup.(*Group).Go.func1()
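
To rule this out, you can list the PVCs and delete any left over from a previous release before reinstalling. For example (the release name in the label selector below is just an example; check the actual labels with --show-labels):

$ kubectl get pvc --show-labels
$ kubectl delete pvc -l app.kubernetes.io/instance=myminio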

@ethernoy
Author

Hi @marcosbc

Attached is the content of the values.yaml I use in the deployment:

global:
  imagePullSecrets: 
  - mySecret
  storageClass: myStorageClass
image:
  registry: myregistry
  repository: observability/bitnami/minio
  tag: 2021.3.26-debian-10-r1
  pullPolicy: Always
  debug: true
clientImage:
  registry: myregistry
  repository: observability/bitnami/minio-client
  tag: 2021.3.23-debian-10-r5
mode: distributed
accessKey:
  password: thanos123
  forcePassword: true
secretKey:
  password: thanos123
  forcePassword: true
defaultBuckets: "thanos"
statefulset:
  updateStrategy: RollingUpdate
  podManagementPolicy: Parallel
  replicaCount: 4
  zones: 1
  drivesPerNode: 1
securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 0
resources:
  limits:
    cpu: 300m
    memory: 512Mi
  requests:
    cpu: 256m
    memory: 256Mi
persistence:
  size: 10Gi

Here is the content of myStorageClass:

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2021-03-25T03:26:52Z"
  name: myStorageClass
parameters:
  svStorageClass: myStorageClass
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

@marcosbc
Contributor

Please also share more information, such as the deployment error you are seeing before the pod gets restarted.

Did you check whether there is an existing PVC where Minio data is stored with credentials different from the ones you set?

@ethernoy
Author

Please also share more information, such as the deployment error you are seeing before the pod gets restarted.

Did you check whether there is an existing PVC where Minio data is stored with credentials different from the ones you set?

Most of the MinIO pods end with the following log pattern; occasionally there are other logs before a pod terminates, but I have not managed to capture one yet.

API: SYSTEM()
Time: 02:43:48 UTC 03/31/2021
Error: Marking http://minio-test-2.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29 temporary offline; caused by Post "http://minio-test-2.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 192.168.12.40:9000: connect: connection refused (*fmt.wrapError)
       6: cmd/rest/client.go:138:rest.(*Client).Call()
       5: cmd/storage-rest-client.go:151:cmd.(*storageRESTClient).call()
       4: cmd/storage-rest-client.go:471:cmd.(*storageRESTClient).ReadAll()
       3: cmd/format-erasure.go:405:cmd.loadFormatErasure()
       2: cmd/format-erasure.go:325:cmd.loadFormatErasureAll.func1()
       1: pkg/sync/errgroup/errgroup.go:122:errgroup.(*Group).Go.func1()
Waiting for a minimum of 2 disks to come online (elapsed 8s)
 02:43:48.95 INFO  ==> Adding local Minio host to 'mc' configuration...
API: SYSTEM()
Time: 02:43:49 UTC 03/31/2021
Error: Marking http://minio-test-2.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29 temporary offline; caused by Post "http://minio-test-2.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 192.168.12.40:9000: connect: connection refused (*fmt.wrapError)
       6: cmd/rest/client.go:138:rest.(*Client).Call()
       5: cmd/storage-rest-client.go:151:cmd.(*storageRESTClient).call()
       4: cmd/storage-rest-client.go:471:cmd.(*storageRESTClient).ReadAll()
       3: cmd/format-erasure.go:405:cmd.loadFormatErasure()
       2: cmd/format-erasure.go:325:cmd.loadFormatErasureAll.func1()
       1: pkg/sync/errgroup/errgroup.go:122:errgroup.(*Group).Go.func1()
API: SYSTEM()
Time: 02:43:49 UTC 03/31/2021
Error: Marking http://minio-test-0.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29 temporary offline; caused by Post "http://minio-test-0.minio-test-headless.observability.svc.cluster.local:9000/minio/storage/data/v29/readall?disk-id=&file-path=format.json&volume=.minio.sys": dial tcp 192.168.9.48:9000: connect: connection refused (*fmt.wrapError)
       6: cmd/rest/client.go:138:rest.(*Client).Call()
       5: cmd/storage-rest-client.go:151:cmd.(*storageRESTClient).call()
       4: cmd/storage-rest-client.go:471:cmd.(*storageRESTClient).ReadAll()
       3: cmd/format-erasure.go:405:cmd.loadFormatErasure()
       2: cmd/format-erasure.go:325:cmd.loadFormatErasureAll.func1()
       1: pkg/sync/errgroup/errgroup.go:122:errgroup.(*Group).Go.func1()
Waiting for a minimum of 2 disks to come online (elapsed 8s)
API: SYSTEM()
Time: 02:43:49 UTC 03/31/2021
Error: Access Denied. (*errors.errorString)
       requestHeaders={"method":"GET","reqURI":"/minio/admin/v3/info","header":{"Host":["localhost:9000"],"User-Agent":["MinIO (linux; amd64) madmin-go/0.0.1 mc/DEVELOPMENT.2021-03-23T09-13-19Z"],"X-Amz-Content-Sha256":["e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"]}}
       4: cmd/auth-handler.go:143:cmd.validateAdminSignature()
       3: cmd/auth-handler.go:159:cmd.checkAdminRequestAuth()
       2: cmd/admin-handlers.go:1520:cmd.adminAPIHandlers.ServerInfoHandler()
       1: net/http/server.go:2069:http.HandlerFunc.ServeHTTP()
 02:43:49.33 INFO  ==> MinIO is already stopped...
stream closed

@marcosbc
Contributor

Hi @ethernoy, the error looks like it could be related to there being an existing PVC:

Error: Access Denied. (*errors.errorString)

I still haven't got confirmation from your side that you've checked if that could be the case. You can get the list of PVCs with kubectl get pvc.

Could you redeploy in another namespace and/or with a different release name and check if it works? Make sure the release name is unique in the namespace (e.g. miniotest-123-unique) or you will get the same errors.
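
For example, something along these lines (the namespace and release name below are just placeholders):

$ kubectl create namespace minio-fresh
$ helm install miniotest-123-unique bitnami/minio --namespace minio-fresh -f values.yaml
$ kubectl get pvc --namespace minio-fresh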

@ethernoy
Author

ethernoy commented Apr 1, 2021

Hi @ethernoy, the error looks like it could be related to there being an existing PVC:

Error: Access Denied. (*errors.errorString)

I still haven't got confirmation from your side that you've checked if that could be the case. You can get the list of PVCs with kubectl get pvc.

Could you redeploy in another namespace and/or with a different release name and check if it works? Make sure the release name is unique in the namespace (e.g. miniotest-123-unique) or you will get the same errors.

I just tested two cases:

  1. Uninstall MinIO in the same namespace, delete all related PVCs, then reinstall.
  2. Install MinIO in a different namespace using a different release name.

Both tests resulted in the same "Access Denied" error we discussed above.

@marcosbc
Contributor

marcosbc commented Apr 1, 2021

Hi @ethernoy, I'm checking your configuration and I don't understand this:

securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 0

That will cause Minio to run as the root user. This is currently not supported by the Docker image, as there seems to be a bug where the minio user is not created before the container starts:

 10:35:36.00 INFO  ==> ** Starting MinIO **
error: failed switching to "minio": unable to find user minio: no matching entries in passwd file

If I remove that configuration, I'm able to work around that error. Could you try it?

@ethernoy
Author

ethernoy commented Apr 1, 2021

Hi @ethernoy, I'm checking your configuration and I don't understand this:

securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 0

That will cause Minio to run as the root user. This is currently not supported by the Docker image, as there seems to be a bug where the minio user is not created before the container starts:

 10:35:36.00 INFO  ==> ** Starting MinIO **
error: failed switching to "minio": unable to find user minio: no matching entries in passwd file

If I remove that configuration, I'm able to work around that error. Could you try it?

I believe it is not related to the run-as-root configuration. I tested the following cases:

  • Install as root: encountered the "Access Denied" error; after restarting the service as non-root, encountered a file-access error.
  • Install as non-root: encountered the "Access Denied" error.

Here is the non-root values.yaml I used:

global:
  imagePullSecrets: 
  - platform-tool-docker-repo
  storageClass: dev-cld-st01-storage-policy
image:
  registry: {repository_link}
  repository: observability/bitnami/minio
  tag: 2021.3.26-debian-10-r1
  pullPolicy: Always
  debug: true
clientImage:
  registry: {repository_link}
  repository: observability/bitnami/minio-client
  tag: 2021.3.23-debian-10-r5
mode: distributed
accessKey:
  password: thanos123
  forcePassword: true
secretKey:
  password: thanos123
  forcePassword: true
defaultBuckets: "thanos"
statefulset:
  updateStrategy: RollingUpdate
  podManagementPolicy: Parallel
  replicaCount: 4
  zones: 1
  drivesPerNode: 1
resources:
  limits:
    cpu: 300m
    memory: 512Mi
  requests:
    cpu: 256m
    memory: 256Mi
persistence:
  size: 10Gi

Installing Bitnami MinIO using this values.yaml still results in the "Access Denied" error:
[screenshot: pod logs showing the "Access Denied" error]

@marcosbc
Contributor

marcosbc commented Apr 2, 2021

Hi, I'm going to forward this case to @juan131 who has more experience with MinIO. I'm finding some issues myself, although not related to your error (which I'm able to get past without issues).

In the meantime, it would be great if you could share more specs on your Kubernetes cluster. For instance, is it a vanilla Kubernetes cluster or are you running on a Kubernetes distribution? Could it also be that you are running a MinIO image based on Photon, from TAC?

@juan131
Contributor

juan131 commented Apr 5, 2021

Hi @ethernoy

I agree with @marcosbc that the "securityContext" shouldn't be forcing the container to run as user "0", so please ensure you're including the section below in your values.yaml:

securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

You'll have to ensure that your cluster supports changing the ownership and permissions of each volume's contents (this is what the fsGroup setting relies on); see the Kubernetes documentation on configuring a security context for a pod.
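
One quick way to verify this is to check the ownership of the data directory inside a running pod; the pod name and mount path below are assumptions and may differ in your deployment, but the directory should be group-owned by 1001 if fsGroup was applied:

$ kubectl exec minio-test-0 -- ls -ld /data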

Also, we're finding some issues with the "default buckets" feature in distributed mode. Therefore, I recommend removing the defaultBuckets parameter from your values and creating your buckets manually once the MinIO cluster is up and running.
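
For example, once all pods are ready, you could create the bucket with the mc client through a port-forward (the service name, credentials and bucket name below are illustrative; adjust them to your release):

$ kubectl port-forward svc/minio-test 9000:9000 &
$ mc alias set myminio http://127.0.0.1:9000 <ACCESS_KEY> <SECRET_KEY>
$ mc mb myminio/thanos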

@ethernoy
Author

ethernoy commented Apr 7, 2021

Hi @ethernoy

I agree with @marcosbc that the "securityContext" shouldn't be forcing the container to run as user "0", so please ensure you're including the section below in your values.yaml:

securityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

You'll have to ensure that your cluster supports changing the ownership and permissions of each volume's contents (this is what the fsGroup setting relies on); see the Kubernetes documentation on configuring a security context for a pod.

Also, we're finding some issues with the "default buckets" feature in distributed mode. Therefore, I recommend removing the defaultBuckets parameter from your values and creating your buckets manually once the MinIO cluster is up and running.

Hi, I just double-checked: when deploying with the default securityContext and the default buckets disabled, Bitnami MinIO works normally.

@juan131
Contributor

juan131 commented Apr 7, 2021

Great!! I'll set a reminder for myself to see how we can modify the approach we're using to create the "default buckets" in distributed mode. It's clearly not working as expected.

@ethernoy
Author

ethernoy commented Apr 8, 2021

Great!! I'll set a reminder for myself to see how we can modify the approach we're using to create the "default buckets" in distributed mode. It's clearly not working as expected.

I am curious: is this issue related only to the Bitnami MinIO chart, and can it be solved by modifying the chart alone, or is it related to the Bitnami MinIO Docker image too?

@juan131
Contributor

juan131 commented Apr 8, 2021

I'd say both, @ethernoy.

It can be solved by improving the logic in the Bitnami MinIO container image, which does not properly handle bucket creation when distributed mode is used.
However, we could also take a completely different approach and delegate the default bucket creation to a Kubernetes Job that creates the buckets once the MinIO cluster is up and ready. In that case, we would implement the solution in the MinIO chart without modifying the current logic of the container.
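
A rough sketch of what such a Job could look like, using the mc client (the image tag, service name, secret name and secret keys below are illustrative assumptions, not the chart's actual implementation):

apiVersion: batch/v1
kind: Job
metadata:
  name: minio-provisioning
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: mc
          image: bitnami/minio-client:2021.3.23-debian-10-r5
          command:
            - /bin/bash
            - -c
            - |
              # Register the MinIO service and create the default bucket idempotently
              mc alias set minio http://myminio:9000 "$MINIO_ACCESS_KEY" "$MINIO_SECRET_KEY" \
                && mc mb --ignore-existing minio/thanos
          env:
            - name: MINIO_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: myminio
                  key: access-key
            - name: MINIO_SECRET_KEY
              valueFrom:
                secretKeyRef:
                  name: myminio
                  key: secret-key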

@github-actions

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label Apr 24, 2021
@juan131 juan131 added on-hold Issues or Pull Requests with this label will never be considered stale and removed stale 15 days without activity labels Apr 26, 2021
@ZILosoft

any news?

@carrodher
Member

Unfortunately, there has been no internal progress on this task, and I'm afraid that, since it was not prioritized during this time, there's not much chance we'll be working on it in the short term. Since we are a small team maintaining a lot of assets, it is difficult to find the bandwidth to implement all the requests.

That being said, thanks for reporting this issue and for staying on top of it. Would you like to contribute by creating a PR to solve it? The Bitnami team will be happy to review it and provide feedback. You can find the contributing guidelines here.

@carrodher carrodher removed the on-hold Issues or Pull Requests with this label will never be considered stale label Apr 22, 2022
@github-actions

github-actions bot commented May 8, 2022

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label May 8, 2022
@github-actions

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

@papierkorp

I had this problem with minikube. Since I just needed it for testing purposes, I added --set persistence.enabled="false"

The full command being:

helm install minio bitnami/minio --namespace minio --create-namespace --set image.debug="true" --set service.type="ClusterIP" --set persistence.enabled="false"

@gruberdev

I had this problem with minikube. Since I just needed it for testing purposes, I added --set persistence.enabled="false"

The full command being:

helm install minio bitnami/minio --namespace minio --create-namespace --set image.debug="true" --set service.type="ClusterIP" --set persistence.enabled="false"

If there's no storage system or persistence, it makes sense that this error would not occur.

I would argue it is not relevant to the issue discussed here.

@github-actions github-actions bot added the triage Triage is needed label Nov 9, 2023
@carrodher carrodher added the minio label Nov 9, 2023
@github-actions github-actions bot added the solved label Nov 9, 2023