MinIO Tenant generated from MinIO operator shows incorrect storage #2249

Closed
hopoffbaby opened this issue Jul 29, 2024 · 5 comments

hopoffbaby commented Jul 29, 2024

When generating a cluster from a YAML manifest for a tenant, the capacity of the drives is not reported correctly via the GUI, mc, or Prometheus metrics.

Expected Behavior

Create a MinIO tenant from MinIO Operator and have it correctly reflect the available storage size

Current Behavior

I create a Tenant with 4 servers, each with 2x 5Ti drives.

MinIO Operator reports a total capacity of 40Ti, which is correct. MinIO itself reports each drive as 955GiB. Prometheus reports:

minio_cluster_capacity_raw_free_bytes{server="source-tenant-pool-0-3.source-tenant-hl.minio-tenant-source.svc.cluster.local:9000"} 7.520646332416e+12, which is 7.5TB by my math.
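
For reference, converting that raw byte value between decimal and binary units (a quick check using the figure quoted above):

awk 'BEGIN { v = 7520646332416; printf "%.2f TB (decimal), %.2f TiB (binary)\n", v / 1e12, v / (1024^4) }'
# prints: 7.52 TB (decimal), 6.84 TiB (binary)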

Any ideas??

Steps to Reproduce (for bugs)

  1. Deploy MinIO Operator
  2. Apply this YAML:
apiVersion: v1
kind: Secret
metadata:
  name: source-tenant-env-configuration
  namespace: minio-tenant-source
  labels:
    v1.min.io/tenant: source-tenant
type: Opaque
data:
  config.env: ZXhwb3J0IE1JTklPX05PVElGWV9LQUZLQV9FTkFCTEVfUFJJTUFSWT0ib24iDQpleHBvcnQgTUlOSU9fTk9USUZZX0tBRktBX0JST0tFUlNfUFJJTUFSWT0ibXktY2x1c3Rlci1sb2NhbC1rYWZrYS1ib290c3RyYXAua2Fma2Euc3ZjLmNsdXN0ZXIubG9jYWw6OTA5MiINCmV4cG9ydCBNSU5JT19OT1RJRllfS0FGS0FfVE9QSUNfUFJJTUFSWT0ibWluaW8iDQpleHBvcnQgTUlOSU9fUk9PVF9VU0VSPSJtaW5pbyINCmV4cG9ydCBNSU5JT19ST09UX1BBU1NXT1JEPSJwYXNzd29yZCINCmV4cG9ydCBNSU5JT19TVE9SQUdFX0NMQVNTX1NUQU5EQVJEPSJFQzozIg0KZXhwb3J0IE1JTklPX1NUT1JBR0VfQ0xBU1NfUlJTPSJFQzoxIg0KZXhwb3J0IE1JTklPX1BST01FVEhFVVNfQVVUSF9UWVBFPSJwdWJsaWMi

---

apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: source-tenant
  namespace: minio-tenant-source
spec:
  configuration:
    name: source-tenant-env-configuration
  credsSecret:
    name: source-tenant-secret
  exposeServices:
    console: true
    minio: true
  features: {}
  buckets:
    - name: "test-bucket1"
  imagePullSecret: {}
  mountPath: /export
  pools:
  - affinity:
    containerSecurityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
      seccompProfile:
        type: RuntimeDefault
    name: pool-0
    resources:
      requests:
        cpu: "1"
        memory: 2Gi
    runtimeClassName: ""
    securityContext:
      fsGroup: 1000
      fsGroupChangePolicy: OnRootMismatch
      runAsGroup: 1000
      runAsNonRoot: true
      runAsUser: 1000
    servers: 4
    volumeClaimTemplate:
      metadata:
        name: data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: "5Ti"
        #storageClassName: hostpath
      status: {}
    volumesPerServer: 2
  requestAutoCert: true
  users:
  - name: source-tenant-user-0
  3. With this environment
export MINIO_NOTIFY_KAFKA_ENABLE_PRIMARY="on"
export MINIO_NOTIFY_KAFKA_BROKERS_PRIMARY="my-cluster-local-kafka-bootstrap.kafka.svc.cluster.local:9092"
export MINIO_NOTIFY_KAFKA_TOPIC_PRIMARY="minio"
export MINIO_ROOT_USER="minio"
export MINIO_ROOT_PASSWORD="password"
export MINIO_STORAGE_CLASS_STANDARD="EC:3"
export MINIO_STORAGE_CLASS_RRS="EC:1"
export MINIO_PROMETHEUS_AUTH_TYPE="public"
  4. The correct size is shown in the MinIO Operator dashboard; an incorrect size is shown in the MinIO Tenant dashboard

The completely empty cluster shows this:

mc admin info --insecure  myminio/
●  source-tenant-pool-0-0.source-tenant-hl.minio-tenant-source.svc.cluster.local:9000
   Uptime: 14 minutes 
   Version: 2024-05-01T01:11:10Z
   Network: 4/4 OK 
   Drives: 2/2 OK 
   Pool: 1

●  source-tenant-pool-0-1.source-tenant-hl.minio-tenant-source.svc.cluster.local:9000
   Uptime: 14 minutes
   Version: 2024-05-01T01:11:10Z
   Network: 4/4 OK
   Drives: 2/2 OK
   Pool: 1

●  source-tenant-pool-0-2.source-tenant-hl.minio-tenant-source.svc.cluster.local:9000
   Uptime: 14 minutes
   Version: 2024-05-01T01:11:10Z
   Network: 4/4 OK
   Drives: 2/2 OK
   Pool: 1

●  source-tenant-pool-0-3.source-tenant-hl.minio-tenant-source.svc.cluster.local:9000
   Uptime: 14 minutes
   Version: 2024-05-01T01:11:10Z
   Network: 4/4 OK
   Drives: 2/2 OK
   Pool: 1

┌──────┬───────────────────────┬─────────────────────┬──────────────┐ 
│ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │ 
│ 1st  │ 8.4% (total: 4.7 TiB) │ 8                   │ 1            │ 
└──────┴───────────────────────┴─────────────────────┴──────────────┘ 

0 B Used, 1 Bucket, 0 Objects
8 drives online, 0 drives offline, EC:3
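
The raw-capacity metrics can also be scraped straight from the tenant (a quick check, assuming the public Prometheus auth type from the config.env above and the in-cluster service name used earlier):

curl -sk https://source-tenant-hl.minio-tenant-source.svc.cluster.local:9000/minio/v2/metrics/cluster | grep minio_cluster_capacity_raw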

Context

I am trying to understand how free and used space are reported via Prometheus, especially when using REDUCED_REDUNDANCY storage classes.

I see the same issue if I create the tenant from the Operator GUI, instead of using a manifest file.

Regression

Your Environment

  • Version used (minio --version): RELEASE.2024-05-01T01-11-10Z
  • Server setup and configuration: Test cluster in Docker-Desktop
  • Operating System and version (uname -a): Windows 10 + Docker Desktop

PVs are auto-generated with the correct size:

kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                              STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-10e08994-c17c-43e6-b0ae-dacb1eef333d   5Ti        RWO            Delete           Bound    minio-tenant-source/data1-source-tenant-pool-0-1   hostpath       <unset>                          3m9s    
pvc-59fa52c7-a2f6-40ab-8a6d-6d668bc4091e   5Ti        RWO            Delete           Bound    minio-tenant-source/data0-source-tenant-pool-0-1   hostpath       <unset>                          3m9s    
pvc-79f86f81-f661-465b-8493-7cb7a0e4db7d   5Ti        RWO            Delete           Bound    minio-tenant-source/data0-source-tenant-pool-0-3   hostpath       <unset>                          3m7s    
pvc-7f8d9c2b-b6e2-4e67-b724-1d9689640fb1   5Ti        RWO            Delete           Bound    minio-tenant-source/data1-source-tenant-pool-0-2   hostpath       <unset>                          3m9s    
pvc-8eb4c155-b003-4399-9b78-ff2ac36aafb1   5Ti        RWO            Delete           Bound    minio-tenant-source/data1-source-tenant-pool-0-3   hostpath       <unset>                          3m7s    
pvc-d24856c4-c797-49b7-9453-d62d6882bec9   5Ti        RWO            Delete           Bound    minio-tenant-source/data0-source-tenant-pool-0-0   hostpath       <unset>                          3m10s   
pvc-d76a72da-447e-47ec-9c94-ae3176903c9e   5Ti        RWO            Delete           Bound    minio-tenant-source/data1-source-tenant-pool-0-0   hostpath       <unset>                          3m10s   
pvc-fa460940-5639-489d-bbbe-a700efaec273   5Ti        RWO            Delete           Bound    minio-tenant-source/data0-source-tenant-pool-0-2   hostpath       <unset>                          3m9s 
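
For completeness, the bound PVCs show the same requested capacity (it is the filesystem the pod actually sees that differs, as discussed further down):

kubectl -n minio-tenant-source get pvc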
harshavardhana transferred this issue from minio/minio Jul 29, 2024
Member

harshavardhana commented Jul 29, 2024

The reporting is based on the default storage class, not RRS. If you want to see what the RRS value would be, make them the same parity values.
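
A minimal sketch of what that would look like in the tenant's config.env, assuming you want both classes at EC:3 as in the setup above:

export MINIO_STORAGE_CLASS_STANDARD="EC:3"
export MINIO_STORAGE_CLASS_RRS="EC:3"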

@hopoffbaby
Author

Thanks, good to know.

My main concern here, though, is that Prometheus and the MinIO console are reporting 7.5TiB total available, whereas it should be 40TiB.

Regarding reporting though, is there a way to see the logical and physical space an object takes? For example, a 10MB file logically takes 10MB of space, but on a cluster with 50% parity it physically takes 20MB.
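
As a back-of-envelope check of that overhead (my own assumption here: 8 drives per erasure set with EC:4, i.e. 50% parity):

awk 'BEGIN { logical = 10; drives = 8; parity = 4; printf "%.0f MB physical\n", logical * drives / (drives - parity) }'
# prints: 20 MB physical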

Thanks

@hopoffbaby
Author

I have a sneaking suspicion this is related to the way the k3s local-path and Docker Desktop hostpath storage classes present storage to MinIO.

I have a similar issue when testing on k3s. I specify 5Ti drives, but if I go into the minio container I see the /export0 mount showing:

/dev/mapper/sysdisk-var 63G 12G 49G 19% /export0

The console shows each drive as 59.2Gi, which, if you do the GiB-to-GB conversion, gives 63GB.

Is there anything that can be done here to help dev deployments and testing?

@ramondeklein
Contributor

The local-path provider has no support for volume capacity limits (source). There is not much we can do about that from the MinIO side. We use the statfs system call to determine the capacity and the amount of used space. Because the storage provider maps the volume to the host filesystem, it reports the sizes of the host. The only way to fix this is to use a different storage provider. Not sure if that's possible with k3s.
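
A quick way to see exactly what statfs reports to MinIO is to run df inside a tenant pod (a sketch using the pod, container, and mount names from this tenant; on local-path/hostpath it shows the host disk rather than the 5Ti PVC request):

kubectl -n minio-tenant-source exec source-tenant-pool-0-0 -c minio -- df -h /export0 /export1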

@hopoffbaby
Author

Yes, I agree. This isn't a MinIO issue but a k8s one.

I managed to get around this in my dev k3s cluster by deploying Longhorn, which generates its own correctly sized PVs from the local storage.

After that, everything reports exactly as it should.
