Waiting for a minimum of 8 drives to come online (elapsed 12s) #1913
@sathishkumar-p turn off anonymous logging in the tenant deployment spec and share the actual logs.
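For reference, anonymous logging is controlled in the Tenant resource. A minimal sketch, assuming the Tenant CRD's `spec.logging` block (field names recalled from the operator CRD; verify against your operator version):

```yaml
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: myminio       # assumed tenant name, adjust to yours
  namespace: minio    # assumed namespace, adjust to yours
spec:
  logging:
    # false -> pod logs show real hostnames and paths instead of hashes
    anonymous: false
```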
Sure, let me do it.
Hello @harshavardhana,
1.- Was expansion performed using the Console UI?
2.- Was this tested successfully on a lower environment before moving to production?
3.- Please provide the pod logs for one pod from each of the original 3 pools, as well as for one pod from the expansion pool.
4.- The logs provided seem to be replication-related. However, replication was not mentioned in the original description. Note that there have been several important fixes to replication since minio RELEASE.2023-08-31T15-31-16Z. When do you plan to upgrade?
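The per-pool log collection asked for in 3.- can be sketched as a small shell loop. Pool names, namespace, and tenant name below are assumptions; adjust to your deployment. The sketch only prints the kubectl commands rather than running them:

```shell
# Sketch: print the kubectl commands to grab logs from one pod per pool.
# NS, TENANT, and the pool names are assumptions -- adjust to your tenant.
NS=minio
TENANT=myminio

gen_log_cmds() {
  for POOL in pool-0 pool-1 pool-2 pool-3; do
    # The first pod of each pool's StatefulSet is named <tenant>-<pool>-0
    echo "kubectl -n $NS logs ${TENANT}-${POOL}-0 -c minio > ${TENANT}-${POOL}-0.log"
  done
}

gen_log_cmds
```

Run the printed commands (or pipe the output to `sh`) to save one log file per pool for attaching to the issue.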
Hi @allanrogerr,
Node expansion was done using the tenant Helm chart ('A Helm chart for MinIO Operator').
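With that chart, pool expansion is typically done by appending a new entry to the `tenant.pools` list in the values and running `helm upgrade`, never by modifying existing pool entries. A hedged sketch, with illustrative sizes and key names based on the tenant chart's values layout (verify against your chart version):

```yaml
# values.yaml fragment -- pool names and sizes are hypothetical
tenant:
  pools:
    - name: pool-0          # existing pool: leave its fields untouched
      servers: 4
      volumesPerServer: 2
      size: 1Ti
    - name: pool-1          # new pool appended for expansion
      servers: 4
      volumesPerServer: 2
      size: 1Ti
```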
Hi @harshavardhana and @allanrogerr,
@sathishkumar-p Still pending is the following:
@allanrogerr,
@sathishkumar-p Does the new deployment keep restarting and getting stuck there?
Yes, it is restarting, @jiuker.
@allanrogerr and @harshavardhana
This is fixed in minio/minio#17979.
I hit the same problem; the log looks like:
API: SYSTEM()
Time: 09:15:25 UTC 01/08/2024
Error: Read failed. Insufficient number of drives online (*errors.errorString)
8: internal/logger/logger.go:258:logger.LogIf()
7: cmd/prepare-storage.go:254:cmd.connectLoadInitFormats()
6: cmd/prepare-storage.go:312:cmd.waitForFormatErasure()
5: cmd/erasure-server-pool.go:104:cmd.newErasureServerPools()
4: cmd/server-main.go:976:cmd.newObjectLayer()
3: cmd/server-main.go:718:cmd.serverMain.func9()
2: cmd/server-main.go:434:cmd.bootstrapTrace()
1: cmd/server-main.go:716:cmd.serverMain()
Waiting for a minimum of 8 drives to come online (elapsed 21m31s)
API: SYSTEM()
Time: 09:15:26 UTC 01/08/2024
Error: Read failed. Insufficient number of drives online (*errors.errorString)
8: internal/logger/logger.go:258:logger.LogIf()
7: cmd/prepare-storage.go:254:cmd.connectLoadInitFormats()
6: cmd/prepare-storage.go:312:cmd.waitForFormatErasure()
5: cmd/erasure-server-pool.go:104:cmd.newErasureServerPools()
4: cmd/server-main.go:976:cmd.newObjectLayer()
3: cmd/server-main.go:718:cmd.serverMain.func9()
2: cmd/server-main.go:434:cmd.bootstrapTrace()
1: cmd/server-main.go:716:cmd.serverMain()
Waiting for a minimum of 8 drives to come online (elapsed 21m32s)
[root@node-136 minio]# k get po -n minio
NAME READY STATUS RESTARTS AGE
myminio-pool-0-0 2/2 Running 0 17m
myminio-pool-0-1 2/2 Running 0 17m
myminio-pool-0-2 2/2 Running 0 17m
myminio-pool-0-3 2/2 Running 0 17m
minio version:
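For context on the "minimum of 8 drives" message: MinIO only brings an erasure set online once enough drives are present to satisfy quorum, and how many that is depends on the set size and parity. A minimal Python sketch of MinIO's documented default standard-parity table, for illustration only (the actual startup wait logic lives in cmd/prepare-storage.go):

```python
# Sketch: MinIO's documented default standard parity per erasure set size,
# and the read quorum it implies. Illustrative only -- not the exact code
# path behind "Waiting for a minimum of N drives to come online".

def default_parity(set_size: int) -> int:
    """Default EC parity for one erasure set of the given size."""
    if set_size <= 1:
        return 0
    if set_size <= 3:
        return 1
    if set_size <= 5:
        return 2
    if set_size <= 7:
        return 3
    return 4  # 8 or more drives per set

def read_quorum(set_size: int) -> int:
    """Drives that must be online to read: data shards = total - parity."""
    return set_size - default_parity(set_size)

# Example: 4 servers x 4 drives = one 16-drive set -> parity 4,
# so 12 drives must be online before reads can proceed.
print(default_parity(16), read_quorum(16))
```

If the new pool's pods cannot reach each other (or their PVs never bind), the online drive count stays below quorum and the wait message repeats indefinitely.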
Yeah, let me try with the latest version of MinIO. Also, should I update the operator version?
Upgrading the operator is important as well.
Sure, I will upgrade the operator, but is that the issue? Because @allanrogerr is also facing the issue with the latest MinIO version.
@JasperWey What log is this? Please provide all pod logs. @sathishkumar-p
Replacing 2.- above: for each sts, get the output.
Several things could be wrong, e.g. no PVs available. The logs should tell you this. If you are unable to get this running, it would be easier to use the tenant console to perform the expansion.
Hello @allanrogerr,
From your steps to reproduce:
Please provide the exact steps on how you did this. I will attempt to reproduce your issue.
tenant spec: Do you think this could be a problem with the PVCs?
@sathishkumar-p Sorry for the late response. The spec provided gives no clues as to what your issue is. When I mention exact steps, I need to know what you're working with. Otherwise, I can make a simple walkthrough showing how to achieve what you're attempting using my own methods, probably on a smaller setup.
This should be fixed in the latest MinIO release. Please upgrade the image; a similar issue was addressed in the MinIO server. Thanks, closing this for now.
Hi,
I have deployed MinIO in distributed mode using the MinIO Operator. When I try to expand the pool, I get the following error:
Waiting for a minimum of 8 drives to come online (elapsed 12s)
I waited nearly 3 hours for all pods to sync, but it did not work, so I reverted to the previous pool size.
Expected Behavior
The extra pool should be added and the size of the MinIO cluster should expand.
Current Behavior
Expansion failed because the minimum number of drives did not come online.
Steps to Reproduce (for bugs)
Context
This is my production environment. Now we are reaching the size limit. We are unable to expand it.
Regression
No
Your Environment
Version used (minio-operator): quay.io/minio/operator:v5.0.6
Operating System and version (uname -a):
Complete Error Log:
{"level":"ERROR","errKind":"ALL","time":"2023-12-17T04:43:31.98244083Z","api":{"name":"SYSTEM","args":{"bucket":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","object":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5"}},"remotehost":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","error":{"message":"*fmt.wrapError","source":["internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()","internal/logger/logonce.go:149:logger.LogOnceIf()","internal/rest/client.go:319:rest.(*Client).Call()","cmd/storage-rest-client.go:167:cmd.(*storageRESTClient).call()","cmd/storage-rest-client.go:567:cmd.(*storageRESTClient).ReadAll()","cmd/format-erasure.go:391:cmd.loadFormatErasure()","cmd/format-erasure.go:327:cmd.loadFormatErasureAll.func1()","github.com/minio/pkg@v1.7.5/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()"]}}
{"level":"ERROR","errKind":"ALL","time":"2023-12-17T04:43:31.98268901Z","api":{"name":"SYSTEM","args":{"bucket":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","object":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5"}},"remotehost":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","error":{"message":"*fmt.wrapError","source":["internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()","internal/logger/logonce.go:149:logger.LogOnceIf()","internal/rest/client.go:319:rest.(*Client).Call()","cmd/storage-rest-client.go:167:cmd.(*storageRESTClient).call()","cmd/storage-rest-client.go:567:cmd.(*storageRESTClient).ReadAll()","cmd/format-erasure.go:391:cmd.loadFormatErasure()","cmd/format-erasure.go:327:cmd.loadFormatErasureAll.func1()","github.com/minio/pkg@v1.7.5/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()"]}}
{"level":"ERROR","errKind":"ALL","time":"2023-12-17T04:43:31.982752021Z","api":{"name":"SYSTEM","args":{"bucket":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","object":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5"}},"remotehost":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","error":{"message":"*fmt.wrapError","source":["internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()","internal/logger/logonce.go:149:logger.LogOnceIf()","internal/rest/client.go:319:rest.(*Client).Call()","cmd/storage-rest-client.go:167:cmd.(*storageRESTClient).call()","cmd/storage-rest-client.go:567:cmd.(*storageRESTClient).ReadAll()","cmd/format-erasure.go:391:cmd.loadFormatErasure()","cmd/format-erasure.go:327:cmd.loadFormatErasureAll.func1()","github.com/minio/pkg@v1.7.5/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()"]}}
{"level":"ERROR","errKind":"ALL","time":"2023-12-17T04:43:31.98295671Z","api":{"name":"SYSTEM","args":{"bucket":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","object":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5"}},"remotehost":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","error":{"message":"*fmt.wrapError","source":["internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()","internal/logger/logonce.go:149:logger.LogOnceIf()","internal/rest/client.go:319:rest.(*Client).Call()","cmd/storage-rest-client.go:167:cmd.(*storageRESTClient).call()","cmd/storage-rest-client.go:567:cmd.(*storageRESTClient).ReadAll()","cmd/format-erasure.go:391:cmd.loadFormatErasure()","cmd/format-erasure.go:327:cmd.loadFormatErasureAll.func1()","github.com/minio/pkg@v1.7.5/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()"]}}
{"level":"ERROR","errKind":"ALL","time":"2023-12-17T04:43:31.982946786Z","api":{"name":"SYSTEM","args":{"bucket":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","object":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5"}},"remotehost":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","error":{"message":"*fmt.wrapError","source":["internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()","internal/logger/logonce.go:149:logger.LogOnceIf()","internal/rest/client.go:319:rest.(*Client).Call()","cmd/storage-rest-client.go:167:cmd.(*storageRESTClient).call()","cmd/storage-rest-client.go:567:cmd.(*storageRESTClient).ReadAll()","cmd/format-erasure.go:391:cmd.loadFormatErasure()","cmd/format-erasure.go:327:cmd.loadFormatErasureAll.func1()","github.com/minio/pkg@v1.7.5/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()"]}}
{"level":"ERROR","errKind":"ALL","time":"2023-12-17T04:43:31.990843037Z","api":{"name":"SYSTEM","args":{"bucket":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","object":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5"}},"remotehost":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","error":{"message":"*fmt.wrapError","source":["internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()","internal/logger/logonce.go:149:logger.LogOnceIf()","internal/rest/client.go:319:rest.(*Client).Call()","cmd/storage-rest-client.go:167:cmd.(*storageRESTClient).call()","cmd/storage-rest-client.go:567:cmd.(*storageRESTClient).ReadAll()","cmd/format-erasure.go:391:cmd.loadFormatErasure()","cmd/format-erasure.go:327:cmd.loadFormatErasureAll.func1()","github.com/minio/pkg@v1.7.5/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()"]}}
{"level":"ERROR","errKind":"ALL","time":"2023-12-17T04:43:31.991756289Z","api":{"name":"SYSTEM","args":{"bucket":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","object":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5"}},"remotehost":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","error":{"message":"*fmt.wrapError","source":["internal/logger/logonce.go:118:logger.(*logOnceType).logOnceIf()","internal/logger/logonce.go:149:logger.LogOnceIf()","internal/rest/client.go:319:rest.(*Client).Call()","cmd/storage-rest-client.go:167:cmd.(*storageRESTClient).call()","cmd/storage-rest-client.go:567:cmd.(*storageRESTClient).ReadAll()","cmd/format-erasure.go:391:cmd.loadFormatErasure()","cmd/format-erasure.go:327:cmd.loadFormatErasureAll.func1()","github.com/minio/pkg@v1.7.5/sync/errgroup/errgroup.go:123:errgroup.(*Group).Go.func1()"]}}
{"level":"ERROR","errKind":"ALL","time":"2023-12-17T04:43:31.9971456Z","api":{"name":"SYSTEM","args":{"bucket":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","object":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5"}},"remotehost":"5e76d207cf4ab20866fdc03c83e8a0f4e8f458e880777956ec0bae4e9f23f6c5","error":{"message":"*errors.errorString","source":["internal/logger/logger.go:258:logger.LogIf()","cmd/prepare-storage.go:254:cmd.connectLoadInitFormats()","cmd/prepare-storage.go:312:cmd.waitForFormatErasure()","cmd/erasure-server-pool.go:103:cmd.newErasureServerPools()","cmd/server-main.go:957:cmd.newObjectLayer()","cmd/server-main.go:704:cmd.serverMain.func9()","cmd/server-main.go:423:cmd.bootstrapTrace()","cmd/server-main.go:702:cmd.serverMain()"]}}
{"level":"INFO","errKind":"","time":"2023-12-17T04:43:31.997294634Z","message":"Waiting for a minimum of 8 drives to come online (elapsed 12s)\n"}