BUG: Pool creation name conflict when Expanding Tenant #1626
Comments
I even tried the manual creation with this command: kubectl minio tenant expand test --pool pool-0 --servers 4 --volumes 4 --capacity 12Gi --namespace test --storage-class directpv-min-io With the pool name I chose (to avoid reusing "pool-1"), it doesn't seem to have any effect on the tenant, since I don't see it restart or provision the pods and volumes. EDIT: Out of curiosity, I started again from scratch. For the second expansion of the tenant (after the "pool-1" creation and the deletion of "pool-0"), I avoided the Operator Console web interface and created the new pool directly with the following command, which works: kubectl minio tenant expand test --pool pool-0 --servers 4 --volumes 4 --capacity 48Gi --namespace test --storage-class directpv-min-io The tenant restarts and creates "pool-0", whose pods and volumes come up. |
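One quick way to check whether an expansion actually produced anything is to filter the pods by their pool label; a minimal sketch, assuming the tenant is named test in the test namespace and that its pods carry the v1.min.io/tenant and v1.min.io/pool labels referenced later in this thread:
# List the pods of one pool of the tenant (hypothetical label values).
$ kubectl get pods -n test -l v1.min.io/tenant=test,v1.min.io/pool=pool-0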
@bobacarOrpheo How do you remove the "pool-0"? |
@jiuker Yes, as I said, I tried to remove the "pool-0" in multiple ways: |
OK, it seems it was edited like |
When you add the same pool name, the Operator will log errors in a loop, so it can't do anything. @bobacarOrpheo |
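If it helps to confirm this, the looping error should be visible in the operator logs; a minimal sketch, assuming the default minio-operator namespace and deployment name created by kubectl minio init:
# Tail the operator logs to see the repeated pool-name error.
$ kubectl logs -n minio-operator deployment/minio-operator --tail=50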
If you want to fix this, there is no way, unless you rename it back via the edit. But the data that has been stored will be gone. @bobacarOrpheo |
No problem @jiuker. Now, when we click on Expand Tenant from the web interface of the Operator, it will automatically assign a name that is not already used, right? Do you know how I will be able to update to get the fix applied later? For the moment I'm using the manual way with this command (pool-0 here is different from the already active pool named pool-1): kubectl minio tenant expand test \
  --pool pool-0 \
  --servers 4 \
  --volumes 4 \
  --capacity 36Gi \
  --namespace test \
  --storage-class directpv-min-io |
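Before choosing a --pool value by hand, it can help to list the pool names already declared in the tenant spec so the new one does not collide; a minimal sketch, assuming the tenant test in the test namespace:
# Print the names of all pools currently declared in the tenant spec.
$ kubectl get tenant test -n test -o jsonpath='{.spec.pools[*].name}'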
@bobacarOrpheo If you want the fix, the name will be given |
I was learning about and reproducing this issue; my steps:
$ mc admin decommission status myminio --insecure
┌─────┬───────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────────┬────────┐
│ ID │ Pools │ Capacity │ Status │
│ 1st │ https://myminio-pool-0-{0...3}.myminio-hl.tenant-lite.svc.cluster.local/export{0...1} │ 121 GiB (used) / 2.3 TiB (total) │ Active │
│ 2nd │ https://myminio-pool-1-{0...3}.myminio-hl.tenant-lite.svc.cluster.local/export{0...1} │ 121 GiB (used) / 2.3 TiB (total) │ Active │
└─────┴───────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────┴────────┘

$ mc admin decommission start myminio/ https://myminio-pool-0-{0...3}.myminio-hl.tenant-lite.svc.cluster.local/export{0...1} --insecure
Decommission started successfully for `https://myminio-pool-0-{0...3}.myminio-hl.tenant-lite.svc.cluster.local/export{0...1}`.

$ mc admin decommission status myminio --insecure
┌─────┬───────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────────┬──────────┐
│ ID │ Pools │ Capacity │ Status │
│ 1st │ https://myminio-pool-0-{0...3}.myminio-hl.tenant-lite.svc.cluster.local/export{0...1} │ 121 GiB (used) / 2.3 TiB (total) │ Complete │
│ 2nd │ https://myminio-pool-1-{0...3}.myminio-hl.tenant-lite.svc.cluster.local/export{0...1} │ 121 GiB (used) / 2.3 TiB (total) │ Active │
└─────┴───────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────┴──────────┘
$ mc admin decommission status myminio --insecure
┌─────┬───────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────────┬────────┐
│ ID │ Pools │ Capacity │ Status │
│ 1st │ https://myminio-pool-1-{0...3}.myminio-hl.tenant-lite.svc.cluster.local/export{0...1} │ 121 GiB (used) / 2.3 TiB (total) │ Active │
└─────┴───────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────┴────────┘
The tenant spec then ends up with two pools sharing the same name:
pools:
- name: pool-1
  resources: {}
  servers: 4
  volumeClaimTemplate:
    metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
    status: {}
  volumesPerServer: 2
- affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: v1.min.io/tenant
            operator: In
            values:
            - myminio
          - key: v1.min.io/pool
            operator: In
            values:
            - pool-1
        topologyKey: kubernetes.io/hostname
  name: pool-1 |
Yes, that is correct: data is never lost, PVCs are persisted. All you need to do is put pool-0 back in the YAML of the tenant spec, so you are correct. The point here is that there is a workaround by just editing the YAML, and all is going to be good. But the issue I don't like, and that I agree has to be fixed at some point, is that we shouldn't assign or allow the same name for two pools after decommission, as this is not correct. |
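As a concrete illustration of that workaround, the rename can also be applied with a JSON patch instead of an interactive edit; a minimal sketch, assuming the duplicate is the second entry (index 1) of spec.pools in the myminio tenant from the tenant-lite namespace used above:
# Rename the duplicate second pool entry back to pool-0 (adjust the index and name as needed).
$ kubectl patch tenant myminio -n tenant-lite --type=json \
    -p='[{"op": "replace", "path": "/spec/pools/1/name", "value": "pool-0"}]'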
@cniackz So the PR protects against that, but it can't protect against the console Edit. |
Oh OK, got it; one step at a time. If the PR protects against that, then we have half of the equation resolved. |
We can consult with @dvaldivia or @bexsoft on this. The right solution is that the UI should check the existing pool names and automatically assign the next proper name for this to work. |
No, thank you for the quick fix @jiuker 👍 |
That's right @cniackz, it's a UI issue. We just need to check; that's enough. |
You're welcome |
Expected Behavior
When expanding a tenant, the new pool should get a name different from the other pools, via incrementation, so that the MinIO tenant restarts and proceeds with the pool's creation.
I also suggest adding the possibility of deleting a pool directly inside the Pools interface.
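The incrementation expected above could be approximated from the client side while waiting for a fix; a minimal sketch, assuming the tenant test in the test namespace, that keeps counting until it finds an unused pool-N name:
# Hypothetical helper: find the next unused pool-N name before expanding.
existing=$(kubectl get tenant test -n test -o jsonpath='{.spec.pools[*].name}')
i=0
while echo "$existing" | grep -qw "pool-$i"; do
  i=$((i + 1))
done
echo "next free pool name: pool-$i"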
Current Behavior
When expanding the tenant with a new pool, there is no effect, since the new pool "pool-1" has the same name as another active pool, "pool-1".
If I then try to add yet another pool on top of that, this one gets a different name, "pool-2", which should make the tenant restart.
But that restart never occurs, since the operator is stuck on the duplicate name.
Possible Solution
My workaround was to avoid the duplicate "pool-1" name that breaks the expansion of the tenant.
To do that, instead of decommissioning "pool-0", I decommission "pool-1" and delete it once the decommission is completed.
That way I have no conflict.
This approach works for the PoC I'm doing, but if I later want to decommission "pool-0" it may become a problem, since the "pool-1" name would be reused again when expanding the tenant.
Steps to Reproduce (for bugs)
I built the nodes with these commands:
Once in the Operator Console web interface, I first create a tenant named test with 4 servers, 36 Gi in total, and 1 volume per server.
That way I obtain my first pool, named "pool-0".
After creating a bucket where I upload one file, I go back to the Operator Console web interface.
In the Pools section, I click on Expand Tenant.
There I set Number of Servers to 4, Volume Size to 9 Gi, Volumes per Server to 1, and I choose directpv-min-io as the Storage Class.
That way I obtain my second pool, named "pool-1".
I decommission "pool-0", then delete it once the decommission is completed, using several different approaches (an equivalent patch command is sketched after this list):
Edit the tenant configuration inside the Operator Console with the pen icon at the top right
Get the tenant.yaml with this command:
Then I remove the corresponding pools block, from - affinity down to volumesPerServer, in the YAML, along with several other things, to finish with this:
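For completeness, an equivalent way to drop the decommissioned pool entry without hand-editing the YAML is a JSON patch; a minimal sketch, assuming the decommissioned pool is the first entry (index 0) of spec.pools for the tenant test in the test namespace:
# Remove the first pool entry from the tenant spec (adjust the index as needed).
$ kubectl patch tenant test -n test --type=json \
    -p='[{"op": "remove", "path": "/spec/pools/0"}]'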
Context
I'm trying to find a way to expand the storage size of a tenant without having to create a number of additional servers just to accumulate capacity.
Your Environment
A Kubernetes cluster from OVH composed of 8 nodes, each with a 10 GB HDD attached.
minio-operator version: 5.0.4
Operating system (uname -a): Arch-Linux