
Reduce default volume size on example files #155

Closed · vmwiz opened this issue Jun 24, 2020 · 2 comments
Labels: question (Further information is requested)

Comments

vmwiz commented Jun 24, 2020

I just experienced dangerous behavior while trying to deploy examples/minioinstance-mcs.yaml: 4 nodes with 4 volumes of 1 TB each were spun up on my k8s cluster.

Obviously I should have checked the YAML before hitting apply, but are you sure provisioning 16 TB of storage is appropriate for an example file? Imagine the consequences on a pay-as-you-go service.

Expected Behavior

For a working example, volumes of 1Gi look more than enough for testing functionality; users can still adapt the size to their needs. At the end of the day, this is only an example file.

Current Behavior

All example files are configured to provision 16 persistent volumes, totaling 16 TB.

Possible Solution

Revert #121, or explain the reasoning behind this choice (the PR discussion is empty).

Steps to Reproduce (for bugs)

kubectl apply -f minioinstance-mcs.yaml
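
To see what a manifest will request before creating anything, a dry run can help. A minimal sketch, assuming a kubectl recent enough to support --dry-run=client (older releases use the bare --dry-run flag):

    # Render the objects locally without creating them, then show the requested sizes
    kubectl apply --dry-run=client -o yaml -f minioinstance-mcs.yaml | grep 'storage:'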

Context

I just wanted to quickly spin up a small instance to check the changes since my last try of the operator.

Regression

The change was made in #121.

Your Environment

  • Version used (minio-operator): 2.0.6
  • Environment name and version (e.g. kubernetes v1.17.2): 1.17.6
  • Server type and version: bare metal
  • Operating System and version (uname -a): RHEL 7.8
  • Link to your deployment file: N/A
kerneltime (Contributor) commented Jun 24, 2020

@vmwiz that is just an example; you can edit it, setting a value that works for your deployment.
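
For instance, the storage request in the example's volume claim template can be lowered. A minimal sketch, assuming the field layout used by the example file (the exact nesting may differ between operator versions):

    # Excerpt of a volumeClaimTemplate similar to the one in minioinstance-mcs.yaml
    volumeClaimTemplate:
      metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi  # lowered from 1Ti for testing; size to your needs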

kerneltime added the question label on Jun 24, 2020
harshavardhana (Member) commented, quoting the issue above:

Nothing dangerous about it; it is the default value, and we expect that you have at least this much for any serious usage of MinIO. The defaults are for serious deployments; if you want to use a lower value for testing, you are free to set it.

Always read the docs; it is usually a bad idea to copy-paste things from the internet without verification.

jmontleon added a commit to jmontleon/operator that referenced this issue on Jul 23, 2024:
Fixes minio#155

Signed-off-by: Jason Montleon <jmontleo@redhat.com>