
config: make sure minimum servers is always >= 0 #158

Closed

Conversation

iameli (Contributor) commented Dec 14, 2016

I was setting up a new cluster and intended to create all of my worker nodes using node pools, so I had a workerCount: 0 line in there. This all works, except it tries to set the ASG's minSize to -1, and CloudFormation complains. This fixes that.

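For context, a minimal sketch of the stack template fragment involved, assuming the worker ASG's MinSize was derived as workerCount - 1 (the resource name is illustrative, not taken from kube-aws's actual template):

# Illustrative fragment rendered with workerCount: 0, before this fix
Resources:
  Workers:                  # hypothetical resource name
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      MinSize: "-1"         # workerCount - 1; CloudFormation rejects negative values
      DesiredCapacity: "0"  # workerCount maps straight to desired capacity

With the fix, the derived MinSize is clamped so it never drops below zero.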
codecov-io commented Dec 14, 2016

Current coverage is 69.78% (diff: 0.00%)

Merging #158 into master will decrease coverage by 0.12%

@@             master       #158   diff @@
==========================================
  Files             5          5          
  Lines          1110       1112     +2   
  Methods           0          0          
  Messages          0          0          
  Branches          0          0          
==========================================
  Hits            776        776          
- Misses          249        250     +1   
- Partials         85         86     +1   

Powered by Codecov. Last update 56f08d5...4b809dd

iameli (Contributor, Author) commented Dec 14, 2016

Closed in favor of #142.

iameli closed this Dec 14, 2016
mumoshu (Contributor) commented Dec 14, 2016

@iameli Thanks for trying kube-aws 👍
Could I ask for more details about your use-case?
Which deployment option have you chosen for your node pool: ASG or Spot Fleet?

If you've chosen ASG, workerCount is still required in order to instruct the ASG to set its desired capacity:

workerCount: 1

If you've chosen Spot Fleet, workerCount is not required, so you can keep it commented out (#workerCount: 1) while adding a spotFleet definition like:

#workerCount: 1

worker:
  spotFleet:
    targetCapacity: 10
    # *snip*

Also, regardless of how kube-aws works as of now, Node Pool is still an experimental feature, so there is plenty of room for improvement and there may even be some friction for users.
Please don't hesitate to submit issues describing your experiences, inconveniences, and/or suggestions!

mumoshu (Contributor) commented Dec 14, 2016

Anyway, this and @c-knowles's fix both resolve the problem that arises when you try to set workerCount to zero. Thanks to both of you for the fix!

I just want to know exactly why you tried to do so in the first place, so that I can uncover the hidden variables behind it, which may contribute to improving kube-aws, and hence your experience, in the future.

iameli (Contributor, Author) commented Dec 14, 2016

I have one cluster, test-cluster, with two node pools, test-cluster-mem and test-cluster-cpu. As their names imply, one has high-memory instances and the other has high-CPU instances.

Each node pool has workerCount: 3, so each has three workers. But the main cluster.yaml file has workerCount: 0, so the main cluster itself doesn't have any workers associated with it. I like this because I can add and remove node pools and completely change my instance types without changing the settings of the cluster itself.
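A minimal sketch of that layout, assuming one cluster.yaml per node pool (the paths and instance types are illustrative, not taken from the thread):

# cluster.yaml -- main cluster, control plane only
clusterName: test-cluster
workerCount: 0

# node-pools/test-cluster-mem/cluster.yaml (illustrative path)
workerCount: 3
workerInstanceType: r3.large    # assumed high-memory type

# node-pools/test-cluster-cpu/cluster.yaml (illustrative path)
workerCount: 3
workerInstanceType: c4.large    # assumed high-CPU type

Swapping instance types then only touches the node pool definitions, never the main cluster.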

mumoshu (Contributor) commented Dec 14, 2016

@iameli Understood!

That's definitely a valid use-case, one I had imagined and intended to support but hadn't tested myself yet.
Thanks for revealing a clear path to achieving it 👍

cknowles (Contributor)

Great, that's definitely the same outcome (a worker count of zero in the main cluster definition). Decoupling the node pools makes it quite a bit more flexible.

mumoshu added this to the v0.9.2-rc.5 milestone Dec 14, 2016