
Add scale instancegroup command #2765

Closed
wants to merge 2 commits

Conversation

gianrubio
Contributor

@gianrubio gianrubio commented Jun 16, 2017

I opened this PR to make it easy to scale nodes, as suggested in #1892.

It introduces a scale command, so the user can run scale ig --name cluster.kops.ddy.systems nodes --replicas=4 to change the instance group size. It runs faster than a rolling update because it only changes the max/desired size of the instance group.

I'm open to discussion!



@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jun 16, 2017
@k8s-ci-robot
Contributor

Hi @gianrubio. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with @k8s-bot ok to test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jun 16, 2017
@gianrubio gianrubio force-pushed the cmd-scale branch 2 times, most recently from 2ca448a to 679ade2 Compare June 16, 2017 19:22
@chrislovecnm
Contributor

@k8s-bot ok to test

@k8s-ci-robot k8s-ci-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jun 16, 2017
@chrislovecnm
Contributor

You need to run make gen-cli-docs to create the documentation.

return fmt.Errorf("error scaling autoscaling group %q: %v", asgName, err)
}

// TODO: is it important to wait for this command to take effect, or is printing a message to the user enough?
Member

I think printing a message is a great idea, rather than waiting.

@justinsb
Member

So I think this looks really good - can you make gen-cli-docs so we can merge it? I think that's all that is missing!

@justinsb justinsb added this to the backlog milestone Sep 25, 2017
@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Sep 29, 2017
@gianrubio
Contributor Author

gianrubio commented Sep 29, 2017

I added more validations and also reviewed the docs.

Here's the output:

$ ./.build/local/kops  scale ig nodes --replicas=0 && kops get ig nodes -o yaml
Using cluster from kubectl context: kops

I0929 08:25:49.677842   81183 instancegroups.go:493] Scaling InstanceGroup "nodes" from 5 to 0
I0929 08:25:49.677875   81183 instancegroups.go:499] Changing min instances to 0
I0929 08:25:49.844774   81183 scale_instancegroup.go:127] Successful scaled! It will take few minutes to complete this operation...
Using cluster from kubectl context: kops

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-08-02T13:46:39Z
  labels:
    kops.k8s.io/cluster: kops
  name: nodes
spec:
  image: 595879546273/CoreOS-stable-1465.8.0-hvm
  machineType: t2.xlarge
  maxSize: 5
  minSize: 0
  role: Node
  subnets:
  - eu-west-1a
  - eu-west-1b
giancarlo.rubio$

$ ./.build/local/kops  scale ig master-eu-west-1a --replicas=1 && kops get ig master-eu-west-1a -o yaml
Using cluster from kubectl context: kops

I0929 08:27:11.740500   81196 instancegroups.go:493] Scaling InstanceGroup "master-eu-west-1a" from 1 to 1
I0929 08:27:11.740527   81196 instancegroups.go:499] Changing min instances to 1
I0929 08:27:11.890719   81196 scale_instancegroup.go:127] Successful scaled! It will take few minutes to complete this operation...
Using cluster from kubectl context: kops

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-08-02T13:46:39Z
  labels:
    kops.k8s.io/cluster: kops
  name: master-eu-west-1a
spec:
  image: 595879546273/CoreOS-stable-1465.8.0-hvm
  machineType: t2.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - eu-west-1a

@gianrubio gianrubio changed the title WIP: Add scale instancegroup command Add scale instancegroup command Sep 29, 2017
@k8s-github-robot

@gianrubio PR needs rebase

@k8s-github-robot k8s-github-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Oct 1, 2017
defReplicas = -1
)

// TODO add ability to add-nodes rather than scale
Member

Would this be like kops scale ig nodes +2? That would indeed be cool (but let's get this PR merged!)

Run: func(cmd *cobra.Command, args []string) {

if len(args) == 0 {
exitWithError(fmt.Errorf("Specify name of instance group to edit"))
Member

Nit: this should probably say "to scale" rather than "to edit".

return fmt.Errorf("name is required")
}

if c.Replicas == defReplicas {
Member

Nit: It's probably better to use the Changed field on the flag (doing this in the cobra.Command Run function instead) (spf13/cobra#23). The reason being that then the -1 default value won't appear in the help!


scale_instancegroup_example = templates.Examples(i18n.T(`
# Scale an ig, fixing it to 2 replicas
kops scale ig --name cluster.kops.ddy.systems nodes --replicas=2
Member

Either $NAME or cluster.example.com

@@ -17,14 +17,15 @@ limitations under the License.
package simple

import (
"net/url"
"strings"
Member

This is what triggered the rebase requirement!

glog.Infof("Scaling InstanceGroup %q instances size to: %v", group.ObjectMeta.Name, c.DesiredReplicas)

// decrease max cluster size
if c.DesiredReplicas < *cig.asg.MinSize {
Member

Ah - tricky! I'm thinking we should just set min and max both to the desired size. If you disagree, can you add a comment (here or in the code) to explain?


func (g *CloudInstanceGroup) Scale(cloud fi.Cloud) error {

c := cloud.(awsup.AWSCloud)
Member

I worry we might have broken this with some of the refactorings of this interface that have gone in recently... it should be easier now, but let me know if you need some pointers figuring out where things have moved to!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 8, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 10, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@chrislovecnm
Contributor

/lifecycle frozen
/remove-lifecycle rotten

@chrislovecnm chrislovecnm reopened this Mar 14, 2018
@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Mar 14, 2018
@kubernetes kubernetes deleted a comment from k8s-github-robot Mar 14, 2018
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: gianrubio
To fully approve this pull request, please assign additional approvers.
We suggest the following additional approver: chrislovecnm

Assign the PR to them by writing /assign @chrislovecnm in a comment when ready.

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@justinsb justinsb modified the milestones: backlog, 1.11 May 26, 2018
@justinsb justinsb modified the milestones: 1.11, 1.12, backlog Nov 9, 2018
@emmekappa

any updates on this?

@k8s-ci-robot
Contributor

@gianrubio: The following tests failed, say /retest to rerun them all:

Test name                     Commit   Rerun command
pull-kops-verify-gomod        f522dd7  /test pull-kops-verify-gomod
pull-kops-verify-generated    f522dd7  /test pull-kops-verify-generated
pull-kops-verify-staticcheck  f522dd7  /test pull-kops-verify-staticcheck

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@justinsb
Member

I'm going to go ahead and close this PR. If you do want to revive it @gianrubio it would be a great addition :-)

/close

@k8s-ci-robot
Contributor

@justinsb: Closed this PR.

In response to this:

I'm going to go ahead and close this PR. If you do want to revive it @gianrubio it would be a great addition :-)

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

Labels
cncf-cla: yes · lifecycle/frozen · needs-rebase · size/L · waiting-for-input
7 participants