Configuration file instead of command line switches #267

Closed
chrislovecnm opened this issue Aug 4, 2016 · 17 comments

@chrislovecnm
Contributor

Any thoughts on driving this off of a yaml file, rather than switches?

@justinsb
Member

justinsb commented Aug 5, 2016

We do, I think!

Just like with k8s, there is an underlying spec for the cluster and for instance groups. e.g. https://github.com/kubernetes/kops/blob/master/upup/pkg/api/cluster.go

When you run kops edit cluster or kops edit ig, you are editing the actual spec.

The CLI switches act as shortcuts to make it easier to create your seed config.

Or are you saying we should allow you to just specify a yaml file, like kops create -f <clusterspec>? That would also be good, and not terribly hard, if so!
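For concreteness, a seed file for that kops create -f <clusterspec> idea might look something like the fragment below. This is only a sketch: the field names are taken from the Cluster spec linked above and from the example later in this thread, and the apiVersion and values shown are illustrative, not a complete or authoritative spec.

```yaml
# clusterspec.yaml (illustrative fragment; see upup/pkg/api/cluster.go for the real schema)
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: mycluster.example.com
spec:
  cloudProvider: aws
  kubernetesVersion: 1.5.2
  networkCIDR: 172.20.0.0/16
```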

@justinsb justinsb added this to the backlog milestone Aug 5, 2016
@chrislovecnm
Contributor Author

The second case, kops create -f my_cluster.yaml

justinsb added a commit to justinsb/kops that referenced this issue Aug 5, 2016
@justinsb
Member

justinsb commented Aug 8, 2016

Also, it would be good to support something like kubectl replace or kubectl apply. A use case that came up in sig-aws was changing the default subnet CIDRs.

@chrislovecnm
Contributor Author

We need this documented!!!

@krisnova
Contributor

I'm still confused about what we want/need here... Assigning to you @chrislovecnm - please send a PR with docs/use cases

@krisnova krisnova assigned chrislovecnm and unassigned justinsb Oct 27, 2016
@Eyjafjallajokull

Eyjafjallajokull commented Nov 9, 2016

Hi, just wanted to note that this issue is very important in my use case. I need to automate cluster creation with custom options that are not editable via command line switches (spot price, CIDRs). Is there currently any way to script the whole process of cluster creation? Right now I am blocked, because the only way to change the spot price is by editing it in an editor.
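One possible workaround for scripting this, sketched under the assumption that the spot price lives in the instance group spec as maxPrice: export the spec to a file, edit it non-interactively, and push it back, instead of going through the interactive editor.

```shell
# In a real run you would export the live spec instead of writing it by hand:
#   kops get ig nodes --name "$CLUSTER_NAME" -o yaml > ig.yaml
# For illustration, a minimal stand-in fragment (hypothetical values):
cat > ig.yaml <<'EOF'
spec:
  machineType: m3.medium
  maxPrice: "0.05"
EOF

# Non-interactive edit: change the spot price without opening $EDITOR.
sed -i 's/maxPrice: .*/maxPrice: "0.07"/' ig.yaml
grep maxPrice ig.yaml   # confirms the new value

# Then push the edited spec back to the state store:
#   kops replace -f ig.yaml
```

The kops commands are left as comments because they need a real cluster and state store; the sed step is the part that replaces the manual editor session.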

@krisnova
Contributor

krisnova commented Nov 9, 2016

So this feature is already coded into kops https://github.com/kubernetes/kops/blob/master/cmd/kops/create.go#L44

I think we have a little more effort to put into documenting it, and providing an example YAML file with all possible configurations.

I can try to get one out soon, if nobody else wants to take it.

@blakebarnett

Just reiterating what I said in the channel: I use this currently. My main wish is that I could follow the same pattern with kops update cluster -f; currently it's required to duplicate the changes both in the yaml files we store in git and in kops edit cluster|ig.

@vlerenc

vlerenc commented Mar 19, 2017

Any update on this? Creation by file/spec would be highly appreciated for headless/automated deployment of Kubernetes clusters through kops. It was written above that the code is already present, but it's not yet released?

@krisnova
Contributor

krisnova commented Mar 19, 2017

Hey @vlerenc

So as it stands today on 1.5.3 a user can certainly use kops create -f $CONFIG.

A baseline config can be found by using a basic kops edit cluster command as in:

kops edit cluster $NAME

My example for private topology looks like:

```yaml
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2017-03-19T13:34:45Z"
  name: demo.nivenly.com
spec:
  api:
    loadBalancer:
      type: Public
  channel: stable
  cloudProvider: aws
  configBase: s3://nivenly-state-store/demo.nivenly.com
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-west-1a
      name: a
    name: main
  - etcdMembers:
    - instanceGroup: master-us-west-1a
      name: a
    name: events
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.5.2
  masterInternalName: api.internal.demo.nivenly.com
  masterPublicName: api.demo.nivenly.com
  networkCIDR: 172.20.0.0/16
  networking:
    weave: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: us-west-1a
    type: Private
    zone: us-west-1a
  - cidr: 172.20.0.0/22
    name: utility-us-west-1a
    type: Utility
    zone: us-west-1a
  topology:
    bastion:
      bastionPublicName: bastion.demo.nivenly.com
    dns:
      type: Public
    masters: private
    nodes: private
```

kops update -f

Thanks for the suggestion @blakebarnett!

Now this frankly is a great idea. I can't believe we don't have this yet. I will push for this in the next release(s) and see if I can't find a volunteer to help code it. I know @geojaz is working on publicizing kops create -f in general, so maybe he would be interested in kops update -f as well? 😉

It will be tricky managing the deltas, but thanks to the way kops handles its internal models this feature should fit in nicely.

@justinsb what are your thoughts on this as far as implementation goes? This seems like low hanging fruit but want to bounce it off you before I get too carried away here.

I think we should close this issue as kops create -f is now in place, and open up an issue in favor of kops update -f.

@vlerenc

vlerenc commented Mar 19, 2017

Ah, great, it works like a charm. Thank you!

@blakebarnett

Also, you can use kops replace -f in place of kops update -f. There may be some more validation that could be done with something like kops update -f, but replace is working well for us.

@vlerenc

vlerenc commented Mar 20, 2017

Thanks, yes, that's what I was using so far (after a somewhat convoluted create step), but now create and replace both work from the same generated source, so that's really wonderful.

@max-lobur
Contributor

max-lobur commented May 26, 2017

It's still unclear to me how I can get the initial yaml template.
@kris-nova mentioned it can be obtained from kops edit cluster $NAME, but that requires an s3://<store>/<name>/config to be present. In other words, to get a template for kops create -f I need to issue kops create <flags>, then get a template with kops edit, and only then can I edit the template and use kops create -f. Is that correct?

@gambol99
Contributor

It would be nice to have an equivalent of kubectl apply -f ... that way you could drive your CI from a cluster and instancegroup spec file ... The problem with kops replace is that it won't create new resources if they're not there, or delete them when removed ...

@blakebarnett

@max-lobur that's correct. Kops does some subnet calculations and generates a fair amount of boilerplate. An initial create command with "close-enough" settings, followed by kops get cluster <name> -o yaml > cluster_name.yaml, is a pretty smooth process considering everything it's doing. I think there's also some in-flight work to make this a one-step process: #2954
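The two-step bootstrap described above can be sketched roughly as follows. This is only a sketch: the cluster name and zone are placeholders, the commands need a configured state store and cloud credentials, and flags vary between kops versions.

```shell
# 1. Generate the boilerplate spec in the state store with "close-enough" flags
#    (no --yes, so no cloud resources are created yet):
kops create cluster --zones us-west-1a demo.example.com

# 2. Export the generated spec to a file you can keep in git:
kops get cluster demo.example.com -o yaml > cluster_name.yaml

# 3. Edit cluster_name.yaml as needed, push it back, and apply:
kops replace -f cluster_name.yaml
kops update cluster demo.example.com --yes
```

From then on, edits can go through the git-stored YAML and kops replace -f rather than the interactive kops edit.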

@blakebarnett

@gambol99 I think those features are where they're headed with the kops server/controller features. Not sure when they'll land.


8 participants