
Add optional tags to AWS deployments #729

Closed
starkers opened this issue Oct 26, 2016 · 13 comments
Labels
chunk/tags lifecycle/rotten
Milestone

Comments

@starkers

I would like to add additional tags to a k8s cluster that are used at an organisational level for tracking resources.

Examples would be:

  • owner: starkers
  • environment: develop
  • project: awesome

Currently this can be done by modifying the cluster after kops has deployed it, but then I can no longer run kops again because it would undo the tags.

Initially I think this is essential on EC2 instances, but if these tags could also be applied to the ELB and other resources it would be very handy.

@justinsb
Member

@starkers have you seen: https://github.com/kubernetes/kops/blob/master/docs/labels.md

I like the idea of applying this to other resources also. There are two types of "other resources":

  • resources created & managed by kops. Should be relatively easy to add.
  • resources dynamically created by k8s (EBS volumes & ELBs primarily). A little trickier to add, as this would need to be done by k8s. We do apply the KubernetesCluster tag, and I think we have an issue to support more.
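
For example, here is a rough sketch of the cluster-level cloudLabels described in that doc (the cluster name and tag values are just placeholders; check labels.md for the exact field placement in your kops version):

```yaml
# excerpt from `kops edit cluster mycluster.example.com` (hypothetical name)
spec:
  # cloudLabels become AWS tags on the resources kops itself creates
  cloudLabels:
    owner: starkers
    environment: develop
    project: awesome
```

followed by a kops update cluster to push the tags out.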

@starkers
Author

Ahh cool, so for now 'labels' will do most of what I need (thanks)

I will have a look at the kubernetes issues to see about the resources it creates.

It would be nice to be able to do this at cluster creation time, because I'm planning to put kops in a pipeline (unless something like this already exists?)

In terms of functionality all of this is utterly trivial, but I think it would be a pretty cool thing to tell someone looking at k8s+AWS that they will have full accountability out of the box. :-)

@justinsb added this to the 1.5.1 milestone Dec 28, 2016
@justinsb
Member

You can kops create -f in kops 1.5 (soon to be released) if you are willing to forgo all the wizardly calculation that kops does for you. If you're using a script this might be pretty easy. Check out https://github.com/kubernetes/kops/blob/master/tests/integration/minimal/in-v1alpha2.yaml
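
To make the create -f route concrete, a heavily trimmed sketch of such a manifest might look like the following (the apiVersion, names, and omitted fields are assumptions; compare against in-v1alpha2.yaml for the real required fields):

```yaml
# cluster.yaml -- fed to `kops create -f cluster.yaml` (trimmed sketch)
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: mycluster.example.com
spec:
  cloudLabels:          # organisation-wide AWS tags
    owner: starkers
    environment: develop
    project: awesome
  # ...networking, etcdClusters, subnets etc. as in in-v1alpha2.yaml...
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
  labels:
    kops.k8s.io/cluster: mycluster.example.com   # associates the IG with the cluster
spec:
  role: Node
  # ...machineType, minSize, maxSize, subnets...
```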

Alternatively, you can kops create cluster, kops edit to apply the labels, and then kops update.

But I think what you're asking for is either cluster-level labels, or a CLI flag on kops create cluster where you can specify the labels?

@ahawkins
Contributor

> But I think what you're asking for is either cluster-level labels, or a CLI flag on kops create cluster where you can specify the labels?

Personally I'd like a --labels option on the CLI.

@hridyeshpant

@justinsb
When we are creating a Service with "type": "LoadBalancer", we need to apply tags to the load balancer.
Our org runs a cleanup process, so any new resource that doesn't have a valid tag gets deleted.
Is there a way we can tag the LoadBalancer? I tried cloudLabels, but it is not allowed when creating the Service.

@krisnova
Contributor

@hridyeshpant this is interesting.

What error are you getting from cloudLabels? ELBs actually DO support tagging, so I'm curious what your steps are. Can you share replication steps please?

@hridyeshpant

@kris-nova I am not getting any error, but the tag is not getting applied to the ELB:

  1. kops edit ig --name=kubetnets.us-west-2.test.XYZ.com master-us-west-2c
  2. Add the following cloudLabels:
       cloudLabels:
         Portfolio: Shared - Internal Tools
  3. Run kops update cluster $NAME --yes
  4. Run kops rolling-update cluster $NAME --yes
  5. kubectl create -f service.json
  6. The service is created with an ELB, but no tags are applied except the two default tags.

I also put cloudLabels at the nodes instance group level, but still no luck.

Do I need to put cloudLabels at the Service level? If yes, which section do I need to put it in? I tried it under spec but got an error:
error: error validating "service.json": error validating data: found invalid field cloudLabels for v1.ServiceSpec; if you choose to ignore these errors, turn validation off with --validate=false

@hridyeshpant

I tried to put cloudLabels at the cluster level:

kops edit cluster kubetnets.us-west-2.dev.XYZ.com

but that configuration is not getting saved, even after I ran update with the --yes option.

@pierreozoux
Contributor

pierreozoux commented May 30, 2017

I just tried adding cloudLabels and running update, but my ELBs are still not tagged.

Is it implemented yet? What is the status of this issue?

FYI https://github.com/kubernetes/kubernetes/pull/45932/files :)

@starkers
Author

cloudLabels is really in the realm of EC2 host labels. For changing tags on an ELB, the AWS cloud provider now seems to have the ability to tag ELBs (https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/aws.go#L138).

I actually tried that today but never got round to checking whether it worked (to be honest I haven't checked when it was even added to Kubernetes, so it may require a more recent version of k8s as well).

Hope this helps anyway
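
For completeness: assuming the annotation that provider code adds is service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags (double-check the constant in the aws.go linked above for your k8s version), tagging the ELB from the Service itself would look roughly like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app    # hypothetical service
  annotations:
    # extra AWS tags for the cloud provider to put on the ELB it creates
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "owner=starkers,environment=develop,project=awesome"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```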

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Dec 25, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jan 24, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
