[Feature Request]: Add ability to apply labels to individual nodes in a nodepool #903

Closed
maggie44 opened this issue Jul 25, 2023 · 3 comments
Labels
enhancement New feature or request

Comments

@maggie44
Contributor

maggie44 commented Jul 25, 2023

Description

In #894 there is a discussion around node pool limits. While originally thought to be 255, the limit turns out to be 50, due to Hetzner's limits on subnets.

This is problematic for those who scale their services via nodepools instead of nodes, as the limit is now much lower than originally anticipated. I, for example, add additional node pools so that I can attach labels, and those labels are then used to associate a deployment with a set of nodes. E.g.:

default = [
    {
      name        = "x1-1",
      server_type = "cax11",
      location    = "fsn1",
      labels      = ["cluster=a1"],
      taints      = [],
      count       = 1
    },
    {
      name        = "x1-2",
      server_type = "cpx11",
      location    = "nbg1",
      labels      = ["cluster=a1"],
      taints      = [],
      count       = 1
    },
    {
      name        = "x1-3",
      server_type = "cpx11",
      location    = "hel1",
      labels      = ["cluster=a1"],
      taints      = [],
      count       = 1
    }
  ]

In the above example I deploy to nodes with the label cluster=a1, which allows me to create a geographically dispersed HA cluster for a particular deployment and to target pods at particular sets of nodes. Being able to target pods at particular nodes is important for my services, as is having those sets geographically dispersed.
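
For reference, this is roughly how such labels get consumed on the deployment side. A minimal sketch using the hashicorp/kubernetes Terraform provider (the resource name, app label and image are placeholders, not my actual config):

resource "kubernetes_deployment" "app_a1" {
  metadata {
    name = "app-a1" # placeholder deployment name
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        app = "app-a1"
      }
    }

    template {
      metadata {
        labels = {
          app = "app-a1"
        }
      }

      spec {
        # schedule these pods only onto nodes carrying the cluster=a1 node label
        node_selector = {
          cluster = "a1"
        }

        container {
          name  = "app"
          image = "nginx:stable" # placeholder image
        }
      }
    }
  }
}

With one node per pool in fsn1, nbg1 and hel1, the three replicas can land across all three locations.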

For a second deployment on different nodes, I add additional nodepools with labels for a second cluster:

default = [
    {
      name        = "x1-1",
      server_type = "cax11",
      location    = "fsn1",
      labels      = ["cluster=a1"],
      taints      = [],
      count       = 1
    },
    {
      name        = "x1-2",
      server_type = "cpx11",
      location    = "nbg1",
      labels      = ["cluster=a1"],
      taints      = [],
      count       = 1
    },
    {
      name        = "x1-3",
      server_type = "cpx11",
      location    = "hel1",
      labels      = ["cluster=a1"],
      taints      = [],
      count       = 1
    },
    {
      name        = "x2-1",
      server_type = "cax11",
      location    = "fsn1",
      labels      = ["cluster=a2"],
      taints      = [],
      count       = 1
    },
    {
      name        = "x2-2",
      server_type = "cpx11",
      location    = "nbg1",
      labels      = ["cluster=a2"],
      taints      = [],
      count       = 1
    },
    {
      name        = "x2-3",
      server_type = "cpx11",
      location    = "hel1",
      labels      = ["cluster=a2"],
      taints      = [],
      count       = 1
    }
  ]

Ideally, we would be able to increase the count on each pool to add additional nodes, so that we do not hit the limit of 50 nodepools. But when scaling nodes via count there is no means of adding labels to the new nodes, and therefore no way to target pods at a particular set of nodes.
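
To illustrate, this is all count gives us today: the pool-level labels apply to every node, so the three nodes below are indistinguishable when targeting pods:

default = [
    {
      name        = "x1-1",
      server_type = "cax11",
      location    = "fsn1",
      labels      = ["cluster=a1"], # pool-wide: every node gets the same labels
      taints      = [],
      count       = 3               # three nodes, no way to label them individually
    }
  ]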

I think being able to add labels to nodes within nodepools would be a useful feature to overcome this. It could look something like this:

agent_nodepools = [
    {
      name        = "my_nodes_location1",
      server_type = "cax11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = [
        {
            name = "node1", 
            labels = ["cluster=1"]
        }, 
        {
            name = "node2",
            labels = ["cluster=2"]
        },
        {
            name = "unique_useful_user_id_for_reference",
            labels = ["cluster=3"]
        }
      ], # <-- starts 3 nodes, all in the same region and node pool, each with a unique cluster label.
    }
  ]

These could then be deprovisioned like this, assuming we need to zero out an entry rather than change the length of the count list when deprovisioning (maybe that's only a nodepool thing):

count = [{}, {name = "node2", labels = ["cluster=1"]}]

It would be nice if this then gave us more control over node naming too. Node names currently look something like:

k3s-mynodename-epo

k3s is customisable via the config file and can be disabled. mynodename is set via name in the agent_nodepool. epo appears to be a random three-letter code that distinguishes each node.
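
If I'm reading the module right, that prefix comes from something like the following (variable names may differ between module versions, so treat this as an assumption):

cluster_name                  = "k3s"   # the leading segment; customisable
use_cluster_name_in_node_name = false   # assumption: drops the prefix entirely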

It would be nice to be able to add identifiers to the nodes within each pool too: either the name in each count array item replaces epo (with terraform apply checking for unique values), or that name is added alongside the random three letters:

count = [
  {
    name   = "nodename1",
    labels = ["cluster=1"]
  }
]

k3s-mynodename-nodename1
# or
k3s-mynodename-nodename1-epo

It wouldn't mean we could use 255 node pools as originally thought, but it would permit a similar effect and give us more control over the nodes.

@mysticaltech
Collaborator

mysticaltech commented Jul 26, 2023

Oh my, that's a layered request! But doable. However, please don't hesitate to submit a PR; if you cannot code Terraform, GPT-4 can 😂, you just manage it. It would help, otherwise I will get to it sooner or later. Also don't forget about backward compatibility: all new additions must be optional and well thought out.
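
For example, something along these lines would keep existing configs untouched (a sketch only; the node_labels attribute is hypothetical):

agent_nodepools = [
    {
      name        = "my_nodes_location1",
      server_type = "cax11",
      location    = "fsn1",
      labels      = [],
      taints      = [],
      count       = 3, # stays a plain number, so current configs keep working
      # hypothetical optional attribute: extra labels keyed by node index
      node_labels = {
        "0" = ["cluster=1"],
        "1" = ["cluster=2"],
        "2" = ["cluster=3"]
      }
    }
  ]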

@maggie44
Contributor Author

Oh my, that's a layered request!

Hopefully in a good way 😟. I tried to add more detail rather than less; I never like being on the other end of a feature request that's a one-liner with no rationale for the goal.

GPT and I have a very close working relationship already; this one may be a bit of a stretch though 😉.

@mysticaltech
Collaborator

@maggie44 No worries, will do.

@mysticaltech mysticaltech added the enhancement New feature or request label Oct 17, 2023
@kube-hetzner kube-hetzner locked and limited conversation to collaborators Oct 18, 2023
@mysticaltech mysticaltech converted this issue into discussion #1038 Oct 18, 2023
