tweaked the docs for recent PRs
mysticaltech committed Jul 26, 2023
1 parent d13ba94 commit fab133c
Showing 5 changed files with 26 additions and 26 deletions.
15 changes: 4 additions & 11 deletions README.md
@@ -149,26 +149,19 @@ To manage your cluster with `kubectl`, you can either use SSH to connect to a co
### Connect via SSH
You can connect to one of the control plane nodes via SSH with `ssh root@<cp-ip-address>`. Now you are able to use `kubectl` to manage your workloads right away. By default, the firewall allows SSH connections from everywhere. You can change that by configuring the `firewall_ssh_source` in your kube.tf file.
You can connect to one of the control plane nodes via SSH with `ssh root@<control-plane-ip> -i /path/to/private_key -o StrictHostKeyChecking=no`. From there, you can use `kubectl` to manage your workloads right away. By default, the firewall allows SSH connections from everywhere; it is best to restrict that to your own IP by configuring `firewall_ssh_source` in your kube.tf file (don't worry, you can always change it and re-apply if your IP changes).
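For example, to restrict SSH access to a single IPv4 address (the address below is a placeholder; use your own, in CIDR notation):
```hcl
firewall_ssh_source = ["1.2.3.4/32"]
```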

### Connect via Kube API

Make sure you can connect to the Kube API from a trusted network by configuring `firewall_kube_api_source` in your kube.tf file like that:
```hcl
firewall_kube_api_source = ["1.2.3.4/32"]
```
**Info:** Opening the Kube API to the public (`["0.0.0.0/0", "::/0"]`) is not recommended!
If you have access to the Kube API, you can immediately kubectl into it (using the `clustername_kubeconfig.yaml` saved to the project's directory after the installation). By doing `kubectl --kubeconfig clustername_kubeconfig.yaml`, but for more convenience, either create a symlink from `~/.kube/config` to `clustername_kubeconfig.yaml` or add an export statement to your `~/.bashrc` or `~/.zshrc` file, as follows (you can get the path of `clustername_kubeconfig.yaml` by running `pwd`):
If you have access to the Kube API (depending on the value of your `firewall_kube_api_source` variable; it is best to set it to your own IP rather than leaving it open to the world), you can immediately kubectl into it (using the `clustername_kubeconfig.yaml` saved to the project's directory after the installation). You can do so with `kubectl --kubeconfig clustername_kubeconfig.yaml`, but for more convenience, either create a symlink from `~/.kube/config` to `clustername_kubeconfig.yaml` or add an export statement to your `~/.bashrc` or `~/.zshrc` file, as follows (you can get the path of `clustername_kubeconfig.yaml` by running `pwd`):
```sh
export KUBECONFIG=/<path-to>/clustername_kubeconfig.yaml
```
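Or, if you prefer the symlink approach mentioned above (back up any existing `~/.kube/config` first):
```sh
mkdir -p ~/.kube
ln -s "$(pwd)/clustername_kubeconfig.yaml" ~/.kube/config
```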
If you chose to set `create_kubeconfig` to false in your kube.tf (good practice), you can still create this file by running `terraform output --raw kubeconfig > clustername_kubeconfig.yaml` and then use it as described above.
You can also use it in an automated flow, in which case `create_kubeconfig` should be set to false, and you can use the `kubeconfig` output variable to get the kubeconfig file in a structured data format.

_You can also use it in an automated flow, in which case `create_kubeconfig` should be set to false, and you can use the `kubeconfig` output variable to get the kubeconfig file in a structured data format._
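As a rough sketch of such an automated flow (the temporary path and the final `kubectl` check are only illustrative):
```sh
# Write the kubeconfig to a throwaway location instead of keeping it in the repo
terraform output --raw kubeconfig > /tmp/clustername_kubeconfig.yaml
export KUBECONFIG=/tmp/clustername_kubeconfig.yaml
kubectl get nodes
```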
## CNI
@@ -674,7 +667,7 @@ First and foremost, it depends, but it's always good to have a quick look into H
Then for the rest, you'll often need to log in to your cluster via SSH. To do that, use:
```sh
ssh root@xxx.xxx.xxx.xxx -i ~/.ssh/id_ed25519 -o StrictHostKeyChecking=no
ssh root@<control-plane-ip> -i /path/to/private_key -o StrictHostKeyChecking=no
```
Then, for control-plane nodes, use `journalctl -u k3s` to see the k3s logs, and for agents, use `journalctl -u k3s-agent` instead.
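For example, once you are on the node (the `--since` filter is optional):
```sh
# On a control-plane node
journalctl -u k3s --since "1 hour ago"
# On an agent node
journalctl -u k3s-agent --since "1 hour ago"
```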
2 changes: 2 additions & 0 deletions control_planes.tf
@@ -112,6 +112,8 @@ resource "null_resource" "control_planes" {
advertise-address = module.control_planes[each.key].private_ipv4_address
node-label = each.value.labels
node-taint = each.value.taints
cluster-cidr = var.cluster_ipv4_cidr
service-cidr = var.service_ipv4_cidr
selinux = true
write-kubeconfig-mode = "0644" # needed for import into rancher
},
2 changes: 2 additions & 0 deletions init.tf
@@ -24,6 +24,8 @@ resource "null_resource" "first_control_plane" {
advertise-address = module.control_planes[keys(module.control_planes)[0]].private_ipv4_address
node-taint = local.control_plane_nodes[keys(module.control_planes)[0]].taints
node-label = local.control_plane_nodes[keys(module.control_planes)[0]].labels
cluster-cidr = var.cluster_ipv4_cidr
service-cidr = var.service_ipv4_cidr
selinux = true
},
lookup(local.cni_k3s_settings, var.cni_plugin, {}),
31 changes: 17 additions & 14 deletions kube.tf.example
@@ -61,17 +61,17 @@ module "kube-hetzner" {
# network_ipv4_cidr = "10.0.0.0/8"

# Using the default configuration you can only create a maximum of 42 agent-nodepools.
# This is due to the creation of a subnet for each nodepool with cidrs being `10.[nodepool-index].0.0/16` which collides with k3s' cluster and service IP ranges (defaults below).
# Furthermore the maximum number of nodepools (controlplane and agent) is 50, due to a hard limit of 50 subnets per network (https://docs.hetzner.com/cloud/networks/faq/#:~:text=You%20can%20create%20up%20to%2050%20subnets.)
# This is due to the creation of a subnet for each nodepool, with CIDRs of the form 10.[nodepool-index].0.0/16, which collide with k3s' cluster and service IP ranges (defaults below).
# Furthermore, the maximum number of nodepools (controlplane and agent) is 50, due to a hard limit of 50 subnets per network; see https://docs.hetzner.com/cloud/networks/faq/.
# So to be able to create a maximum of 50 nodepools in total, the values below have to be changed to something outside that range, e.g. `10.200.0.0/16` and `10.201.0.0/16` for cluster and service respectively.

# If you must change the cluster CIDR you can do so below, but it is highly advised against.
# Cluster CIDR must be a part of the network CIDR!
# The cluster CIDR must be a part of the network CIDR!
# cluster_ipv4_cidr = "10.42.0.0/16"

# If you must change the service CIDR you can do so below, but it is highly advised against.
# Cluster CIDR must be a part of the network CIDR!
# cluster_ipv4_cidr = "10.43.0.0/16"
# The service CIDR must be a part of the network CIDR!
# service_ipv4_cidr = "10.43.0.0/16"
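# For example (a sketch based on the suggestion above, to keep both ranges out of the nodepool subnets):
# cluster_ipv4_cidr = "10.200.0.0/16"
# service_ipv4_cidr = "10.201.0.0/16"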

# For the control planes, at least three nodes are the minimum for HA. Otherwise, you need to turn off the automatic upgrades (see README).
# **It must always be an ODD number, never even!** Search the internet for "splitbrain problem with etcd" or see https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/
@@ -84,9 +84,8 @@ module "kube-hetzner" {
# Once the cluster is up and running, you can change nodepool count and even set it to 0 (in the case of the first control-plane nodepool, the minimum is 1).
# You can also rename it (if the count is 0), but do not remove a nodepool from the list.

# The only nodepools that are safe to remove from the list are at the end. That is due to how subnets and IPs get allocated (FILO).
# You can, however, freely add other nodepools at the end of each list if you want. The theoratical maximum number of nodepools you can create combined for both lists is 255.
# But due to a limitation of 50 subnets per network by hetzner, the realistic limit is 50 (see ipv4_cidr above).)
# You can safely add or remove nodepools at the end of each list. That is due to how subnets and IPs get allocated (FILO).
# The maximum number of nodepools you can create combined for both lists is 50 (see above).
# Also, before decreasing the count of any nodepools to 0, it's essential to drain and cordon the nodes in question. Otherwise, it will leave your cluster in a bad state.
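# For example, you could drain each node of that nodepool first (a sketch; `kubectl drain` also cordons the node):
# kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data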

# Before initializing the cluster, you can change all parameters and add or remove any nodepools. You need at least one nodepool of each kind, control plane, and agent.
@@ -492,12 +491,15 @@ module "kube-hetzner" {
# If you want to allow all outbound traffic you can set this to "false". Default is "true".
# restrict_outbound_traffic = false

# Allow access to the Kube API from the specified networks. Default: ["0.0.0.0/0", "::/0"]
# Allowed values: null (disable Kube API rule entirely) or a list of allowed networks with CIDR notation
firewall_kube_api_source = null
# Allow access to the Kube API from the specified networks. The default is ["0.0.0.0/0", "::/0"].
# Allowed values: null (disable Kube API rule entirely) or a list of allowed networks with CIDR notation.
# For maximum security, it's best to disable it completely by setting it to null. However, in that case, to get access to the Kube API,
# you would have to connect to any control plane node via SSH, as you can run kubectl from within these nodes.
# firewall_kube_api_source = null

# Allow SSH access from the specified networks. Default: ["0.0.0.0/0", "::/0"]
# Allowed values: null (disable SSH rule entirely) or a list of allowed networks with CIDR notation
# Allowed values: null (disable SSH rule entirely) or a list of allowed networks with CIDR notation.
# Ideally, you would set your own IP there. If it changes after the cluster is deployed, you can always come back, update this variable, and apply again.
# firewall_ssh_source = ["1.2.3.4/32", "1234::1/128"]

# Adding extra firewall rules, like opening a port
@@ -554,8 +556,9 @@ module "kube-hetzner" {
# When this is enabled, all external traffic will be routed via a control-plane load balancer rather than through the first control-plane node, allowing for high availability.
# The default is false.
# use_control_plane_lb = true
# when this use_control_plane_lb is enabled, change the load balancer type to lb21, the default is "lb11"
# control_plane_lb_type = lb21

# When the above use_control_plane_lb is enabled, you can change the load balancer type for it; the default is "lb11".
# control_plane_lb_type = "lb21"

# Let's say you are not using the control plane LB solution above, and still want to have one hostname point to all your control-plane nodes.
# You could create multiple A records for, let's say, cp.cluster.my.org, pointing to all of your control-plane node IPs.
2 changes: 1 addition & 1 deletion locals.tf
@@ -66,7 +66,7 @@ locals {
apply_k3s_selinux = ["/sbin/semodule -v -i /usr/share/selinux/packages/k3s.pp"]

install_k3s_server = concat(local.common_pre_install_k3s_commands, [
"curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_START=true INSTALL_K3S_SKIP_SELINUX_RPM=true INSTALL_K3S_CHANNEL=${var.initial_k3s_channel} INSTALL_K3S_EXEC='server ${var.k3s_exec_server_args} --cluster-cidr=${var.cluster_ipv4_cidr} --service-cidr=${var.service_ipv4_cidr}' sh -"
"curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_START=true INSTALL_K3S_SKIP_SELINUX_RPM=true INSTALL_K3S_CHANNEL=${var.initial_k3s_channel} INSTALL_K3S_EXEC='server ${var.k3s_exec_server_args}' sh -"
], local.apply_k3s_selinux)
install_k3s_agent = concat(local.common_pre_install_k3s_commands, [
"curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_START=true INSTALL_K3S_SKIP_SELINUX_RPM=true INSTALL_K3S_CHANNEL=${var.initial_k3s_channel} INSTALL_K3S_EXEC='agent ${var.k3s_exec_agent_args}' sh -"
