
Commit

Merge pull request #983 from fsalvini/fix-typos
Fix typos.
mysticaltech authored Sep 22, 2023
2 parents 634686e + 2442d23 commit dede2bd
Showing 5 changed files with 22 additions and 22 deletions.
26 changes: 13 additions & 13 deletions README.md
@@ -27,8 +27,8 @@ To achieve this, we built up on the shoulders of giants by choosing [openSUSE Mi

- Optimized container OS that is fully locked down, most of the filesystem is read-only!
- Hardened by default with an automatic ban for abusive IPs on SSH for instance.
- Evergreen release, your node will stay valid forever, as it piggy-backs into OpenSUSE Tumbleweed's rolling release!
- Automatic updates by default and automatic roll-backs if something breaks, thanks to its use of BTRFS snapshots.
- Evergreen release, your node will stay valid forever, as it piggybacks into OpenSUSE Tumbleweed's rolling release!
- Automatic updates by default and automatic rollbacks if something breaks, thanks to its use of BTRFS snapshots.
- Supports [Kured](https://github.com/kubereboot/kured) to properly drain and reboot nodes in an HA fashion.

**Why k3s?**
@@ -82,8 +82,8 @@ brew install hcloud
### 💡 [Do not skip] Creating your kube.tf file and the OpenSUSE MicroOS snapshot

1. Create a project in your [Hetzner Cloud Console](https://console.hetzner.cloud/), and go to **Security > API Tokens** of that project to grab the API key; it needs to be Read & Write. Take note of the key! ✅
1. Generate a passphrase-less ed25519 SSH key pair for your cluster; take note of the respective paths of your private and public keys. Or, see our detailed [SSH options](https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/blob/master/docs/ssh.md). ✅
1. Now navigate to where you want to have your project live and execute the following command, which will help you get started with a **a new folder** along with the required files, and will propose you to create a needed MicroOS snapshot. ✅
2. Generate a passphrase-less ed25519 SSH key pair for your cluster; take note of the respective paths of your private and public keys. Or, see our detailed [SSH options](https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/blob/master/docs/ssh.md). ✅
3. Now navigate to where you want to have your project live and execute the following command, which will help you get started with a **new folder** along with the required files, and will offer to create the needed MicroOS snapshot. ✅

```sh
tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/create.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"
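# A sketch for step 2 above (not from the original docs): generate the
# passphrase-less ed25519 key pair first; the file path is only an example.
ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519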
@@ -153,7 +153,7 @@ You can connect to one of the control plane nodes via SSH with `ssh root@<contro

### Connect via Kube API

If you have access to the Kube API (depending of the value of your `firewall_kube_api_source` variable, best to have the value of your own IP and not open to the world), you can immediately kubectl into it (using the `clustername_kubeconfig.yaml` saved to the project's directory after the installation). By doing `kubectl --kubeconfig clustername_kubeconfig.yaml`, but for more convenience, either create a symlink from `~/.kube/config` to `clustername_kubeconfig.yaml` or add an export statement to your `~/.bashrc` or `~/.zshrc` file, as follows (you can get the path of `clustername_kubeconfig.yaml` by running `pwd`):
If you have access to the Kube API (depending on the value of your `firewall_kube_api_source` variable; best to use your own IP and not open it to the world), you can immediately kubectl into it using the `clustername_kubeconfig.yaml` saved to the project's directory after the installation, by running `kubectl --kubeconfig clustername_kubeconfig.yaml`. For more convenience, either create a symlink from `~/.kube/config` to `clustername_kubeconfig.yaml`, or add an export statement to your `~/.bashrc` or `~/.zshrc` file, as follows (you can get the path of `clustername_kubeconfig.yaml` by running `pwd`):
```sh
export KUBECONFIG=/<path-to>/clustername_kubeconfig.yaml
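# The two convenience options mentioned above, as a sketch (paths are examples):
# either symlink the kubeconfig to kubectl's default location...
mkdir -p ~/.kube && ln -s "$(pwd)/clustername_kubeconfig.yaml" ~/.kube/config
# ...or persist the export in your shell profile.
echo 'export KUBECONFIG=/<path-to>/clustername_kubeconfig.yaml' >> ~/.bashrc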
@@ -258,14 +258,14 @@ reboot
Rarely needed, but can be handy in the long run. During the installation, we automatically download a backup of the kustomization to a `kustomization_backup.yaml` file. You will find it next to your `clustername_kubeconfig.yaml` at the root of your project.

1. First create a duplicate of that file and name it `kustomization.yaml`, keeping the original file intact, in case you need to restore the old config.
1. Edit the `kustomization.yaml` file; you want to go to the very bottom where you have the links to the different source files; grab the latest versions for each on GitHub, and replace. If present, remove any local reference to traefik_config.yaml, as Traefik is updated automatically by the system upgrade controller.
1. Apply the updated `kustomization.yaml` with `kubectl apply -k ./`.
2. Edit the `kustomization.yaml` file; you want to go to the very bottom where you have the links to the different source files; grab the latest versions for each on GitHub, and replace. If present, remove any local reference to traefik_config.yaml, as Traefik is updated automatically by the system upgrade controller.
3. Apply the updated `kustomization.yaml` with `kubectl apply -k ./`.
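If it helps, the three steps above condense to something like this (a sketch; the edit in step 2 is still manual):

```sh
# 1. Duplicate the backup, keeping the original intact
cp kustomization_backup.yaml kustomization.yaml

# 2. Edit kustomization.yaml by hand: point the source links at the bottom to the
#    latest versions on GitHub and remove any local traefik_config.yaml reference

# 3. Apply the updated kustomization
kubectl apply -k ./
```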

## Customizing the Cluster Components

Most cluster components of Kube-Hetzner are deployed with the Rancher [Helm Chart](https://rancher.com/docs/k3s/latest/en/helm/) yaml definition and managed by the Helm Controller inside k3s.

By default, we strive to give you optimal defaults, but if wish, you can customize them.
By default, we strive to give you optimal defaults, but if you wish, you can customize them.

For Traefik, Nginx, Rancher, Cilium, and Longhorn, for maximum flexibility, we give you the ability to configure them even better via helm values variables (e.g. `cilium_values`, see the advanced section in the kube.tf.example for more).

@@ -291,7 +291,7 @@ _That said, you can also use pure Terraform and import the kube-hetzner module a

After the initial bootstrapping of your Kubernetes cluster, you might want to deploy applications using the same terraform mechanism. For many scenarios it is sufficient to create a `kustomization.yaml.tpl` file (see [Adding Extras](#adding-extras)). All kustomizations will be applied at once by executing a single `kubectl apply -k` command.

However, some applications that e.g. provide custom CRDs (e.g. [ArgoCD](https://argoproj.github.io/cd/)) need a different deployment strategy: one has to deploy CRDs first, then wait for the deployment, before being able to install the actual application. In the ArgoCD case, not waiting for the CRD setup to finish will cause failures. Therefore an additional mechanism is available to support these kind of deployments. Specify `extra_kustomize_deployment_commands` in your `kube.tf` file containing a series of commands to be executed, after the `Kustomization` step finished:
However, some applications that provide custom CRDs (e.g. [ArgoCD](https://argoproj.github.io/cd/)) need a different deployment strategy: one has to deploy the CRDs first and wait for that deployment to finish before being able to install the actual application. In the ArgoCD case, not waiting for the CRD setup to finish will cause failures. Therefore, an additional mechanism is available to support these kinds of deployments. Specify `extra_kustomize_deployment_commands` in your `kube.tf` file, containing a series of commands to be executed after the `Kustomization` step has finished:

```
extra_kustomize_deployment_commands = <<-EOT
@@ -634,7 +634,7 @@ To enable the [PodNodeSelector and optionally the PodTolerationRestriction](http
k3s_exec_server_args = "--kube-apiserver-arg enable-admission-plugins=PodTolerationRestriction,PodNodeSelector"
```
Next, you can set default nodeSelector values per namespace. This lets you assign namespaces to specific nodes. Note though, that this is the default as well as the whitelist, so if a pod sets its own nodeSelector value that must be a subset of the default. Otherwise the pod will not be scheduled.
Next, you can set default nodeSelector values per namespace. This lets you assign namespaces to specific nodes. Note though, that this is the default as well as the whitelist, so if a pod sets its own nodeSelector value, it must be a subset of the default. Otherwise, the pod will not be scheduled.
Then set the according annotations on your namespaces:
```yaml
@@ -656,7 +656,7 @@ metadata:
name: this-runs-on-arm64
```
This can be helpful when you setup a mixed-architecture cluster, and there are many other use cases.
This can be helpful when you set up a mixed-architecture cluster, and there are many other use cases.
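As a hedged illustration of the (truncated) namespace example above: the namespace name comes from the snippet, while the `kubernetes.io/arch=arm64` selector is an assumption about what such an arm64-only namespace would use.

```sh
# Sketch: create a namespace whose pods are restricted to arm64 nodes via the
# PodNodeSelector annotation (the selector label is an assumption)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: this-runs-on-arm64
  annotations:
    scheduler.alpha.kubernetes.io/node-selector: kubernetes.io/arch=arm64
EOF
```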
</details>
@@ -777,7 +777,7 @@ module "kube-hetzner" {
- `export TF_VAR_k3s_token="..."` (Be careful, this token is like an admin password to the entire cluster. You need to use the same k3s_token which you saved when creating the backup.)
- `export etcd_s3_secret_key="..."`
3. Create the cluster as usual. You can also change the cluster-name and deploy it next to the original backuped cluster.
3. Create the cluster as usual. You can also change the cluster-name and deploy it next to the original backed up cluster.
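A hypothetical shell session condensing the steps above (all values are placeholders):

```sh
# Same token and S3 secret key as the original cluster
export TF_VAR_k3s_token="<k3s-token-from-the-backup>"
export etcd_s3_secret_key="<s3-secret-key>"

# Then create the cluster as usual
terraform init
terraform apply
```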
Awesome! You restored a whole cluster from a backup.
@@ -813,7 +813,7 @@ If you want to take down the cluster, you can proceed as follows:
terraform destroy -auto-approve
```
If you see the destroy hanging, it's probably because of the Hetzner LB and the autoscaled nodes. You can use the following command to delete everything (dry run option is available don't worry, and it will only delete ressources specific to your cluster):
If you see the destroy hanging, it's probably because of the Hetzner LB and the autoscaled nodes. You can use the following command to delete everything (a dry run option is available, don't worry, and it will only delete resources specific to your cluster):
```sh
tmp_script=$(mktemp) && curl -sSL -o "${tmp_script}" https://raw.githubusercontent.com/kube-hetzner/terraform-hcloud-kube-hetzner/master/scripts/cleanup.sh && chmod +x "${tmp_script}" && "${tmp_script}" && rm "${tmp_script}"
2 changes: 1 addition & 1 deletion docs/ssh.md
@@ -8,7 +8,7 @@ If your key-pair is of the `ssh-ed25519` sort (useful command `ssh-keygen -t ed2

---

Otherwise, for a key-pair with passphrase or a device like a Yubikey, make sure you have have an SSH agent running and your key is loaded with:
Otherwise, for a key-pair with a passphrase or a device like a Yubikey, make sure you have an SSH agent running and your key is loaded with:

```bash
eval ssh-agent $SHELL
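# Then load your key into the agent (a sketch: the key path is only an example;
# for a hardware token, add it through its agent or PKCS#11 provider instead)
ssh-add ~/.ssh/id_ed25519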
12 changes: 6 additions & 6 deletions kube.tf.example
@@ -35,7 +35,7 @@ module "kube-hetzner" {

# * Your ssh public key
ssh_public_key = file("~/.ssh/id_ed25519.pub")
# * Your private key must be "ssh_private_key = null" when you want to use ssh-agent for a Yubikey-like device authentification or an SSH key-pair with a passphrase.
# * Your private key must be "ssh_private_key = null" when you want to use ssh-agent for a Yubikey-like device authentication or an SSH key-pair with a passphrase.
# For more details on SSH see https://github.com/kube-hetzner/kube-hetzner/blob/master/docs/ssh.md
ssh_private_key = file("~/.ssh/id_ed25519")
# You can add additional SSH public Keys to grant other team members root access to your cluster nodes.
@@ -74,7 +74,7 @@ module "kube-hetzner" {
# service_ipv4_cidr = "10.43.0.0/16"

# For the control planes, at least three nodes are the minimum for HA. Otherwise, you need to turn off the automatic upgrades (see README).
# **It must always be an ODD number, never even!** Search the internet for "splitbrain problem with etcd" or see https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/
# **It must always be an ODD number, never even!** Search the internet for "split-brain problem with etcd" or see https://rancher.com/docs/k3s/latest/en/installation/ha-embedded/
# For instance, one is ok (non-HA), two is not ok, and three is ok (becomes HA). It does not matter if they are in the same nodepool or not! So they can be in different locations and of various types.

# Of course, you can choose any number of nodepools you want, with the location you want. The only constraint on the location is that you need to stay in the same network region, Europe, or the US.
# Please note that changing labels and taints after the first run will have no effect. If needed, you can do that through Kubernetes directly.

# ⚠️ When choosing ARM cax* server types, for the moment they are only available in fsn1 and hel1.
# Muli-architecture clusters are OK for most use cases, as container underlying images tend to be multi-architecture too.
# Multi-architecture clusters are OK for most use cases, as the underlying container images tend to be multi-architecture too.

# * Example below:

@@ -218,7 +218,7 @@ module "kube-hetzner" {
# FYI, Hetzner says "Traffic between cloud servers inside a Network is private and isolated, but not automatically encrypted."
# Source: https://docs.hetzner.com/cloud/networks/faq/#is-traffic-inside-hetzner-cloud-networks-encrypted
# It works with all CNIs that we support.
# Just note, that if Cilium with cilium_values, the responsability of enabling of disabling Wireguard falls on you.
# Just note that if you use Cilium with cilium_values, the responsibility of enabling or disabling Wireguard falls on you.
# enable_wireguard = true

# * LB location and type, the latter will depend on how much load you want it to handle, see https://www.hetzner.com/cloud/load-balancer
@@ -252,7 +252,7 @@ module "kube-hetzner" {
# Providing at least one map for the array enables the cluster autoscaler feature, default is disabled
# By default we set a version compatible with the default initial_k3s_channel; to set another one,
# have a look at the tag value in https://github.com/kubernetes/autoscaler/blob/master/charts/cluster-autoscaler/values.yaml
# ⚠️ Based on how the autoscaler works with this project, you can only choose either x86 instances or ARM server types for ALL autocaler nodepools.
# ⚠️ Based on how the autoscaler works with this project, you can only choose either x86 instances or ARM server types for ALL autoscaler nodepools.
# Also, as mentioned above, for the time being ARM cax* instances are only available in fsn1.
# If you are curious, it's ok to have a multi-architecture cluster, as most underlying container images are multi-architecture too.
# * Example below:
@@ -616,7 +616,7 @@ module "kube-hetzner" {
# When Rancher is enabled, it automatically installs cert-manager too, and it uses rancher's own self-signed certificates.
# See for options https://rancher.com/docs/rancher/v2.0-v2.4/en/installation/resources/advanced/helm2/helm-rancher/#choose-your-ssl-configuration
# The easiest thing is to leave everything as is (using the default rancher self-signed certificate) and put Cloudflare in front of it.
# As for the number of replicas, by default it is set to the numbe of control plane nodes.
# As for the number of replicas, by default it is set to the number of control plane nodes.
# You can customize all of the above by adding a rancher_values variable; see the end of this file in the advanced section.
# After the cluster is deployed, you can always use HelmChartConfig definition to tweak the configuration.
# IMPORTANT: Rancher's install is quite memory intensive; you will need at least 4GB of RAM, meaning a cx21 server type (for your control plane).
2 changes: 1 addition & 1 deletion scripts/cleanup.sh
@@ -4,7 +4,7 @@ DRY_RUN=1

echo "Welcome to the Kube-Hetzner cluster deletion script!"
echo " "
echo "We advise you to first run 'terraform destroy' and execute that script when it starts hanging because of ressources still attached to the network."
echo "We advise you to first run 'terraform destroy' and execute that script when it starts hanging because of resources still attached to the network."
echo "In order to run this script need to have the hcloud CLI installed and configured with a context for the cluster you want to delete."
command -v hcloud >/dev/null 2>&1 || { echo "hcloud (Hetzner CLI) is not installed. Install it with 'brew install hcloud'."; exit 1; }
echo "You can do so by running 'hcloud context create <cluster_name>' and inputting your HCLOUD_TOKEN."
2 changes: 1 addition & 1 deletion scripts/create.sh
@@ -68,6 +68,6 @@ fi

# Output commands
echo " "
echo "Remember, don't skip the hcloud cli, to activate it run 'hcloud context create <project-name>'. It is ideal to quickly debug and allows targetted cleanup when needed!"
echo "Remember, don't skip the hcloud cli, to activate it run 'hcloud context create <project-name>'. It is ideal to quickly debug and allows targeted cleanup when needed!"
echo " "
echo "Before running 'terraform apply', go through the kube.tf file and fill it with your desired values."
