This repo is not actively maintained. We are working on documentation for setting up a Gardener landscape based on the Gardener Operator.
Gardener uses Kubernetes to manage Kubernetes clusters. This documentation describes how to install Gardener on an existing Kubernetes cluster of your IaaS provider.
Where this document refers to the base cluster, it means the existing cluster on which you will install Gardener. This distinguishes it from the clusters that you will create after the installation using Gardener. Once Gardener is installed, the base cluster is also referred to as the garden cluster. Whenever you create clusters, Gardener will create seed clusters and shoot clusters. This documentation only covers the installation of clusters in one region of one IaaS provider. More information: Architecture.
Please be aware that garden-setup was created with the intent of providing an easy way to install Gardener for the purpose of "having a look into it". While it offers lots of configuration options and can be used to create landscapes, garden-setup lacks some features which are usually expected from a 'productive' installer. The most prominent example is that garden-setup does not have any built-in support for upgrading an existing landscape. You can 'deploy over' an existing landscape with a new version of garden-setup (or one of its components), but this scenario is not tested or validated in any way and might or might not work.
- The installation was tested on Linux and macOS
- You need to have the following tools installed:
- You need a base cluster. Currently, the installation tool supports installing Gardener on the following Kubernetes clusters:
- Kubernetes version >= 1.11, or enable the `CustomResourceSubresources` feature gate for 1.10 clusters
- Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP)
- Elastic Container Service for Kubernetes (EKS) or Kubernetes Operations (kops) on Amazon Web Services (AWS)
- Standard EKS clusters impose some additional difficulties for deploying Gardener; one example is the EKS networking plugin, which uses the same CIDR for nodes and pods, a setup that Gardener can't handle. We are working on improved documentation for this case. In the meantime, it is recommended to use other means to obtain the initial cluster and avoid the additional effort.
- Azure Kubernetes Service (AKS) on Microsoft Azure
- Your base cluster needs at least 4 nodes with 8 GB of memory each
- This is only a rough estimate of the required resources; you can also use fewer or more nodes if the node size is adjusted accordingly
- If you don't create additional seeds, all shoots' control planes will be hosted on your base cluster, and these minimal requirements won't be sufficient
- You need a service account for the virtual machine instance of your IaaS provider where your Kubernetes cluster runs
- You need to have permissions to access your base cluster's private key
- You are connected to your Kubernetes cluster (environment variable `KUBECONFIG` is set)
- You need to have the Vertical Pod Autoscaler (VPA) installed on the base cluster and each seed cluster (Gardener deploys it on shooted seeds automatically)
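The VPA requirement can be verified up front. A minimal sketch (not part of the original instructions) that checks whether the VPA custom resource definition is present on the base cluster:

```bash
# If this fails, the Vertical Pod Autoscaler is not installed on the cluster.
kubectl get crd verticalpodautoscalers.autoscaling.k8s.io
```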
To install Gardener in your base cluster, the command line tool `sow` is used. It depends on several other tools being installed; to make it simple, we have created a Docker image that already contains `sow` and all required tools. To execute `sow`, you call a wrapper script which starts `sow` in a Docker container (Docker will download the image from eu.gcr.io/gardener-project/sow if it is not available locally yet). Docker executes the `sow` command with the given arguments and mounts parts of your file system into that container, so that `sow` can read configuration files for the installation of Gardener components and can persist the state of your installation. After `sow`'s execution, Docker removes the container again.
Which version of `sow` is compatible with this version of garden-setup is specified in the `SOW_VERSION` file. Other versions might work too, but older versions of `sow` in particular are probably incompatible with newer versions of garden-setup.
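As a quick sanity check (a sketch, assuming you cloned the repositories as described in the steps below), you can compare the pinned version against your local checkout:

```bash
cat landscape/crop/SOW_VERSION   # sow version expected by this garden-setup
git -C sow describe --tags       # version of your local sow checkout
```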
- Clone the `sow` repository and add the path to our wrapper script to your `PATH` variable so you can call `sow` on the command line.

  ```bash
  # setup for calling sow via the wrapper
  git clone "https://github.com/gardener/sow"
  cd sow
  export PATH=$PATH:$PWD/docker/bin
  ```
- Create a directory `landscape` for your Gardener landscape and clone this repository into a subdirectory called `crop`:

  ```bash
  cd ..
  mkdir landscape
  cd landscape
  git clone "https://github.com/gardener/garden-setup" crop
  ```
- If you don't have your kubeconfig stored locally somewhere yet, download it. For example, for GKE you would use the following command:

  ```bash
  gcloud container clusters get-credentials <your_cluster> --zone <your_zone> --project <your_project>
  ```
- Save your kubeconfig somewhere in your `landscape` directory. For the remaining steps we will assume that you saved it using file path `landscape/kubeconfig`.
- In your `landscape` directory, create a configuration file called `acre.yaml`. The structure of the configuration file is described below. Note that the relative file path `./kubeconfig` must be specified in field `landscape.cluster.kubeconfig` in the configuration file.

  Do not use the file `acre.yaml` in directory `crop`. This file is used internally by the installation tool.
- Gardener itself, but also garden-setup, can only handle kubeconfigs with standard authentication methods (basic auth, token, ...). Authentication methods that require a third-party tool, e.g. the `aws` or `gcloud` CLI, are not supported.

  If you don't have a kubeconfig with a supported method of authentication, you can use this workaround: create a serviceaccount, grant it cluster-admin privileges by adding it to the corresponding `ClusterRoleBinding`, and construct a kubeconfig using that serviceaccount's token. Here is an example on how to do it manually. Alternatively, you can use the following command:

  ```bash
  sow convertkubeconfig
  ```

  It will create a namespace with a serviceaccount in it and a clusterrolebinding which binds this serviceaccount to the `cluster-admin` role. Note that this replaces the kubeconfig which is referenced in the `acre.yaml` file. Also, there is currently no command to clean up the resources created by this command; you will have to remove the namespace as well as the clusterrolebinding named `garden-setup-auth` manually if you want them cleaned up (the kubeconfig will then stop working; see the cleanup sketch after these steps).
- Open a second terminal window whose current directory is your `landscape` directory. Set the `KUBECONFIG` environment variable as specified in `landscape.cluster.kubeconfig`, and watch the progress of the Gardener installation:

  ```bash
  export KUBECONFIG=./kubeconfig
  watch -d kubectl -n garden get pods,ingress,sts,svc
  ```
- In your first terminal window, use the following command to check in which order the components will be installed. Nothing will be deployed yet, and this way you can test whether your syntax in `acre.yaml` is correct:

  ```bash
  sow order -A
  ```
- If there are no error messages, use the following command to deploy Gardener on your base cluster:

  ```bash
  sow deploy -A
  ```
- `sow` now starts to install Gardener in your base cluster. The installation can take about 30 minutes. `sow` prints out status messages to the terminal window so that you can check the status of the installation. The other terminal window will show the newly created Kubernetes resources after a while, and whether their deployment was successful. Wait until the last component is deployed and all created Kubernetes resources are in status `Running`.
- Use the following command to find out the URL of the Gardener dashboard:

  ```bash
  sow url
  ```

More information: Most Important Commands and Directories
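As referenced in the `sow convertkubeconfig` step above, here is a minimal cleanup sketch. The clusterrolebinding name is taken from the step above; the namespace name is an assumption, so look it up in your cluster first:

```bash
# Find the namespace created by `sow convertkubeconfig` (its name may vary):
kubectl get namespaces | grep garden-setup
# Remove the clusterrolebinding and the namespace; the generated kubeconfig stops working afterwards.
kubectl delete clusterrolebinding garden-setup-auth
kubectl delete namespace <namespace-created-by-convertkubeconfig>
```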
As part of garden-setup, a `kube-apiserver` and a `kube-controller-manager` will be deployed into your base cluster, creating the so-called 'virtual' cluster. The name comes from the fact that it behaves like a Kubernetes cluster, but there aren't any nodes behind this kube-apiserver, and thus no workload will actually run on it. This kube-apiserver is then extended by the Gardener apiserver.
At first glance, this feels unintuitive: why create another kube-apiserver which needs its own kubeconfig? There are two major reasons for this approach:
The kube-apiserver needs to be configured in a certain way so that it can be used for a Gardener landscape. For example, the Gardener dashboard needs some OIDC configuration to be set on the kube-apiserver, otherwise authentication at the dashboard won't work. However, since garden-setup relies on a base cluster created by other means, many people will probably use a managed Kubernetes service (like GKE) to create the initial cluster - but most of the managed services do not grant end-users access to the kube-apiserver. By deploying its own kube-apiserver, garden-setup ensures full control over its configuration, which improves stability and reduces the complexity of the landscape setup.
Garden-setup also deploys its own etcd for this kube-apiserver. Because the kube-apiserver - and thus its etcd - is only used for Gardener resources, restoring the state of a Gardener landscape from an etcd backup is significantly easier than it would be if the Gardener resources were mixed with other resources in the etcd.
The major disadvantage of this approach is that two kubeconfigs are needed to operate Gardener: one for the base cluster, where all the pods are running, and one for the 'virtual' cluster where the Gardener resources - `shoot`, `seed`, `cloudprofile`, ... - are maintained. The kubeconfig for the 'virtual' cluster can be found in the landscape folder at `export/kube-apiserver/kubeconfig`, or it can be pulled from the secret `garden-kubeconfig-for-admin` in the `garden` namespace of the base cluster after the `kube-apiserver` component of garden-setup has been deployed.
Use the kubeconfig at `export/kube-apiserver/kubeconfig` to access the cluster where the Gardener resources - `shoot`, `seed`, `cloudprofile`, and so on - are maintained.
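A sketch for pulling that kubeconfig from the secret instead; the `kubeconfig` data key is an assumption, so inspect the secret first if it differs:

```bash
# Extract the virtual cluster kubeconfig from the base cluster and use it.
kubectl -n garden get secret garden-kubeconfig-for-admin -o jsonpath='{.data.kubeconfig}' \
  | base64 --decode > virtual-kubeconfig
KUBECONFIG=./virtual-kubeconfig kubectl get shoots --all-namespaces
```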
This file will be evaluated using `spiff`, a dynamic templating language for yaml files. For example, this simplifies the specification of field values that are used multiple times in the yaml file. For more information, see the spiff repository.
Please note that, for the sake of clarity, not all configuration options are listed in this readme. Instead, the more advanced configuration options have been moved into a set of additional documentation files. You can access these pages via their index and they are usually linked in their corresponding sections below.
```yaml
landscape:
  name: <Identifier>                  # general Gardener landscape identifier, for example, `my-gardener`
  domain: <prefix>.<cluster domain>   # unique basis domain for DNS entries

  cluster:                            # information about your base cluster
    kubeconfig: <relative path + filename>   # path to your `kubeconfig` file, rel. to directory `landscape` (defaults to `./kubeconfig`)
    networks:                         # CIDR IP ranges of base cluster
      nodes: <CIDR IP range>
      pods: <CIDR IP range>
      services: <CIDR IP range>

  iaas:
    - name: (( iaas[0].type ))        # name of the seed
      type: <gcp|aws|azure|alicloud|openstack|vsphere>   # iaas provider
      region: <major region>-<minor region>              # region for initial seed
      zones:                          # remove zones block for Azure
        - <major region>-<minor region>-<zone>           # example: europe-west1-b
        - <major region>-<minor region>-<zone>           # example: europe-west1-c
        - <major region>-<minor region>-<zone>           # example: europe-west1-d
      credentials:                    # provide access to IaaS layer used for creating resources for shoot clusters
    - name:                           # see above
      type: <gcp|aws|azure|alicloud|openstack>           # see above
      region: <major region>-<minor region>              # region for seed
      zones:                          # remove zones block for Azure
        - <major region>-<minor region>-<zone>           # Example: europe-west1-b
        - <major region>-<minor region>-<zone>           # Example: europe-west1-c
        - <major region>-<minor region>-<zone>           # Example: europe-west1-d
      cluster:                        # information about your seed's base cluster
        networks:                     # CIDR IP ranges of seed cluster
          nodes: <CIDR IP range>
          pods: <CIDR IP range>
          services: <CIDR IP range>
        kubeconfig:                   # kubeconfig for seed cluster
          apiVersion: v1
          kind: Config
          ...
      credentials:

  etcd:                               # optional for gcp/aws/azure/alicloud/openstack, default values based on `landscape.iaas`
    backup:
      type: <gcs|s3|abs|oss|swift>    # type of blob storage
      resourceGroup:                  # Azure resource group you would like to use for your backup
      region: (( iaas.region ))       # region of blob storage (default: same as above)
      credentials: (( iaas.credentials ))   # credentials for the blob storage's IaaS provider (default: same as above)
    resources:                        # optional: override resource requests and limits defaults
      requests:
        cpu: 400m
        memory: 2000Mi
      limits:
        cpu: 1
        memory: 2560Mi

  dns:                                # optional for gcp/aws/azure/openstack, default values based on `landscape.iaas`
    type: <google-clouddns|aws-route53|azure-dns|alicloud-dns|openstack-designate|cloudflare-dns|infoblox-dns>   # dns provider
    credentials: (( iaas.credentials ))   # credentials for the dns provider

  identity:
    users:
      - email:                        # email (used for Gardener dashboard login)
        username:                     # username (displayed in Gardener dashboard)
        password:                     # clear-text password (used for Gardener dashboard login)
      - email:                        # see above
        username:                     # see above
        hash:                         # bcrypted hash of password, see above

  cert-manager:
    email:                            # email for acme registration
    server: <live|staging|self-signed|url>   # which kind of certificates to use for the dashboard/identity ingress (defaults to `self-signed`)
    privateKey:                       # optional existing user account's private key
```
```yaml
landscape:
  name: <Identifier>
```
Arbitrary name for your landscape. The name will be part of the names for resources, for example, the etcd buckets.
```yaml
domain: <prefix>.<cluster domain>
```
Basis domain for DNS entries. As a best practice, use an individual prefix together with the cluster domain of your base cluster.
```yaml
cluster:
  kubeconfig: <relative path + filename>
  networks:
    nodes: <CIDR IP range>
    pods: <CIDR IP range>
    services: <CIDR IP range>
```
Information about your base cluster, where Gardener will be deployed. `landscape.cluster.kubeconfig` contains the path to your kubeconfig, relative to your landscape directory. It is recommended to put the kubeconfig file in your landscape directory, so you can sync all files relevant for your installation with a git repository. This value is optional and defaults to `./kubeconfig` if not specified.
`landscape.cluster.networks` contains the CIDR ranges of your base cluster.

Finding out the CIDR ranges of your cluster is not trivial. For example, GKE only tells you a "pod address range", which is actually a combination of the pod and service CIDRs. However, since the `kubernetes` service typically has the first IP of the service IP range, and most methods of getting a Kubernetes cluster tell you at least something about the CIDRs, it is usually possible to find them out with a little educated guessing.
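The following sketch can help with that guessing (an assumption about typical clusters, not an official procedure):

```bash
# The `kubernetes` service usually holds the first IP of the service range:
kubectl -n default get svc kubernetes -o jsonpath='{.spec.clusterIP}'
# Node-internal IPs hint at the node CIDR:
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'
# Pod IPs hint at the pod CIDR:
kubectl get pods --all-namespaces -o jsonpath='{.items[*].status.podIP}'
```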
```yaml
iaas:
  - name: (( type ))                                   # name of the seed
    type: <gcp|aws|azure|alicloud|openstack|vsphere>   # iaas provider
    region: <major region>-<minor region>              # region for initial seed
    zones:                                             # remove zones block for Azure
      - <major region>-<minor region>-<zone>           # example: europe-west1-b
      - <major region>-<minor region>-<zone>           # example: europe-west1-c
      - <major region>-<minor region>-<zone>           # example: europe-west1-d
    credentials:                                       # provide access to IaaS layer used for creating resources for shoot clusters
  - name:                                              # see above
    type: <gcp|aws|azure|alicloud|openstack|vsphere>   # see above
    region: <major region>-<minor region>              # region for seed
    zones:                                             # remove zones block for Azure
      - <major region>-<minor region>-<zone>           # example: europe-west1-b
      - <major region>-<minor region>-<zone>           # example: europe-west1-c
      - <major region>-<minor region>-<zone>           # example: europe-west1-d
    cluster:                                           # information about your seed's base cluster
      networks:                                        # CIDR IP ranges of seed cluster
        nodes: <CIDR IP range>
        pods: <CIDR IP range>
        services: <CIDR IP range>
      kubeconfig:                                      # kubeconfig for seed cluster
        apiVersion: v1
        kind: Config
        ...
    credentials:
```
Contains the information about where Gardener will create initial seed clusters and cloudprofiles for creating shoot clusters.
| Field | Type | Description | Examples | IaaS Provider Documentation |
|---|---|---|---|---|
| `name` | Custom value | Name of the seed/cloudprofile. Must be unique. | `gcp` | |
| `type` | Fixed value | IaaS provider for the seed. | `gcp` | |
| `region` | IaaS provider specific | Region for the seed cluster. The convention to use `<major region>-<minor region>` does not apply to all providers. In Azure, use `az account list-locations` to find out the location name (`name` attribute = lower case name without spaces). | `europe-west1` (GCP), `eu-west-1` (AWS), `eu-central-1` (Alicloud), `westeurope` (Azure) | GCP (HowTo), GCP (Overview); AWS (HowTo), AWS (Overview); Azure (Overview), Azure (HowTo) |
| `zones` | IaaS provider specific | Zones for the seed cluster. Not needed for Azure. | `europe-west1-b` (GCP) | GCP (HowTo), GCP (Overview); AWS (HowTo), AWS (Overview) |
| `credentials` | IaaS provider specific | Credentials in a provider-specific format. | See table with yaml keys below. | GCP, AWS, Azure |
| `cluster.kubeconfig` | Kubeconfig | The kubeconfig for your seed base cluster. Must use basic auth authentication. | | |
| `cluster.networks` | CIDRs | The CIDRs of your seed cluster. See `landscape.cluster` for more information. | | |
A list of configurations can be given here. The setup will create one cloudprofile and one seed per entry. Currently, you have to provide the cluster you want to use as a seed; in the future, the setup will be able to create a shoot and configure that shoot as a seed. The `type` should match the type of the underlying cluster.
The first entry of the `landscape.iaas` list is special:

- It has to exist - the list needs at least one entry.
- Don't specify the `cluster` node for it - it will configure your base cluster as seed.
- Its `type` should match the one of your base cluster.
See the advanced documentation for further configuration options and for information about Openstack.
It's also possible to have the setup create shoots and then configure them as seeds. This has advantages compared to configuring existing clusters as seeds: for example, you don't have to provide the clusters, as they will be created automatically, and the shooted seed clusters can leverage Gardener's autoscaling capabilities.
How to configure shooted seeds is explained in the advanced documentation.
The credentials will be used to give Gardener access to the IaaS layer:
- To create a secret that will be used on the Gardener dashboard to create shoot clusters.
- To allow the control plane of the seed clusters to store the etcd backups of the shoot clusters.
Use the following yaml keys depending on your provider (excerpts):
| AWS | GCP |
|---|---|
| `credentials:` | `credentials:` |

| Azure | Openstack |
|---|---|
| `credentials:` | `credentials:` |

| Alicloud | Other |
|---|---|
| `credentials:` | |
The `region` field in the openstack credentials is only evaluated within the `dns` block (as `iaas` and `etcd.backup` have their own region fields, which will be used instead).
```yaml
etcd:
  backup:
    # active: true
    type: <gcs|s3|abs|swift|oss>
    resourceGroup: ...
    region: (( iaas.region ))
    credentials: (( iaas.credentials ))
    # schedule: "0 */24 * * *"   # optional, default: 24h
    # maxBackups: 7              # optional, default: 7
```
Configuration of which blob storage to use for the etcd key-value store. If your IaaS provider offers a blob storage, you can use the same values for `etcd.backup.region` and `etcd.backup.credentials` as above for `iaas.region` and `iaas.credentials`, correspondingly, by using the `(( foo ))` expression of spiff.
If the type of `landscape.iaas[0]` is one of `gcp`, `aws`, `azure`, `alicloud`, or `openstack`, this block can be defaulted - either partly or as a whole - based on values from `landscape.iaas`. The `resourceGroup`, which is necessary for Azure, cannot be defaulted and must be specified. Make sure that the specified `resourceGroup` is empty and unused, as deleting the cluster using `sow delete all` deletes this `resourceGroup`.
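For Azure, a dedicated, empty resource group can be created up front, for example with the `az` CLI (a sketch; name and location are placeholders):

```bash
# Create an empty resource group reserved for the etcd backup.
az group create --name <my-gardener-backup-rg> --location westeurope
```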
| Field | Type | Description | Example | IaaS Provider Documentation |
|---|---|---|---|---|
| `backup.active` | Boolean | If set to `false`, deactivates the etcd backup for the virtual cluster etcd. Defaults to `true`. | `true` | n.a. |
| `backup.type` | Fixed value | Type of your blob store. Supported blob stores: `gcs` (Google Cloud Storage), `s3` (Amazon S3), `abs` (Azure Blob Storage), `oss` (Alicloud Object Store), and `swift` (Openstack Swift). | `gcs` | n.a. |
| `backup.resourceGroup` | IaaS provider specific | Azure specific. Create an Azure blob store first which uses a resource group. Provide the resource group here. | `my-Azure-RG` | Azure (HowTo) |
| `backup.region` | IaaS provider specific | Region of blob storage. | `(( iaas.region ))` | GCP (Overview), AWS (Overview) |
| `backup.credentials` | IaaS provider specific | Service account credentials in a provider-specific format. | `(( iaas.creds ))` | GCP, AWS, Azure |
```yaml
dns:
  type: <google-clouddns|aws-route53|azure-dns|openstack-designate|cloudflare-dns|infoblox-dns>
  credentials:
```
Configuration for the Domain Name Service (DNS) provider. If your IaaS provider also offers a DNS service, you can use the same values for `dns.credentials` as for `iaas.creds` above by using the `(( foo ))` expression of spiff. If they belong to another account (or to another IaaS provider), the appropriate credentials (and their type) have to be configured.
Similar to `landscape.etcd`, this block - and parts of it - are optional if the type of `landscape.iaas[0]` is one of `gcp`, `aws`, `azure`, `alicloud`, or `openstack`. Missing values will be derived from `landscape.iaas`.
| Field | Type | Description | Example | IaaS Provider Documentation |
|---|---|---|---|---|
| `type` | Fixed value | Your DNS provider. Supported providers: `google-clouddns` (Google Cloud DNS), `aws-route53` (Amazon Route 53), `alicloud-dns` (Alicloud DNS), `azure-dns` (Azure DNS), `openstack-designate` (Openstack Designate), `cloudflare-dns` (Cloudflare DNS), and `infoblox-dns` (Infoblox DNS). | `google-clouddns` | n.a. |
| `credentials` | IaaS provider specific | Service account credentials in a provider-specific format (see above). | `(( iaas.credentials ))` | GCP, AWS, Azure |
The credentials to use Cloudflare DNS consist of a single key `apiToken`, containing your API token.
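So a minimal `credentials` block would look like this sketch (the token value is a placeholder):

```yaml
credentials:
  apiToken: <your Cloudflare API token>
```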
For Infoblox DNS, you have to specify `USERNAME`, `PASSWORD`, and `HOST` in the `credentials` node. For a complete list of optional credentials keys, see here.
```yaml
identity:
  users:
    - email:
      username:
      password:
    - email:
      username:
      hash:
```
Configures the identity provider that allows access to the Gardener dashboard. The easiest method is to provide a list of `users`, each containing `email`, `username`, and either a clear-text `password` or a bcrypted `hash` of the password. You can then log in to the dashboard using one of the specified email/password combinations.
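A bcrypted hash can be generated locally, for example with `htpasswd` from the Apache utilities (a sketch, assuming the tool is installed; garden-setup does not prescribe a specific tool):

```bash
# Print a bcrypt hash (cost 10) of the given password, stripping the leading colon and newline.
htpasswd -bnBC 10 "" 'my-dashboard-password' | tr -d ':\n'
```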
```yaml
ingress:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"   # example for internal loadbalancers on aws
  ...
```
You can add annotations for the ingress controller's load balancer service. This can be used, for example, to deploy an internal load balancer on your cloud provider (see the example for AWS above).
```yaml
cert-manager:
  email:
  server: <live|staging|self-signed|url>
  privateKey: # optional
```
The setup deploys a cert-manager to provide a certificate for the Gardener dashboard, which can be configured here.
The entire `landscape.cert-manager` block is optional.

If not specified, `landscape.cert-manager.server` defaults to `self-signed`. This means that a self-signed CA will be created, which is used by the cert-manager (using a CA issuer) to sign the certificate. Since the CA is not publicly trusted, your web browser will show an 'untrusted certificate' warning when accessing the dashboard.
The `landscape.cert-manager.email` field is not evaluated in `self-signed` mode.
If set to `live`, the cert-manager will use the letsencrypt ACME server to get trusted certificates for the dashboard. Beware of the rate limits of letsencrypt.

Letsencrypt requires an email address and will send information about expiring certificates to that address. If `landscape.cert-manager.email` is not specified, `landscape.identity.users[0].email` will be used. One of the two fields has to be present.
If set to `staging`, the cert-manager will use the letsencrypt staging server. This is mainly for testing purposes. The communication with letsencrypt works exactly as in the `live` case, but the staging server does not produce trusted certificates, so you will still get the browser warning. The rate limits are significantly higher for the staging server, though.
If set to anything else, it is assumed to be the URL of an ACME server and the setup will create an ACME issuer for it.
See the advanced configuration for more configuration options.
If the given email address is already registered at letsencrypt, you can specify the private key of the associated user account with `landscape.cert-manager.privateKey`.
- Run `sow delete -A` to delete all components from your base Kubernetes cluster in inverse order.
- During the deletion, the corresponding contents in the directories `gen`, `export`, and `state` in your `landscape` directory are deleted automatically as well.
- Create a new service account in the `kube-system` namespace:

  ```bash
  kubectl -n kube-system create serviceaccount <service-account-name>
  ```

- Create a new clusterrolebinding with cluster administration permissions and bind it to the service account you just created:

  ```bash
  kubectl create clusterrolebinding <binding-name> --clusterrole=cluster-admin --serviceaccount=kube-system:<service-account-name>
  ```

- Obtain the name of the service account's authentication token and assign it to an environment variable:

  ```bash
  TOKENNAME=`kubectl -n kube-system get serviceaccount/<service-account-name> -o jsonpath='{.secrets[0].name}'`
  ```

- Obtain the value of the authentication token and assign it (decoded from base64) to an environment variable. These instructions assume `TOKEN` as the name of the environment variable:

  ```bash
  TOKEN=`kubectl -n kube-system get secret $TOKENNAME -o jsonpath='{.data.token}' | base64 --decode`
  ```

- Add the service account (and its authentication token) as a new user definition in the kubeconfig file:

  ```bash
  kubectl config set-credentials <service-account-name> --token=$TOKEN
  ```

- Set the user specified in the kubeconfig file for the current context to be the new service account user you created:

  ```bash
  kubectl config set-context --current --user=<service-account-name>
  ```

- Your kubeconfig should now contain the token to authenticate the service account. Example of the SA user in the `kubeconfig`:

  ```yaml
  users:
  - name: <service-account-name>
    user:
      token: <service-account-token>
  ```
These are the most important `sow` commands for deploying and deleting components:

| Command | Use |
|---|---|
| `sow <component>` | Same as `sow deploy <component>`. |
| `sow delete <component>` | Deletes a single component. |
| `sow delete -A` | Deletes all components in the inverse order. |
| `sow delete all` | Same as `sow delete -A`. |
| `sow delete -a <component>` | Deletes a component and all components that depend on it (including transitive dependencies). |
| `sow deploy <component>` | Deploys a single component. The deployment will fail if the dependencies have not been deployed before. |
| `sow deploy -A` | Deploys all components in the order specified by `sow order -A`. |
| `sow deploy -An` | Deploys all components that are not deployed yet. |
| `sow deploy all` | Same as `sow deploy -A`. |
| `sow deploy -a <component>` | Deploys a component and all of its dependencies. |
| `sow help` | Displays a command overview for `sow`. |
| `sow order -a <component>` | Displays all dependencies of a given component (in the order they should be deployed in). |
| `sow order -A` | Displays the order in which all components can be deployed. |
| `sow url` | Displays the URL for the Gardener dashboard (after a successful installation). |
After using `sow` to deploy the components, you will notice that there are new directories inside your `landscape` directory:

| Directory | Use |
|---|---|
| `gen` | Temporary files that are created during the deployment of components, for example, generated manifests. |
| `export` | Allows communication (exports and imports) between components. It also contains the kubeconfig for the virtual cluster that handles the Gardener resources. |
| `state` | Important state information of the components is stored here, for example, the terraform state and generated certificates. It is crucial that this directory is not deleted while the landscape is active. While the contents of the `export` and `gen` directories will be overwritten when a component is deployed again, the contents of `state` will be reused instead. In some cases, it is necessary to delete the state of a component before deploying it again, for example if you want to create new certificates for it. |