GCE integration (#317)
* update GCE terraform config

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* GCE integration

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* Adjust terraform config

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* Upgrade machinecontroller to v1.1.2

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* Be explicit about port range

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* Fix constant forwarding rule recreation

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* terraform trick

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* double b64 encode GOOGLE_SERVICE_ACCOUNT

as machine-controller expects it in such way

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* Use zoned terraform output for GCE

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* Update GCE docs

Signed-off-by: Artiom Diomin <kron82@gmail.com>

* gce: Few doc additions

Signed-off-by: Artiom Diomin <kron82@gmail.com>
kron4eg authored and kubermatic-bot committed Mar 29, 2019
1 parent cfac141 commit 1e88279
Showing 9 changed files with 498 additions and 48 deletions.
1 change: 1 addition & 0 deletions docs/environment_variables.md
@@ -32,3 +32,4 @@ In the following table you can find all configuration variables with support for
 | `VSPHERE_ADDRESS` | The address of the vSphere instance |
 | `VSPHERE_USERNAME` | The username of the vSphere user |
 | `VSPHERE_PASSWORD` | The password of the vSphere user |
+| `GOOGLE_CREDENTIALS` | GCE Service Account |
246 changes: 246 additions & 0 deletions docs/quickstart-gce.md
@@ -0,0 +1,246 @@
# How To Install Kubernetes On GCE Cluster Using KubeOne

In this quick start we're going to show how to get started with KubeOne on GCE.
We'll cover how to create the needed infrastructure using our example terraform
configuration and then install Kubernetes. Finally, we're going to show how to
destroy the cluster along with the infrastructure.

As a result, you'll get a highly available (HA) Kubernetes 1.14.0 cluster with
three control plane nodes and two worker nodes.

### Prerequisites

To follow this quick start, you'll need:

* `kubeone` installed, which can be done by following the `Installing KubeOne`
section of [the
README](https://github.com/kubermatic/kubeone/blob/master/README.md),
* `terraform` installed. The binaries for `terraform` can be found on the
[Terraform website](https://www.terraform.io/downloads.html)

## Setting Up Credentials

In order for Terraform to successfully create the infrastructure and for KubeOne
to install Kubernetes and create worker nodes, you need a [Service
Account](https://cloud.google.com/iam/docs/creating-managing-service-accounts)
with the appropriate permissions.
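
If you don't have one yet, a service account and JSON key can be created with the
`gcloud` CLI roughly as follows. This is a sketch: the account name `kubeone`, the
project ID `my-project-id`, and the broad `roles/editor` role are placeholder
assumptions, not values mandated by KubeOne; grant the narrowest roles that work
for your setup.

```bash
# Create a dedicated service account (the name is an example)
gcloud iam service-accounts create kubeone \
  --display-name "KubeOne" --project my-project-id

# Grant it permissions; roles/editor is a broad placeholder role
gcloud projects add-iam-policy-binding my-project-id \
  --member "serviceAccount:kubeone@my-project-id.iam.gserviceaccount.com" \
  --role "roles/editor"

# Download a JSON key to use as GOOGLE_CREDENTIALS below
gcloud iam service-accounts keys create path/to/your_service_account.json \
  --iam-account kubeone@my-project-id.iam.gserviceaccount.com
```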

Once you have the service account, you need to set the `GOOGLE_CREDENTIALS`
environment variable:

```bash
export GOOGLE_CREDENTIALS=$(cat path/to/your_service_account.json)
```

**Note:** The credentials are also deployed to the cluster to be used by
`machine-controller` for creating worker nodes.

## Creating Infrastructure

KubeOne is based on the Bring-Your-Own-Infra approach, which means that you have
to provide machines and needed resources yourself. To make this task easier, we
provide example Terraform scripts that you can use to get started. You're free to
use your own scripts or any other preferred approach.

The example terraform configuration for GCE is located in the
[`./examples/terraform/gce`](https://github.com/kubermatic/kubeone/tree/master/examples/terraform/gce)
directory.

**Note:** KubeOne comes with Terraform integration that is capable of reading
information about the infrastructure from Terraform output. If you decide not to
use our Terraform scripts but want to use Terraform integration, make sure
variable names in the output match variable names used by KubeOne.
Alternatively, if you decide not to use Terraform, you can provide needed
information about the infrastructure manually in the KubeOne configuration file.

First, we need to switch to the directory with Terraform scripts:

```bash
cd ./examples/terraform/gce
```

Before we can use Terraform to create the infrastructure for us, Terraform needs
to download the GCE provider plugin and set up its environment. This is done by
running the `init` command:

```bash
terraform init
```

**Note:** You need to run this command only once, before using the scripts for the first time.

You may want to configure the provisioning process by setting variables that
define the cluster name, GCP region, instance size, and similar. The easiest way
is to create a `terraform.tfvars` file and store the variables there. This file
is automatically read by Terraform.

```bash
nano terraform.tfvars
```

For the list of available settings along with their names please see the
[`variables.tf`](https://github.com/kubermatic/kubeone/blob/master/examples/terraform/gce/variables.tf)
file. You should consider setting:

* `cluster_name` (required) - prefix for cloud resources
* `project` (required) — GCP Project ID
* `region` (default: europe-west3)
* `ssh_public_key_file` (default: `~/.ssh/id_rsa.pub`) - path to your SSH public
key that's deployed on instances
* `control_plane_type` (default: n1-standard-1) - note that instances should
have at least 2 GB of RAM and 2 vCPUs for Kubernetes to work properly

The `terraform.tfvars` file can look like:

```
cluster_name = "demo"
project = "kubeone-demo-project"
region = "europe-west1"
```
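
If you also want to override the SSH key location or control plane machine type
listed above, the same file can hold those settings as well (the values below
are illustrative):

```
cluster_name = "demo"
project = "kubeone-demo-project"
region = "europe-west1"
ssh_public_key_file = "~/.ssh/id_rsa.pub"
control_plane_type = "n1-standard-2"
```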

Now that you've configured Terraform, you can use the `plan` command to see what
changes will be made:

```bash
terraform plan
```

Finally, if you agree with the changes, you can proceed and provision the
infrastructure:

```bash
terraform apply -var control_plane_target_pool_members_count=1
```

The `control_plane_target_pool_members_count` variable is needed in order to
bootstrap the control plane: it ensures that only the first control plane VM is
behind the load balancer while the cluster is being initialized. Once the
installation is done, it's recommended to include all control plane VMs in the
load balancer (covered later in this document).

Shortly after, you'll be asked to enter `yes` to confirm your intention to
provision the infrastructure.

Infrastructure provisioning takes around 5 minutes. Once it's done, you need to
save the Terraform output as JSON so KubeOne can parse it:

```bash
terraform output -json > tf.json
```
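
The resulting `tf.json` holds the `kubeone_api`, `kubeone_hosts` and
`kubeone_workers` outputs described in the example's
[README](https://github.com/kubermatic/kubeone/tree/master/examples/terraform/gce).
As a heavily abridged sketch of its shape (the nested field names and values
here are illustrative, not exact):

```
{
  "kubeone_api": {
    "sensitive": false,
    "type": "map",
    "value": { "endpoint": "<load balancer IP>" }
  },
  "kubeone_hosts": { ... },
  "kubeone_workers": { ... }
}
```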

## Installing Kubernetes

Now that you have the infrastructure ready, you can proceed with installing
Kubernetes using KubeOne.

Before you start you'll need a configuration file that defines how Kubernetes
will be installed, e.g. what version will be used and what features will be
enabled. For the configuration file reference see
[`config.yaml.dist`](https://github.com/kubermatic/kubeone/blob/master/config.yaml.dist).

To get started you can use the following configuration. It'll install Kubernetes
1.14.0 and create 2 worker nodes. KubeOne automatically populates information
about the region and network for worker nodes from the Terraform output.
Alternatively, you can set that information manually. As KubeOne uses
[Kubermatic
`machine-controller`](https://github.com/kubermatic/machine-controller) for
creating worker nodes, see the [example
manifests](https://github.com/kubermatic/machine-controller/tree/master/examples)
for available options.

```yaml
name: demo
versions:
  kubernetes: '1.14.0'
provider:
  name: 'gce'
```

Finally, we're going to install Kubernetes by using the `install` command and
providing the configuration file and the Terraform output:

```bash
kubeone install config.yaml --tfjson tf.json
```

The installation process takes some time, usually 5-10 minutes. The output
should look similar to the following:

```
INFO[17:24:41 EET] Installing prerequisites…
INFO[17:24:42 EET] Determine operating system… node=35.198.117.209
INFO[17:24:42 EET] Determine operating system… node=35.246.186.88
INFO[17:24:42 EET] Determine operating system… node=35.198.129.205
INFO[17:24:42 EET] Determine hostname… node=35.198.117.209
INFO[17:24:42 EET] Creating environment file… node=35.198.117.209
INFO[17:24:42 EET] Installing kubeadm… node=35.198.117.209 os=ubuntu
INFO[17:24:43 EET] Deploying configuration files… node=35.198.117.209 os=ubuntu
INFO[17:24:43 EET] Determine hostname… node=35.246.186.88
INFO[17:24:43 EET] Creating environment file… node=35.246.186.88
INFO[17:24:43 EET] Installing kubeadm… node=35.246.186.88 os=ubuntu
INFO[17:24:43 EET] Determine hostname… node=35.198.129.205
INFO[17:24:43 EET] Deploying configuration files… node=35.246.186.88 os=ubuntu
INFO[17:24:43 EET] Creating environment file… node=35.198.129.205
INFO[17:24:43 EET] Installing kubeadm… node=35.198.129.205 os=ubuntu
INFO[17:24:43 EET] Deploying configuration files… node=35.198.129.205 os=ubuntu
INFO[17:24:44 EET] Generating kubeadm config file…
INFO[17:24:45 EET] Configuring certs and etcd on first controller…
INFO[17:24:45 EET] Ensuring Certificates… node=35.246.186.88
INFO[17:24:47 EET] Downloading PKI files… node=35.246.186.88
INFO[17:24:49 EET] Creating local backup… node=35.246.186.88
INFO[17:24:49 EET] Deploying PKI…
INFO[17:24:49 EET] Uploading files… node=35.198.117.209
INFO[17:24:49 EET] Uploading files… node=35.198.129.205
INFO[17:24:52 EET] Configuring certs and etcd on consecutive controller…
INFO[17:24:52 EET] Ensuring Certificates… node=35.198.117.209
INFO[17:24:52 EET] Ensuring Certificates… node=35.198.129.205
INFO[17:24:54 EET] Initializing Kubernetes on leader…
INFO[17:24:54 EET] Running kubeadm… node=35.246.186.88
INFO[17:25:09 EET] Joining controlplane node…
INFO[17:26:36 EET] Copying Kubeconfig to home directory… node=35.198.117.209
INFO[17:26:36 EET] Copying Kubeconfig to home directory… node=35.246.186.88
INFO[17:26:36 EET] Copying Kubeconfig to home directory… node=35.198.129.205
INFO[17:26:37 EET] Building Kubernetes clientset…
INFO[17:26:39 EET] Applying canal CNI plugin…
INFO[17:26:43 EET] Installing machine-controller…
INFO[17:26:46 EET] Installing machine-controller webhooks…
INFO[17:26:47 EET] Waiting for machine-controller to come up…
INFO[17:27:12 EET] Creating worker machines…
```

Once the installation is finished, run `terraform apply` again to include the
two remaining control plane VMs in the load balancer:
```bash
terraform apply
```

KubeOne automatically downloads the Kubeconfig file for the cluster. It's named
`cluster-name-kubeconfig`, after the cluster name from your configuration. You
can use it with kubectl, e.g. `kubectl --kubeconfig cluster-name-kubeconfig`, or
export the `KUBECONFIG` environment variable:
```bash
export KUBECONFIG=$PWD/cluster-name-kubeconfig
```
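
With the kubeconfig in place you can verify the cluster; for example, listing
the nodes should show three control plane and two worker nodes once the worker
machines have been provisioned:

```bash
kubectl get nodes
```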

## Deleting The Cluster

Before deleting a cluster, you should clean up all MachineDeployment objects, so
that all worker nodes are deleted. You can do this with the `kubeone reset` command:

```bash
kubeone reset config.yaml --tfjson tf.json
```

This command will wait for all worker nodes to be gone. Once it's done, you can
proceed and destroy the GCE infrastructure using Terraform:

```bash
terraform destroy
```

You'll be asked to enter `yes` to confirm your intention to destroy the cluster.

Congratulations! You're now running a highly available Kubernetes 1.14.0 cluster
with three control plane nodes and two worker nodes. If you want to learn more
about KubeOne and its features, such as [upgrades](upgrading_cluster.md), make
sure to check our
[documentation](https://github.com/kubermatic/kubeone/tree/master/docs).
40 changes: 40 additions & 0 deletions examples/terraform/gce/README.md
@@ -0,0 +1,40 @@
# Terraform

## GCE Provider configuration

### Credentials

As described in the [Google provider
reference](https://www.terraform.io/docs/providers/google/provider_reference.html#configuration-reference),
one of the following environment variables must be set:
* `GOOGLE_CREDENTIALS`
* `GOOGLE_CLOUD_KEYFILE_JSON`
* `GCLOUD_KEYFILE_JSON`
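
For example, assuming a service account JSON key file on disk:

```bash
export GOOGLE_CREDENTIALS=$(cat path/to/your_service_account.json)
```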

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|:----:|:-----:|:-----:|
| cluster\_name | prefix for cloud resources | string | n/a | yes |
| cluster\_network\_cidr | Cluster network subnet cidr | string | `"10.240.0.0/24"` | no |
| control\_plane\_count | Number of instances | string | `"3"` | no |
| control\_plane\_image\_family | Image family to use for provisioning instances | string | `"ubuntu-1804-lts"` | no |
| control\_plane\_image\_project | Project of the image to use for provisioning instances | string | `"ubuntu-os-cloud"` | no |
| control\_plane\_type | GCE instance type | string | `"n1-standard-1"` | no |
| control\_plane\_volume\_size | Size of the boot volume, in GB | string | `"100"` | no |
| project | Project to be used for all resources | string | n/a | yes |
| region | GCP region to speak to | string | `"europe-west3"` | no |
| ssh\_agent\_socket | SSH agent socket; by default taken from `$SSH_AUTH_SOCK` | string | `"env:SSH_AUTH_SOCK"` | no |
| ssh\_port | SSH port | string | `"22"` | no |
| ssh\_public\_key\_file | SSH public key file | string | `"~/.ssh/id_rsa.pub"` | no |
| ssh\_username | Username to provision with the ssh_public_key_file | string | `"kubeadmin"` | no |
| workers\_type | GCE instance type | string | `"n1-standard-1"` | no |
| workers\_volume\_size | Size of the boot volume, in GB | string | `"100"` | no |

## Outputs

| Name | Description |
|------|-------------|
| kubeone\_api | Kubernetes API load balancer |
| kubeone\_hosts | control plane nodes |
| kubeone\_workers | worker definitions translated into MachineDeployment Cluster-API objects |

29 changes: 25 additions & 4 deletions examples/terraform/gce/main.tf
@@ -38,6 +38,7 @@ resource "google_compute_network" "network" {
 resource "google_compute_subnetwork" "subnet" {
   name          = "${var.cluster_name}-subnet"
   network       = "${google_compute_network.network.self_link}"
+  region        = "${var.region}"
   ip_cidr_range = "${var.cluster_network_cidr}"
 }

@@ -110,7 +111,9 @@ resource "google_compute_target_pool" "control_plane_pool" {
   name = "${var.cluster_name}-control-plane"
 
   instances = [
-    "${google_compute_instance.control_plane.*.self_link}",
+    "${slice(
+      "${google_compute_instance.control_plane.*.self_link}",
+      0, "${var.control_plane_target_pool_members_count}")}",
   ]
 
   health_checks = [
@@ -121,8 +124,8 @@
 resource "google_compute_forwarding_rule" "control_plane" {
   name       = "${var.cluster_name}-apiserver"
   target     = "${google_compute_target_pool.control_plane_pool.self_link}"
-  port_range = "6443"
-  ip_address = "${google_compute_address.lb_ip.self_link}"
+  port_range = "6443-6443"
+  ip_address = "${google_compute_address.lb_ip.address}"
 }
 
 resource "google_compute_instance" "control_plane" {
@@ -132,6 +135,11 @@ resource "google_compute_instance" "control_plane" {
   machine_type = "${var.control_plane_type}"
   zone         = "${data.google_compute_zones.available.names[count.index % local.zones_count]}"
 
+  # Changing the machine_type, min_cpu_platform, or service_account on an
+  # instance requires stopping it. To acknowledge this,
+  # allow_stopping_for_update = true is required
+  allow_stopping_for_update = true
+
   boot_disk {
     initialize_params {
       size = "${var.control_plane_volume_size}"
@@ -141,13 +149,26 @@ resource "google_compute_instance" "control_plane" {
 
   network_interface {
     subnetwork = "${google_compute_subnetwork.subnet.self_link}"
+
+    access_config = {
+      nat_ip = ""
+    }
   }
 
   metadata = {
     sshKeys = "${var.ssh_username}:${file(var.ssh_public_key_file)}"
   }
 
+  # https://cloud.google.com/sdk/gcloud/reference/alpha/compute/instances/set-scopes#--scopes
+  # listing of possible scopes
   service_account {
-    scopes = ["compute-rw", "storage-ro"]
+    scopes = [
+      "compute-rw",
+      "logging-write",
+      "monitoring-write",
+      "service-control",
+      "service-management",
+      "storage-ro",
+    ]
   }
 }
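
The `slice(list, from, to)` interpolation used in the target pool above returns
the elements from index `from` (inclusive) up to `to` (exclusive). With
`control_plane_target_pool_members_count = 1`, only the first control plane
instance is added to the load balancer during bootstrap. A minimal standalone
sketch of the same trick, with illustrative names (Terraform 0.11 syntax):

```
variable "instance_links" {
  default = ["link-a", "link-b", "link-c"]
}

variable "members_count" {
  default = 1
}

# Evaluates to ["link-a"]: only the first members_count elements remain.
output "pool_members" {
  value = "${slice(var.instance_links, 0, var.members_count)}"
}
```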