This repository has been archived by the owner on Sep 30, 2020. It is now read-only.

EKS Support for kube-aws #1434

Closed
tylernisonoff opened this issue Aug 22, 2018 · 7 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@tylernisonoff

From the kube-aws Amazon blog post:

kube-aws is specialized for AWS users. Naturally, our response to the recent introduction of AWS EKS, AWS's long-awaited managed Kubernetes service, is that we're planning to add first-class EKS support to kube-aws.

The EKS integration would look like this: EKS manages the Kubernetes control plane, which consists of etcd and controller nodes. All the etcd members and the Kubernetes system components like apiserver and controller-manager run on EKS-managed nodes. kube-aws, however, manages only worker node pools. Compared to etcd and controller nodes, worker nodes tend to have more varied requirements such as auditing, logging, network security, and IAM permissions, because they may run user-facing containers. The integration would keep your Kubernetes operational cost at a minimum thanks to EKS, while keeping maximum flexibility thanks to kube-aws-managed worker nodes.

We've been using kube-aws for over 2 years now but are exploring EKS. I'd be interested in helping bridge the two tools if this is a direction kube-aws would like to go.

Has there been any work / planning on this topic? What are the next steps here?

@mumoshu
Contributor

mumoshu commented Aug 24, 2018

Hey!
I'm definitely serious about adding the EKS support.

Basically I'd like an option in cluster.yaml to turn on the EKS integration.
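For illustration only (nothing in this thread fixes the actual schema), the toggle could look similar to the existing `kubernetes.networking.amazonVPC` flag shown further below:

```yaml
# Hypothetical cluster.yaml fragment; the key names are illustrative only.
eks:
  enabled: true
  # Name of the EKS-managed control plane that kube-aws worker node pools join.
  clusterName: my-eks-cluster
```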

The EKS integration would bootstrap an EKS cluster, and then worker nodes, via cfn. This should be implemented just by adding a big `{{ if ... }}` block that defines an EKS cluster into stack-template.json for kube-aws's control stack.
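For a sense of what that block could contain, here is a sketch using the real `AWS::EKS::Cluster` CloudFormation resource type; the template fields and logical resource names (`.Eks.Enabled`, `EksServiceRole`, `Subnet0`, `SecurityGroupWorker`) are illustrative guesses, not kube-aws's actual schema:

```json
{{/* Sketch only: .Eks.Enabled, EksServiceRole, Subnet0/1 and SecurityGroupWorker are illustrative names. */}}
{{ if .Eks.Enabled }}
"EksCluster": {
  "Type": "AWS::EKS::Cluster",
  "Properties": {
    "Name": "{{ .ClusterName }}",
    "RoleArn": { "Fn::GetAtt": ["EksServiceRole", "Arn"] },
    "ResourcesVpcConfig": {
      "SubnetIds": [ { "Ref": "Subnet0" }, { "Ref": "Subnet1" } ],
      "SecurityGroupIds": [ { "Ref": "SecurityGroupWorker" } ]
    }
  }
},
{{ end }}
```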

On each worker node we provision with kube-aws, we need:

  • a specially authored kubelet.service
  • a kubeconfig for the kubelet
  • the cluster CA cert fetched via the AWS EKS API
  • the aws-iam-authenticator binary installed on the node

to connect the node to the control plane managed by EKS.
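For the kubeconfig piece, the standard EKS worker pattern has the kubelet obtain tokens through the aws-iam-authenticator exec plugin; a minimal sketch, with illustrative paths and placeholders:

```yaml
# Illustrative worker kubeconfig; file paths and placeholder values are assumptions,
# not something this thread pins down.
apiVersion: v1
kind: Config
clusters:
- name: eks
  cluster:
    server: https://<eks-cluster-endpoint>                  # from the EKS DescribeCluster API
    certificate-authority: /etc/kubernetes/ssl/eks-ca.pem   # cluster CA cert, also from DescribeCluster
contexts:
- name: kubelet
  context:
    cluster: eks
    user: kubelet
current-context: kubelet
users:
- name: kubelet
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: /opt/bin/aws-iam-authenticator
      args: ["token", "-i", "<eks-cluster-name>"]
```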

We, the k8s-on-AWS community as a whole, already have awesome tools for provisioning EKS clusters, such as eksctl and ekstrap. They can be great references for the implementation.

The next step would be to build a POC 😉
Would you mind contributing it, by any chance?

@mumoshu
Contributor

mumoshu commented Sep 17, 2018

Perhaps I'll be refactoring some parts of kube-aws so that it can be used to add Container Linux-based worker nodes to EKS clusters, including ones created by eksctl.

@mumoshu
Contributor

mumoshu commented Sep 29, 2018

We also need to integrate amazon-vpc-cni-k8s into kube-aws.

mumoshu added a commit to mumoshu/kube-aws that referenced this issue Oct 1, 2018
Add the below configuration to your `cluster.yaml`, and `amazon-vpc-cni-k8s` is installed as a daemonset that assigns VPC private IP addresses to your K8s pods.

```yaml
kubernetes:
  networking:
    amazonVPC:
      enabled: true
```

Controller nodes have a k8s manifest file for installing the amazon-vpc-cni-k8s daemonset. It adds an init container to copy all the CNI binaries bundled into the hyperkube image; otherwise amazon-vpc-cni-k8s doesn't work due to the missing `loopback` CNI binary.

kubelet on worker and controller nodes now has appropriate `--node-ip` and `--max-pods` settings to make amazon-vpc-cni-k8s work reliably.
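For reference, those two values are typically derived as below on a worker. This is a sketch rather than the actual userdata; the `--max-pods` figure follows the ENI formula ENIs x (IPs per ENI - 1) + 2, e.g. 3 x 9 + 2 = 29 for an m5.large:

```bash
# Sketch only, not the actual kube-aws userdata.
# --node-ip: the instance's VPC-private IPv4 address, from instance metadata.
NODE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# --max-pods: bounded by the instance type's ENI/IP limits (29 for m5.large).
MAX_PODS=29

exec /usr/lib/coreos/kubelet-wrapper \
  --node-ip="${NODE_IP}" \
  --max-pods="${MAX_PODS}" \
  --network-plugin=cni
```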

This is one of the prerequisites for the EKS support.

ref kubernetes-retired#1434
mumoshu added a commit to mumoshu/kube-aws that referenced this issue Nov 9, 2018
This is the successor of kubernetes-retired#1473. I'll keep that one for recording purposes.

Resolves kubernetes-retired#1469

 ## What do we get from this?

- Better, documented kube-aws plugin system. Use builtin plugins or write your own with a single file called `plugin.yaml`, to customize every aspect of kube-aws clusters.
- aws-iam-authenticator for kubelet auth, as a kube-aws plugin
- (near future) EKS support kubernetes-retired#1434 powered by the plugin system

 ## Story

Take this as a continuation of kubernetes-retired#509 (comment)

Towards the EKS integration (kubernetes-retired#1434), I have been working on the support for aws-iam-authenticator for kubelet authentication. And I was very annoyed that I was forced to add small snippets of bash, cloud-config, and go into several diverse locations of kube-aws's code-base, plus adding the feature to somehow obtain and transfer the `aws-iam-authenticator` "binary" onto kube-aws worker and controller nodes.

To summarize, things that I had to consider were:

- Creating a single keypair whose cert has two purposes - CA and Server Authn - encrypting it inside the `credentials/` dir, and transferring it to worker and controller nodes
- Somehow downloading and installing `aws-iam-authenticator` binary onto nodes
- Installing the aws-iam-authenticator deployment onto the k8s cluster
- Creating a configmap for aws-iam-authenticator that refers to IAM roles that are managed by kube-aws (templating the configmap with the results of CloudFormation functions? I'm tired of writing a long `{{ "Fn::Join": ["", ["long", "bash", "script", "separated and enclosed within double-quotes", "per", "line"]]}}` thing.)

Being pretty annoyed, recalling kubernetes-retired#751 kubernetes-retired#791  kubernetes-retired#509, I decided to enhance the secret feature of kube-aws - "plugins" - to cover the use-case. It is resulting in a major refactoring of kube-aws.

It doesn't affect existing features of kube-aws at all. But it does affect public members exposed by several kube-aws go packages, and how golang packages are organized.

So if you use kube-aws as a golang library, this refactoring may affect you. Please feel free to send any feedback on that.

 # Notable user-visible changes

 ## Improved plugin system

Create a `plugin.yaml` at `PROJECT_ROOT/plugins/NAME/plugin.yaml` and edit it according to your needs. [The aws-iam-authenticator plugin](https://github.com/mumoshu/kube-aws/blob/aws-iam-node-auth/builtin/files/plugins/aws-iam-authenticator/plugin.yaml) would be the most informative source for learning how to write it.

It can be used for:

 ### Existing features

- Installing Kubernetes manifests
- Installing files selected from the `plugins/NAME` directory onto nodes
- Injecting custom CloudFormation resources into kube-aws managed stacks

 ### New features

 #### Automatically turning yaml templates to cfn expressions

Imagine you have a YAML template file that looks like the following in your plugin's directory:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
{{ range $i, $role := .Config.IAMRoleARNs }},
  - rolearn: {{$role}}
    username: system:node:{{`{{EC2PrivateDNSName}}`}}
    groups:
    - system:bootstrappers
    - system:nodes
{{ end }}
```

First, this is rendered with golang's text/template:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
  - rolearn: {"Fn::GetAtt": ["IAMRoleController", "Arn"]}
    username: system:node:{{`{{EC2PrivateDNSName}}`}}
    groups:
    - system:bootstrappers
    - system:nodes
```

And this is where kube-aws becomes fancy. It detects `{"Fn::GetAtt": ["IAMRoleController", "Arn"]}`, which looks like a CFN stack template expression, and converts it to:

```json
{ "Fn::Join": ["", [
"apiVersion: v1\n",
"kind: ConfigMap\n",
"metadata:\n",
"  name: aws-auth\n",
"  namespace: kube-system\n",
"data:\n",
"  mapRoles: |\n",
"  - rolearn: ", {"Fn::GetAtt": ["IAMRoleController", "Arn"]}, "\n",
"    username: system:node:{{`{{EC2PrivateDNSName}}`}}\n",
"    groups:\n",
"    - system:bootstrappers\n",
"    - system:nodes\n",
]]}
```

This is then injected into `AWS::CloudFormation::Init` so that `cfn-init` can render the file with the `{"Fn::GetAtt": ["IAMRoleController", "Arn"]}` replaced with the actual ARN of the IAM role managed by kube-aws.

```json
"Metadata" : {
  "AWS::CloudFormation::Init" : {
    "configSets: {
      "path-to-file": {
        "files": {
          "/path/to/file": { "Fn::Join": ["", [
            "apiVersion: v1\n",
            "kind: ConfigMap\n",
            "metadata:\n",
            "  name: aws-auth\n",
            "  namespace: kube-system\n",
            "data:\n",
            "  mapRoles: |\n",
            "  - rolearn: ", {"Fn::GetAtt": ["IAMRoleController", "Arn"]}, "\n",
            "    username: system:node:{{`{{EC2PrivateDNSName}}`}}\n",
            "    groups:\n",
            "    - system:bootstrappers\n",
            "    - system:nodes\n",
            ]]}
        }
      }
    }
  }
}
```
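On the node, `cfn-init` then reads that metadata and writes the file with the ARN already resolved by CloudFormation. An illustrative invocation (the logical resource name and the config set are placeholders, not necessarily the ones kube-aws generates):

```bash
# Illustrative only; the logical resource name is a placeholder.
cfn-init -v \
  --stack "${STACK_NAME}" \
  --resource Controllers \
  --configsets path-to-file \
  --region "${AWS_REGION}"
```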

 #### Automatically downloading files from source URLs

```yaml
spec:
  cluster:
    machine:
      roles:
        controller:
          files:
          - path: "/opt/bin/heptio-authenticator-aws"
            permissions: 0755
            type: binary
            source:
              path: "files/aws-iam-auth/opt/bin/heptio-authenticator-aws"
              url: "https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v0.3.0/heptio-authenticator-aws_0.3.0_linux_amd64"
```

 #### Defining additional keypairs used by k8s apps that the plugin supports:

```yaml
spec:
  cluster:
    pki:
      keypairs:
      - name: aws-iam-authenticator
        dnsNames:
        - localhost
        ipAddresses:
        - 0.0.0.0
        - 127.0.0.1
        duration: 8760h
        usages:
        - ca
        - server
```

For example, the above plugin.yaml results in `kube-aws render credentials` creating `plugins/aws-iam-authenticator/credentials/aws-iam-authenticator{-key,}.pem` files, which are then installed on nodes for use by the aws-iam-authenticator pods.

 ## `kube-aws render stack`

It now renders a set of files under the `plugins/aws-iam-authenticator` directory:

- `plugin.yaml`
- `manifests/*.yaml` for the daemonset and the configmap
- `files/*.yaml` for kubeconfig files for the webhook authentication and kubelets

 ## `kube-aws render credentials`

It now renders a set of credentials (keypairs) as defined in `plugins/NAME/plugin.yaml` files.
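Taken together, a typical render-and-deploy flow (commands as named in these headings; exact flags may differ) would be:

```bash
# Rough workflow sketch; flags omitted.
kube-aws render stack         # stack templates plus plugins/NAME/{manifests,files}
kube-aws render credentials   # cluster PKI plus plugins/NAME/credentials/*.pem
kube-aws validate             # sanity-check the generated CloudFormation stacks
kube-aws up                   # create the stacks
```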

 # Notable internal changes

 ## Single place to store embedded files

kube-aws after this work will have a single place to store files embedded into kube-aws binaries. Here's how it looks: https://github.com/mumoshu/kube-aws/tree/aws-iam-node-auth/builtin/files.

```
$ tree builtin/files
builtin/files
├── cluster.yaml.tmpl
├── credentials
├── etcdadm
│   ├── Makefile
│   ├── README.md
│   ├── etcdadm
│   └── test
├── kubeconfig.tmpl
├── manifests
├── plugins
│   └── aws-iam-authenticator
│       ├── files
│       │   ├── authentication-token-webhook-config.yaml
│       │   ├── controller-kubeconfig.yaml
│       │   └── worker-kubeconfig.yaml
│       ├── manifests
│       │   ├── aws-auth-cm.yaml
│       │   └── daemonset.yaml
│       └── plugin.yaml
├── stack-templates
│   ├── control-plane.json.tmpl
│   ├── etcd.json.tmpl
│   ├── network.json.tmpl
│   ├── node-pool.json.tmpl
│   └── root.json.tmpl
└── userdata
    ├── cloud-config-controller
    ├── cloud-config-etcd
    └── cloud-config-worker

9 directories, 20 files
```

 # Changelog since 1473

The changes between this and kubernetes-retired#1473 are as follows - just reviving accidentally removed files:

```
$ git diff aws-iam-node-auth
diff --git a/.gitignore b/.gitignore
index 62e8dc26..b33d23d0 100644
--- a/.gitignore
+++ b/.gitignore
@@ -10,6 +10,8 @@ kube-aws
 coverage.txt
 profile.out
 test-result.json
+pkg/model/cache
+builtin/a_builtin-packr.go

 # gitbook docs
 _book
diff --git a/hack/relnote b/hack/relnote
new file mode 100755
index 00000000..f01e9fd6
--- /dev/null
+++ b/hack/relnote
@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+
+go get golang.org/x/oauth2
+go get golang.org/x/net/context
+go get github.com/google/go-github/github
+
+VERSION=$(hack/version) go run hack/relnote.go
diff --git a/hack/version b/hack/version
new file mode 100755
index 00000000..0f4c6780
--- /dev/null
+++ b/hack/version
@@ -0,0 +1,6 @@
+#!/usr/bin/env bash
+
+COMMIT=$(git rev-parse HEAD)
+TAG=$(git describe --exact-match --abbrev=0 --tags "${COMMIT}" 2> /dev/null || true)
+
+echo "${TAG}"
diff --git a/kube-aws-bot-git-ssh-key.enc b/kube-aws-bot-git-ssh-key.enc
new file mode 100644
index 00000000..fef826ba
```
mumoshu added a commit that referenced this issue Nov 12, 2018
* update vendor

* refactoring kube-aws / experimental IAM-based kubelet auth

kevtaylor pushed a commit to HotelsDotCom/kube-aws that referenced this issue Jan 9, 2019
kevtaylor pushed a commit to HotelsDotCom/kube-aws that referenced this issue Jan 9, 2019
…s-retired#1490)

* update vendor

* refactoring kube-aws / experimental IAM-based kubelet auth

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 25, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 25, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
