Add tests #16

Closed
wants to merge 11 commits into from
10 changes: 5 additions & 5 deletions README.md
@@ -201,21 +201,21 @@ Available targets:

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 0.13 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.2 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 0.14.11 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.4.1 |

## Providers

| Name | Version |
|------|---------|
| <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.2 |
| <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.4.1 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| <a name="module_eks_iam_policy"></a> [eks\_iam\_policy](#module\_eks\_iam\_policy) | cloudposse/iam-policy/aws | 0.2.3 |
| <a name="module_eks_iam_role"></a> [eks\_iam\_role](#module\_eks\_iam\_role) | cloudposse/eks-iam-role/aws | 0.10.3 |
| <a name="module_eks_iam_policy"></a> [eks\_iam\_policy](#module\_eks\_iam\_policy) | cloudposse/iam-policy/aws | 0.3.0 |
| <a name="module_eks_iam_role"></a> [eks\_iam\_role](#module\_eks\_iam\_role) | cloudposse/eks-iam-role/aws | 0.11.0 |
| <a name="module_this"></a> [this](#module\_this) | cloudposse/label/null | 0.25.0 |

## Resources
10 changes: 5 additions & 5 deletions docs/terraform.md
@@ -3,21 +3,21 @@

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 0.13 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.2 |
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 0.14.11 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.4.1 |

## Providers

| Name | Version |
|------|---------|
| <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.2 |
| <a name="provider_helm"></a> [helm](#provider\_helm) | >= 2.4.1 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| <a name="module_eks_iam_policy"></a> [eks\_iam\_policy](#module\_eks\_iam\_policy) | cloudposse/iam-policy/aws | 0.2.3 |
| <a name="module_eks_iam_role"></a> [eks\_iam\_role](#module\_eks\_iam\_role) | cloudposse/eks-iam-role/aws | 0.10.3 |
| <a name="module_eks_iam_policy"></a> [eks\_iam\_policy](#module\_eks\_iam\_policy) | cloudposse/iam-policy/aws | 0.3.0 |
| <a name="module_eks_iam_role"></a> [eks\_iam\_role](#module\_eks\_iam\_role) | cloudposse/eks-iam-role/aws | 0.11.0 |
| <a name="module_this"></a> [this](#module\_this) | cloudposse/label/null | 0.25.0 |

## Resources
12 changes: 8 additions & 4 deletions examples/complete/fixtures.us-east-2.tfvars
@@ -10,9 +10,10 @@ name = "helm"

## eks related

# Dynamic subnets must span at least 2 availability zones
availability_zones = ["us-east-2a", "us-east-2b"]

kubernetes_version = "1.19"
kubernetes_version = "1.21"

oidc_provider_enabled = true

@@ -28,12 +28,15 @@ max_size = 3

min_size = 2

disk_size = 20

kubernetes_labels = {}

cluster_encryption_config_enabled = true

# no role to assume
kube_exec_auth_enabled = false
# use data auth
kube_data_auth_enabled = true

## helm related

repository = "https://charts.helm.sh/incubator"
@@ -44,7 +48,7 @@ chart_version = "0.2.5"

create_namespace = true

kubernetes_namespace = "echo"
kubernetes_namespace = "aws-node-termination-handler"

atomic = true

27 changes: 21 additions & 6 deletions examples/complete/main-eks.tf
Expand Up @@ -4,7 +4,7 @@ provider "aws" {

module "label" {
source = "cloudposse/label/null"
version = "0.24.1"
version = "0.25.0"
attributes = ["cluster"]

context = module.this.context
@@ -34,7 +34,7 @@ locals {

module "vpc" {
source = "cloudposse/vpc/aws"
version = "0.21.1"
version = "0.28.1"

cidr_block = "172.16.0.0/16"
tags = local.tags
@@ -44,7 +44,7 @@ module "vpc" {

module "subnets" {
source = "cloudposse/dynamic-subnets/aws"
version = "0.38.0"
version = "0.39.8"

availability_zones = var.availability_zones
vpc_id = module.vpc.vpc_id
@@ -61,7 +61,7 @@ module "subnets" {

module "eks_cluster" {
source = "cloudposse/eks-cluster/aws"
version = "0.39.0"
version = "0.44.0"

region = var.region
vpc_id = module.vpc.vpc_id
@@ -72,6 +72,22 @@ module "eks_cluster" {
enabled_cluster_log_types = var.enabled_cluster_log_types
cluster_log_retention_period = var.cluster_log_retention_period

create_eks_service_role = true

# kube_data_auth_enabled = var.kube_data_auth_enabled
# exec_auth is more reliable than data_auth when the aws CLI is available
# Details at https://github.com/cloudposse/terraform-aws-eks-cluster/releases/tag/0.42.0
# kube_exec_auth_enabled = !var.kubeconfig_file_enabled
# If using the `exec` method (recommended) for authentication, provide an explicit
# IAM role ARN to exec as for authentication to the EKS cluster.
# kube_exec_auth_role_arn = var.kube_exec_auth_role_arn
# kube_exec_auth_role_arn_enabled = var.kube_exec_auth_role_arn_enabled
# Path to KUBECONFIG file to use to access the EKS cluster
# kubeconfig_path = var.kubeconfig_file
# kubeconfig_path_enabled = var.kubeconfig_file_enabled

aws_auth_yaml_strip_quotes = true

cluster_encryption_config_enabled = var.cluster_encryption_config_enabled
cluster_encryption_config_kms_key_id = var.cluster_encryption_config_kms_key_id
cluster_encryption_config_kms_key_enable_key_rotation = var.cluster_encryption_config_kms_key_enable_key_rotation
@@ -96,7 +112,7 @@ data "null_data_source" "wait_for_cluster_and_kubernetes_configmap" {

module "eks_node_group" {
source = "cloudposse/eks-node-group/aws"
version = "0.19.0"
version = "0.27.0"

subnet_ids = module.subnets.private_subnet_ids
cluster_name = data.null_data_source.wait_for_cluster_and_kubernetes_configmap.outputs["cluster_name"]
@@ -105,7 +121,6 @@ module "eks_node_group" {
min_size = var.min_size
max_size = var.max_size
kubernetes_labels = var.kubernetes_labels
disk_size = var.disk_size

context = module.this.context
}
21 changes: 7 additions & 14 deletions examples/complete/main.tf
@@ -1,22 +1,10 @@
data "aws_eks_cluster_auth" "kubernetes" {
name = module.eks_cluster.eks_cluster_id
}

provider "helm" {
kubernetes {
host = module.eks_cluster.eks_cluster_endpoint
token = data.aws_eks_cluster_auth.kubernetes.token
cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)
}
locals {
enabled = module.this.enabled
}

module "helm_release" {
source = "../../"

# source = "cloudposse/helm-release/aws"
# Cloud Posse recommends pinning every module to a specific version
# version = "x.x.x"

repository = var.repository
chart = var.chart
chart_version = var.chart_version
@@ -32,4 +20,9 @@ module "helm_release" {
values = [
file("${path.module}/values.yaml")
]

depends_on = [
module.eks_cluster,
module.eks_node_group,
]
}
138 changes: 138 additions & 0 deletions examples/complete/provider-helm.tf
@@ -0,0 +1,138 @@
##################
#
# This file is a drop-in to provide a helm provider.
#
# All the following variables are just about configuring the Kubernetes provider
# to be able to modify the EKS cluster. There are so many options because, at
# various times, each one of them has had problems, so we give you a choice.
#
# The reason there are so many "enabled" inputs rather than automatically
# detecting whether or not they are enabled based on the value of the input
# is that any logic based on input values requires the values to be known during
# the "plan" phase of Terraform, and often they are not, which causes problems.
#
variable "kubeconfig_file_enabled" {
type = bool
default = false
description = "If `true`, configure the Kubernetes provider with `kubeconfig_file` and use that kubeconfig file for authenticating to the EKS cluster"
}

variable "kubeconfig_file" {
type = string
default = ""
description = "The Kubernetes provider `config_path` setting to use when `kubeconfig_file_enabled` is `true`"
}

variable "kubeconfig_context" {
type = string
default = ""
description = "Context to choose from the Kubernetes kube config file"
}

variable "kube_data_auth_enabled" {
type = bool
default = false
description = <<-EOT
If `true`, use an `aws_eks_cluster_auth` data source to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled` or `kube_exec_auth_enabled`.
EOT
}

variable "kube_exec_auth_enabled" {
type = bool
default = true
description = <<-EOT
If `true`, use the Kubernetes provider `exec` feature to execute `aws eks get-token` to authenticate to the EKS cluster.
Disabled by `kubeconfig_file_enabled`, overrides `kube_data_auth_enabled`.
EOT
}

variable "kube_exec_auth_role_arn" {
type = string
default = ""
description = "The role ARN for `aws eks get-token` to use"
}

variable "kube_exec_auth_role_arn_enabled" {
type = bool
default = true
description = "If `true`, pass `kube_exec_auth_role_arn` as the role ARN to `aws eks get-token`"
}

variable "kube_exec_auth_aws_profile" {
type = string
default = ""
description = "The AWS config profile for `aws eks get-token` to use"
}

variable "kube_exec_auth_aws_profile_enabled" {
type = bool
default = false
description = "If `true`, pass `kube_exec_auth_aws_profile` as the `profile` to `aws eks get-token`"
}

variable "kubeconfig_exec_auth_api_version" {
type = string
default = "client.authentication.k8s.io/v1alpha1"
description = "The Kubernetes API version of the credentials returned by the `exec` auth plugin"
}


locals {
kubeconfig_file_enabled = local.enabled && var.kubeconfig_file_enabled
kube_exec_auth_enabled = local.kubeconfig_file_enabled ? false : local.enabled && var.kube_exec_auth_enabled
kube_data_auth_enabled = local.kube_exec_auth_enabled ? false : local.enabled && var.kube_data_auth_enabled

# Eventually we might try to get this from an environment variable
kubeconfig_exec_auth_api_version = var.kubeconfig_exec_auth_api_version

exec_profile = local.kube_exec_auth_enabled && var.kube_exec_auth_aws_profile_enabled ? [
"--profile", var.kube_exec_auth_aws_profile
] : []

kube_exec_auth_role_arn = local.enabled ? var.kube_exec_auth_role_arn : null
exec_role = local.kube_exec_auth_enabled && var.kube_exec_auth_role_arn_enabled ? [
"--role-arn", local.kube_exec_auth_role_arn
] : []

certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
eks_cluster_id = module.eks_cluster.eks_cluster_id
eks_cluster_endpoint = module.eks_cluster.eks_cluster_endpoint
}
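
The tri-state precedence these locals encode is subtle: a kubeconfig file disables exec auth, and exec auth disables data auth, but a kubeconfig file does not by itself disable data auth. A minimal Python sketch of the same evaluation order (an illustration only, not part of the module):

```python
def resolve_auth(enabled: bool, want_kubeconfig: bool, want_exec: bool, want_data: bool):
    """Mirror the three locals above, in their order of evaluation."""
    # kubeconfig file wins outright when the module is enabled
    kubeconfig_file_enabled = enabled and want_kubeconfig
    # exec auth is suppressed by the kubeconfig file
    kube_exec_auth_enabled = False if kubeconfig_file_enabled else (enabled and want_exec)
    # data auth is suppressed only by exec auth, not by the kubeconfig file
    kube_data_auth_enabled = False if kube_exec_auth_enabled else (enabled and want_data)
    return kubeconfig_file_enabled, kube_exec_auth_enabled, kube_data_auth_enabled
```

Note that with all three inputs requested, enabling the kubeconfig file leaves data auth active, because only exec auth gates it.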

data "aws_eks_cluster_auth" "eks" {
count = local.kube_data_auth_enabled ? 1 : 0
name = local.eks_cluster_id
}

provider "helm" {
kubernetes {
# Without an API server configured, the Kubernetes provider will throw an error and prevent a "plan"
# from succeeding in situations where Terraform does not provide it with the cluster endpoint before
# triggering an API call. Since those situations are limited to ones where we do not care about the
# failure, such as fetching the ConfigMap before the cluster has been created or in preparation for
# deleting it, and the worst that will happen is that the aws-auth ConfigMap will be unnecessarily
# updated, it is just better to ignore the error so we can proceed with the task of creating or
# destroying the cluster.
host = local.eks_cluster_endpoint
cluster_ca_certificate = local.enabled ? base64decode(local.certificate_authority_data) : null
token = local.kube_data_auth_enabled ? data.aws_eks_cluster_auth.eks[0].token : null
# The Kubernetes provider will use information from KUBECONFIG if it exists, but if the default cluster
# in KUBECONFIG is some other cluster, this will cause problems, so we always override it.
config_path = local.kubeconfig_file_enabled ? var.kubeconfig_file : ""
config_context = var.kubeconfig_context


dynamic "exec" {
for_each = local.kube_exec_auth_enabled ? ["exec"] : []
content {
api_version = local.kubeconfig_exec_auth_api_version
command = "aws"
args = concat(local.exec_profile, [
"eks", "get-token", "--cluster-name", local.eks_cluster_id
], local.exec_role)
}
}
}
}
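
As a usage sketch, a root module consuming this drop-in might authenticate with an assumed role via the exec plugin by setting (the role ARN below is hypothetical):

```hcl
kube_exec_auth_enabled          = true
kube_exec_auth_role_arn_enabled = true
kube_exec_auth_role_arn         = "arn:aws:iam::111111111111:role/eks-admin" # hypothetical role

# or, to use a local kubeconfig instead (this disables exec auth):
# kubeconfig_file_enabled = true
# kubeconfig_file         = "~/.kube/config"
```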
5 changes: 0 additions & 5 deletions examples/complete/variables-eks.tf
@@ -68,11 +68,6 @@ variable "local_exec_interpreter" {
description = "shell to use for local_exec"
}

variable "disk_size" {
type = number
description = "Disk size in GiB for worker nodes. Defaults to 20. Terraform will only perform drift detection if a configuration value is provided"
}

variable "instance_types" {
type = list(string)
description = "Set of instance types associated with the EKS Node Group. Defaults to [\"t3.medium\"]. Terraform will only perform drift detection if a configuration value is provided"
6 changes: 3 additions & 3 deletions main.tf
@@ -5,7 +5,7 @@ locals {

module "eks_iam_policy" {
source = "cloudposse/iam-policy/aws"
version = "0.2.3"
version = "0.3.0"

enabled = local.iam_role_enabled

@@ -18,12 +18,12 @@ module "eks_iam_policy" {

module "eks_iam_role" {
source = "cloudposse/eks-iam-role/aws"
version = "0.10.3"
version = "0.11.0"

enabled = local.iam_role_enabled

aws_account_number = var.aws_account_number
aws_iam_policy_document = local.iam_role_enabled ? module.eks_iam_policy.json : "{}"
aws_iam_policy_document = local.iam_role_enabled ? [module.eks_iam_policy.json] : ["{}"]
aws_partition = var.aws_partition
eks_cluster_oidc_issuer_url = var.eks_cluster_oidc_issuer_url
service_account_name = var.service_account_name