Unable to apply or refresh when oke module is passed with vcn id #963

Open · ddevadat opened this issue Nov 20, 2024 · 2 comments
Labels: bug (Something isn't working)

@ddevadat (Contributor)
This is the scenario:

I use the oci vcn module to create the network and the oke module to create the OKE cluster, but I pass in the VCN ID created by the vcn module. In other words, I don't want the oke module to create a VCN for me; I create it in a separate module and pass in the VCN ID.

Below is an example code snippet. I am passing create_vcn = false to the oke module along with the VCN ID retrieved from the vcn module, all in the same run.

module "vcn" {
  source                   = "oracle-terraform-modules/vcn/oci"
  version                  = "3.6.0"
  compartment_id           = var.compartment_id
  create_internet_gateway  = true
  create_nat_gateway       = true
  create_service_gateway   = true
  freeform_tags            = merge({}, local.common_tags)
  subnets                  = local.subnet_maps
  vcn_cidrs                = [var.vcn_cidr]
  vcn_name                 = local.cluster_domain
  lockdown_default_seclist = false
}

module "k8s_infra" {
  source                             = "oracle-terraform-modules/oke/oci"
  version                            = "5.2.0"
  compartment_id                     = var.compartment_id
  worker_compartment_id              = var.compartment_id
  home_region                        = local.home_region
  region                             = var.region
  create_vcn                         = "false"
  vcn_id                             = local.vcn_id
  subnets                            = local.oke_subnets
  pods_cidr                          = local.oke_pod_cidrs
  cni_type                           = local.cni
  cluster_type                       = local.cluster_type
  cluster_name                       = local.oke_name
  create_cluster                     = true
  kubernetes_version                 = local.kubernetes_version
  control_plane_is_public            = false
  create_bastion                     = true
  create_operator                    = true
  bastion_allowed_cidrs              = var.public_allow_cidr_ranges
  operator_image_id                  = local.operator_image_id
  operator_install_istioctl          = true
  operator_install_helm              = true
  operator_install_k9s               = true
  operator_install_kubectl_from_repo = true
  operator_install_kubectx           = true
  ssh_private_key                    = var.generate_ssh_keys ? tls_private_key.compute_ssh_key.private_key_pem : var.ssh_private_key
  ssh_public_key                     = var.generate_ssh_keys ? tls_private_key.compute_ssh_key.public_key_openssh : var.ssh_public_key
  allow_bastion_cluster_access       = false
  allow_pod_internet_access          = true
  cluster_freeform_tags              = var.tags
  control_plane_allowed_cidrs        = local.cp_allowed_cidrs
  load_balancers                     = "public"
  worker_image_os_version            = local.worker_image_os_version
  worker_shape                       = local.worker_shape_properties
  worker_pools                       = local.worker_pools
  allow_worker_ssh_access            = true
  state_id                           = local.oke_name
  allow_rules_public_lb              = var.k8s_allow_rules_public_lb
  # worker_cloud_init                  = local.worker_cloud_init
  cilium_install                     = true
  cilium_helm_version                = "1.16.3"
  cilium_helm_values                 = var.cilium_helm_values
  cluster_addons_to_remove           = var.cluster_addons_to_remove

  providers = {
    oci      = oci.current_region
    oci.home = oci.home
  }

}

locals {
  vcn_id                               = module.vcn.vcn_id
  base_infra_private_subnet_cidr_block = module.vcn.subnet_all_attributes.private_sub1.cidr_block
  base_infra_public_subnet_cidr_block  = module.vcn.subnet_all_attributes.public_sub1.cidr_block
  base_infra_private_subnet_id         = module.vcn.subnet_id.private-subnet
  base_infra_public_subnet_id          = module.vcn.subnet_id.public-subnet
  oke_pod_cidrs                        = (lookup(var.k8s_cluster_properties, "cni", "flannel") == "flannel") ? var.flannel_pods_cidr : local.base_infra_private_subnet_cidr_block
  cp_allowed_cidrs                     = ["${local.base_infra_private_subnet_cidr_block}", "${local.base_infra_public_subnet_cidr_block}"]
  home_region                          = lookup(data.oci_identity_regions.home_region.regions[0], "name")
  oke_name                             = lookup(var.k8s_cluster_properties, "cluster_name", "cilium-oke")
}

This doesn't work. If I try to apply, I get the following error:

│ Error: Invalid for_each argument
│
│   on .terraform/modules/k8s_infra/modules/network/rules.tf line 47, in resource "oci_core_network_security_group_security_rule" "oke":
│   47:   for_each                  = local.all_rules
│     ├────────────────
│     │ local.all_rules will be known only after apply

The workaround is to run the apply in two parts:

terraform apply -target=module.vcn
terraform apply

Is there a better way to do this?

ddevadat added the bug label on Nov 20, 2024
@robo-cap (Member)

I can't reproduce this issue with Terraform v1.8.5 and the following terraform config:

locals {
  common_tags        = {}
  tags               = {}
  cluster_domain     = "test"
  oke_name           = "test"
  cni                = "flannel"
  cluster_type       = "enhanced"
  kubernetes_version = "v1.28.2"
  cp_allowed_cidrs   = ["0.0.0.0/0"]
  oke_pod_cidrs      = "10.244.0.0/16"
  vcn_id             = module.vcn.vcn_id
  vcn_cidr           = "10.0.0.0/16"
  home_region        = "us-ashburn-1"
  region             = "eu-frankfurt-1"
  subnet_maps = {
    bastion  = { name = "bastion", cidr_block = "10.0.0.0/29" }
    operator = { name = "operator", cidr_block = "10.0.0.64/29", type = "private" }
    cp       = { name = "cp", cidr_block = "10.0.0.8/29", type = "private" }
    int_lb   = { name = "int_lb", cidr_block = "10.0.0.32/27", type = "private" }
    pub_lb   = { name = "pub_lb", cidr_block = "10.0.128.0/27" }
    workers  = { name = "workers", cidr_block = "10.0.144.0/20", type = "private" }
  }
  oke_subnets = {
    bastion  = { id = module.vcn.subnet_id.bastion, create = "never" }
    operator = { id = module.vcn.subnet_id.operator, create = "never" }
    cp       = { id = module.vcn.subnet_id.cp, create = "never" }
    int_lb   = { id = module.vcn.subnet_id.int_lb, create = "never" }
    pub_lb   = { id = module.vcn.subnet_id.pub_lb, create = "never" }
    workers  = { id = module.vcn.subnet_id.workers, create = "never" }
  }

  k8s_allow_rules_public_lb = {
    "Allow TCP ingress to public load balancers for SSL traffic from anywhere" : {
      protocol = 6, port = 443, source = "0.0.0.0/0", source_type = "CIDR_BLOCK",
    },
  }
  public_allow_cidr_ranges = ["0.0.0.0/0"]
}


module "vcn" {
  source                   = "oracle-terraform-modules/vcn/oci"
  version                  = "3.6.0"
  compartment_id           = var.compartment_id
  create_internet_gateway  = true
  create_nat_gateway       = true
  create_service_gateway   = true
  freeform_tags            = merge({}, local.common_tags)
  subnets                  = local.subnet_maps
  vcn_cidrs                = [local.vcn_cidr]
  vcn_name                 = local.cluster_domain
  lockdown_default_seclist = false
}

module "k8s_infra" {
  source                  = "oracle-terraform-modules/oke/oci"
  version                 = "5.2.0"
  compartment_id          = var.compartment_id
  worker_compartment_id   = var.compartment_id
  home_region             = local.home_region
  region                  = local.region
  create_vcn              = "false"
  vcn_id                  = local.vcn_id
  subnets                 = local.oke_subnets
  pods_cidr               = local.oke_pod_cidrs
  cni_type                = local.cni
  cluster_type            = local.cluster_type
  cluster_name            = local.oke_name
  create_cluster          = true
  kubernetes_version      = local.kubernetes_version
  control_plane_is_public = false
  create_bastion          = true
  create_operator         = true
  bastion_allowed_cidrs   = local.public_allow_cidr_ranges
  #operator_image_id                  = local.operator_image_id
  operator_install_istioctl          = true
  operator_install_helm              = true
  operator_install_k9s               = true
  operator_install_kubectl_from_repo = true
  operator_install_kubectx           = true
  ssh_private_key_path               = "~/.ssh/id_rsa"
  ssh_public_key_path                = "~/.ssh/id_rsa.pub"
  allow_bastion_cluster_access       = false
  allow_pod_internet_access          = true
  cluster_freeform_tags              = local.tags
  control_plane_allowed_cidrs        = local.cp_allowed_cidrs
  load_balancers                     = "public"
  worker_pools = {
    oke-vm-standard-e4 = {
      description      = "OKE-managed Node Pool with OKE Oracle Linux 8 image",
      shape            = "VM.Standard.E4.Flex",
      create           = true,
      ocpus            = 1,
      memory           = 8,
      boot_volume_size = 50,
      # os               = "Oracle Linux",
      # os_version       = "8",
      size      = 0,
      min_size  = 1,
      max_size  = 3,
      autoscale = true,
    },
  }

  allow_worker_ssh_access = true
  state_id                = local.oke_name
  allow_rules_public_lb   = local.k8s_allow_rules_public_lb

  providers = {
    oci.home = oci.home
  }
}



terraform {
  required_providers {
    oci = {
      configuration_aliases = [oci.home]
      source                = "oracle/oci"
      version               = ">= 4.119.0"
    }
  }
}

provider "oci" {
  config_file_profile = "DEFAULT"
  region              = "eu-frankfurt-1"
}

provider "oci" {
  alias               = "home"
  region              = "us-ashburn-1"
  config_file_profile = "DEFAULT"
}
$ terraform plan
...
  # module.vcn.module.subnet[0].oci_core_subnet.vcn_subnet["workers"] will be created
  + resource "oci_core_subnet" "vcn_subnet" {
      + availability_domain        = (known after apply)
      + cidr_block                 = "10.0.144.0/20"
      + compartment_id             = "ocid1.compartment.oc1..aaaaaaaaqi3if6t4n24qyabx5pjzlw6xovcbgugcmatavjvapyq3jfb4diqq"
      + defined_tags               = (known after apply)
      + dhcp_options_id            = (known after apply)
      + display_name               = "workers"
      + dns_label                  = (known after apply)
      + freeform_tags              = (known after apply)
      + id                         = (known after apply)
      + ipv6cidr_block             = (known after apply)
      + ipv6cidr_blocks            = (known after apply)
      + ipv6virtual_router_ip      = (known after apply)
      + prohibit_internet_ingress  = (known after apply)
      + prohibit_public_ip_on_vnic = true
      + route_table_id             = (known after apply)
      + security_list_ids          = (known after apply)
      + state                      = (known after apply)
      + subnet_domain_name         = (known after apply)
      + time_created               = (known after apply)
      + vcn_id                     = (known after apply)
      + virtual_router_ip          = (known after apply)
      + virtual_router_mac         = (known after apply)
    }

Plan: 64 to add, 0 to change, 0 to destroy.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply"
now.

@ddevadat (Contributor, Author) commented Dec 1, 2024

In your example you are hard-coding the subnet CIDR blocks, while in my case they are computed; I think this is what causes the issue. To elaborate: I only provide the VCN CIDR range to the vcn module, and the subnet map passed to the vcn module is computed in my code. The oke module then reads the subnet attributes, such as the CIDR and ID, from the vcn module's outputs, and that is where the issue appears.

module "vcn" {
  source                   = "oracle-terraform-modules/vcn/oci"
  version                  = "3.6.0"
  compartment_id           = var.compartment_id
  create_internet_gateway  = true
  create_nat_gateway       = true
  create_service_gateway   = true
  subnets                  = local.subnet_maps
  vcn_cidrs                = [var.vcn_cidr]
  vcn_name                 = "demo_vcn"
  lockdown_default_seclist = false
}


locals {
  public_subnets_list   = ["public-subnet"]
  private_subnets_list  = ["private-subnet", "oke-control-plane"]
  network_cidr_blocks = {
    for idx, subnet in concat(local.public_subnets_list, local.private_subnets_list) : subnet =>
    (subnet == "oke-control-plane") ? cidrsubnet(var.vcn_cidr, 12, 0) : cidrsubnet(var.vcn_cidr, 8, idx + 1)
  }
  public_subnet_cidrs  = [for subnet_name in local.public_subnets_list : local.network_cidr_blocks[subnet_name]]
  private_subnet_cidrs = [for subnet_name in local.private_subnets_list : local.network_cidr_blocks[subnet_name]]
  public_subnets = { for idx, name in local.public_subnets_list : "public_sub${idx + 1}" => {
    name       = name
    cidr_block = local.public_subnet_cidrs[idx]
    type       = "public"
    dns_label  = "public"
  } }
  private_subnets = { for idx, name in local.private_subnets_list : "private_sub${idx + 1}" => {
    name       = name
    cidr_block = local.private_subnet_cidrs[idx]
    type       = "private"
    dns_label  = (name == "oke-control-plane") ? "okectrp" : "private"
  } }
  subnet_maps      = merge(local.public_subnets, local.private_subnets)
  public_subnet_id = lookup(module.vcn.subnet_id, "public-subnet", null)
  ssh_keys         = []

}
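The oke module's subnets input is then wired from these vcn module outputs, roughly as in the sketch below (illustrative keys, not my exact code), so none of those subnet IDs are known until the vcn module has been applied:

# Rough sketch only: every value comes from module.vcn outputs, so nothing
# here is known at plan time when both modules are applied in one run.
locals {
  oke_subnets = {
    cp      = { id = lookup(module.vcn.subnet_id, "oke-control-plane", null), create = "never" }
    workers = { id = lookup(module.vcn.subnet_id, "private-subnet", null), create = "never" }
    pub_lb  = { id = lookup(module.vcn.subnet_id, "public-subnet", null), create = "never" }
  }
}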
