Alabama Water Institute CIROH deployment #1553

Merged: 12 commits into 2i2c-org:master from the awi-ciroh branch, Jul 25, 2022

Conversation

@sgibson91 (Member) commented on Jul 22, 2022

Related to #1444.

TODO:

  • Add the relevant GitHub Teams to allowed_organizations (see the sketch below)
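
A minimal sketch of what that TODO could look like in the hub's helm chart values, assuming the standard z2jh hub.config mechanism for setting GitHubOAuthenticator traits. The file placement and the team names below are hypothetical, shown only for illustration:

    # hypothetical snippet, e.g. in common.values.yaml
    jupyterhub:
      hub:
        config:
          GitHubOAuthenticator:
            # "org:team-name" entries restrict login to members of a GitHub Team;
            # the read:org scope lets the authenticator check team membership
            allowed_organizations:
              - "AlabamaWaterInstitute:ciroh-users"   # hypothetical team
              - "2i2c-org:tech-team"                  # hypothetical team
            scope:
              - read:org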

@sgibson91 (Member, Author) commented on Jul 22, 2022

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # google_container_cluster.cluster will be created
  + resource "google_container_cluster" "cluster" {
      + cluster_ipv4_cidr           = (known after apply)
      + datapath_provider           = (known after apply)
      + default_max_pods_per_node   = (known after apply)
      + enable_binary_authorization = false
      + enable_intranode_visibility = (known after apply)
      + enable_kubernetes_alpha     = false
      + enable_l4_ilb_subsetting    = false
      + enable_legacy_abac          = false
      + enable_shielded_nodes       = true
      + enable_tpu                  = false
      + endpoint                    = (known after apply)
      + id                          = (known after apply)
      + initial_node_count          = 1
      + label_fingerprint           = (known after apply)
      + location                    = "us-central1"
      + logging_service             = (known after apply)
      + master_version              = (known after apply)
      + monitoring_service          = (known after apply)
      + name                        = "awi-ciroh-cluster"
      + network                     = "default"
      + networking_mode             = (known after apply)
      + node_locations              = [
          + "us-central1-b",
        ]
      + node_version                = (known after apply)
      + operation                   = (known after apply)
      + private_ipv6_google_access  = (known after apply)
      + project                     = "awi-ciroh"
      + remove_default_node_pool    = true
      + self_link                   = (known after apply)
      + services_ipv4_cidr          = (known after apply)
      + subnetwork                  = (known after apply)
      + tpu_ipv4_cidr_block         = (known after apply)

      + addons_config {
          + cloudrun_config {
              + disabled           = (known after apply)
              + load_balancer_type = (known after apply)
            }

          + config_connector_config {
              + enabled = (known after apply)
            }

          + dns_cache_config {
              + enabled = (known after apply)
            }

          + gce_persistent_disk_csi_driver_config {
              + enabled = (known after apply)
            }

          + gcp_filestore_csi_driver_config {
              + enabled = (known after apply)
            }

          + horizontal_pod_autoscaling {
              + disabled = true
            }

          + http_load_balancing {
              + disabled = true
            }

          + istio_config {
              + auth     = (known after apply)
              + disabled = (known after apply)
            }

          + kalm_config {
              + enabled = (known after apply)
            }

          + network_policy_config {
              + disabled = (known after apply)
            }
        }

      + authenticator_groups_config {
          + security_group = (known after apply)
        }

      + cluster_autoscaling {
          + autoscaling_profile = "OPTIMIZE_UTILIZATION"
          + enabled             = false

          + auto_provisioning_defaults {
              + image_type       = (known after apply)
              + min_cpu_platform = (known after apply)
              + oauth_scopes     = (known after apply)
              + service_account  = (known after apply)
            }
        }

      + cluster_telemetry {
          + type = (known after apply)
        }

      + confidential_nodes {
          + enabled = (known after apply)
        }

      + database_encryption {
          + key_name = (known after apply)
          + state    = (known after apply)
        }

      + default_snat_status {
          + disabled = (known after apply)
        }

      + identity_service_config {
          + enabled = (known after apply)
        }

      + ip_allocation_policy {
          + cluster_ipv4_cidr_block       = (known after apply)
          + cluster_secondary_range_name  = (known after apply)
          + services_ipv4_cidr_block      = (known after apply)
          + services_secondary_range_name = (known after apply)
        }

      + logging_config {
          + enable_components = (known after apply)
        }

      + master_auth {
          + client_certificate     = (known after apply)
          + client_key             = (sensitive value)
          + cluster_ca_certificate = (known after apply)

          + client_certificate_config {
              + issue_client_certificate = (known after apply)
            }
        }

      + monitoring_config {
          + enable_components = (known after apply)
        }

      + network_policy {
          + enabled = true
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = (known after apply)
          + local_ssd_count   = (known after apply)
          + machine_type      = (known after apply)
          + metadata          = (known after apply)
          + oauth_scopes      = (known after apply)
          + preemptible       = false
          + service_account   = (known after apply)
          + spot              = false
          + taint             = (known after apply)

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = (known after apply)
            }
        }

      + node_pool {
          + initial_node_count          = (known after apply)
          + instance_group_urls         = (known after apply)
          + managed_instance_group_urls = (known after apply)
          + max_pods_per_node           = (known after apply)
          + name                        = (known after apply)
          + name_prefix                 = (known after apply)
          + node_count                  = (known after apply)
          + node_locations              = (known after apply)
          + version                     = (known after apply)

          + autoscaling {
              + max_node_count = (known after apply)
              + min_node_count = (known after apply)
            }

          + management {
              + auto_repair  = (known after apply)
              + auto_upgrade = (known after apply)
            }

          + network_config {
              + create_pod_range    = (known after apply)
              + pod_ipv4_cidr_block = (known after apply)
              + pod_range           = (known after apply)
            }

          + node_config {
              + boot_disk_kms_key = (known after apply)
              + disk_size_gb      = (known after apply)
              + disk_type         = (known after apply)
              + guest_accelerator = (known after apply)
              + image_type        = (known after apply)
              + labels            = (known after apply)
              + local_ssd_count   = (known after apply)
              + machine_type      = (known after apply)
              + metadata          = (known after apply)
              + min_cpu_platform  = (known after apply)
              + node_group        = (known after apply)
              + oauth_scopes      = (known after apply)
              + preemptible       = (known after apply)
              + service_account   = (known after apply)
              + spot              = (known after apply)
              + tags              = (known after apply)
              + taint             = (known after apply)

              + ephemeral_storage_config {
                  + local_ssd_count = (known after apply)
                }

              + gcfs_config {
                  + enabled = (known after apply)
                }

              + kubelet_config {
                  + cpu_cfs_quota        = (known after apply)
                  + cpu_cfs_quota_period = (known after apply)
                  + cpu_manager_policy   = (known after apply)
                }

              + linux_node_config {
                  + sysctls = (known after apply)
                }

              + sandbox_config {
                  + sandbox_type = (known after apply)
                }

              + shielded_instance_config {
                  + enable_integrity_monitoring = (known after apply)
                  + enable_secure_boot          = (known after apply)
                }

              + workload_metadata_config {
                  + mode = (known after apply)
                }
            }

          + upgrade_settings {
              + max_surge       = (known after apply)
              + max_unavailable = (known after apply)
            }
        }

      + notification_config {
          + pubsub {
              + enabled = (known after apply)
              + topic   = (known after apply)
            }
        }

      + release_channel {
          + channel = "UNSPECIFIED"
        }

      + workload_identity_config {
          + workload_pool = "awi-ciroh.svc.id.goog"
        }
    }

  # google_container_node_pool.core will be created
  + resource "google_container_node_pool" "core" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 1
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "core-pool"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 5
          + min_node_count = 1
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = 30
          + disk_type         = (known after apply)
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "hub.jupyter.org/node-purpose" = "core"
              + "k8s.dask.org/node-purpose"    = "core"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-highmem-4"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = []
          + taint             = (known after apply)

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = (known after apply)
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.dask_worker["huge"] will be created
  + resource "google_container_node_pool" "dask_worker" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "dask-huge"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 100
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "k8s.dask.org/node-purpose" = "worker"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-16"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = true
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "k8s.dask.org_dedicated"
                  + value  = "worker"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.dask_worker["large"] will be created
  + resource "google_container_node_pool" "dask_worker" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "dask-large"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 100
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "k8s.dask.org/node-purpose" = "worker"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-8"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = true
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "k8s.dask.org_dedicated"
                  + value  = "worker"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.dask_worker["medium"] will be created
  + resource "google_container_node_pool" "dask_worker" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "dask-medium"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 100
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "k8s.dask.org/node-purpose" = "worker"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-4"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = true
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "k8s.dask.org_dedicated"
                  + value  = "worker"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.dask_worker["small"] will be created
  + resource "google_container_node_pool" "dask_worker" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "dask-small"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 100
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "k8s.dask.org/node-purpose" = "worker"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-2"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = true
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "k8s.dask.org_dedicated"
                  + value  = "worker"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.notebook["huge"] will be created
  + resource "google_container_node_pool" "notebook" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "nb-huge"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 100
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "hub.jupyter.org/node-purpose" = "user"
              + "k8s.dask.org/node-purpose"    = "scheduler"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-16"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "hub.jupyter.org_dedicated"
                  + value  = "user"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.notebook["large"] will be created
  + resource "google_container_node_pool" "notebook" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "nb-large"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 100
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "hub.jupyter.org/node-purpose" = "user"
              + "k8s.dask.org/node-purpose"    = "scheduler"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-8"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "hub.jupyter.org_dedicated"
                  + value  = "user"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.notebook["medium"] will be created
  + resource "google_container_node_pool" "notebook" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "nb-medium"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 100
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "hub.jupyter.org/node-purpose" = "user"
              + "k8s.dask.org/node-purpose"    = "scheduler"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-4"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "hub.jupyter.org_dedicated"
                  + value  = "user"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_container_node_pool.notebook["small"] will be created
  + resource "google_container_node_pool" "notebook" {
      + cluster                     = "awi-ciroh-cluster"
      + id                          = (known after apply)
      + initial_node_count          = 0
      + instance_group_urls         = (known after apply)
      + location                    = "us-central1"
      + managed_instance_group_urls = (known after apply)
      + max_pods_per_node           = (known after apply)
      + name                        = "nb-small"
      + name_prefix                 = (known after apply)
      + node_count                  = (known after apply)
      + node_locations              = (known after apply)
      + operation                   = (known after apply)
      + project                     = "awi-ciroh"
      + version                     = (known after apply)

      + autoscaling {
          + max_node_count = 100
          + min_node_count = 0
        }

      + management {
          + auto_repair  = true
          + auto_upgrade = false
        }

      + node_config {
          + disk_size_gb      = (known after apply)
          + disk_type         = "pd-balanced"
          + guest_accelerator = (known after apply)
          + image_type        = (known after apply)
          + labels            = {
              + "hub.jupyter.org/node-purpose" = "user"
              + "k8s.dask.org/node-purpose"    = "scheduler"
            }
          + local_ssd_count   = (known after apply)
          + machine_type      = "n1-standard-2"
          + metadata          = (known after apply)
          + oauth_scopes      = [
              + "https://www.googleapis.com/auth/cloud-platform",
            ]
          + preemptible       = false
          + service_account   = (known after apply)
          + tags              = []
          + taint             = [
              + {
                  + effect = "NO_SCHEDULE"
                  + key    = "hub.jupyter.org_dedicated"
                  + value  = "user"
                },
            ]

          + shielded_instance_config {
              + enable_integrity_monitoring = (known after apply)
              + enable_secure_boot          = (known after apply)
            }

          + workload_metadata_config {
              + mode = "GKE_METADATA"
            }
        }

      + upgrade_settings {
          + max_surge       = (known after apply)
          + max_unavailable = (known after apply)
        }
    }

  # google_filestore_instance.homedirs[0] will be created
  + resource "google_filestore_instance" "homedirs" {
      + create_time = (known after apply)
      + etag        = (known after apply)
      + id          = (known after apply)
      + location    = "us-central1-b"
      + name        = "awi-ciroh-homedirs"
      + project     = "awi-ciroh"
      + tier        = "BASIC_HDD"
      + zone        = (known after apply)

      + file_shares {
          + capacity_gb = 1024
          + name        = "homes"
        }

      + networks {
          + ip_addresses      = (known after apply)
          + modes             = [
              + "MODE_IPV4",
            ]
          + network           = "default"
          + reserved_ip_range = (known after apply)
        }
    }

  # google_project_iam_custom_role.requestor_pays will be created
  + resource "google_project_iam_custom_role" "requestor_pays" {
      + deleted     = (known after apply)
      + description = "Minimal role for hub users on awi-ciroh to identify as current project"
      + id          = (known after apply)
      + name        = (known after apply)
      + permissions = [
          + "serviceusage.services.use",
        ]
      + project     = "awi-ciroh"
      + role_id     = "awi_ciroh_requestor_pays"
      + stage       = "GA"
      + title       = "Identify as project role for users in awi-ciroh"
    }

  # google_project_iam_member.cd_sa_roles["roles/artifactregistry.writer"] will be created
  + resource "google_project_iam_member" "cd_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "awi-ciroh"
      + role    = "roles/artifactregistry.writer"
    }

  # google_project_iam_member.cd_sa_roles["roles/container.admin"] will be created
  + resource "google_project_iam_member" "cd_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "awi-ciroh"
      + role    = "roles/container.admin"
    }

  # google_project_iam_member.cluster_sa_roles["roles/artifactregistry.reader"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "awi-ciroh"
      + role    = "roles/artifactregistry.reader"
    }

  # google_project_iam_member.cluster_sa_roles["roles/logging.logWriter"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "awi-ciroh"
      + role    = "roles/logging.logWriter"
    }

  # google_project_iam_member.cluster_sa_roles["roles/monitoring.metricWriter"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "awi-ciroh"
      + role    = "roles/monitoring.metricWriter"
    }

  # google_project_iam_member.cluster_sa_roles["roles/monitoring.viewer"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "awi-ciroh"
      + role    = "roles/monitoring.viewer"
    }

  # google_project_iam_member.cluster_sa_roles["roles/stackdriver.resourceMetadata.writer"] will be created
  + resource "google_project_iam_member" "cluster_sa_roles" {
      + etag    = (known after apply)
      + id      = (known after apply)
      + member  = (known after apply)
      + project = "awi-ciroh"
      + role    = "roles/stackdriver.resourceMetadata.writer"
    }

  # google_service_account.cd_sa will be created
  + resource "google_service_account" "cd_sa" {
      + account_id   = "awi-ciroh-cd-sa"
      + disabled     = false
      + display_name = "Continuous Deployment SA for awi-ciroh"
      + email        = (known after apply)
      + id           = (known after apply)
      + name         = (known after apply)
      + project      = "awi-ciroh"
      + unique_id    = (known after apply)
    }

  # google_service_account.cluster_sa will be created
  + resource "google_service_account" "cluster_sa" {
      + account_id   = "awi-ciroh-cluster-sa"
      + disabled     = false
      + display_name = "Service account used by nodes of cluster awi-ciroh"
      + email        = (known after apply)
      + id           = (known after apply)
      + name         = (known after apply)
      + project      = "awi-ciroh"
      + unique_id    = (known after apply)
    }

  # google_service_account.workload_sa["prod"] will be created
  + resource "google_service_account" "workload_sa" {
      + account_id   = "awi-ciroh-prod"
      + disabled     = false
      + display_name = "Service account for user pods in hub prod in awi-ciroh"
      + email        = (known after apply)
      + id           = (known after apply)
      + name         = (known after apply)
      + project      = "awi-ciroh"
      + unique_id    = (known after apply)
    }

  # google_service_account.workload_sa["staging"] will be created
  + resource "google_service_account" "workload_sa" {
      + account_id   = "awi-ciroh-staging"
      + disabled     = false
      + display_name = "Service account for user pods in hub staging in awi-ciroh"
      + email        = (known after apply)
      + id           = (known after apply)
      + name         = (known after apply)
      + project      = "awi-ciroh"
      + unique_id    = (known after apply)
    }

  # google_service_account_iam_binding.workload_identity_binding["prod"] will be created
  + resource "google_service_account_iam_binding" "workload_identity_binding" {
      + etag               = (known after apply)
      + id                 = (known after apply)
      + members            = [
          + "serviceAccount:awi-ciroh.svc.id.goog[prod/user-sa]",
        ]
      + role               = "roles/iam.workloadIdentityUser"
      + service_account_id = (known after apply)
    }

  # google_service_account_iam_binding.workload_identity_binding["staging"] will be created
  + resource "google_service_account_iam_binding" "workload_identity_binding" {
      + etag               = (known after apply)
      + id                 = (known after apply)
      + members            = [
          + "serviceAccount:awi-ciroh.svc.id.goog[staging/user-sa]",
        ]
      + role               = "roles/iam.workloadIdentityUser"
      + service_account_id = (known after apply)
    }

  # google_service_account_key.cd_sa will be created
  + resource "google_service_account_key" "cd_sa" {
      + id                 = (known after apply)
      + key_algorithm      = "KEY_ALG_RSA_2048"
      + name               = (known after apply)
      + private_key        = (sensitive value)
      + private_key_type   = "TYPE_GOOGLE_CREDENTIALS_FILE"
      + public_key         = (known after apply)
      + public_key_type    = "TYPE_X509_PEM_FILE"
      + service_account_id = (known after apply)
      + valid_after        = (known after apply)
      + valid_before       = (known after apply)
    }

  # google_storage_bucket.user_buckets["scratch"] will be created
  + resource "google_storage_bucket" "user_buckets" {
      + force_destroy               = false
      + id                          = (known after apply)
      + location                    = "US-CENTRAL1"
      + name                        = "awi-ciroh-scratch"
      + project                     = "awi-ciroh"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = (known after apply)
      + url                         = (known after apply)

      + lifecycle_rule {
          + action {
              + type = "Delete"
            }

          + condition {
              + age                   = 7
              + matches_storage_class = []
              + with_state            = (known after apply)
            }
        }
    }

  # google_storage_bucket.user_buckets["scratch-staging"] will be created
  + resource "google_storage_bucket" "user_buckets" {
      + force_destroy               = false
      + id                          = (known after apply)
      + location                    = "US-CENTRAL1"
      + name                        = "awi-ciroh-scratch-staging"
      + project                     = "awi-ciroh"
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = (known after apply)
      + url                         = (known after apply)

      + lifecycle_rule {
          + action {
              + type = "Delete"
            }

          + condition {
              + age                   = 7
              + matches_storage_class = []
              + with_state            = (known after apply)
            }
        }
    }

  # google_storage_bucket_iam_member.member["prod.scratch"] will be created
  + resource "google_storage_bucket_iam_member" "member" {
      + bucket = "awi-ciroh-scratch"
      + etag   = (known after apply)
      + id     = (known after apply)
      + member = (known after apply)
      + role   = "roles/storage.admin"
    }

  # google_storage_bucket_iam_member.member["staging.scratch-staging"] will be created
  + resource "google_storage_bucket_iam_member" "member" {
      + bucket = "awi-ciroh-scratch-staging"
      + etag   = (known after apply)
      + id     = (known after apply)
      + member = (known after apply)
      + role   = "roles/storage.admin"
    }

Plan: 30 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ci_deployer_key           = (sensitive value)
  + kubernetes_sa_annotations = {
      + prod    = (known after apply)
      + staging = (known after apply)
    }
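
The kubernetes_sa_annotations output (known after apply) is what later lets user pods in each hub act as the matching GCP service account through GKE Workload Identity; the workload_identity_binding resources in the plan already grant awi-ciroh.svc.id.goog[prod/user-sa] and [staging/user-sa] permission to impersonate those accounts. A hedged sketch of the Kubernetes side, assuming the conventional iam.gke.io/gcp-service-account annotation and the usual <account_id>@<project>.iam.gserviceaccount.com email form (the plan reports the final value only as "known after apply"):

    # illustrative only: the annotation value is an assumption derived from
    # account_id "awi-ciroh-prod" and project "awi-ciroh" in the plan above
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: user-sa        # matches the member string in workload_identity_binding["prod"]
      namespace: prod
      annotations:
        iam.gke.io/gcp-service-account: awi-ciroh-prod@awi-ciroh.iam.gserviceaccount.com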

@github-actions (bot) commented

Support and Staging deployments

Cloud Provider: gcp
Cluster Name: awi-ciroh
Upgrade Support? Yes
Reason for Support Redeploy: the following helm chart values files were modified: enc-support.secret.values.yaml, support.values.yaml
Upgrade Staging? Yes
Reason for Staging Redeploy: the following helm chart values files were modified: common.values.yaml, enc-staging.secret.values.yaml, staging.values.yaml

Production deployments

Cloud Provider: gcp
Cluster Name: awi-ciroh
Hub Name: prod
Reason for Redeploy: the following helm chart values files were modified: prod.values.yaml, common.values.yaml, enc-prod.secret.values.yaml

2i2c-org deleted a comment from the github-actions bot on Jul 25, 2022
sgibson91 merged commit 9e6c50a into 2i2c-org:master on Jul 25, 2022
sgibson91 deleted the awi-ciroh branch on Jul 25, 2022 at 15:20
@github-actions (bot) commented

🎉🎉🎉🎉

Monitor the deployment of the hubs here 👉 https://github.com/2i2c-org/infrastructure/actions/workflows/deploy-hubs.yaml?query=branch%3Amaster
