From ba31278896babfcbda9adc2b7583fbe946bdd10b Mon Sep 17 00:00:00 2001
From: Tanner Doshier
Date: Tue, 1 Oct 2024 09:48:52 -0400
Subject: [PATCH 1/3] docs: Update broken Terraform documentation links (#758)

HashiCorp appears to have rearranged some of their documentation. I considered linking to an archived version of the page for the decision docs, but the current page felt close enough.

Resolves https://github.com/navapbc/template-infra/issues/750
---
 ...rate-terraform-backend-configs-into-separate-config-files.md | 2 +-
 docs/infra/set-up-aws-account.md | 2 +-
 infra/README.md | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/decisions/infra/0004-separate-terraform-backend-configs-into-separate-config-files.md b/docs/decisions/infra/0004-separate-terraform-backend-configs-into-separate-config-files.md
index 18b5284f..39e46a2d 100644
--- a/docs/decisions/infra/0004-separate-terraform-backend-configs-into-separate-config-files.md
+++ b/docs/decisions/infra/0004-separate-terraform-backend-configs-into-separate-config-files.md
@@ -8,7 +8,7 @@
 
 Up until now, most projects adopted an infrastructure module architecture that is structured as follows: Each application environment (prod, staging, etc) is a separate root module that calls a template module. The template module defines all the application infra resources needed for an environment. Things that could be different per environment (e.g. desired ECS task count) are template variables, and each environment can have local vars (or somewhat equivalently, a tfvars file) that customizes those variables. Importantly, each environment has its own backend tfstate file, and the backend config is stored in the environment module’s `main.tf`.
 
-An alternative approach exists to managing the backend configs. Rather than saving the backend config directly in `main.tf`, `main.tf` could contain a [partial configuration](https://developer.hashicorp.com/terraform/language/settings/backends/configuration#partial-configuration), and the rest of the backend config would be passed in during terraform init with a command like `terraform init --backend-config=prod.s3.tfbackend`. There would no longer be a need for separate root modules for each environment. What was previously the template module would instead act as the root module, and engineers would work with different environments solely through separate tfbackend files and tfvar files. Doing this would greatly simplify the module architecture at the cost of some complexity when executing terraform commands due to the extra command line parameters. To manage the extra complexity of running terraform commands, a wrapper script (such as with Makefile commands) can be introduced.
+An alternative approach exists to managing the backend configs. Rather than saving the backend config directly in `main.tf`, `main.tf` could contain a [partial configuration](https://developer.hashicorp.com/terraform/language/backend#partial-configuration), and the rest of the backend config would be passed in during terraform init with a command like `terraform init --backend-config=prod.s3.tfbackend`. There would no longer be a need for separate root modules for each environment. What was previously the template module would instead act as the root module, and engineers would work with different environments solely through separate tfbackend files and tfvar files. Doing this would greatly simplify the module architecture at the cost of some complexity when executing terraform commands due to the extra command line parameters. To manage the extra complexity of running terraform commands, a wrapper script (such as with Makefile commands) can be introduced.
 
 The approach can be further extended to per-environment variable configurations via an analogous approach with [variable definitions files](https://developer.hashicorp.com/terraform/language/values/variables#variable-definitions-tfvars-files) which can be passed in with the `-var-file` command line option to terraform commands.

diff --git a/docs/infra/set-up-aws-account.md b/docs/infra/set-up-aws-account.md
index 784f7a1f..827246d7 100644
--- a/docs/infra/set-up-aws-account.md
+++ b/docs/infra/set-up-aws-account.md
@@ -2,7 +2,7 @@
 
 The AWS account setup process will:
 
-1. Create the [Terraform backend](https://www.terraform.io/language/settings/backends/configuration) resources needed to store Terraform's infrastructure state files. The project uses an [S3 backend](https://www.terraform.io/language/settings/backends/s3).
+1. Create the [Terraform backend](https://developer.hashicorp.com/terraform/language/backend) resources needed to store Terraform's infrastructure state files. The project uses an [S3 backend](https://www.terraform.io/language/settings/backends/s3).
 2. Create the [OpenID connect provider in AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_create_oidc.html) to allow GitHub Actions to access AWS account resources.
 3. Create the IAM role and policy that GitHub Actions will use to manage infrastructure resources.

diff --git a/infra/README.md b/infra/README.md
index 92cf5454..0d104cac 100644
--- a/infra/README.md
+++ b/infra/README.md
@@ -55,7 +55,7 @@
 This project has the following AWS environments:
 
 - `staging`
 - `prod`
 
-The environments share the same root modules but will have different configurations. Backend configuration is saved as [`.tfbackend`](https://developer.hashicorp.com/terraform/language/settings/backends/configuration#file) files. Most `.tfbackend` files are named after the environment. For example, the `[app_name]/service` infrastructure resources for the `dev` environment are configured via `dev.s3.tfbackend`. Resources for a module that are shared across environments, such as the build-repository, use `shared.s3.tfbackend`. Resources that are shared across the entire account (e.g. /infra/accounts) use `..s3.tfbackend`.
+The environments share the same root modules but will have different configurations. Backend configuration is saved as [`.tfbackend`](https://developer.hashicorp.com/terraform/language/backend#file) files. Most `.tfbackend` files are named after the environment. For example, the `[app_name]/service` infrastructure resources for the `dev` environment are configured via `dev.s3.tfbackend`. Resources for a module that are shared across environments, such as the build-repository, use `shared.s3.tfbackend`. Resources that are shared across the entire account (e.g. /infra/accounts) use `..s3.tfbackend`.
 
 ### 🔀 Project workflow

From b7a46771b2154ca28eb9b66e67d12a983d117855 Mon Sep 17 00:00:00 2001
From: Kevin Boyer
Date: Tue, 1 Oct 2024 14:28:10 -0400
Subject: [PATCH 2/3] Retry wait for stable service in deploy release (#761)

## Ticket

n/a

## Changes

- If waiting for a stable ECS service fails during deploy, try it exactly one more time

## Context for reviewers

- For two applications built on template-infra (the Nava Labs Decision Support Tool project and an internal Nava tool), the ECS service takes slightly more than 10 minutes to become stable (typically about 11 to 13 minutes).
- The AWS wait command can't be configured to allow more than 10 minutes.
- Other approaches considered:
  - Sleeping. This is probably the simplest solution but doesn't seem as robust as simply trying the command twice.
  - Retrying a configurable number of times in a loop. This seems like premature complexity.
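The chosen fail-once, retry-once behavior can be sketched as a small generic helper. This is a sketch only; the `retry_once` name and the commented usage line are illustrative, not part of the patch itself:

```shell
# Run a command; if it fails, run it exactly one more time and return the
# second attempt's exit status. No sleep, no configurable retry loop.
retry_once() {
  if ! "$@"; then
    echo "Retrying: $*" >&2
    "$@"
  fi
}

# Hypothetical usage, matching the shape of the call in bin/deploy-release:
#   retry_once aws ecs wait services-stable --cluster "${cluster_name}" --services "${service_name}"
```

If the first attempt succeeds, the helper returns immediately; a second failure propagates out so the deploy job still fails.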
## Testing

Tested on internal tool (posted in Slack)
---
 bin/deploy-release | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/bin/deploy-release b/bin/deploy-release
index e5b56d82..59405ca1 100755
--- a/bin/deploy-release
+++ b/bin/deploy-release
@@ -25,6 +25,13 @@ echo "::endgroup::"
 
 cluster_name=$(terraform -chdir="infra/${app_name}/service" output -raw service_cluster_name)
 service_name=$(terraform -chdir="infra/${app_name}/service" output -raw service_name)
 echo "Wait for service ${service_name} to become stable"
-aws ecs wait services-stable --cluster "${cluster_name}" --services "${service_name}"
+wait_for_service_stability() {
+  aws ecs wait services-stable --cluster "${cluster_name}" --services "${service_name}"
+}
+
+if ! wait_for_service_stability; then
+  echo "Retrying"
+  wait_for_service_stability
+fi
 echo "Completed ${app_name} deploy of ${image_tag} to ${environment}"

From 2511aea57e1b2507eba20eb63c1a736c5e2d371f Mon Sep 17 00:00:00 2001
From: lamroger-nava <164910391+lamroger-nava@users.noreply.github.com>
Date: Wed, 16 Oct 2024 13:00:45 -0700
Subject: [PATCH 3/3] Playwright baseURL var should be URL (#748)

The typo (`baseUrl` instead of the `baseURL` key Playwright expects) meant the merged configuration never took effect.
---
 e2e/app/playwright.config.js | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/e2e/app/playwright.config.js b/e2e/app/playwright.config.js
index 12a24297..77a72fca 100644
--- a/e2e/app/playwright.config.js
+++ b/e2e/app/playwright.config.js
@@ -3,10 +3,10 @@ import { deepMerge } from '../util';
 import { defineConfig } from '@playwright/test';
 export default defineConfig(deepMerge(
-    baseConfig,
-    {
-      use: {
-        baseUrl: baseConfig.use.baseUrl || "localhost:3000"
-      },
-    }
-  ));
+  baseConfig,
+  {
+    use: {
+      baseURL: baseConfig.use.baseURL || "localhost:3000"
+    },
+  }
+));
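One editorial note on the first patch in this series: the partial-configuration workflow it links to requires extra per-environment CLI flags, which is why the decision doc suggests a wrapper script. A minimal sketch of that flag construction, assuming the `<env>.s3.tfbackend` and `<env>.tfvars` naming conventions described in these docs (the helper function names are illustrative, not from this repo):

```shell
# Build the per-environment flags that `terraform init` and `terraform plan`
# need under the partial backend configuration approach.
backend_flag() { printf '%s' "-backend-config=$1.s3.tfbackend"; }
varfile_flag() { printf '%s' "-var-file=$1.tfvars"; }

# Hypothetical usage:
#   terraform init "$(backend_flag prod)"
#   terraform plan "$(varfile_flag prod)"
```

A Makefile wrapper would typically hide these two helpers behind a single `ENVIRONMENT` parameter so engineers never type the flags by hand.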