diff --git a/docs/2.0/docs/accountfactory/architecture/index.md b/docs/2.0/docs/accountfactory/architecture/index.md index ed5300b16..240d9a76e 100644 --- a/docs/2.0/docs/accountfactory/architecture/index.md +++ b/docs/2.0/docs/accountfactory/architecture/index.md @@ -4,37 +4,36 @@ Account Factory builds upon Gruntwork's [AWS Control Tower Multi Account Factory](/reference/modules/terraform-aws-control-tower/control-tower-multi-account-factory/) and Pipelines to provide automated account creation, baselining, and managed IAM policies. -Within your `infrastructure-live-root` account, the `_new-account-requests` directory serves as an input to the Gruntwork Control Tower Module. This module runs within your management account and uses AWS Control Tower to provision new accounts. +In your `infrastructure-live-root` repository, the `_new-account-requests` directory acts as input for the Gruntwork Control Tower Module. This module runs within your management account and uses AWS Control Tower to provision new accounts and manage existing ones. -Each provisioned account is tracked in your `infrastructure-live-root` repository as a new base directory containing Terragrunt units that are automatically tracked by Pipelines. +Pipelines tracks each provisioned account as a new base directory containing Terragrunt units in your `infrastructure-live-root` repository. ![Architecture Overview Diagram](/img/accountfactory/architecture.png) ## Account Vending -Account Vending begins by using the Account Factory Workflow to generate a Pull Request against `infrastructure-live-root` that adds a file to the `_new-account-requests` directory. Pipelines detects these new account requests and begins executing terragrunt plan/apply on this module in the management account. +Account Vending starts when the Account Factory Workflow generates a Pull Request against `infrastructure-live-root`, adding a file to the `_new-account-requests` directory. 
Pipelines detects these new account requests and runs terragrunt plan/apply commands on the `control-tower-multi-account-factory` unit in the management account.

-Once the account has been created Pipelines can begin provisioning resources into the account, this includes the IaC controlled OIDC authentication Pipelines will use to deploy infrastructure changes within the account, and IAM policies used to restrict the scope of changes that Pipelines can deploy.
+After creating the account(s), Pipelines provisions resources, including IaC-controlled, OIDC-authenticated roles, which Pipelines can later use to deploy infrastructure changes within the account, and IAM policies that define the scope of changes Pipelines can deploy.

-Once this infrastructure has been added to the repository Pipelines deploys it into the AWS account, and runs account baselines in the logs, security, and shared accounts to finish provisioning the new account.
+After adding this infrastructure to the repository, Pipelines deploys the resources into the AWS account and runs account baselines in the logs, security, and shared accounts to complete the provisioning process.
```mermaid sequenceDiagram - Account Factory Workflow ->> Infra Live Repository: Create account request file; - Infra Live Repository ->> Pipelines: Trigger Account Requested; - Pipelines ->> AWS Control Tower Module: Execute terragrunt to create account - AWS Control Tower Module ->> Pipelines: Account Created - Pipelines ->> Infra Live Repository: Add Account Infrastructure - Infra Live Repository ->> Pipelines: Trigger Account Added - Pipelines ->> Core Accounts: Execute terragrunt to baseline account + Account Factory Workflow ->> Infra Live Repository: Create account request file; + Infra Live Repository ->> Pipelines: Trigger Account Requested; + Pipelines ->> AWS Control Tower Module: Execute terragrunt to create account + AWS Control Tower Module ->> Pipelines: Account Created + Pipelines ->> Infra Live Repository: Add Account Infrastructure + Infra Live Repository ->> Pipelines: Trigger Account Added + Pipelines ->> Core Accounts: Execute terragrunt to baseline account ``` - ## IAM Roles -Each new account has a set of IAM policies that determine the scope of changes Pipelines can plan/apply within AWS. Pipelines will automatically assume the appropriate roles for each account when changes are detected. Read about the [roles in full here](/2.0/docs/pipelines/architecture/security-controls#roles-provisioned-by-devops-foundations). +Each new account includes IAM policies that define the scope of changes Pipelines can make within AWS. Pipelines automatically assumes the appropriate roles for each account when changes are detected. Read about the [roles in full here](/2.0/docs/pipelines/architecture/security-controls#roles-provisioned-by-devops-foundations). ## Delegated Repositories -Delegated repositories provide additional control over your infrastructure by expanding on the above architecture. 
When vending delegated repositories new account security baselines are still tracked in your `infrastructure-live-root` repository, however other infrastructure is tracked in a new repository specific to this account(s). New IAM roles are added to your `infrastructure-live-access-control` repository that are inherited by pipelines when deploying infrastructure in the delegated repositories, allowing the central platform team to control what changes can be implemented via Pipelines in the delegated repository.
+Delegated repositories expand on the architecture above to provide additional access control for your infrastructure. When vending delegated repositories, Pipelines continues tracking new account security baselines in your `infrastructure-live-root` repository, while other infrastructure is tracked in a new repository specific to the account(s). Pipelines inherits new IAM roles from your `infrastructure-live-access-control` repository when deploying infrastructure in delegated repositories. This setup allows the central platform team to control what changes individual teams can make via Pipelines in the delegated repository.

-![Delegated Architecture Overview Diagram](/img/accountfactory/delegated-architecture.png) \ No newline at end of file
+![Delegated Architecture Overview Diagram](/img/accountfactory/delegated-architecture.png)
diff --git a/docs/2.0/docs/accountfactory/architecture/logging.md b/docs/2.0/docs/accountfactory/architecture/logging.md
index 7c9efc9b0..22c53858a 100644
--- a/docs/2.0/docs/accountfactory/architecture/logging.md
+++ b/docs/2.0/docs/accountfactory/architecture/logging.md
@@ -1,40 +1,40 @@
# Logging

-Gruntwork Account Factory sets up [AWS CloudTrail](https://aws.amazon.com/cloudtrail/) for all accounts in your [AWS Organization](https://aws.amazon.com/organizations/). CloudTrail allows you to answer the question of _who_ did _what_ and _when_ in each of your AWS accounts.
+Gruntwork Account Factory configures [AWS CloudTrail](https://aws.amazon.com/cloudtrail/) for all accounts in your [AWS Organization](https://aws.amazon.com/organizations/). CloudTrail helps you determine _who_ did _what_ and _when_ in each of your AWS accounts. ## Where you can find logs -AWS CloudTrail is automatically configured to log all operations in your AWS accounts when you use Gruntwork Account Factory. By default, CloudTrail maintains your data for 90 days and is queryable using CloudTrail UI. +Gruntwork Account Factory automatically configures AWS CloudTrail to log all operations in your AWS accounts. By default, CloudTrail maintains your data for 90 days and is queryable using the AWS Console CloudTrail UI. -Account Factory sets up CloudTrail to output all events from all of your AWS accounts to an S3 bucket in your `logs` AWS account with a default rule to expire objects after 1 year. Once logs are in S3, you may set up an additional tool for [querying the logs](#querying-data). +Account Factory sets up CloudTrail to forward all events from all of your AWS accounts to an S3 bucket in your `logs` AWS account with a default rule to expire objects after 1 year. After logs reach S3, you can set up an additional tool for [querying the logs](#querying-data). ### CloudTrail -Logs can be viewed in the CloudTrail UI in each of your AWS accounts. To access the CloudTrail UI, navigate to the AWS Console, search `CloudTrail` in the search bar, select CloudTrail from the search results, then select **Event History** from the left side panel. +The CloudTrail UI in each AWS account provides access to logs. To access the CloudTrail UI, navigate to the AWS Console, search `CloudTrail` in the search bar, select CloudTrail from the search results, and then select **Event History** from the left side panel. ### S3 -CloudTrail logs are delivered to S3 approximately every 5 minutes. 
If you are using an S3 bucket that was created by AWS Control Tower, the bucket will be named `aws-controltower-logs--`. At the top level of the bucket is a single prefix with a random id, which contains additional prefixes to distinguish between logs for CloudTrail and AWS Config. CloudTrail logs for each account can be found in the prefix `/AWSLogs//`.
+S3 receives CloudTrail logs approximately every 5 minutes. If AWS Control Tower created your S3 bucket, it will be named `aws-controltower-logs--`. At the top level of the bucket is a single prefix with a random ID, which contains additional prefixes to distinguish between logs for CloudTrail and AWS Config. Find CloudTrail logs for each account in the prefix `/AWSLogs//`.

-For each account, CloudTrail delivers logs to region, year, month, and day specific prefixes in the bucket. For example, logs for an account with the id `123456789012` on September 26th, 2023 in the `us-west-2` region, would be in a prefix named `123456789012/us-west-2/2023/09/26`.
+For each account, CloudTrail delivers logs to region-, year-, month-, and day-specific prefixes in the bucket. For example, logs for an account with the id `123456789012` on September 26th, 2023 in the `us-west-2` region would be in a prefix named `123456789012/us-west-2/2023/09/26`.

-If you configured your logs bucket while setting up AWS Control Tower, you will need access to the KMS key you created to encrypt the objects to download any objects. See [Logs bucket access](#logs-bucket-access) for more information.
+If you configured your logs bucket while setting up AWS Control Tower, you will need access to the KMS key you created to encrypt the objects in order to download them. See [Logs bucket access](#logs-bucket-access) for more information.

For more information about querying data in S3, see [querying in S3](#querying-in-s3).
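The date-based prefix scheme above can be sketched in a few lines of shell, using the example values from the text; the bucket name in the trailing comment is a placeholder, not a value from this guide:

```shell
# Build the date-based prefix where CloudTrail delivers one account's logs,
# using the example account, region, and date from the text above.
ACCOUNT_ID="123456789012"
REGION="us-west-2"
DAY="2023/09/26"
PREFIX="${ACCOUNT_ID}/${REGION}/${DAY}"
echo "$PREFIX"

# With AWS credentials configured, you could then locate the delivered
# objects without assuming the bucket's exact top-level layout, e.g.:
#   aws s3 ls --recursive "s3://your-logs-bucket/" | grep "$PREFIX"
```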
-## Data access
+## Data access

-Granting access to the audit logs requires security configurations in both the originating account (e.g., the account in which the events are occurring) and the `logs` account. The originating account contains the CloudTrail trail itself, which should only be viewable by account administrators. The `logs` account contains the AWS S3 bucket that contains synchronized CloudTrail logs from all logs.
+Granting access to the audit logs requires security configurations in the originating account (e.g., the account in which the events are occurring) and the `logs` account. The originating account contains the CloudTrail trail itself, which should only be viewable by account administrators. The `logs` account contains the AWS S3 bucket that holds synchronized CloudTrail logs from all accounts.

### CloudTrail access

-Access to CloudTrail is controlled by AWS IAM policies that are assigned to individual IAM users (not recommended) or IAM roles than can be assumed by users (recommended) in AWS accounts.
+Access to CloudTrail is controlled by AWS IAM policies that are assigned to individual IAM users (not recommended) or IAM roles that can be assumed by users (recommended) in AWS accounts.

:::tip
Gruntwork recommends that only those with administrative access to an AWS account have access to view CloudTrail logs, as they contain a record of every single API operation that was performed in the account, which may expose the name or configuration of resources an individual user may otherwise not have access to.
:::

-Further, the configuration of CloudTrail trails should be defined as code, with all changes reviewed in a pull request before being applied automatically by [Gruntwork Pipelines](/2.0/docs/pipelines/concepts/overview).
+Furthermore, you should define the configuration of CloudTrail trails as code, with all changes reviewed in a pull request before being applied automatically by [Gruntwork Pipelines](/2.0/docs/pipelines/concepts/overview).

See [Identity-based policy examples for AWS CloudTrail](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/security_iam_id-based-policy-examples.html) to learn more about granting access to CloudTrail.

@@ -48,22 +48,22 @@ Access to the objects containing CloudTrail events in S3 is controlled by IAM po

Gruntwork recommends that only a select group of trusted individuals on your security team have direct access to objects in the S3 bucket. Whenever possible, the data should be accessed by [querying](#querying-data) it using the CloudTrail UI or a query service such as [Amazon Athena](https://aws.amazon.com/athena/).
:::

-## Querying data
+## Querying data

-You can query CloudTrail data in two ways - in the originating account or from the `logs` account. Querying in the originating account is done using the CloudTrail UI and is useful for quick checks that do not require in-depth analysis of usage and trends. If you require support for performing analytics to observe usage and trends, Gruntwork recommends querying the data in the S3 bucket in the `logs` account using a query service like [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html).
+You can query CloudTrail data in two ways: in the originating account or from the `logs` account. Querying in the originating account is done using the CloudTrail UI, which is helpful for quick checks that do not require in-depth analysis of usage and trends. If you need support for performing analytics to observe usage and trends, Gruntwork recommends querying the data in the S3 bucket in the `logs` account using a query service like [Amazon Athena](https://docs.aws.amazon.com/athena/latest/ug/what-is.html).
### Querying in CloudTrail -CloudTrail supports simple queries based on a pre-set lookup attributes, including the event source, event name, user name, and resource type. A full list of filters can be found in [filtering CloudTrail events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-console.html#filtering-cloudtrail-events). The filters in the CloudTrail allow you to perform coarse grained queries over a single attribute filter and time range and view details on individual events. Using the CloudTrail UI can be a quick way to retrieve a lot of information, such as all the users that have performed a certain API call (e.g., ListBuckets), however it is ineffective when trying analyze data to understand usage patterns across multiple attributes, such as the usage of Gruntwork Pipelines by all users in your GitHub organization. +CloudTrail supports simple queries based on pre-set lookup attributes, including the event source, event name, user name, and resource type. You can find a complete list of filters in [filtering CloudTrail events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-console.html#filtering-cloudtrail-events). The filters in CloudTrail allow you to perform coarse-grained queries over a single attribute filter and time range and view details on individual events. Using the CloudTrail UI can be a quick way to retrieve a lot of information, such as all the users that have performed a specific API call (e.g., ListBuckets). However, it is ineffective when analyzing data to understand usage patterns across multiple attributes, such as the usage of Gruntwork Pipelines by all users in your GitHub organization. 
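A single-attribute lookup like the ListBuckets case above can also be issued from the AWS CLI. The sketch below only composes and prints the command, since actually running it requires credentials for the account in question; the time window is an arbitrary example:

```shell
# Compose a CloudTrail lookup for all ListBuckets calls in a one-day window.
# lookup-events filters on a single lookup attribute per query, matching the
# coarse-grained, one-attribute filtering described above.
CMD="aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=ListBuckets \
  --start-time 2023-09-26T00:00:00Z \
  --end-time 2023-09-27T00:00:00Z \
  --max-results 10"
echo "$CMD"
```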
You can also [download events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/view-cloudtrail-events-console.html#downloading-events) from CloudTrail in CSV or JSON format and perform more in-depth analysis of events in another system such as a query service or using a script on your local machine.

### Querying in S3

-If CloudTrail is configured to output all logs to an S3 bucket, there are two approaches that can be taken to perform queries on the data - downloading the data directly (not recommended) and setting up a query service like [Amazon Athena](https://aws.amazon.com/athena/) to allow for more in-depth analysis of your data (recommended).
+If you configure CloudTrail to output all logs to an S3 bucket, you can take two approaches to perform queries on the data: downloading the data directly (not recommended) and setting up a query service like [Amazon Athena](https://aws.amazon.com/athena/) to allow for more in-depth analysis of your data (recommended).

-Amazon Athena is a popular choice for a query service because it is directly integrated in the AWS Console. Further, because CloudTrail logs have a known structure and prefix scheme in S3, you can set up [Athena with partition projection](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#create-cloudtrail-table-partition-projection), which will automatically create new partitions in Athena, reducing the work required to ensure the data is partitioned for optimal query support. While Athena is recommended because of its convenience, you may use any query service of your choosing to analyze the data, so long as the tool can pull data out of S3. See [example queries](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#query-examples-cloudtrail-logs) and [tips for querying CloudTrail logs](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#tips-for-querying-cloudtrail-logs) for more information on analyzing CloudTrail data using Athena.
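As one concrete sketch of the query-service approach, you could submit a query to Athena from the CLI. The table name (`cloudtrail_logs`), database, and results bucket below are assumptions, not values from this guide, and the command is printed rather than executed since running it needs AWS credentials and an Athena table already set up:

```shell
# Hypothetical Athena query over a CloudTrail table: who called ListBuckets,
# and when. Substitute your own table, database, and results bucket.
SQL="SELECT useridentity.arn, eventtime FROM cloudtrail_logs WHERE eventname = 'ListBuckets' LIMIT 10"
echo aws athena start-query-execution \
  --query-string "\"$SQL\"" \
  --query-execution-context Database=default \
  --result-configuration "OutputLocation=s3://your-query-results-bucket/"
```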
+Amazon Athena is a popular query service because it is integrated into the AWS Console and lets you perform queries directly on data in S3. Furthermore, because CloudTrail logs have a known structure and prefix scheme in S3, you can set up [Athena with partition projection](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#create-cloudtrail-table-partition-projection), which will automatically create new partitions in Athena, reducing the work required to ensure data partitioning for optimal query support. We recommend Athena because of its convenience; you can use any query service you choose to analyze the data as long as the tool can pull data out of S3. See [example queries](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#query-examples-cloudtrail-logs) and [tips for querying CloudTrail logs](https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html#tips-for-querying-cloudtrail-logs) for more information on analyzing CloudTrail data using Athena.

:::warning
-Downloading CloudTrail event data from S3, while possible, is generally not recommended. Finding data requires downloading potentially many objects and writing scripts to parse an analyze them. Once the data is outside of S3, it is not possible to know what analysis is being performed. Query services like AWS Athena or similar allow you to see the history of queries performed and who performed the query.
-::: \ No newline at end of file
+While it is possible to download CloudTrail event data from S3, it is generally not recommended. Finding data requires downloading potentially many objects and writing scripts to parse and analyze them. Once the data is outside S3, it is impossible to know what analysis is being performed. Query services like AWS Athena or similar allow you to see the history of queries performed and who performed the query.
+::: diff --git a/docs/2.0/docs/accountfactory/architecture/network-topology.md b/docs/2.0/docs/accountfactory/architecture/network-topology.md index 0e7d1c597..b5451d1bf 100644 --- a/docs/2.0/docs/accountfactory/architecture/network-topology.md +++ b/docs/2.0/docs/accountfactory/architecture/network-topology.md @@ -1,29 +1,29 @@ # About Network Topology -The Network Topology component is focused on setting up the right best-practices network architecture for your organization. +The Network Topology component focuses on implementing best-practices network architecture for your organization. -Our standard network architecture includes: +The standard network architecture includes: -- The VPC itself -- Subnets, which are isolated subdivisions within the VPC. There are 3 "tiers" of subnets: public, private app, and private persistence. -- Route tables, which provide routing rules for the subnets. -- Internet Gateways to route traffic to the public Internet from public subnets. -- NATs to route traffic to the public Internet from private subnets. 
-- VPC peering to a management VPC
-- DNS forwarding for a management VPC
-- Optionally, tags for an EKS cluster
+- The Virtual Private Cloud (VPC) itself
+- Subnets, isolated subdivisions within the VPC, organized into three "tiers": public, private app, and private persistence
+- Route tables, which define routing rules for subnets
+- Internet Gateways, to manage traffic between public subnets and the Internet
+- Network Address Translation Gateways (NAT Gateways), to handle traffic between private subnets and the Internet
+- VPC peering connections to a management VPC, to allow for centralized network routing
+- DNS forwarding, for communication with a management VPC
+- Optional tags for an EKS cluster

## Out-of-the-box setup

-Gruntwork will generate the IaC code you need to set up our standard, recommended VPC configuration, as detailed in our [VPC service catalog module](/reference/services/networking/virtual-private-cloud-vpc).
+Gruntwork generates the IaC code required to implement its standard, recommended VPC configuration. Details are available in the [VPC service catalog module](/reference/services/networking/virtual-private-cloud-vpc).

## Extending the standard VPC

-You can extend this configuration by using the "building block" modules from the VPC topic in the Gruntwork IaC Library to further extend your VPC, adding functionality such as:
+You can expand the configuration using "building block" modules from the VPC topic in the Gruntwork IaC Library.
These modules enable additional functionality such as: - [Enabling IPv6](/reference/modules/terraform-aws-vpc/vpc-app/#ipv6-design) - [Adding a Transit Gateway](/reference/modules/terraform-aws-vpc/transit-gateway/) - [Enabling DNS forwarding](/reference/modules/terraform-aws-vpc/vpc-dns-forwarder/) - [Setting up Tailscale](/reference/services/security/tailscale-subnet-router) -This is done by directly working with the OpenTofu/Terraform modules from Gruntwork IaC Library to accomplish the particular configuration you need. +This process involves working directly with the OpenTofu/Terraform modules in the Gruntwork IaC Library. diff --git a/docs/2.0/docs/library/architecture/opentofu-terraform-compatibility.md b/docs/2.0/docs/library/architecture/opentofu-terraform-compatibility.md index b35b3317b..32dc7135a 100644 --- a/docs/2.0/docs/library/architecture/opentofu-terraform-compatibility.md +++ b/docs/2.0/docs/library/architecture/opentofu-terraform-compatibility.md @@ -1,24 +1,24 @@ # Compatibility with OpenTofu and Terraform -All code in Gruntwork IaC Library is compatible with: +All code in the Gruntwork IaC Library is compatible with: - All versions of [OpenTofu](https://opentofu.org/) -- HashiCorp Terraform versions v1.5.7 and below +- HashiCorp Terraform versions up to and including v1.5.7 -## Why the split? +## Reason for the split -See our blog post [The Future of Terraform Must Be Open](https://blog.gruntwork.io/the-future-of-terraform-must-be-open-ab0b9ba65bca) for details. +For additional context, refer to the blog post [The Future of Terraform Must Be Open](https://blog.gruntwork.io/the-future-of-terraform-must-be-open-ab0b9ba65bca). ## What's special about HashiCorp Terraform v1.5.7? -This is the last version of HashiCorp Terraform that is licensed under the MPLv2 open source license. Any version of Terraform at or below v1.5.7 remains licensed under the MPLv2 license and will continue to work as it always has. 
+Version 1.5.7 is the final open-source release of HashiCorp Terraform, licensed under the MPLv2 open-source license. Versions up to and including v1.5.7 remain MPLv2-licensed and can continue to be used with Gruntwork.

## What if I want to use a version of Terraform above v1.5.7?

-Going forward, we recommend that all Gruntwork customers adopt [OpenTofu](https://opentofu.org/) as a "drop-in" replacement for HashiCorp Terraform. We will be developing against OpenTofu releases, testing for compatibility with OpenTofu, and offering full support for any issues you experience with our modules and OpenTofu.
+Gruntwork advises all customers to adopt [OpenTofu](https://opentofu.org/) as a "drop-in" replacement for HashiCorp Terraform. We will prioritize development with OpenTofu releases, test for compatibility, and provide full support for any issues related to our modules and OpenTofu.

-## As a user of Gruntwork IaC Library, do I need to change anything?
+## As a user of the Gruntwork IaC Library, do I need to make changes?

-No. You can continue using any version of HashiCorp Terraform up to and including v1.5.7.
+No immediate changes are necessary. You can continue using any version of HashiCorp Terraform up to and including v1.5.7.

-When you wish to upgrade your Terraform binary, you should replace HashiCorp Terraform with [OpenTofu](https://opentofu.org/).
+When ready to upgrade your Terraform binary, replace HashiCorp Terraform with [OpenTofu](https://opentofu.org/).
diff --git a/docs/2.0/docs/library/architecture/overview.md b/docs/2.0/docs/library/architecture/overview.md index bcfb96a8c..722dedbd7 100644 --- a/docs/2.0/docs/library/architecture/overview.md +++ b/docs/2.0/docs/library/architecture/overview.md @@ -4,50 +4,40 @@ import OpenTofuNotice from "/src/components/OpenTofuNotice" ## How modules are structured -The code in the module repos are organized into three primary folders: +The code in the module repositories is organized into three primary folders: -1. `modules`: The core implementation code. All of the modules that you will use and deploy are defined within. For example to ECS cluster module in the `terraform-aws-ecs` repo in `modules/ecs-cluster`. +1. `modules`: This folder contains the core implementation code. All modules you use and deploy are defined here. For example, you can locate the ECS cluster module in the `terraform-aws-ecs` repository within the `modules/ecs-cluster` folder. -1. `examples`: Sample code that shows how to use the modules in the `modules` folder and allows you to try them out without having to write any code: `cd` into one of the folders, follow a few steps in the README (e.g. run `terraform apply`), and you’ll have a fully working module up and running. In other words, this is executable documentation. +1. `examples`: This folder includes sample code demonstrating how to use the modules in the `modules` folder. These examples allow you to try the modules without writing code. Navigate to one of the example directories, follow the steps in the README (e.g., run `tofu apply`), and you will have a working module. These examples serve as executable documentation. -1. `test`: Automated tests for the code in modules and examples. +1. `test`: This folder contains automated tests for the code in both the `modules` and `examples` folders. 
-We follow Hashicorp's [Standard Model Structure](https://developer.hashicorp.com/terraform/language/modules/develop/structure) for our files (`main.tf`, `variables.tf`, `outputs.tf`). In the `variables.tf` file we always put the required variables at the top of the file, followed by the optional variables. Although there are often a lot of ways to configure our modules, we set reasonable defaults and try to minimize the effort required to configure the modules to the most common use cases. +The structure of these files follows HashiCorp's [Standard Module Structure](https://developer.hashicorp.com/terraform/language/modules/develop/structure), including `main.tf`, `variables.tf`, and `outputs.tf`. In the `variables.tf` file, required variables are listed first, followed by optional ones. Although many configurations are possible, the modules are designed with reasonable defaults to simplify setup for the most common use cases. ## How services are structured -The code in the `terraform-aws-service-catalog` repo is organized into three primary folders: +The `terraform-aws-service-catalog` repository organizes its code into three main folders: -1. `modules`: The core implementation code of this repo. All the services that you will use and deploy are defined within, such as the EKS cluster service in modules/services/eks-cluster. +1. `modules`: This folder contains the core implementation code for the services you use and deploy. For instance, the EKS cluster service resides in `modules/services/eks-cluster`. -1. `examples`: Sample code that shows how to use the services in the modules folder and allows you to try the services out without having to write any code: you `cd` into one of the folders, follow a few steps in the README (e.g., run `terraform apply`), and you’ll have fully working infrastructure up and running. In other words, this is executable documentation. Note that the examples folder contains two sub-folders: +1. 
`examples`: This folder provides sample code demonstrating how to use the services in the `modules` folder. These examples enable you to deploy services without writing code. Navigate to a directory, follow the README instructions (e.g., run `tofu apply`), and you'll have working infrastructure. This folder contains two sub-folders: - 1. `for-learning-and-testing`: Example code that is optimized for learning, experimenting, and testing, but not - direct production usage. Most of these examples use Terraform directly to make it easy to fill in dependencies - that are convenient for testing, but not necessarily those you’d use in production: e.g., default VPCs or mock - database URLs. +1. `for-learning-and-testing`: These examples are optimized for experimentation and testing but not for direct production use. They often rely on default VPCs or mock database URLs for convenience. - 1. `for-production`: Example code optimized for direct usage in production. This is code from the [Gruntwork Reference - Architecture](https://gruntwork.io/reference-architecture/), and it shows you how we build an end-to-end, - integrated tech stack on top of the Gruntwork Service Catalog. To keep the code DRY and manage dependencies - between modules, the code is deployed using [Terragrunt](https://terragrunt.gruntwork.io/). However, Terragrunt - is NOT required to use the Gruntwork Service Catalog: you can alternatively use vanilla Terraform or Terraform - Cloud / Enterprise, as described [here](https://docs.gruntwork.io/reference/services/intro/deploy-new-infrastructure#how-to-deploy-terraform-code-from-the-service-catalog). +1. `for-production`: These examples are optimized for direct production use. They showcase how Gruntwork's Reference Architecture integrates a complete tech stack using the Gruntwork Service Catalog. To keep the code DRY and manage dependencies, you can deploy these examples using [Terragrunt](https://terragrunt.gruntwork.io/). 
Terragrunt is not required to use the Gruntwork Service Catalog; you can use OpenTofu, Terraform, or Terraform Cloud/Enterprise, as described [here](https://docs.gruntwork.io/reference/services/intro/deploy-new-infrastructure#how-to-deploy-terraform-code-from-the-service-catalog).

-   1. Not all modules have a `for-production` example, but you can still create a production-grade configuration by
-      using the template provided in this discussion question, [How do I use the modules in terraform-aws-service-catalog
-      if there is no example?](https://github.com/gruntwork-io/knowledge-base/discussions/360#discussioncomment-25705480).
+1. Not all modules include a `for-production` example. However, you can create a production-grade configuration using the template provided in [this discussion](https://github.com/gruntwork-io/knowledge-base/discussions/360#discussioncomment-25705480).

-1. `test`: Automated tests for the code in modules and examples.
+1. `test`: This folder includes automated tests for the code in the `modules` and `examples` folders.

## Tools used in Library

-Gruntwork IaC Library has been created using the following tools:
+Gruntwork built its IaC Library using the following tools:

-1. [Terraform](https://www.terraform.io/). The Library contains nearly 300 Terraform modules that cover a range of common use cases in AWS. All library modules can be used with vanilla [Terraform](https://www.terraform.io/), [Terragrunt](https://terragrunt.gruntwork.io/), or third-party Terraform pipeline tools such as [Terraform Cloud](https://www.hashicorp.com/blog/announcing-terraform-cloud/) and [Terraform Enterprise](https://www.terraform.io/docs/enterprise/index.html).
+1. [OpenTofu](https://opentofu.org/)/[Terraform](https://www.terraform.io/). The Library contains nearly 300 OpenTofu/Terraform modules covering common AWS use cases.
All modules are compatible with [OpenTofu](https://opentofu.org/), [Terraform](https://www.terraform.io/), [Terragrunt](https://terragrunt.gruntwork.io/), or third-party pipeline tools like [Terraform Cloud](https://www.hashicorp.com/blog/announcing-terraform-cloud/) and [Terraform Enterprise](https://www.terraform.io/docs/enterprise/index.html). -1. [Packer](https://www.packer.io/). The Library defines _machine images_ (e.g., VM images) using Packer, where the main use case is building Amazon Machine Images (AMIs) that run on EC2 instances whose configuration is all defined in code. Once you’ve built an AMI, you can use Terraform to deploy it into AWS. +1. [Packer](https://www.packer.io/). The Library includes definitions for _machine images_ (e.g., VM images) using Packer. A common use case is creating Amazon Machine Images (AMIs) for EC2 instances, where configuration is defined entirely in code. After building an AMI, you can deploy it using OpenTofu/Terraform. 1. [Terratest](https://terratest.gruntwork.io/). All modules are functionally validated with automated tests written using Terratest. diff --git a/docs/2.0/docs/library/concepts/module-defaults.md b/docs/2.0/docs/library/concepts/module-defaults.md index e68b57db0..dede46f53 100644 --- a/docs/2.0/docs/library/concepts/module-defaults.md +++ b/docs/2.0/docs/library/concepts/module-defaults.md @@ -1,20 +1,18 @@ # Module Defaults -Module defaults is a pattern that allows infrastructure as code developers to reference a terraform module, set locals, and set default (but mutable) variable values. This pattern helps keep your Terragrunt architecture DRY, reducing the likelihood of errors when making changes across environments. +Module defaults allow infrastructure as code developers to reference an OpenTofu/Terraform module, set locals, and set default (but mutable) variable values. This pattern helps keep Terragrunt architecture DRY, reducing the likelihood of errors when making changes across environments. 
-This patterns has benefits for both module developers and consumers: +This pattern benefits both module developers and consumers: -- Module developers can centrally define defaults that can be applied to all usage. -- Module consumers don’t have to repeat as much code when leveraging the module. +- Module developers can centrally define defaults for consistent usage. +- Module consumers reduce repetitive code when leveraging the module. :::note -The module defaults pattern was previously known as the "envcommon" pattern (and stored in an `_envcommon` directory of the `infrastructure-live` repo. These are equivalent concepts, so any knowledge base posts or other material referencing "envcommon" can be directly mapped to the concept of module defaults.) +The module defaults pattern was previously known as the "envcommon" pattern, stored in an `_envcommon` directory within the `infrastructure-live` repository. These are equivalent concepts, and references to "envcommon" can be directly mapped to module defaults. ::: -One analogy is to think about module defaults in the same way you might think about purchasing a new car. The manufacturer offers a "base model" with several configurable options, such as interior upgrades, but at the end of the purchase you will have a car. As the purchaser, you might just need the base model without any upgrades, or you may upgrade the stereo to a premium option. +A helpful analogy is purchasing a car. The manufacturer offers a "base model" with several configurable options, such as interior upgrades, while ensuring the vehicle remains functional. Similarly, module defaults allow you to define a "base" resource, like an AWS RDS for PostgreSQL instance. For example, consumers might receive a `db.t3.medium` instance with a `50GB` general-purpose SSD as the default. Consumers can override variables for increased memory, CPU, or storage in production without altering other configurations.
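+The override workflow described above can be sketched in Terragrunt. This is a minimal, hypothetical example: the file paths, module source, and input names are illustrative, not taken from the Gruntwork catalog.

```hcl
# _module_defaults/rds.hcl — hypothetical "defaults module" shared by all environments
terraform {
  # Illustrative source; pin to a real module and version in practice
  source = "git::git@github.com:gruntwork-io/terraform-aws-data-storage.git//modules/rds?ref=v1.0.0"
}

inputs = {
  # Defaults every environment inherits unless overridden
  instance_type     = "db.t3.medium"
  allocated_storage = 50
}
```

```hcl
# prod/rds/terragrunt.hcl — production overrides only what differs
include "defaults" {
  path           = find_in_parent_folders("_module_defaults/rds.hcl")
  merge_strategy = "deep"
}

inputs = {
  instance_type     = "db.r5.large" # more memory and CPU for production
  allocated_storage = 200           # more storage for production
}
```

+Any input not overridden in `prod/rds/terragrunt.hcl` falls through to the defaults file, so the two configurations stay in sync automatically.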
-Similarly, with module defaults you may define a "base" resource, such as an AWS RDS for PostgreSQL instance. By default, all consumers of the module might get a `db.t3.medium` instance with a `50gb` general purpose SSD. While this might work in the majority of your environments, for a production deployment you might need an instance with more memory, CPU, and storage space. With module defaults, you would simply override the variable names for the instance size/type and the amount of desired storage. Everything else remains the same. - -Now that we’ve established what the module defaults pattern is and how it can help simplify your infrastructure as code, let’s dive into how you can define a "defaults module" that implements the pattern. +With module defaults established, the next step is defining a "defaults module" to implement this pattern effectively. diff --git a/docs/2.0/docs/library/concepts/modules.md b/docs/2.0/docs/library/concepts/modules.md index 9ab0a2925..46e87f108 100644 --- a/docs/2.0/docs/library/concepts/modules.md +++ b/docs/2.0/docs/library/concepts/modules.md @@ -1,48 +1,44 @@ # Modules -Modules are reusable "infrastructure building blocks" that describe how to deploy and manage a specific piece of infrastructure, such as a VPC, ECS cluster, or Auto Scaling Group. +Modules are reusable "infrastructure building blocks" describing how to deploy and manage specific pieces of infrastructure, such as a VPC, ECS cluster, or Auto Scaling Group. -Most modules are written in Terraform and define several AWS resources. +Most modules are written in OpenTofu/Terraform and define multiple AWS resources. ## Example -Let’s look at an example module. The [rds module](/reference/modules/terraform-aws-data-storage/rds) is a Terraform module that creates an RDS database, the IAM roles needed to operate that database, optional read replicas, database subnet groups, and the relevant security groups. 
+Consider the [`rds` module](/reference/modules/terraform-aws-data-storage/rds). This OpenTofu/Terraform module creates an RDS database, the IAM roles required to operate it, optional read replicas, database subnet groups, and relevant security groups. -The module deploys a key element of an overall RDS deployment, but it's not a _complete_ RDS deployment. That's because the `rds` module does not include backup policies using AWS Backup (for disaster recovery), or RDS Proxy (to pool database connections), or CloudWatch alarms (to alert you when something goes wrong). These missing pieces are best thought of as building block modules themselves. Gruntwork has modules for `backup-plan`, `backup-vault`, and `rds-proxy` that can all be used in combination with the `rds` module. +While the module addresses key elements of an RDS deployment, it does not provide a _complete_ solution. It excludes features like backup policies using AWS Backup, RDS Proxy for connection pooling, and CloudWatch alarms for monitoring. These missing elements are available as separate building block modules, such as `backup-plan`, `backup-vault`, and `rds-proxy`, which you can use alongside the `rds` module. -To see how Gruntwork gives you an off-the-shelf overall deployment with all the elements included, see [Service Modules](/2.0/docs/library/concepts/service-modules). +To explore complete solutions combining building blocks, refer to [Service Modules](/2.0/docs/library/concepts/service-modules). ## Modules are optimized for control -A module is designed to be small, narrow in scope, and highly reusable, like a building block. Modules give you _control_, but they may not give you _convenience_. You can use the building block modules for all kinds of use cases (high control), but if you want to deploy a complete piece of infrastructure, you still have to do the work of assembling the right modules (low convenience). +Modules are designed to be small, narrow in scope, and highly reusable. 
They prioritize _control_ over _convenience_, making them suitable for diverse use cases. Deploying a complete infrastructure solution often requires assembling multiple modules. -To learn how you can optimize for convenience, see [Service Modules](/2.0/docs/library/concepts/service-modules). +Consider [Service Modules](/2.0/docs/library/concepts/service-modules) when optimizing for greater convenience. -To learn more about the overall thought process behind building block modules versus service modules, see [Introducing: The Gruntwork Module, Service, and Architecture Catalogs](https://blog.gruntwork.io/introducing-the-gruntwork-module-service-and-architecture-catalogs-eb3a21b99f70). +For insights on building block versus service modules, see [Introducing: The Gruntwork Module, Service, and Architecture Catalogs](https://blog.gruntwork.io/introducing-the-gruntwork-module-service-and-architecture-catalogs-eb3a21b99f70). ## When to use a building block module -Building block modules are fairly generic by design, so you won't typically deploy a single building block module directly. Instead, you write code that combines the building block modules you need for a specific use case. +Building block modules are typically generic. Instead of deploying a single module, users write code combining multiple modules for specific use cases. For instance, one module might deploy the Kubernetes control plane while another deploys worker nodes; deploying a Kubernetes cluster requires combining both. -For example, one module might deploy the control plane for Kubernetes and a separate module could deploy worker nodes; you may need to combine both modules together to deploy a Kubernetes cluster. - -We recommend our [Service Catalog](/2.0/docs/library/concepts/service-modules) for common use cases, but our full module catalog is available if you have a more complex use case.
+We recommend using the [Service Catalog](/2.0/docs/library/concepts/service-modules) for everyday use cases, with the entire module catalog available for more complex needs. ## Where to find the building block modules -The module catalog features over 250 "building block" modules spanning three major use cases: +The module catalog features over 250 building block modules spanning three primary use cases: 1. AWS foundations 2. Running applications 3. Storing data -Each of these use cases covers one or more Subject Matter Expert (SME) topics such as AWS account management, VPC/Networking, EKS, ECS, and RDS. SME topics are a first-class concept within Gruntwork, but do not have much visibility in the product itself at this time. - -To browse the module catalog, see the [Library Reference](/library/reference) and look for "Module Catalog" on the sidebar. You can also visit the list of [private Gruntwork GitHub repos](https://github.com/orgs/gruntwork-io/repositories?q=&type=private&language=&sort=). +To browse the module catalog, see the [Library Reference](/library/reference) or peruse the [private Gruntwork GitHub repositories using your subscription](https://github.com/orgs/gruntwork-io/repositories?q=&type=private&language=&sort=). ## How modules are updated -Gruntwork brings together AWS and Terraform experts around the world who track updates from AWS, Terraform, and the DevOps community at large, along with requests from the Gruntwork customer community. We translate the most important of these updates into new features, new optimizations, and ultimately new releases. +Gruntwork employs AWS and OpenTofu/Terraform experts who monitor updates from AWS, OpenTofu, Terraform, and the broader DevOps community, along with feedback from the Gruntwork customer community. Gruntwork translates the most significant of these updates into new features, optimizations, and releases.
-Refer to [Gruntwork releases](/guides/stay-up-to-date/#gruntwork-releases) for a comprehensive listing of all the updates. +Refer to [Gruntwork releases](/guides/stay-up-to-date/#gruntwork-releases) for a comprehensive listing of updates. diff --git a/docs/2.0/docs/library/concepts/overview.md b/docs/2.0/docs/library/concepts/overview.md index 7e74ffffa..f5d717b60 100644 --- a/docs/2.0/docs/library/concepts/overview.md +++ b/docs/2.0/docs/library/concepts/overview.md @@ -2,22 +2,22 @@ import OpenTofuNotice from "/src/components/OpenTofuNotice" # Gruntwork IaC Library -Gruntwork IaC Library is a collection of reusable Infrastructure as Code (IaC) modules that enables you to deploy and manage infrastructure quickly and reliably. +Gruntwork IaC Library is a collection of reusable Infrastructure as Code (IaC) modules designed to enable rapid, reliable infrastructure deployment and management. -It promotes code reusability, modularity, and consistency in infrastructure deployments. Essentially, we’ve taken the thousands of hours we spent building infrastructure on AWS and condensed all that experience and code into pre-built modules you can deploy into your own infrastructure. +The Library promotes code reusability, modularity, and consistency. It encapsulates years of experience building AWS infrastructure into pre-built modules you can integrate into your infrastructure management. ## Two types of modules -Gruntwork IaC Library contains two types of modules: +Gruntwork IaC Library contains two module types: ### "Building block" modules -"Building block" modules (which we call simply **modules**) are "infrastructure building blocks" authored by Gruntwork and written in OpenTofu configuration files. They capture a singular best-practice pattern for specific pieces of infrastructure and are designed to be both limited in scope and highly reusable. They typically represent one part of a use case you want to accomplish. 
For example, the `vpc-flow-logs` module does not create a VPC, it only adds the VPC Flow Logs functionality to an existing VPC. +"Building block" modules, referred to as **modules**, are authored by Gruntwork and written in OpenTofu/Terraform configuration files. They capture best-practice patterns for specific infrastructure components and are limited in scope yet highly reusable. For example, the `vpc-flow-logs` module adds VPC Flow Logs functionality to an existing VPC but does not create a VPC. -To learn more, refer to [Modules](/2.0/docs/library/concepts/modules) +Refer to [Modules](/2.0/docs/library/concepts/modules) for additional details. ### Service modules -**Service modules** are opinionated combinations of "building block" modules described above. They are designed to be used "off the shelf" with no need to assemble a collection of “building block” modules on your own. They typically represent a full use case to solve a business problem on their own. For example, the `vpc` service module deploys a VPC, VPC Flow Logs, and Network ACLs. If you agree with the opinions embedded in a service module, they’re the fastest way to deploy production-grade infrastructure. +Service modules combine "building block" modules into opinionated, "off-the-shelf" solutions requiring minimal assembly. These modules typically address complete business use cases. For example, the `vpc` service module deploys a VPC, VPC Flow Logs, and Network ACLs. If the embedded configurations align with your needs, service modules provide a fast path to production-grade infrastructure. -To learn more, refer to [Service Modules](/2.0/docs/library/concepts/service-modules) +Refer to [Service Modules](/2.0/docs/library/concepts/service-modules) to learn more. 
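+As a sketch of how building blocks compose, the snippet below wires a flow-logs module to a VPC created by a separate module. The repository paths and input/output names are assumptions for illustration only, not verified against the Gruntwork catalog.

```hcl
# Hypothetical root module composing two building blocks.
# Source paths and input/output names are illustrative.
module "vpc" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-vpc.git//modules/vpc-app?ref=v1.0.0"

  vpc_name   = "example-vpc"
  cidr_block = "10.0.0.0/16"
}

module "vpc_flow_logs" {
  source = "git::git@github.com:gruntwork-io/terraform-aws-vpc.git//modules/vpc-flow-logs?ref=v1.0.0"

  # The flow-logs building block attaches to an existing VPC;
  # assembling the pieces is up to you (control over convenience).
  vpc_id = module.vpc.vpc_id
}
```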
diff --git a/docs/2.0/docs/library/concepts/principles/be-judicious-with-new-features.md index d201b5880..66ce2866b 100644 --- a/docs/2.0/docs/library/concepts/principles/be-judicious-with-new-features.md +++ b/docs/2.0/docs/library/concepts/principles/be-judicious-with-new-features.md @@ -1,25 +1,17 @@ # Be Judicious With New Features -Sometimes new features in OpenTofu are released that make module authoring more convenient. +New OpenTofu features can streamline module authoring, but may also require that consumers adopt newer OpenTofu versions. This requirement can pose challenges for organizations that cannot upgrade OpenTofu promptly but want to keep using the latest version of our modules. -Leveraging them can make it convenient author modules, but it can be really inconvenient for module consumers, as they can be inadvertently forced into adopting the newer version of OpenTofu in order to use the new version of the module. Some organizations may not have the bandwidth to invest in upgrading to a newer version of OpenTofu just to use the latest version of a module. - -This is compounded by the way in which modules in the library can depend upon each other. If a module is updated to require a newer version of OpenTofu, all modules that depend on it will also need to be updated to require the newer version of OpenTofu. +Modules in the Library often depend on each other. If a module is updated to require a newer OpenTofu version, every module that depends on it must also be updated to require that newer version. ## How to Decide to Use Newer Features -Some individual judgement is required to decide when this trade-off is worth making.
- -Qualities of OpenTofu features to keep in mind when deciding if they should be adopted for modules include: +Consider the following when deciding whether to adopt a new feature: ### Age of Feature -The older the feature, the more likely it is that consumers of the module will be able to use it without updating their versions of OpenTofu. - -It is not necessarily a good idea to only use features that are excessively old, as you don't want consumers to miss out on the latest and greatest from OpenTofu. - -### Impact to Module Consumers +Older features are more likely to be compatible with existing consumer environments. While it is unnecessary to avoid newer features altogether, prioritizing well-established features ensures broader compatibility. -Even when features are relatively new, however, there can be advantages to requiring that consumers upgrade OpenTofu to adopt newer versions of modules. +### Impact on Module Consumers -Take, for example, the [moved](https://opentofu.org/docs/v1.6/language/modules/develop/refactoring/#moved-block-syntax) block. Using this block can allow consumers to upgrade to newer versions of modules despite addresses of resources in modules changing without manual intervention. If the cost to a consumer to manually move state is greater than the cost of requiring that they upgrade to `v1.1` of Terraform at the earliest, it can be worth it to introduce the `moved` block in a release. +Requiring upgrades is sometimes justified. For example, the [moved](https://opentofu.org/docs/v1.6/language/modules/develop/refactoring/#moved-block-syntax) block enables seamless upgrades even when resource addresses change. If requiring an upgrade costs consumers less effort than the manual intervention it replaces, such as moving state by hand, introducing the feature in a release can be a practical choice.
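+As a sketch of that trade-off: when a release renames a resource inside a module, shipping a `moved` block alongside the change lets OpenTofu migrate state automatically instead of forcing consumers to run manual state operations. The resource names below are hypothetical.

```hcl
# Inside the module: the resource was renamed from "primary" to "this"
# in a new release. The moved block migrates existing state automatically,
# so consumers upgrading the module need no manual `state mv` commands.
moved {
  from = aws_db_instance.primary
  to   = aws_db_instance.this
}
```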