Introduction

Before starting this course, please go through the Terraform Constructs course. Now it is time to learn about Terraform in depth. Here is the list of topics that will be covered:

- Terraform modules
- Inheriting variables between modules
- Terraform workspaces
- Remote backends
- Passing values from modules
- Maintaining multiple environments
- Configuring multiple providers

Introduction to Nested Modules

Now that you are familiar with modules, it is time to learn about nested modules.

Nested Modules: For any module, a root module must exist. The root module is the directory that holds the Terraform configuration files for the desired infrastructure, and it provides the entry point to the nested modules you are going to utilize. For any module, the main.tf, variables.tf, and outputs.tf naming conventions are recommended. If you are using nested modules, create them under a subdirectory named modules. Let's see the file structure you will follow for creating nested modules.

File Structure

In this topic, you will work with nested modules based on the file structure below:

checking_modules
|__modules
   |__main.tf
   |__virtual_network
   |  |__main.tf
   |  |__vars.tf
   |__storage_account
      |__main.tf
      |__vars.tf

You will create three folders named modules, virtual_network, and storage_account.

Plan

As shown in the file structure, you will first create a virtual network and a storage account using Terraform configuration. Make sure you follow the Terraform standards: write the resource configuration in the main file and define the variables alone in the variables file (vars.tf). You can reference variables in the main files using the string interpolation syntax you learned in the Terraform Constructs course. First, let's create the storage account and the virtual network.

Create a Storage Account

Create the main.tf file in the storage_account folder and write the code to create a storage account. You can get the storage account code from here and modify the parameters as required.

main.tf

resource "azurerm_storage_account" "storageacct" {
  name                     = "${var.name}"
  resource_group_name      = "${var.rg}"
  location                 = "${var.loc}"
  account_replication_type = "${var.account_replication}"
  account_tier             = "${var.account_tier}"
}

vars.tf

variable "rg" {
  default = "user-wmle"
}

variable "loc" {
  default = "eastus"
}

variable "name" {
  default = "storagegh"
}

variable "account_replication" {
  default = "LRS"
}

variable "account_tier" {
  default = "Standard"
}

Create the virtual network using Terraform configuration in the same way, as you have already learned in the Terraform Constructs course. The root module's main.tf file, under checking_modules/modules, will look like this:

module "vnet" {
  source = "./virtual_network"
  name   = "mymodvnet"
}

module "stoacct" {
  source = "./storage_account"
  name   = "mymodstorageaccount"
}

With this file in place in the root module, you can pass the parameters from the root module and create the required infrastructure. Not only the parameters mentioned above: you can pass any parameter from the root module to the child modules. Passing parameters from the root module avoids duplication between the modules.
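The virtual_network child module is left as an exercise above; here is a minimal sketch of what its main.tf and vars.tf might contain, mirroring the storage_account module (the resource arguments, variable names, and defaults are assumptions for illustration):

main.tf

# Creates a virtual network from values supplied by the calling module.
resource "azurerm_virtual_network" "vnet" {
  name                = "${var.name}"
  resource_group_name = "${var.rg}"
  location            = "${var.loc}"
  address_space       = ["${var.address_space}"]
}

vars.tf

variable "rg" {
  default = "user-wmle"
}

variable "loc" {
  default = "eastus"
}

variable "name" {
  default = "vnetgh"
}

variable "address_space" {
  default = "10.0.0.0/16"
}

Because the root module sets name = "mymodvnet", that value overrides the default defined here; the other variables fall back to their defaults.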
Terraform Module Registry

You know that HashiCorp Terraform is a tool to build infrastructure using a consistent workflow. The HashiCorp Terraform Module Registry consists of templates that help in setting up and running infrastructure with verified community modules, and it makes working with modules easy. You can access the Terraform Module Registry by clicking here. There are two kinds of modules available in the registry:

- Verified modules
- Community modules

Publishing Your Module

You can also contribute to the community by publishing your own modules after signing in with GitHub.

Creating a vnet and subnet using the Terraform Module Registry

Suppose you have a requirement to create an Azure virtual network and subnet using Terraform. Instead of writing them on your own, you can clone the module from GitHub and use it.

Steps:
- Go to the Terraform Module Registry and click on browse verified.
- Click on azurerm, and then click on network.
- Click on source; you will be redirected to the GitHub page.
- Clone the GitHub repository to your local machine.
- Once that is done, run terraform init and the module is initialized.
- Run terraform plan and terraform apply, passing the necessary inputs to configure the infrastructure.

Within minutes, the virtual network and subnet are up and running. You will learn how to pass inputs to modules in the following cards.

.tf.json

Terraform can read JSON files too. Save the files with the .tf.json extension and run them.
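As a minimal sketch of the JSON syntax (the file name is an assumption, and the variables reuse the storage_account example above), the following vars.tf.json declares the same variables as the HCL version:

{
  "variable": {
    "rg": {
      "default": "user-wmle"
    },
    "loc": {
      "default": "eastus"
    }
  }
}

Terraform loads it together with the .tf files in the same directory; no extra commands are needed.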
Prelude

In this topic, you are going to learn about:

- Remote backends
- Creating a storage account and storage container
- Configuring remote backends
- The sensitive parameter

Why Backend?

In the Terraform Constructs course, you learned that a backend is helpful when working in a team. Storing the state file locally is fine when you are the only person on the project, but when multiple people work on a project it is not a good idea to keep the state file local. It is better to store code in a central repository, and GitHub is the first thought when it comes to central repositories. However, doing so has disadvantages:

- You have to maintain push and pull configurations.
- If someone pushes a wrong configuration and commits it, it becomes a problem.
- The state file contains sensitive information, so state files should not be hand-edited or exposed in a repository.

Then how do you maintain the state file? Terraform's backend feature solves this issue. Read further to learn about backends.

Backend

A backend in Terraform determines how the state file is loaded and how operations are executed. A backend is initialized only by running the terraform init command, so terraform init must be re-run:

- whenever a backend is first configured,
- whenever any change is made to the backend configuration, and
- whenever the backend configuration is removed completely.

Terraform auto-detects when initialization is required and errors out in that situation. Terraform cannot auto-initialize because it may require additional information from the user, for example to perform state migrations.

Creating Backend

In this topic, you will learn how to create a backend in Azure using Terraform. You can get the code snippet to create the backend from here. Below are the steps you will follow:

- Create a storage account.
- Create a storage container.
- Create a backend.
- Get the storage account access key and resource group name, and pass them as parameters to the backend.

Proceed further to learn how to create a backend in Azure using Terraform.

File Structure for Creating Backend

Below is the complete file structure for the backend. You will create backend.tf at the end.

Backend
|____stoacct.tf
|____stocontainer.tf
|____backend.tf
|____vars.tf
|____output.tf

Note: It is not mandatory to create the files and folder with these exact names; you can choose names of your own. However, make sure you add the .tf extension, because Terraform loads only .tf files. Proceed further to create a storage account and storage container in Azure.

Storage Account and Storage Container

Create the stoacct.tf file in the folder and write the code to create a storage account. You can get the storage account code from here and modify the parameters as required.

resource "azurerm_storage_account" "storageaccount" {
  name                     = "storageaccountname"
  resource_group_name      = "${var.resourcegroup}"
  location                 = "${var.location}"
  account_tier             = "${var.accounttier}"
  account_replication_type = "GRS"

  tags {
    environment = "staging"
  }
}

Do the same to create a storage container; you can get the code from here.

resource "azurerm_storage_container" "storagecontainer" {
  name                  = "vhds"
  resource_group_name   = "${var.resourcegroup}"
  storage_account_name  = "${azurerm_storage_account.storageaccount.name}"
  container_access_type = "private"
}

Modify the storage container and storage account parameters as per your requirements.

Creating the vars.tf file

You will pass the variables from the vars.tf file, which will look somewhat like this:

variable "resourcegroup" {
  default = "user-abcd"
}

variable "location" {
  default = "eastus"
}

variable "accounttier" {
  default = "Standard"
}

You can also supply variable values at run time, and you can add a description to each variable documenting why it is used.

The output.tf file

Here are some of the output parameters you can get from the storage account and storage container:

output "storageacctname" {
  value = "${azurerm_storage_account.storageaccount.name}"
}

output "storageacctcontainer" {
  value = "${azurerm_storage_container.storagecontainer.name}"
}

output "access_key" {
  value = "${azurerm_storage_account.storageaccount.primary_access_key}"
}

You can expose other attributes as outputs too.

Sensitive

There may be sensitive values that should not be displayed when terraform apply runs.

output "sensitiveoutput" {
  sensitive = true
  value     = VALUE
}

When you run terraform apply, the output is labeled as sensitive. If an output carries sensitive information, you can protect it with the sensitive parameter.

terraform plan and terraform apply

terraform fmt helps align the configuration in a neat format. Once that is done, run the terraform plan command; it gives you an idea of how the resources are going to be created. If the plan output is what you expected, proceed further and run terraform apply.

You have now successfully created a storage account and storage container, and it is time to create the backend. Before creating the backend, run the command to list the storage account keys (az storage account keys list --account-name storageacctname) and copy one of the keys somewhere; it is needed for configuring the backend.
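The contents of backend.tf are not shown in the course text; a minimal sketch, reusing the storage account and container created above (the state key name is an assumption, and the access key is the one you just copied), might look like this:

terraform {
  backend "azurerm" {
    storage_account_name = "storageaccountname"
    container_name       = "vhds"
    key                  = "terraform.tfstate"   # name of the state blob
    access_key           = "<copied-access-key>"
  }
}

Note that these values must be literals (or passed via terraform init -backend-config), because, as the next card explains, interpolation is not allowed in a backend block.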
Points to Remember

- You cannot use interpolation syntax to configure backends.
- After creating the backend, run terraform init.
- While terraform apply runs, the lease status of the state blob is automatically changed to locked; after the apply completes, it returns to the unlocked state.
- This backend supports consistency checking and state locking using the native capabilities of the Azure storage account.

Terragrunt

Terragrunt is a thin wrapper for Terraform that provides extra tools to keep Terraform configurations DRY, manage remote state, and work with multiple Terraform modules.

Commands

Below are the terragrunt commands:

- terragrunt get
- terragrunt plan
- terragrunt apply
- terragrunt output
- terragrunt destroy

Why Terragrunt?

Terragrunt supports the following use cases:

- Keeping Terraform code DRY.
- Keeping remote state configuration DRY.
- Keeping CLI flags DRY.
- Executing Terraform commands on multiple modules at a time.

In the following cards, you will learn how to utilize remote state using Terragrunt.

Introduction

In the previous topics, you learned how to create a backend in Azure. Now let's learn how to store the state file in AWS. In AWS, the state file is stored in an Amazon S3 bucket. Even though the state file lives in a remote backend, there is still the problem that multiple users may modify it at the same time. Terragrunt is an open-source tool that manages the remote state automatically and provides locking with the help of Amazon DynamoDB.

Creating S3

The code below is used to configure the backend in AWS:

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "my/keybucket"
    region = "us-east-2"
  }
}

In AWS, the above format is used to store the state file in an S3 bucket. You will learn how locking is achieved in the upcoming cards.

.terragrunt

Now install terragrunt, create a file with the name .terragrunt, and fill it with your configuration values:

lock = {
  backend = "dynamodb"
  config {
    state_file_id = "app-name"
  }
}

remote_state = {
  backend = "s3"
  config {
    encrypt = "true"
    bucket  = "mybucket"
    key     = "terraform.tfstate"
    region  = "us-east-2"
  }
}

In the first part, terragrunt is configured to use DynamoDB for locking (lock). In the second part, terragrunt is configured to automatically store the tfstate files in S3 (remote_state). .terragrunt files follow the same syntax as HCL.

Running Files

Once the file is saved, you can run the terragrunt commands such as terragrunt plan and terragrunt apply and check how it works. Terragrunt forwards almost all arguments, commands, and options directly to Terraform, and it ensures that the remote state is configured as per the settings in the .terragrunt file. For the apply and destroy commands, terragrunt acquires a lock using DynamoDB. Proceed further to see how this works in action.

terragrunt apply

> terragrunt apply
[terragrunt] Configuring remote state for the s3 backend
[terragrunt] Running command: terraform remote config
[terragrunt] Attempting to acquire lock in DynamoDB
[terragrunt] Attempting to create lock item table terragrunt_locks
[terragrunt] Lock acquired!
[terragrunt] Running command: terraform apply
aws_instance.example: Creating...
  ami: "" => "ami-0d729a..."
  instance_type: "" => "t2.mi..."
(...)
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
[terragrunt] Attempting to release lock
[terragrunt] Lock released!

From the above output, you can see that terragrunt automatically configures the remote backend as described in the .terragrunt file, uses DynamoDB for locking, and then runs terraform apply. If another person is already holding the lock, terragrunt will wait until the lock is released, which prevents race conditions.

Advantages of Using Terragrunt

Below are the advantages of using terragrunt:

- It provides a locking mechanism.
- It ensures that remote state is always used.

Prelude

In this topic, you are going to learn about some of the special functions and constructs available in Terraform:

- lookup
- local values
- data
- concat
- contains

lookup

lookup performs a dynamic lookup into a map variable. The syntax is lookup(map, key, [default]).

Parameters:
- map: a map variable such as var.name.
- key: the key to look up in the map. If the key doesn't exist in the map and no third argument (default) is supplied, the interpolation fails.

lookup does not work on nested lists or maps; it only works on flat maps.

Local Values

Local values assign names to expressions so that an expression can be used multiple times within a module without rewriting it. Local values are defined in locals blocks. If variables are a function's arguments (inputs) and outputs are its return values, then local values are like a function's local variables. It is recommended to group logically related local values together in a single block, especially if there is a dependency between them.

Defining locals:

locals {
  service_name = "Fresco"
  owner        = "Team"
}

Here is another example:

locals {
  instance_ids = "${concat(aws_instance.blue.*.id, aws_instance.green.*.id)}"
}

You will learn what concat is used for in the following cards.

When to Use Locals?

If a single value or result is used in many places and is likely to change in the future, go with locals. It is recommended not to use too many local values, because they can make the configuration hard to read for future maintainers.

Data Source

A data source allows data to be computed or fetched for use in a Terraform configuration. Data sources let Terraform configurations build on information defined outside Terraform, or by another Terraform configuration. Providers play a major role in implementing and defining data sources. A data source helps in two major ways: it provides a read-only view of pre-existing data, and it can compute new values on the fly.

Configuring Data Source

A data source can receive data from Terraform Enterprise or Consul, or look up a pre-existing Azure resource by filtering on tags and attributes. Every data source in Terraform is mapped to a provider based on longest-prefix matching of its name.

data "azurerm_resource_group" "passed" {
  name = "${var.resource_group_name}"
}

Concat and Contains

concat(list1, list2) combines two or more lists into a single list.
E.g.: concat(aws_instance.db.*.tags.Name, aws_instance.web.*.tags.Name)

contains(list, element) returns true if the element is present in the list, and false otherwise.
E.g.: contains(var.list_of_strings, "an_element")
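Here is a short sketch tying these functions together (the variable names, map contents, and AMI IDs are invented for illustration):

variable "region" {
  default = "us-east-1"
}

variable "ami_map" {
  type = "map"

  default = {
    "us-east-1" = "ami-11111111"
    "us-east-2" = "ami-22222222"
  }
}

locals {
  # lookup returns the third argument when the key is missing from the map.
  ami = "${lookup(var.ami_map, var.region, "ami-11111111")}"

  # concat merges the map's keys with one extra region into a single list.
  known_regions = "${concat(keys(var.ami_map), list("us-west-1"))}"
}

output "is_known_region" {
  # contains reports whether the selected region appears in the merged list.
  value = "${contains(local.known_regions, var.region)}"
}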
Terraform Concepts

You know that Terraform is a tool for creating immutable infrastructure. It allows you to write code in a declarative manner while tracking the state of the infrastructure, and it lets you write reusable code. In this topic, you will learn how to work with workspaces in Terraform.

Workspaces

Every Terraform configuration has an associated backend, which defines how operations are executed and where persistent data, such as the Terraform state, is stored. The persistent data stored in the backend belongs to a workspace. Initially, the backend has only one workspace, called default, and there is only one state file associated with the configuration. Some backends support multiple workspaces, i.e. they allow multiple state files to be stored within a single configuration. The configuration still has only one backend, but multiple distinct instances of it can be deployed without configuring a new backend or changing authentication credentials.

Multiple Workspaces

Below are the backends that currently support multiple workspaces:

- AzureRM (azure) - Stores the state in a blob container on Microsoft Azure Storage.
- S3 - Stores the state in an Amazon S3 bucket.
- Consul - Stores the state in the Consul KV store at a given path.
- Local - Stores the state file in the local file system.
- GCS - Stores the state as an object under a configurable prefix and bucket on GCS (Google Cloud Storage).
- Manta - Stores the state as an artifact in Manta.

Using Workspaces

In the previous cards, you learned that Terraform starts with a single workspace named default. The default workspace is special because it cannot be deleted. If you haven't created a workspace before, you are working in the default workspace. Workspaces in Terraform are managed by the terraform workspace set of commands. You can create a new workspace and switch to it whenever needed. In the following cards, you will learn how to create and work with workspaces.

Terraform Workspace Commands

Below are the commands related to Terraform workspaces:

- terraform workspace new <name> - Creates a workspace with the specified name and switches to it.
- terraform workspace list - Lists the available workspaces.
- terraform workspace select <name> - Switches to the specified workspace.
- terraform workspace delete <name> - Deletes the specified workspace.

Now, let's see how to create and use a workspace in practice.

Creating Workspace

You can go to any Terraform directory you have created before and run the commands. If you run the command below, you will get default as the output. The * indicates the current workspace.

terraform workspace list
* default

Now, create a workspace named myworkspace by executing the command below. Listing the workspaces again shows the new workspace selected:

terraform workspace new myworkspace
terraform workspace list
  default
* myworkspace

You can switch back to the default workspace with terraform workspace select default.

Points to Remember

- You cannot delete the default workspace; attempting to do so throws an error.
- You cannot delete your active workspace (the workspace in which you are currently working).

How Are Workspaces Useful?

Suppose you have multiple environments - development, staging, and production - and you want to create the infrastructure for all of them using Terraform workspaces. Let's learn how.

terraform workspace new dev

After executing this command, a folder named terraform.tfstate.d is created, with a dev subdirectory inside it.

terraform workspace new prod

After executing this command, a prod subdirectory is created in terraform.tfstate.d. You can configure the development environment in dev and the production environment in prod; a separate state file is created for each environment.
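Because every workspace shares the same configuration, the terraform.workspace interpolation variable is handy for telling environments apart. A minimal sketch (the resource and the naming scheme are assumptions for illustration):

resource "azurerm_resource_group" "env" {
  # Evaluates to "rg-dev" in the dev workspace and "rg-prod" in prod,
  # so each environment gets its own resource group.
  name     = "rg-${terraform.workspace}"
  location = "eastus"
}

Run terraform workspace select dev before terraform apply to deploy into the dev state file, and select prod for production.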
Hands-on Scenario

Login to the Azure CLI with the provided credentials.

Creating a virtual network

Create files and folders according to the structure below:

modules
|__mod
|  |__main.tf
|  |__providers.tf
|__vnet
   |__vnet.tf
   |__vars.tf
   |__providers.tf

- Create a directory named modules, and then create a sub-directory named vnet.
- Create files named vnet.tf and vars.tf, and write the Terraform configuration in them to create a virtual network.
- Create a file named providers.tf and paste the following code to avoid version conflicts:

provider "azurerm" {
  version = ">= 1.25, < 1.26"
}

- Run the commands terraform init, terraform plan, and terraform apply to configure the infrastructure.

Creating a Module and Passing Variables

- Change to the sub-directory mod in the modules directory, and create a file named main.tf.
- Enter the source path of the previously configured virtual network.
- Create and pass the variable information (name, resourcegroup, location, and address) from the file main.tf.
- Create the virtual network named "mymodvnet".
- Change the main.tf code to string interpolation style, for example: resource_group_name = "${var.resourcegroup}"
- Create a file named providers.tf and copy the following code:

provider "azurerm" {
  version = ">= 1.25, < 1.26"
}

Note: Use the variable names as given above. Two virtual networks should be up and running.

Launch the Azure CLI and install Terraform using the commands below:

sudo wget https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip
sudo unzip terraform_0.11.10_linux_amd64.zip
sudo mv terraform /usr/local/bin/
terraform -version

Multiple Providers

Terraform is used to create and manage infrastructure resources like virtual networks, subnets, and load balancers. Any infrastructure in Terraform is represented as a resource. A provider understands the API interactions and exposes the right resources to Terraform. You can get the list of providers from here, and you can find the major cloud providers from here. They offer a large number of services, including IaaS, PaaS, and SaaS. Terraform manages multiple providers by making the appropriate API calls to the respective cloud provider; this is how one configuration can manage multiple clouds. You can get the official repositories of the cloud providers from here.

Configuring Multiple Providers

In this topic, you will learn how to configure resources from multiple providers (AWS and AzureRM). Having an AWS account will be helpful; if you don't have one, you can create a free account to try working with multiple providers. Below is the file structure you will maintain for working with multiple providers:

multiple_providers
|___awsmain.tf
|___azuremain.tf
|___vars.tf
|___out.tf

Let's proceed and configure multiple providers using Terraform.

Creating the awsmain.tf file

To continue with AWS, you need an access key and a secret key, which are used to configure the AWS environment. Create an S3 bucket in AWS, which helps to store and retrieve data. Here is the configuration syntax for creating an S3 bucket:

resource "aws_s3_bucket" "b" {
  bucket = "my-tf-test-bucket-21"
  acl    = "private"

  tags = {
    Name        = "My bucket test"
    Environment = "Dev"
  }
}
Explaining the main.tf File

A resource block has two labels: the type (aws_s3_bucket) and the name ("b"). Resource types vary from provider to provider, while the name is chosen by the user.

- bucket - the name of the bucket. If it is not assigned, Terraform assigns a random unique name.
- acl (access control list) - helps to manage access to the bucket and its objects.
- tags - labels assigned to the AWS resource by you or by AWS.
- Environment - the environment in which the S3 bucket is deployed (Dev, Prod, or Stage).

Creating the azuremain.tf file

Here you will create a storage account in Azure using Terraform.

resource "azurerm_storage_account" "storageacct" {
  name                     = "storageacct21"
  resource_group_name      = "${var.rg}"
  location                 = "${var.loc}"
  account_replication_type = "LRS"
  account_tier             = "Standard"
}

You can get the format for creating a storage account from here.

Parameters:
- name - the name of the storage account.
- resource_group_name and location - the name of the resource group and its location.
- account_replication_type - the type of account replication.
- account_tier - the account tier type.

outputs.tf

You can capture outputs by defining them in the output.tf file. For example, if you are creating a storage account and need the location, name, or other details of the configuration, you can use an output. Suppose you would like to know the account_tier of your storage account; you can follow the syntax below:

output "account_tier" {
  value = "${azurerm_storage_account.storageacct.account_tier}"
}

You can check the output by using the terraform output command.

terraform plan and terraform apply

When you run terraform plan, provide the region in which to create the AWS resources. If there are no errors, proceed further and run terraform apply to apply the changes. Log in to the Azure portal and navigate to the storage account; your storage account will be up and running. Log in to the AWS console and navigate to the S3 bucket; your S3 bucket will be up and running. In the same way, you can configure multiple service providers and run them.
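The provider blocks themselves are not listed in the file structure above. As a minimal sketch (the region value and the file placement are assumptions), declaring both providers, for example in vars.tf or a separate providers.tf, tells Terraform which plugins to initialize:

# AWS provider: credentials are typically supplied via the
# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables
# rather than being hard-coded in the configuration.
provider "aws" {
  region = "us-east-1"
}

# Azure provider, pinned as in the hands-on scenario earlier.
provider "azurerm" {
  version = ">= 1.25, < 1.26"
}

After terraform init downloads both plugins, a single terraform apply can create the S3 bucket and the storage account in one run.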