This module creates an AWS DynamoDB table.
module "basic" {
source = "git@github.com:rackspace-infrastructure-automation/aws-terraform-dynamo/?ref=v0.12.2"
environment = "Test"
hash_key = "MyHashKey"
read_capacity_units = 10
name = "myexampletable"
tags = local.tags
write_capacity_units = 5
attributes = [
{
name = "TestHashKey"
type = "S"
},
]
}
When using the index maps for GSIs and LSIs, there is a bug in the Terraform AWS provider that stores non_key_attributes in the state file in an order that may not match what you pass in. If a subsequent plan shows changes for your indexes, look for lines similar to the following in the output:

```text
global_secondary_index.1708383685.non_key_attributes.0: "data2" => ""
global_secondary_index.1708383685.non_key_attributes.1: "data1" => ""
global_secondary_index.1708383685.non_key_attributes.2: "data3" => ""
```

The input data was `["data1", "data2", "data3"]`, and we can see that attributes 0, 1, and 2 do not follow that order. The current workaround is to edit the list in your index map to match the order in the state file. This should result in a clean plan when no other changes are present.
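As a sketch of that workaround, a `global_secondary_index_maps` entry with its non_key_attributes reordered to match the state might look like the following. The index name and the other map keys are illustrative assumptions (they mirror the provider's global_secondary_index arguments); consult the module's examples for the exact keys it expects.

```hcl
global_secondary_index_maps = [
  {
    name            = "example-gsi"  # hypothetical index name
    hash_key        = "TestHashKey"  # must also be declared in attributes
    projection_type = "INCLUDE"
    read_capacity   = 5
    write_capacity  = 5

    # Reordered from ["data1", "data2", "data3"] to match the order
    # recorded in the state file, so subsequent plans come back clean.
    non_key_attributes = ["data2", "data1", "data3"]
  },
]
```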
References:

- global secondary index always recreated #3828
- DynamoDB Non-Key Attributes Ordering #3807
Several changes were required while adding Terraform 0.12 compatibility. The following changes should be made when upgrading from a previous release to version 0.12.0 or higher.

The following module variables were renamed to better meet current Rackspace style guides:

- `table_name` -> `name`
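For example, a module block written against an earlier release would be updated as shown below; the pre-0.12 version tag is hypothetical, and unrelated arguments are omitted.

```hcl
# Before: earlier release of this module
module "basic" {
  source     = "git@github.com:rackspace-infrastructure-automation/aws-terraform-dynamo/?ref=v0.0.6" # hypothetical tag
  table_name = "myexampletable"
  # ...
}

# After: v0.12.0 or higher
module "basic" {
  source = "git@github.com:rackspace-infrastructure-automation/aws-terraform-dynamo/?ref=v0.12.2"
  name   = "myexampletable"
  # ...
}
```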
| Name | Version |
|------|---------|
| aws | >= 2.7.0 |
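For reference, pinning the provider version in a Terraform 0.12 configuration might look like this minimal sketch; the region is an example value and is not part of this module.

```hcl
terraform {
  required_version = ">= 0.12"

  required_providers {
    aws = ">= 2.7.0"
  }
}

provider "aws" {
  region = "us-east-1" # example region
}
```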
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| attributes | List of nested attribute definitions. Only required for hash_key (always) and range_key (if used) attributes. Attributes have name and type. Type must be a scalar type: S, N, or B for (S)tring, (N)umber or (B)inary data, e.g. `[{ name = "<hash_key>", type = "<data_type>" }]`. See the Terraform documentation note about attributes. | `list(map(string))` | n/a | yes |
| enable_pay_per_request | Controls how you are charged for read and write throughput and how you manage capacity. If true, DynamoDB charges you for the data reads and writes your application performs on your tables; you do not need to specify how much read and write throughput you expect, because DynamoDB instantly accommodates your workloads as they ramp up or down (On-Demand pricing). If false, you specify the number of read_capacity_units and write_capacity_units per second that you expect your workload to require (Provisioned pricing). See the example following this table. | `bool` | `false` | no |
| enable_ttl | Enable time to live on records. | `bool` | `false` | no |
| environment | Application environment for which this resource is being created. Preferred values are Development, Integration, PreProduction, Production, QA, Staging, or Test. | `string` | `"Development"` | no |
| global_secondary_index_maps | A list of maps for each global secondary index (GSI). Please see the examples for usage. | `any` | `[]` | no |
| hash_key | Forces new resource! Must contain only alphanumeric characters, dash (-), underscore (_) or dot (.). Needs to be defined by type in attributes. | `string` | n/a | yes |
| local_secondary_index_maps | A list of maps for each local secondary index (LSI). Please see the examples for usage. | `any` | `[]` | no |
| name | The name of the table; this needs to be unique within a region. | `string` | n/a | yes |
| point_in_time_recovery | Enable point in time recovery for the table. | `bool` | `false` | no |
| range_key | Forces new resource! The range (sort) key attribute name. If used, it needs to be defined by type in attributes. | `string` | `""` | no |
| read_capacity_units | Provisioned read throughput. Should be between 5 and 10000. Ignored if enable_pay_per_request is set to true. | `number` | `5` | no |
| stream_enabled | Enable the stream setting on the table. | `bool` | `false` | no |
| stream_view_type | If stream_enabled is set, a valid DynamoDB StreamViewType; must be one of: KEYS_ONLY, NEW_IMAGE, OLD_IMAGE, NEW_AND_OLD_IMAGES. | `string` | `""` | no |
| table_encryption_cmk | You may choose to use an AWS Managed CMK by setting this to true. Otherwise, server side table encryption defaults to an AWS Owned CMK. | `bool` | `false` | no |
| tags | Custom tags to apply to all resources. | `map(string)` | `{}` | no |
| ttl_attribute | The name of the table attribute to store the TTL timestamp in. | `string` | `""` | no |
| write_capacity_units | Provisioned write throughput. Should be between 5 and 10000. Ignored if enable_pay_per_request is set to true. | `number` | `10` | no |
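A sketch of a table using several of the optional inputs above; the values are illustrative only and reuse the module source from the basic example.

```hcl
module "on_demand" {
  source = "git@github.com:rackspace-infrastructure-automation/aws-terraform-dynamo/?ref=v0.12.2"

  environment = "Test"
  hash_key    = "MyHashKey"
  name        = "myondemandtable"
  tags        = local.tags

  # On-Demand pricing: read/write capacity units are ignored.
  enable_pay_per_request = true

  # Expire records based on a timestamp attribute.
  enable_ttl    = true
  ttl_attribute = "expires_at"

  # Publish item-level changes to a DynamoDB stream.
  stream_enabled   = true
  stream_view_type = "NEW_AND_OLD_IMAGES"

  attributes = [
    {
      name = "MyHashKey"
      type = "S"
    },
  ]
}
```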
| Name | Description |
|------|-------------|
| stream_arn | ARN for the stream if stream_enabled was set to true, otherwise returns a string of "null". |
| table_arn | Table ARN |
| table_name | Table Name |
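As a usage note, these outputs can be referenced from the calling configuration, for example by re-exporting them from the root module (the output names below are illustrative).

```hcl
output "dynamo_table_arn" {
  description = "ARN of the DynamoDB table created by the basic example."
  value       = module.basic.table_arn
}

output "dynamo_table_name" {
  description = "Name of the DynamoDB table created by the basic example."
  value       = module.basic.table_name
}
```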