
Remove output splat syntax on nested map attributes #4

Merged

joshmyers merged 2 commits into master from issue_3_no_attribute_for_resource on Jan 22, 2019

Conversation

joshmyers
Contributor


When initially building this module, it was done piecemeal, adding the
outputs once the initial apply had been done. apply/plan etc. all looked
good after the initial apply. However, there is a bug whereby the splat
output fails when dealing with nested map attributes[1]. This issue was
raised by @igor when an initial plan/apply fails, as in the below example:

```
module.amq.output.primary_console_url: Resource 'aws_mq_broker.default' does not have attribute 'instances.0.console_url' for variable 'aws_mq_broker.default.*.instances.0.console_url'
```

Removing the splat-style syntax resolved this issue, but the splat was
necessary due to the `count` (read: enabled flag) on the `aws_mq_broker`
resource. This "fix" works when `enabled = "true"` but fails when
`enabled = "false"`, as this ends up with a 0 count. It seems that trying
to use the splat syntax and asking for nested attributes (conditionally)
is not possible in current Terraform. I think we likely have this issue
in other Terraform modules but don't hit it, as we do not often set
`enabled = "false"`.

[1] hashicorp/terraform#17048
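
For illustration, a minimal sketch of the two alternative output forms (the resource and attribute path come from the error above; the output name is illustrative, and the two outputs are alternatives rather than meant to coexist):

```hcl
# Splat form: tolerates count = 0 on the resource, but Terraform 0.11
# cannot resolve a nested map attribute through a splat (see [1]):
output "primary_console_url" {
  value = "${aws_mq_broker.default.*.instances.0.console_url}"
}

# Non-splat form: works once the resource exists, but errors when
# enabled = "false" leaves the resource with a count of 0:
output "primary_console_url" {
  value = "${aws_mq_broker.default.instances.0.console_url}"
}
```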
aknysh previously approved these changes Jan 21, 2019

@osterman (Member) left a comment

If we do this, what happens when `enabled=false`? I guess my real question is, should we be getting rid of the enabled flag in this module?

@joshmyers
Contributor Author

@osterman Agree re enabled flag. Have removed for now.

I tried the trick @aknysh used in https://github.com/cloudposse/terraform-aws-eks-cluster/blob/master/main.tf#L112-L117, which looked good on a plan but failed on apply:

```
local.broker_1_console_url: local.broker_1_console_url: At column 33, line 1: map "local.broker_1_console_url_map" does not have homogenous types. found TypeString and then TypeList in:
```

As we do not have a safe way to disable the module, the enabled flag has been removed for now. See 3b36149
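
For context, a hypothetical reconstruction of the attempted pattern, mirroring the linked eks-cluster trick, with local names inferred from the error message above (the actual attempt may have differed). The problem is that each entry in `instances` mixes string attributes (`console_url`) with list attributes (`endpoints`), and Terraform 0.11 requires all values in a map to share one type:

```hcl
# Hypothetical reconstruction; not the code as committed.
locals {
  # coalescelist() supplies a fallback element so this survives count = 0 ...
  broker_instances_list     = "${coalescelist(aws_mq_broker.default.*.instances, list(list(map("console_url", ""))))}"
  broker_instances_internal = "${local.broker_instances_list[0]}"

  # ... but apply fails here: the real instances map holds both TypeString
  # (console_url) and TypeList (endpoints), which a 0.11 map cannot mix.
  broker_1_console_url_map = "${local.broker_instances_internal[0]}"
  broker_1_console_url     = "${local.broker_1_console_url_map["console_url"]}"
}
```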
@joshmyers force-pushed the issue_3_no_attribute_for_resource branch from 4bdc2e2 to ba72e38 on January 22, 2019 10:52
@aknysh requested a review from osterman on January 22, 2019 14:57
@joshmyers merged commit 3b1912e into master on Jan 22, 2019
@joshmyers deleted the issue_3_no_attribute_for_resource branch on January 22, 2019 18:32
joshmyers added a commit to cloudposse-archives/terraform-aws-codefresh-backing-services that referenced this pull request Jan 23, 2019
Bump aws-mq-broker to 0.3.0, which does not feature enabled flags, so 
remove them.

In its current incarnation, the terraform-aws-mq-broker module does not
support the enabled variable allowing for boolean creation of resources
in the module; see [1] for more context.

[1] cloudposse/terraform-aws-mq-broker#4
joshmyers added a commit to cloudposse-archives/terraform-aws-codefresh-backing-services that referenced this pull request Jan 25, 2019
* Add basic main.tf

This will hold our global config to be used in other resources

* Add default global variables

* Add Postgres Aurora

This commit has largely been lifted from our backing services module for
Postgres Aurora. It creates a Postgres Aurora cluster and writes the
outputs of that to SSM. If no master username / password is given, these
are generated as random strings and saved to SSM, using SecureString
where necessary. If no postgres_name is given, it defaults to using the
terraform-null-label outputted id.
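
A minimal sketch of the generate-and-store pattern described here, assuming Terraform 0.11 (the variable name and SSM path are hypothetical, not necessarily the module's own):

```hcl
# Illustrative only: generate a password when none is supplied and store
# it in SSM as a SecureString. var.postgres_admin_password is hypothetical.
resource "random_string" "master_password" {
  count   = "${length(var.postgres_admin_password) == 0 ? 1 : 0}"
  length  = 16
  special = false
}

locals {
  master_password = "${length(var.postgres_admin_password) > 0 ? var.postgres_admin_password : join("", random_string.master_password.*.result)}"
}

resource "aws_ssm_parameter" "master_password" {
  name  = "/backing-services/postgres/master_password"
  type  = "SecureString"
  value = "${local.master_password}"
}
```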

* Add VPC backing-services

This is part of terraform-root-modules backing services, but I think we
will likely want to create the VPC outside of this module and pass it
in. I've left it in for now but we can take it out later.

* Add Postgres read replica

* Add Elasticache Redis backing service

* Add Kops metadata module

This is used to pull information from our kops cluster by doing a load of
data lookups based on some tags. We can then pass things like Kops node
security groups around to be used by other modules.

* Add AmazonMQ backing service for CodeFresh

This commit adds an ActiveMQ broker and config for CodeFresh Enterprise. An
admin user is created and credentials stored encrypted in SSM. A DNS
hostname is created for the ActiveMQ endpoints.

Note that unlike the other backing modules in here, AmazonMQ resources
are not currently their own module. There are only a handful of
resources for this AmazonMQ stuff but we can pull it out into a module
if we so wish.

The default ActiveMQ config [1] has been added. This is optional in the
`aws_mq_broker` resource, but because Terraform does not support
conditional blocks beyond a basic count, it is a pain to add this
conditionally. The schema can be found at [2]; a sketch of how the
config attaches to the broker follows the links below.

[1] http://svn.apache.org/repos/asf/activemq/trunk/assembly/src/release/conf/activemq.xml
[2] https://s3-us-west-2.amazonaws.com/amazon-mq-docs/XML/amazon-mq-active-mq-5.15.0.xsd
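
As referenced above, a hedged sketch of how the configuration attaches to the broker (names, versions, and the XML path are placeholders, not the module's actual settings):

```hcl
# Illustrative sketch: the configuration is its own resource, referenced
# from the broker by id and revision. The nested configuration {} block
# cannot itself be toggled with a count, hence the pain described above.
resource "aws_mq_configuration" "default" {
  name           = "activemq-default"
  engine_type    = "ActiveMQ"
  engine_version = "5.15.0"
  data           = "${file("${path.module}/activemq.xml")}"
}

resource "aws_mq_broker" "default" {
  broker_name        = "codefresh"
  engine_type        = "ActiveMQ"
  engine_version     = "5.15.0"
  host_instance_type = "mq.t2.micro"
  security_groups    = ["${var.security_groups}"]

  user {
    username = "admin"
    password = "${local.admin_password}"
  }

  configuration {
    id       = "${aws_mq_configuration.default.id}"
    revision = "${aws_mq_configuration.default.latest_revision}"
  }
}
```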

* Add docs

* Remove RDS Aurora Postgres replica

* Remove VPC and subnet modules

We will be injecting these resources into an existing VPC

* Remove need for kops metadata

* Move AmazonMQ into own module

* Add EFS

* Drop Redis Elasticache version to 3.2.6

3.2.10 doesn’t support encryption, see below:

```
Error creating Elasticache Replication Group: InvalidParameterCombination: Encryption features are not supported for engine version 3.2.10. Please use engine version 3.2.6
```

* Move aws_mq_broker users into module

These should have been in the module that calls this module; however,
there is a Terraform bug [1] which means passing the list of user maps
fails when trying to use a local value.

From the calling module, the below does work:

```
  users = [{
    "username"       = "admin"
    "password"       = "defaultpassword"
    "groups"         = ["admin"]
    "console_access" = true
  }]
```

however, using locals as we want to, it does not:

```
  users = [{
    "username"       = “${local.admin_user}”
    "password"       = “${local.admin_password}”
    "groups"         = ["admin"]
    "console_access" = true
  }]
```

[1] hashicorp/terraform#12263

* Update docs

* Remove deprecated mq_broker_name variable

* Pin aws-mq-broker module to 0.1.0 release

* Add global enabled variable for whole module

We can toggle individual components of the backing services by enabling
a specific service, or using the global flag to enable/disable all.
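
A minimal sketch of how the two flags might combine, assuming Terraform 0.11-style string booleans (variable names are illustrative):

```hcl
# Illustrative: a service is created only when both the global flag and
# its own flag are "true"; the result is passed to the submodule's
# `enabled` input (or used directly in a resource count).
locals {
  redis_enabled = "${var.enabled == "true" && var.redis_enabled == "true" ? "true" : "false"}"
}
```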

* Add s3 bucket to CodeFresh backing services.

* Rename node_security_groups to security_groups

These may be the security groups of your kops nodes, or they may not, so
be more generic.

* Add usage to README

* Pass only 1 or 2 subnets to mq.tf

Running ActiveMQ in ACTIVE_STANDBY_MULTI_AZ mode requires that you
supply only 2 subnets. Any more and the resource will complain.
Similarly, you must pass a single subnet if running in SINGLE_INSTANCE
mode, as sketched below.
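
A hedged sketch of the idea, using the Terraform 0.11 join/split workaround for conditionals that cannot return lists (variable names are illustrative):

```hcl
# Illustrative: take exactly 2 subnets for ACTIVE_STANDBY_MULTI_AZ and 1
# for SINGLE_INSTANCE. The join/split dance works around 0.11
# conditionals not supporting list values.
locals {
  mq_subnet_ids = ["${split(",", var.deployment_mode == "ACTIVE_STANDBY_MULTI_AZ" ? join(",", slice(var.subnet_ids, 0, 2)) : element(var.subnet_ids, 0))}"]
}
```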

* Actually use postgres_db_name if we pass it in

* Add full example

* Remove postgres_name variable

This should just be var.name as the postgres name is computed in the
RDS cluster module using labels.

* Pin mq broker module to latest 0.2.0 release

* Remove redis_name as this is calculated in module

* Update Redis variable descriptions

* Overwrite SSM parameter is expected as a boolean

* Bump AmazonMQ default instance type

mq.t2.micro is likely not what you want in production.

* Remove null-label since not being used anymore

It is in use in all the submodules we call, and we delegate down to them
for naming.

* Bump aws-efs module

To ensure we can use enabled flags on it

* Bump aws-s3-bucket to 0.1.0

Since we have the initial PR merged and have cut a release

* Remove aws-mq-broker enabled flags

Bump aws-mq-broker to 0.3.0, which does not feature enabled flags, so 
remove them.

In its current incarnation, the terraform-aws-mq-broker module does not
support the enabled variable allowing for boolean creation of resources
in the module; see [1] for more context.

[1] cloudposse/terraform-aws-mq-broker#4

* Add optional EFS VPC and subnet_id variables

This commit adds variables to enable overriding which VPC and subnets
EFS runs in. If you don't provide them, it defaults back to the
`var.vpc_id` and `var.subnet_ids` values.
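
A minimal sketch of that fallback, assuming Terraform 0.11 (the EFS-specific variable names are hypothetical, matching the description):

```hcl
# Illustrative: fall back to the module-wide VPC/subnets when no
# EFS-specific override is given. The join/split trick handles the
# list-valued conditional.
variable "efs_vpc_id" {
  default = ""
}

variable "efs_subnet_ids" {
  type    = "list"
  default = []
}

locals {
  efs_vpc_id     = "${length(var.efs_vpc_id) > 0 ? var.efs_vpc_id : var.vpc_id}"
  efs_subnet_ids = ["${split(",", length(var.efs_subnet_ids) > 0 ? join(",", var.efs_subnet_ids) : join(",", var.subnet_ids))}"]
}
```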

During testing of this module, we found something interesting. We were
deploying EFS to the backing services VPC, which is a different VPC to
the k8s cluster. Our pods were unable to resolve the DNS endpoint of the
EFS cluster, despite there being VPC peering between the two, with DNS
lookups between them enabled. AWS documentation [1] states that in this
scenario, you have a few options:

1) Use the EFS IP address directly (no thanks)
2) Create a DNS entry in Route53 CNAME’ing to the EFS DNS endpoint in 
   a private hosted zone

The underlying EFS module does already create a Route53 DNS entry
CNAME'd to the EFS DNS endpoint, but it isn't in a private zone. The pod
could not resolve the Route53 DNS. Deploying EFS into the _same_ VPC as
the k8s nodes worked a treat, and finally the pods were able to resolve
and mount EFS volumes.


[1] https://docs.aws.amazon.com/efs/latest/ug/manage-fs-access-vpc-peering.html

* Fix typos

* Fix typos

* Remove EFS + AmazonMQ from CodeFresh services

CodeFresh Enterprise is not compatible with AmazonMQ on the protocol
level and EFS will be moved into a general Kops backing service.

* Remove Terraform glue variables

These should be in the caller module, not this generic module.

* Update docs and pin example modules

* Update docs to remove TODO and add note on enabled