provider/aws: export elasticache nodes #1965

Merged

catsby merged 7 commits into master from f-export-cache-cluster-endpoints on May 14, 2015
Conversation

@phinze (Contributor) commented May 14, 2015

Here's a start to a fix for #1918

Needs to wait for len(cluster.CacheNodes) == cluster.NumCacheNodes, since
apparently that takes a bit of time and the initial response always has
an empty collection of nodes.

I think that doing a complex Computed attribute is okay, and you can reference it like aws_elasticache_cluster.foo.cache_nodes.0.address.

pushing up the WIP in case @catsby has bandwidth to finish this first
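For reference, here is a minimal sketch (not this PR's code) of how the node list might be flattened into that computed cache_nodes attribute so aws_elasticache_cluster.foo.cache_nodes.0.address resolves, including the "wait until all nodes are listed" guard described above. The function name setCacheNodes and the plugin-SDK import path are illustrative assumptions:

```go
package sketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elasticache"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// setCacheNodes flattens cluster.CacheNodes into a computed list-of-maps
// attribute so configs can interpolate cache_nodes.0.address, .port, etc.
// Illustrative sketch only; not the provider's actual code.
func setCacheNodes(d *schema.ResourceData, cluster *elasticache.CacheCluster) error {
	// The node collection trails cluster creation, so skip flattening until
	// the API reports as many nodes as were requested.
	if int64(len(cluster.CacheNodes)) != aws.Int64Value(cluster.NumCacheNodes) {
		return nil
	}

	nodes := make([]map[string]interface{}, 0, len(cluster.CacheNodes))
	for _, node := range cluster.CacheNodes {
		if node.Endpoint == nil {
			continue // endpoint not published yet
		}
		nodes = append(nodes, map[string]interface{}{
			"id":      aws.StringValue(node.CacheNodeId),
			"address": aws.StringValue(node.Endpoint.Address),
			"port":    int(aws.Int64Value(node.Endpoint.Port)),
		})
	}
	return d.Set("cache_nodes", nodes)
}
```

On the schema side this assumes cache_nodes is declared as a Computed list with a map-like element type; the flattened entries are then addressable by index from other resources, as phinze notes above.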


Inline review comment on this hunk:

        Tags: tags,
    }

    // parameter groups are optional and can be defaulted by AWS

@phinze (Contributor, Author):

Ooo nice catch.

- rename test to have _basic suffix, so we can run it individually
- use us-east-1 for basic test, since that's probably the only region that has Classic
- update the indexing of nodes; cache node IDs are 4 digits (see the sketch below)
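The 4-digit note refers to ElastiCache's zero-padded cache node IDs ("0001", "0002", and so on). A tiny illustrative sketch of that indexing, with cacheNodeID as a made-up helper name:

```go
package sketch

import "fmt"

// cacheNodeID formats a 1-based node index the way ElastiCache names its
// cache nodes: zero-padded to four digits, e.g. 1 -> "0001", 12 -> "0012".
func cacheNodeID(i int) string {
	return fmt.Sprintf("%04d", i)
}
```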
@catsby (Contributor) commented May 14, 2015

Needs to wait for len(cluster.CacheNodes) == cluster.NumCacheNodes, since
apparently that takes a bit of time and the initial response always has
an empty collection of nodes.

You need to explicitly tell the API that you want node info with ShowCacheNodeInfo: true
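A minimal sketch of that describe call with the flag set, assuming aws-sdk-go's elasticache client; describeClusterWithNodes is an illustrative name, not code from this PR:

```go
package sketch

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elasticache"
)

// describeClusterWithNodes looks up one cluster and asks the API to include
// per-node detail, which DescribeCacheClusters omits unless requested.
func describeClusterWithNodes(conn *elasticache.ElastiCache, clusterID string) (*elasticache.CacheCluster, error) {
	resp, err := conn.DescribeCacheClusters(&elasticache.DescribeCacheClustersInput{
		CacheClusterId:    aws.String(clusterID),
		ShowCacheNodeInfo: aws.Bool(true), // without this, CacheNodes comes back empty
	})
	if err != nil {
		return nil, err
	}
	if len(resp.CacheClusters) != 1 {
		return nil, fmt.Errorf("expected 1 ElastiCache cluster named %q, got %d", clusterID, len(resp.CacheClusters))
	}
	return resp.CacheClusters[0], nil
}
```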

@phinze (Contributor, Author) commented May 14, 2015

Cool cool - LGTM!

@phinze changed the title from "[WIP] export cache nodes" to "provider/aws: export elasticache nodes" May 14, 2015
catsby added a commit that referenced this pull request May 14, 2015
@catsby merged commit 0b548a4 into master May 14, 2015
@catsby deleted the f-export-cache-cluster-endpoints branch May 14, 2015 19:05
@saulshanabrook commented:

@catsby Thanks so much!

I'm currently having a problem where Terraform sometimes doesn't wait for the cluster to come up completely before continuing, so its nodes don't exist yet.

The state looks like this:

        {
            "path": [
                "root",
                "redis-reports"
            ],
            "outputs": {
                "port": "0"
            },
            "resources": {
                "aws_elasticache_cluster.default": {
                    "type": "aws_elasticache_cluster",
                    "primary": {
                        "id": "redis-reports",
                        "attributes": {
                            "cache_nodes.#": "0",
                            "cluster_id": "redis-reports",
                            "engine": "redis",
                            "engine_version": "2.8.19",
                            "id": "redis-reports",
                            "node_type": "cache.t2.micro",
                            "num_cache_nodes": "1",
                            "port": "11211",
                            "security_group_ids.#": "0",
                            "security_group_names.#": "0",
                            "subnet_group_name": "default-new",
                            "tags.#": "0"
                        }
                    }
                }
            }
        }

So terraform apply will sometimes fail, because it then tries to read the first node and it isn't there. If I wait until the node comes up on Amazon and then run terraform refresh, the state is populated fine.

@catsby (Contributor) commented May 15, 2015

So terraform apply will fail sometimes because it tries to then get the first node and fails.

I'm not sure I follow. Do you have any error messages you can share?

@saulshanabrook commented:

@catsby

Error applying plan:

1 error(s) occurred:

* 1 error(s) occurred:

* Resource 'aws_elasticache_cluster.default' does not have attribute 'cache_nodes.0.address' for variable 'aws_elasticache_cluster.default.cache_nodes.0.address'

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

That's because I reference aws_elasticache_cluster.default.cache_nodes.0.address afterward, and the variable doesn't exist because ElastiCache isn't really done provisioning the cluster.

@saulshanabrook commented:

Then I wait for the cluster to finish provisioning, run terraform refresh and then terraform apply, and it will run without failing.
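One way a provider could close this race is to block after creation until the cluster reports "available" and every requested node is listed, before reading addresses into state. A sketch of that idea using the plugin SDK's resource.StateChangeConf; this is an illustration under those assumptions, not necessarily the fix that eventually shipped:

```go
package sketch

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elasticache"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

// waitForClusterAvailable blocks until the cluster is "available" and its
// node list is fully populated, so cache_nodes.0.address exists before any
// dependent resource tries to interpolate it.
func waitForClusterAvailable(conn *elasticache.ElastiCache, clusterID string, timeout time.Duration) (*elasticache.CacheCluster, error) {
	stateConf := &resource.StateChangeConf{
		Pending:    []string{"creating", "modifying", "snapshotting"},
		Target:     []string{"available"},
		Timeout:    timeout,
		Delay:      30 * time.Second,
		MinTimeout: 15 * time.Second,
		Refresh: func() (interface{}, string, error) {
			resp, err := conn.DescribeCacheClusters(&elasticache.DescribeCacheClustersInput{
				CacheClusterId:    aws.String(clusterID),
				ShowCacheNodeInfo: aws.Bool(true),
			})
			if err != nil {
				return nil, "", err
			}
			if len(resp.CacheClusters) != 1 {
				return nil, "", fmt.Errorf("expected 1 cluster named %q, got %d", clusterID, len(resp.CacheClusters))
			}
			cluster := resp.CacheClusters[0]
			// Report the cluster as still creating until every requested node is listed.
			if int64(len(cluster.CacheNodes)) != aws.Int64Value(cluster.NumCacheNodes) {
				return cluster, "creating", nil
			}
			return cluster, aws.StringValue(cluster.CacheClusterStatus), nil
		},
	}
	raw, err := stateConf.WaitForState()
	if err != nil {
		return nil, err
	}
	return raw.(*elasticache.CacheCluster), nil
}
```

Calling something like this at the end of Create, before flattening nodes into state, would make the first apply block until the addresses actually exist instead of requiring a manual refresh.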

@catsby (Contributor) commented May 15, 2015

Understood, thanks! @phinze suspected this above; I'll check it out in the morning.


@juniorplenty commented:

@catsby I'm seeing the same race condition and error in 0.5.2. Any update on this?

@ghost commented May 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators May 2, 2020