provider/aws: elasticache cluster exports nodes before they exist #2051

Originally reported by @saulshanabrook here: #1965

Verified it still exists in `0.5.2`: `cache_nodes.0` doesn't exist because elasticache isn't really done provisioning the cluster, and terraform doesn't wait for it to do so.

Comments
Here are the test configs; note the route53 record creation at the bottom that attempts to use a nonexistent cluster node index:
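(The original gist embed didn't survive; what follows is a minimal illustrative sketch of the failing pattern, not the reporter's actual configs — the resource names, zone ID, and cluster settings are all hypothetical.)

```hcl
# Hypothetical reproduction of the failing pattern, not the original gist.
resource "aws_elasticache_cluster" "cache" {
  cluster_id           = "example-cache"
  engine               = "memcached"
  node_type            = "cache.m1.small"
  num_cache_nodes      = 1
  parameter_group_name = "default.memcached1.4"
  port                 = 11211
}

# This record interpolates cache_nodes.0 as soon as the cluster resource
# returns; if ElastiCache hasn't finished provisioning, that index doesn't
# exist yet and the apply fails.
resource "aws_route53_record" "cache" {
  zone_id = "Z0000000EXAMPLE"
  name    = "cache.example.com"
  type    = "CNAME"
  ttl     = "300"
  records = ["${aws_elasticache_cluster.cache.cache_nodes.0.address}"]
}
```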
👍 Blocked on this as well.
Hello friends – I've tried this on version `0.5.2` and current master, and I can't reproduce it. If you're still hitting this with master, please post a fresh example. Otherwise, I'm going to close this issue tomorrow. Thank you for the report; I'm hoping I can hear back from someone with a new example, or a confirmed "closed" on this. Thanks!
I did open #2128 as an extra safety net of sorts; take a look!
It doesn't happen every time – I don't know what to say, I just hit this again today. #2128 probably handles it, but it's hard to reproduce and test unless there's some way to mock the AWS calls during testing.
@juniorplenty I've worked with the AWS API enough to both be unable to reproduce this and totally believe you that it happens 😄 I asked for other examples just in case there was some variable or otherwise external thing we weren't noticing that was the true cause, but it's probably just some AWS API weirdness that may have passed and won't resurface until it's least convenient. I imagine we'll merge #2128 and just roll with it.
I just merged #2128 to help here; please check out master. Thanks for reporting!
@catsby – a couple of updates:
By the way, here's the full error I'm getting:
I've also noticed the same issue (continuing despite the "creating" state) with RDS resources, just FYI.
@juniorplenty are you still using the same configuration? Can you gist some output that shows this, using debug logging?
@catsby Here's log output from a failed run using the plan generated by the exact configs above (with AWS keys added, of course): https://gist.github.com/juniorplenty/b99a85bca4ecf1362a8d
@catsby wouldn't just waiting for the cluster to reach the "available" state fix this?
(Also - should this issue really be "closed"? The above logs verify that it's still broken in 0.5.3...) |
+1 |
FWIW, I'm still seeing the same issue as well. |
+1 |
@catsby can we at least get this issue reopened? I posted the logs you asked for; they show it still happening in 0.5.3.
Reopening and taking a look! |
Sorry for the delay @juniorplenty – I'm taking another look here |
TL;DR: I can't reproduce this on master. @juniorplenty are you able to build from source and try master? Does anyone else who's reported this issue have a minimal config that reproduces it? Thanks!
@juniorplenty can you check out #2842?