provider/aws: ElastiCache (Redis) snapshot window and retention limits #3707
Conversation
…or Redis ElastiCache clusters
@@ -344,6 +372,15 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{
		requestUpdate = true
	}

	if d.HasChange("snapshot_window") {
		req.EngineVersion = aws.String(d.Get("snapshot_window").(string))
Probably not EngineVersion 😄
D'OH!
…'t update the copy/pasted params
@catsby ok, sorry about that. These silly copy/paste errors have been fixed now.
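The copy/paste fix presumably points the snapshot_window change at its own request field rather than EngineVersion. A minimal sketch of that correction, assuming the aws-sdk-go ModifyCacheClusterInput exposes a SnapshotWindow field:

	if d.HasChange("snapshot_window") {
		// Assign the new window to SnapshotWindow, not EngineVersion.
		req.SnapshotWindow = aws.String(d.Get("snapshot_window").(string))
	}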
Thanks @stack72 – some weird things here... I thought you'd need to specify … I did try to update the snapshot_window and got:

* aws_elasticache_cluster.test: [WARN] Error updating ElastiCache cluster (test), error: InvalidParameterCombination: Instance type does not support snapshotting. Snapshot window parameter should not be specified.
	status code: 400, request id: 7e2dcf24-81ab-11e5-8466-199fd0904dd9

Not sure what that's about; the debug output of the update didn't mention instance type. Plan:

~ aws_elasticache_cluster.test
    snapshot_window: "Wed:09:00-Wed:09:30" => "05:00-09:00"

Log:

Do you get this behavior too?
@catsby I added an acceptance test to show these in motion and to make sure that they are set as expected, and both worked. I was able to control my instance based on this as well. I think you are specifying a Memcached cluster? If so, it only works for Redis clusters.
Nah, I'm using Redis:

provider "aws" {
region = "us-west-2"
}
resource "aws_vpc" "default" {
cidr_block = "10.250.0.0/16"
tags {
Name = "ec-sub-test"
}
}
resource "aws_subnet" "private" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "10.250.3.0/24"
}
resource "aws_subnet" "private2" {
vpc_id = "${aws_vpc.default.id}"
cidr_block = "10.250.2.0/24"
}
resource "aws_elasticache_subnet_group" "test" {
name = "test"
description = "Test Redis"
subnet_ids = [
"${aws_subnet.private.*.id}",
"${aws_subnet.private2.id}",
]
}
resource "aws_elasticache_cluster" "test" {
cluster_id = "test"
subnet_group_name = "${aws_elasticache_subnet_group.test.name}"
num_cache_nodes = 1
node_type = "cache.t2.micro"
engine = "redis"
port = 6379
snapshot_retention_limit = 1
snapshot_window = "05:00-09:00"
}

The code seems to work, but adding debug statements in the …
I'll have another look at the debugging of this in the morning and print some info. Thanks for checking this.
…work. Also added some debugging to show that the API returns the Elasticache retention period info
@catsby ok, I just added an extra test to make sure that updating works as expected. Also added some debug logic to the file to make sure that it writes INFO messages to the tf.log when running tests. I definitely get results on my side, and I am using this logic in my code from this morning as a check.
@@ -149,6 +204,76 @@ resource "aws_elasticache_cluster" "bar" {
    port = 11211
    parameter_group_name = "default.memcached1.4"
    security_group_names = ["${aws_elasticache_security_group.bar.name}"]
    snapshot_window = "05:00-09:00"
    snapshot_retention_limit = 3
Engine memcached does not support snapshotting, so the TestAccAWSElasticacheCluster_basic test now fails with this addition :/
Do you get results if those attributes are omitted in the config?
@@ -344,6 +374,14 @@ func resourceAwsElasticacheClusterUpdate(d *schema.ResourceData, meta interface{
		requestUpdate = true
	}

	if d.HasChange("snapshot_window") {
snapshot_window and snapshot_retention_limit are not toggling the requestUpdate bit, so they're never actually being sent in any update call.
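A minimal sketch of the fix being asked for here, assuming the same req/requestUpdate pattern used by the other d.HasChange blocks in resourceAwsElasticacheClusterUpdate:

	if d.HasChange("snapshot_window") {
		req.SnapshotWindow = aws.String(d.Get("snapshot_window").(string))
		requestUpdate = true
	}

	if d.HasChange("snapshot_retention_limit") {
		req.SnapshotRetentionLimit = aws.Int64(int64(d.Get("snapshot_retention_limit").(int)))
		requestUpdate = true
	}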
ok, @catsby all is working as expected now :) I was able to code out the cache.t2 sizing, I got the updates working (the missing requestUpdate = true is now set), and I also made sure not to cause panics in the Read event.
The acceptance tests all work as expected now EXCEPT the vpc test, which throws a panic. I just tried this on master and get the same error, I'm afraid. P.
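The Read-side guard is not quoted in this thread; a sketch of its general shape, assuming the SDK's CacheCluster struct exposes SnapshotWindow (*string) and SnapshotRetentionLimit (*int64) and that c is the cluster returned by DescribeCacheClusters:

	// Only set the attributes when the API returned values, so a nil
	// pointer can't panic the Read path on engines without snapshots.
	if c.SnapshotWindow != nil {
		d.Set("snapshot_window", *c.SnapshotWindow)
	}
	if c.SnapshotRetentionLimit != nil {
		d.Set("snapshot_retention_limit", *c.SnapshotRetentionLimit)
	}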
@@ -187,6 +205,16 @@ func resourceAwsElasticacheClusterCreate(d *schema.ResourceData, meta interface{
		req.CacheParameterGroupName = aws.String(v.(string))
	}

	if !strings.Contains(d.Get("node_type").(string), "cache.t2") {
I don't believe this is the approach we should take. We should document that snapshotting does not work with cache.t2's, and we should bubble up the error if a user uses t2's anyway.
Only applying these params in this block (especially without any message) would be misleading to the user: without an error, anything in their config that doesn't throw an error during apply will be written to state, so from Terraform's view, they are applied. But in reality, they are not.
So you think I should just log that snapshotting isn't available if the user specifies t2 and then just let the values be set? Or should I specifically check for snapshotting in conjunction with t2 (using a check like the above) and throw an error?
Left a few additional notes, and we still need to document the snapshot limitation regarding t2s, otherwise looking good 😄
Yup, that's my fault 😦 it gets fixed in #3777
… We now allow the error to bubble up to the user when the wrong instance type is used. The limitation of t2 instance types not allowing snapshotting is also now documented
ok, @catsby, I have documented the t2 issue and now let the error bubble up to the user, so the API error is surfaced directly when snapshotting is requested on an unsupported instance type. This goes hand in hand with the NOTE added to the docs.
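A sketch of the direction the review settled on for Create: always send the snapshot parameters and let the API's InvalidParameterCombination error surface, rather than silently dropping them for cache.t2 node types. Names mirror the diff above but are illustrative, not the exact merged code:

	if v, ok := d.GetOk("snapshot_retention_limit"); ok {
		req.SnapshotRetentionLimit = aws.Int64(int64(v.(int)))
	}
	if v, ok := d.GetOk("snapshot_window"); ok {
		req.SnapshotWindow = aws.String(v.(string))
	}

	// No cache.t2 guard here: if the node type can't snapshot, the
	// CreateCacheCluster call fails and that error is returned to the user.
	_, err := conn.CreateCacheCluster(req)
	if err != nil {
		return fmt.Errorf("Error creating ElastiCache cache cluster: %s", err)
	}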
resource "aws_elasticache_cluster" "bar" { | ||
cluster_id = "tf-test-%03d" | ||
engine = "redis" | ||
node_type = "cache.t2.small" |
This needs to be changed to cache.m1.small, or snapshot_window and snapshot_retention_limit need to be removed. The test that uses this config doesn't pass.
Went ahead and made this change myself, thanks!
Thanks!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.