
provider/aws aws_kinesis_stream recreates stream when shard_count changes #11816

Closed
chaliy opened this issue Feb 9, 2017 · 5 comments

Comments

@chaliy

chaliy commented Feb 9, 2017

Terraform Version

Terraform v0.8.6

Affected Resource(s)

aws_kinesis_stream

Debug Output

https://gist.github.com/chaliy/683eb59c1bfc415d8b890aabedb92963

Expected Behavior

Shard count is changed on AWS (shards are split or merged). Items within the retention period are preserved.

Actual Behavior

The Kinesis stream is recreated. Effectively, all items in the stream are wiped out.

Steps to Reproduce

Apply any change to shard_count on an existing aws_kinesis_stream resource.
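
A minimal reproduction (the name and counts are just examples): a stream originally applied with shard_count = 1, then changed to 2, shows the attribute forcing a replacement in the plan:

resource "aws_kinesis_stream" "example" {
  name        = "example-stream"
  shard_count = 2 # was 1 on the previous apply; the plan marks this as "forces new resource"
}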

@stack72

@chaliy chaliy changed the title provider/aws aws_kinesis_stream recreate stream when shard count changes provider/aws aws_kinesis_stream recreates stream when shard_count changes Feb 9, 2017
@stack72 stack72 self-assigned this Feb 9, 2017
@apparentlymart
Contributor

Thanks for this request, @chaliy.

It looks like there's a reasonable "default way" to do this via the UpdateShardCount API function, avoiding the complexities of manual resharding. But it has some constraints and best practices that I guess we'd need to document to help users avoid apply-time errors or unexpected costs.
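
Until the provider calls that API itself, one rough way to exercise UpdateShardCount from a Terraform run is a null_resource with a local-exec provisioner that shells out to the AWS CLI. This is only a sketch: the stream name and variable are illustrative, and it assumes a configured AWS CLI on the machine running Terraform.

variable "target_shard_count" {
  default = 4
}

resource "null_resource" "reshard" {
  # Re-run the CLI call whenever the desired shard count changes.
  triggers = {
    target_shard_count = "${var.target_shard_count}"
  }

  provisioner "local-exec" {
    command = "aws kinesis update-shard-count --stream-name my-stream --target-shard-count ${var.target_shard_count} --scaling-type UNIFORM_SCALING"
  }
}

Combined with the ignore_changes workaround mentioned further down, this avoids replacing the stream while still letting the shard count change.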

@DavidAntaramian

Also, regarding this issue: because shard_count is required, the value in the Terraform file has to be updated to reflect changes made by external systems. Since Kinesis is used for real-time data, resharding is often driven by automated services such as the Amazon Kinesis Scaling Utils rather than by manual intervention from Ops teams.

If such an external system begins splitting and merging shards, the relevant Terraform file has to be updated to reflect the change. Otherwise, as described above, Terraform will delete the stream and recreate it, causing data loss. Since the shard_count parameter is required, there is no way to avoid this when managing a Kinesis stream in Terraform.

I think the shard_count parameter, as it currently exists, should actually be an initial_shard_count. The initial_shard_count parameter would be required, but it would only be used when initially creating the stream. Subsequent management of the stream by automated services would then not require reconciling the service state with the Terraform state.

To allow for explicit shard control through Terraform, the shard_count parameter could then be made optional and used to trigger the UpdateShardCount API function referenced above when the service state and the Terraform state do not match.
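
To make that concrete, a purely hypothetical configuration under the proposed schema might look like this (initial_shard_count is not an existing argument; it is only a sketch of the suggested behavior):

resource "aws_kinesis_stream" "stream" {
  name                = "my-stream"
  initial_shard_count = 4 # used only at create time; later external resharding is ignored

  # Optional under the proposal: if set, drift would be reconciled via
  # UpdateShardCount instead of forcing a new stream.
  # shard_count = 8
}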

@jessecollier
Contributor

This is blocking me as well. shard_count and manual scaling of shards do not play well together when re-applying Terraform.

@cbroglie
Contributor

Note that you can already make shard_count act like the initial_shard_count idea from above by telling Terraform to ignore changes to shard_count:

resource "aws_kinesis_stream" "stream" {
  name = "my-stream"
  shard_count = 4
  lifecycle {
    ignore_changes = ["shard_count"]
  }
}

And when #13562 is available, you can manually resize the stream with the UpdateShardCount API and still access the current number of shards in other parts of the Terraform config.
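
For example, assuming #13562 ships an aws_kinesis_stream data source that exposes the stream's open shards (the attribute name here is an assumption, not confirmed in this thread), the live shard count could be read elsewhere in the config:

data "aws_kinesis_stream" "stream" {
  name = "${aws_kinesis_stream.stream.name}"
}

output "current_shard_count" {
  # Number of currently open shards as reported by AWS, not by Terraform state.
  value = "${length(data.aws_kinesis_stream.stream.open_shards)}"
}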

As @apparentlymart noted, UpdateShardCount has limitations (such as being callable only twice per rolling 24-hour period per stream) that may make it unsuitable as the default behavior when shard_count changes.

@ghost

ghost commented Apr 10, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost unassigned stack72 Apr 10, 2020
@ghost ghost locked and limited conversation to collaborators Apr 10, 2020