
Add import support for kinesis firehose delivery stream #2082

Merged
merged 10 commits into from
Dec 20, 2017

Conversation

ApsOps
Contributor

@ApsOps ApsOps commented Oct 27, 2017

@radeksimko Right now, I've added import support just for the Redshift destination. I want to make sure I'm on the right track.

cloudwatch_logging_options isn't working yet. It looks like the schema expects a *schema.Set, but I'm passing a map[string]interface{}. How do we handle this?

EDIT: We use "flatteners" to build the required *schema.Set, converting the *awsservice.Struct into []map[string]interface{}.

TODO:

  • Add support for elasticsearch and s3 destinations
  • Add acceptance tests
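The flattener pattern mentioned in the EDIT above can be sketched in plain Go. This is a simplified illustration, not the provider's actual code: the struct and function names are hypothetical stand-ins for the AWS SDK's types, and the real provider wraps the resulting slice in a *schema.Set before calling d.Set.

```go
package main

import "fmt"

// CloudWatchLoggingOptions is a stand-in for the AWS SDK struct
// (the real one is *firehose.CloudWatchLoggingOptions with pointer fields).
type CloudWatchLoggingOptions struct {
	Enabled       bool
	LogGroupName  string
	LogStreamName string
}

// flattenCloudWatchLoggingOptions converts the SDK-style struct into the
// []map[string]interface{} shape Terraform state expects; the provider
// then wraps this slice in a *schema.Set.
func flattenCloudWatchLoggingOptions(clo *CloudWatchLoggingOptions) []map[string]interface{} {
	if clo == nil {
		return nil
	}
	return []map[string]interface{}{
		{
			"enabled":         clo.Enabled,
			"log_group_name":  clo.LogGroupName,
			"log_stream_name": clo.LogStreamName,
		},
	}
}

func main() {
	opts := &CloudWatchLoggingOptions{Enabled: true, LogGroupName: "firehose-logs"}
	flat := flattenCloudWatchLoggingOptions(opts)
	fmt.Println(flat[0]["log_group_name"]) // firehose-logs
}
```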

@radeksimko radeksimko added the enhancement Requests to existing resources that expand the functionality or scope. label Oct 28, 2017
@ApsOps
Contributor Author

ApsOps commented Oct 29, 2017

@radeksimko I'm able to import a firehose stream with this code.
There's something missing in the acceptance test though.

=== RUN   TestAccAWSKinesisFirehoseDeliveryStream_importBasic
--- FAIL: TestAccAWSKinesisFirehoseDeliveryStream_importBasic (210.44s)
	testing.go:434: Step 1 error: 1 error(s) occurred:

		* aws_kinesis_firehose_delivery_stream.test_stream (import id: arn:aws:firehose:us-west-2:264212557801:deliverystream/terraform-kinesis-firehose-basictest-8541682280680129001): 1 error(s) occurred:

		* import aws_kinesis_firehose_delivery_stream.test_stream result: arn:aws:firehose:us-west-2:264212557801:deliverystream/terraform-kinesis-firehose-basictest-8541682280680129001: aws_kinesis_firehose_delivery_stream.test_stream: [WARN] Error reading Kinesis Firehose Delivery Stream: ValidationException: 2 validation errors detected: Value 'arn:aws:firehose:us-west-2:264212557801:deliverystream/terraform-kinesis-firehose-basictest-8541682280680129001' at 'deliveryStreamName' failed to satisfy constraint: Member must satisfy regular expression pattern: [a-zA-Z0-9_.-]+; Value 'arn:aws:firehose:us-west-2:264212557801:deliverystream/terraform-kinesis-firehose-basictest-8541682280680129001' at 'deliveryStreamName' failed to satisfy constraint: Member must have length less than or equal to 64
			status code: 400, request id: 1ff2eb7e-bc87-11e7-a41d-d78f75b56b27
FAIL
FAIL	github.com/terraform-providers/terraform-provider-aws/aws	210.497s

It's putting the id (which is the ARN, I believe) in place of the name, even though I'm using a custom Importer instead of the pass-through one.

What am I missing here? 🤔
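The validation error above happens because the import id is the full ARN, while the API call needs just the delivery stream name. The eventual fix is to derive the name from the ARN inside the custom Importer. A minimal sketch of that extraction, using plain string handling instead of the AWS SDK's arn.Parse helper that the PR actually uses:

```go
package main

import (
	"fmt"
	"strings"
)

// streamNameFromARN extracts the delivery stream name from a Firehose ARN
// such as arn:aws:firehose:us-west-2:123456789012:deliverystream/my-stream.
// The function name is illustrative; the provider uses arn.Parse and then
// strips the "deliverystream/" prefix from the resource field.
func streamNameFromARN(rawARN string) (string, error) {
	const prefix = "deliverystream/"
	idx := strings.Index(rawARN, prefix)
	if idx == -1 {
		return "", fmt.Errorf("unexpected Firehose ARN format: %s", rawARN)
	}
	return rawARN[idx+len(prefix):], nil
}

func main() {
	name, err := streamNameFromARN("arn:aws:firehose:us-west-2:264212557801:deliverystream/terraform-kinesis-firehose-basictest")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // terraform-kinesis-firehose-basictest
}
```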

@ApsOps
Contributor Author

ApsOps commented Nov 9, 2017

@radeksimko Ping 😄

@radeksimko radeksimko added the size/L Managed by automation to categorize the size of a PR. label Nov 15, 2017
@ApsOps
Contributor Author

ApsOps commented Nov 25, 2017

@radeksimko I just realized that terraform import currently works only with the id, not the name.
I've fixed that functionality, and the acceptance tests pass as well.

Please let me know if something else is needed to get this merged.

EDIT: Not sure why the test is failing. testAccKinesisFirehosePreCheck function is present here.

EDIT2: Test was breaking due to upstream changes in #2055. Fixed now.

State: func(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) {
	resARN, err := arn.Parse(d.Id())
	if err != nil {
		return []*schema.ResourceData{}, err
	}
Contributor Author

@radeksimko Is this okay or should it be return nil, err instead?

Contributor

It likely doesn't matter if a non-nil error is returned - so returning nil here is nicer.

@ApsOps ApsOps changed the title WIP: Add import support for kinesis firehose delivery stream Add import support for kinesis firehose delivery stream Nov 25, 2017
Contributor

@jen20 jen20 left a comment


A couple of minor changes and comments - other than that this looks good to me! If you could squash this down to one commit, I'll go ahead and merge it.

import (
"testing"

"fmt"
Contributor

goimports needs running here

@@ -6,9 +6,12 @@ import (
"strings"
"time"

"bytes"
Contributor

Same here re goimports

@ApsOps
Contributor Author

ApsOps commented Dec 20, 2017

@jen20 I've made the changes as per the review. The "squash and merge" button should combine the commits into one before adding to master.
Please let me know if I should squash manually.

@jen20 jen20 merged commit 9c46cc6 into hashicorp:master Dec 20, 2017
@ApsOps ApsOps deleted the import-kinesis-firehose branch December 20, 2017 18:50
@dennisatspaceape
Contributor

I was trying to add support for S3 Backup (#2699) when I found that this modification doesn't handle S3 basic vs extended configuration correctly.

Essentially when you create a Kinesis Delivery Stream with an S3 destination:

aws firehose create-delivery-stream --delivery-stream-name Test --extended-s3-destination-configuration '{"RoleARN":"arn:aws:iam::XXXXX:role/KinesisFirehoseTest","BucketARN":"arn:aws:s3:::terraform-test-kinesis"}'

and describe the Delivery Stream:

{
    "DeliveryStreamDescription": {
        "DeliveryStreamName": "Test",
        "DeliveryStreamARN": "arn:aws:firehose:us-east-1:XXXXX:deliverystream/Test",
        "DeliveryStreamStatus": "ACTIVE",
        "DeliveryStreamType": "DirectPut",
        "VersionId": "1",
        "CreateTimestamp": 1514224146.811,
        "Destinations": [
            {
                "DestinationId": "destinationId-000000000001",
                "S3DestinationDescription": {
                    "RoleARN": "arn:aws:iam::XXXXX:role/KinesisFirehoseTest",
                    "BucketARN": "arn:aws:s3:::terraform-test-kinesis",
                    "BufferingHints": {
                        "SizeInMBs": 5,
                        "IntervalInSeconds": 300
                    },
                    "CompressionFormat": "UNCOMPRESSED",
                    "EncryptionConfiguration": {
                        "NoEncryptionConfig": "NoEncryption"
                    },
                    "CloudWatchLoggingOptions": {
                        "Enabled": false
                    }
                },
                "ExtendedS3DestinationDescription": {
                    "RoleARN": "arn:aws:iam::XXXXX:role/KinesisFirehoseTest",
                    "BucketARN": "arn:aws:s3:::terraform-test-kinesis",
                    "BufferingHints": {
                        "SizeInMBs": 5,
                        "IntervalInSeconds": 300
                    },
                    "CompressionFormat": "UNCOMPRESSED",
                    "EncryptionConfiguration": {
                        "NoEncryptionConfig": "NoEncryption"
                    },
                    "CloudWatchLoggingOptions": {
                        "Enabled": false
                    },
                    "S3BackupMode": "Disabled"
                }
            }
        ],
        "HasMoreDestinations": false
    }
}

AWS returns configuration for both S3DestinationDescription (basic) and ExtendedS3DestinationDescription (extended); it looks like they cloned the settings from the extended section into the basic one. I've also checked that when you create with basic-only configuration (deprecated), AWS also creates the equivalent extended config.

The issue comes when creating the resource state at https://github.com/terraform-providers/terraform-provider-aws/blob/master/aws/resource_aws_kinesis_firehose_delivery_stream.go#L272: since S3DestinationDescription is always populated (even when the stream was created with extended configuration), the destination is forced to "s3" (basic). The state then doesn't reflect everything associated with the resource and prevents the use of functionality offered by the extended configuration (Lambda processing, S3 backup, etc.). Subsequently, terraform plan and apply incorrectly destroy and recreate the delivery stream.

I was looking to create a patch for this, but it's not clear to me how to choose the destination correctly, as there's nothing in the description of the Delivery Stream that denotes whether it was created with basic or extended configuration. Part of me would suggest converting everything to extended configuration, given that basic is deprecated, but my experience with Terraform is less than 8 hours, so I'm not sure if that's the best approach :)
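The ambiguity described above can be made concrete with a small sketch. The struct and function names below are simplified stand-ins for the AWS SDK's destination description types, and the "prefer extended when present" rule is a heuristic under discussion here, not the provider's confirmed behavior: because AWS populates both blocks regardless of how the stream was created, this check cannot distinguish the original creation mode.

```go
package main

import "fmt"

// Simplified stand-ins for the SDK's destination description structs.
type S3DestinationDescription struct{ BucketARN string }
type ExtendedS3DestinationDescription struct{ BucketARN string }

type DestinationDescription struct {
	S3DestinationDescription         *S3DestinationDescription
	ExtendedS3DestinationDescription *ExtendedS3DestinationDescription
}

// destinationType prefers the extended block when present. Since AWS fills
// in both blocks for streams created either way, this always reports
// "extended_s3" for S3-backed streams - illustrating why the creation mode
// cannot be recovered from DescribeDeliveryStream output alone.
func destinationType(d DestinationDescription) string {
	if d.ExtendedS3DestinationDescription != nil {
		return "extended_s3"
	}
	if d.S3DestinationDescription != nil {
		return "s3"
	}
	return "unknown"
}

func main() {
	// Mirrors the describe output above: both blocks populated.
	d := DestinationDescription{
		S3DestinationDescription:         &S3DestinationDescription{BucketARN: "arn:aws:s3:::terraform-test-kinesis"},
		ExtendedS3DestinationDescription: &ExtendedS3DestinationDescription{BucketARN: "arn:aws:s3:::terraform-test-kinesis"},
	}
	fmt.Println(destinationType(d)) // extended_s3
}
```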

@ApsOps
Contributor Author

ApsOps commented Dec 26, 2017

@dennisatspaceape Ah, I missed that. So that basically means if I create a stream via aws cli, and I forget if I used basic or extended, there's no way to find out?

@dennisatspaceape
Contributor

@ApsOps I used the AWS CLI purely to see the metadata behind the Delivery Stream. If you create a Delivery Stream with extended_s3 configuration in Terraform and then run plan without any changes, Terraform will flag that the destination should be changed from "extended_s3" to "s3", which is incorrect. Even if you apply the change, Terraform will always see the Delivery Stream as created with basic configuration and always flag that a change is required.

Looking at the AWS metadata behind the Delivery Stream I don't believe it's possible to know if the creation was done with basic or extended configuration. If you create a basic S3 delivery stream (deprecated) AWS will also initialize the applicable extended configuration properties and vice versa.

@ApsOps
Contributor Author

ApsOps commented Dec 27, 2017

@radeksimko @jen20 Do you have any ideas on how we can check whether a stream was created with s3 or extended_s3 config, since the AWS describe-delivery-stream output is the same for both?

Or should we deprecate the s3 destination as well and ask users to create/migrate to extended_s3?

@dennisatspaceape
Contributor

@radeksimko @jen20 Any chance of some feedback on this?

@bflad
Contributor

bflad commented Jan 12, 2018

(Sorry I don't have feedback on the post-merge discussion, as I'm just going through and commenting post-release, but my suggestion would be to ensure each bug is captured in an issue somewhere, rather than in this PR, for quick resolution. We are planning a 1.7.1 bugfix release.)

The functionality of this PR (which, given the post-merge comments above, might not be 100%) has been released in terraform-provider-aws version 1.7.0. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

@bflad bflad added this to the v1.7.0 milestone Jan 12, 2018
bflad added a commit that referenced this pull request Jan 12, 2018
bflad added a commit that referenced this pull request Jan 12, 2018
@ApsOps
Contributor Author

ApsOps commented Jan 12, 2018

@dennisatspaceape Sorry for the trouble. I've fixed the broken functionality in #2970. It would be great if you could test and confirm that it works for you. 🙂

@bflad Please see if #2970 could be merged soon since this is broken right now.

@ghost

ghost commented Apr 8, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 8, 2020