
aws_kinesis_firehose_delivery_stream still doesn't save processors configuration properly #19936

Closed
dvishniakov opened this issue Jun 23, 2021 · 6 comments · Fixed by #35137
Labels: bug (Addresses a defect in current functionality.) · service/firehose (Issues and PRs that pertain to the firehose service.)

dvishniakov commented Jun 23, 2021

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform CLI and Terraform AWS Provider Version

Terraform: 0.14.11 (the exact version doesn't matter much; the issue reproduces on other versions as well)
AWS Provider: 3.41

Affected Resource(s)

  • aws_kinesis_firehose_delivery_stream

Terraform Configuration Files

Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.

# https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kinesis_firehose_delivery_stream
resource "aws_kinesis_firehose_delivery_stream" "default" {
  name        = local.firehose_name
  tags        = var.tags
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn   = var.firehose_role_arn
    bucket_arn = var.log_target_s3_bucket_arn

    prefix              = "prefix"
    error_output_prefix = "prefix"

    buffer_size        = var.buffer_size
    buffer_interval    = var.buffer_interval
    compression_format = var.compression_format
    kms_key_arn        = var.kms_key_arn

    # https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html
    processing_configuration {
      enabled = true

      processors {
        type = "Lambda"

        parameters {
          parameter_name  = "LambdaArn"
          parameter_value = var.log_processing_lambda_arn
        }

        # Up to 3 MB by default
        parameters {
          parameter_name  = "BufferSizeInMBs"
          parameter_value = "3"
        }

        parameters {
          parameter_name  = "BufferIntervalInSeconds"
          parameter_value = "300"
        }
      }
    }
  }

}

Debug Output

  ~ extended_s3_configuration {
        # (8 unchanged attributes hidden)


      ~ processing_configuration {
            # (1 unchanged attribute hidden)

          ~ processors {
                # (1 unchanged attribute hidden)

              ~ parameters {
                  ~ parameter_name  = "BufferIntervalInSeconds" -> "BufferSizeInMBs"
                  ~ parameter_value = "300" -> "3"
                }
              + parameters {
                  + parameter_name  = "BufferIntervalInSeconds"
                  + parameter_value = "300"
                }
                # (1 unchanged block hidden)
            }
        }
        # (1 unchanged block hidden)
    }

Expected Behavior

No changes to the infrastructure; repeated plans/applies should produce an empty diff.

Actual Behavior

Terraform proposes adding the BufferSizeInMBs parameter on every plan/apply, producing a perpetual diff.

Steps to Reproduce

  1. terraform apply


References

#4392

  • See the second-to-last comment there; it seems the fix didn't work after the merge.
@github-actions github-actions bot added needs-triage Waiting for first response or review from a maintainer. service/firehose Issues and PRs that pertain to the firehose service. labels Jun 23, 2021
tlkamp (Contributor) commented Jun 23, 2021

I am also experiencing this issue. It is also present when using nested processors blocks (i.e. under a splunk_configuration).

I did some investigating and found that the setting is being applied to the resource properly, but the parameter is not being saved in the state file, resulting in the perpetual diff.

@ewbankkit ewbankkit added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Jun 28, 2021
ewbankkit (Contributor) commented:

Related:

dkujawski (Contributor) commented:

We are hitting this bug now.

I am pretty sure there is a bug here in how the default values are evaluated. The logic needs to be reviewed; it doesn't seem like this will ever read the actual values from AWS.

	// It is necessary to explicitly filter this out
	// to prevent diffs during routine use and retain the ability
	// to show diffs if any field has drifted
	defaultLambdaParams := map[string]string{
		"NumberOfRetries":         "3",
		"RoleArn":                 roleArn,
		"BufferSizeInMBs":         "3",
		"BufferIntervalInSeconds": "60",
	}

	processors := make([]interface{}, len(pc.Processors))
	for i, p := range pc.Processors {
		t := aws.StringValue(p.Type)
		parameters := make([]interface{}, 0)

		for _, params := range p.Parameters {
			name := aws.StringValue(params.ParameterName)
			value := aws.StringValue(params.ParameterValue)

			if t == firehose.ProcessorTypeLambda {
				// Ignore defaults
				if v, ok := defaultLambdaParams[name]; ok && v == value {
					continue
				}
			}


Iguanod commented Mar 27, 2023

@dkujawski the values are read from AWS: if you follow the function calls, the pc parameter in that function comes from the call to FindDeliveryStreamByName in resourceDeliveryStreamRead.

The problem is not in removing the values when they match the defaults; that logic is correct, and I assume it is necessary according to the comment. Besides, that behaviour is documented in https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/kinesis_firehose_delivery_stream (see the note in the parameters block configuration).

The issue here is that the default value for the buffer size is assumed to be 3 MB, while in reality it is 1 MB. I don't know whether the default was indeed 3 MB at some point in the past, but right now new delivery streams are created with 1 MB by default (as documented in https://docs.aws.amazon.com/firehose/latest/dev/data-transformation.html).
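
A minimal sketch of what the fix amounts to, assuming the only change needed is the assumed default itself (the actual change shipped via the commit referenced below; the package and function names here are illustrative, not the provider's real layout): with the map matching the documented 1 MB creation default, an explicitly configured 3 MB is no longer mistaken for a default and stays in state.

// Sketch only: the map from the excerpt above with BufferSizeInMBs corrected
// to the documented 1 MB creation default.
package firehose

func defaultLambdaParams(roleARN string) map[string]string {
	return map[string]string{
		"NumberOfRetries":         "3",
		"RoleArn":                 roleARN,
		"BufferSizeInMBs":         "1", // was "3"
		"BufferIntervalInSeconds": "60",
	}
}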

ewbankkit pushed a commit that referenced this issue Jan 5, 2024
This is the correct value, though 3 MB may have been the default at some point in the past. This addresses the perpetual diff currently seen in state, caused by new streams being created with a 1 MB default while the provider's state logic expects 3 MB. References can be seen in #9827 and #19936. This also fixes #33014.
@github-actions github-actions bot added this to the v5.32.0 milestone Jan 5, 2024

This functionality has been released in v5.32.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 12, 2024