diff --git a/.changelog/12141.txt b/.changelog/12141.txt new file mode 100644 index 00000000000..260bcc2c1f4 --- /dev/null +++ b/.changelog/12141.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_s3_bucket_cors_configuration +``` \ No newline at end of file diff --git a/.changelog/15806.txt b/.changelog/15806.txt new file mode 100644 index 00000000000..78b8e24ec1a --- /dev/null +++ b/.changelog/15806.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_vpn_connection: Mark `customer_gateway_configuration` as [`Sensitive`](https://www.terraform.io/plugin/sdkv2/best-practices/sensitive-state#using-the-sensitive-flag) +``` \ No newline at end of file diff --git a/.changelog/17031.txt b/.changelog/17031.txt new file mode 100644 index 00000000000..bfab8bf79bc --- /dev/null +++ b/.changelog/17031.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +resource/aws_vpn_connection: Add the ability to revert changes to unconfigured tunnel options made outside of Terraform to their [documented default values](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html) +``` \ No newline at end of file diff --git a/.changelog/17382.txt b/.changelog/17382.txt new file mode 100644 index 00000000000..d9da4cf9cf3 --- /dev/null +++ b/.changelog/17382.txt @@ -0,0 +1,3 @@ +```release-note:bug +data-source/aws_vpc_peering_connections: Return empty array instead of error when no connections found. +``` diff --git a/.changelog/21219.txt b/.changelog/21219.txt new file mode 100644 index 00000000000..4870302226c --- /dev/null +++ b/.changelog/21219.txt @@ -0,0 +1,67 @@ +```release-note:note +data-source/aws_security_groups: If no security groups match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_route_tables: The type of the `ids` attribute has changed from Set to List. 
If no route tables match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_network_interfaces: The type of the `ids` attribute has changed from Set to List. If no network interfaces match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_network_acls: The type of the `ids` attribute has changed from Set to List. If no NACLs match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_ec2_transit_gateway_route_tables: The type of the `ids` attribute has changed from Set to List. If no transit gateway route tables match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_ec2_coip_pools: The type of the `pool_ids` attribute has changed from Set to List. If no COIP pools match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_ec2_local_gateway_route_tables: The type of the `ids` attribute has changed from Set to List. If no local gateway route tables match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_ec2_local_gateway_virtual_interface_groups: The type of the `ids` and `local_gateway_virtual_interface_ids` attributes has changed from Set to List. If no local gateway virtual interface groups match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_ec2_local_gateways: The type of the `ids` attribute has changed from Set to List. 
If no local gateways match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_ebs_volumes: The type of the `ids` attribute has changed from Set to List. If no volumes match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_cognito_user_pools: The type of the `ids` and `arns` attributes has changed from Set to List. If no user pools match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_ip_ranges: If no ranges match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_efs_access_points: The type of the `ids` and `arns` attributes has changed from Set to List. If no access points match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_emr_release_labels: The type of the `ids` attribute has changed from Set to List. If no release labels match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_inspector_rules_packages: If no rules packages match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_db_event_categories: The type of the `ids` attribute has changed from Set to List. If no event categories match the specified criteria an empty list is returned (previously an error was raised) +``` + +```release-note:note +data-source/aws_ssoadmin_instances: The type of the `identity_store_ids` and `arns` attributes has changed from Set to List.
If no instances match the specified criteria an empty list is returned (previously an error was raised) +``` \ No newline at end of file diff --git a/.changelog/22043.txt b/.changelog/22043.txt new file mode 100644 index 00000000000..972b7c83293 --- /dev/null +++ b/.changelog/22043.txt @@ -0,0 +1,3 @@ +```release-note:enhancement +data-source/aws_cloudwatch_log_group: Automatically trim `:*` suffix from `arn` attribute +``` \ No newline at end of file diff --git a/.changelog/22253.txt b/.changelog/22253.txt new file mode 100644 index 00000000000..45807f2b396 --- /dev/null +++ b/.changelog/22253.txt @@ -0,0 +1,11 @@ +```release-note:note +resource/aws_default_subnet: If no default subnet exists in the specified Availability Zone one is now created. The `force_destroy` argument has been added (defaults to `false`). Setting this argument to `true` deletes the default subnet on `terraform destroy` +``` + +```release-note:note +resource/aws_default_vpc: If no default VPC exists in the current AWS Region one is now created. The `force_destroy` argument has been added (defaults to `false`). Setting this argument to `true` deletes the default VPC on `terraform destroy` +``` + +```release-note:note +data-source/aws_vpcs: The type of the `ids` attribute has changed from Set to List. If no VPCs match the specified criteria an empty list is returned (previously an error was raised) +``` \ No newline at end of file diff --git a/.changelog/22664.txt b/.changelog/22664.txt new file mode 100644 index 00000000000..5a7cde63858 --- /dev/null +++ b/.changelog/22664.txt @@ -0,0 +1,7 @@ +```release-note:note +resource/aws_route: The `instance_id` argument has been deprecated. All configurations using `instance_id` should be updated to use the `network_interface_id` argument instead +``` + +```release-note:note +resource/aws_route_table: The `instance_id` argument of the `route` configuration block has been deprecated.
All configurations using `route` `instance_id` should be updated to use the `route` `network_interface_id` argument instead +``` \ No newline at end of file diff --git a/.changelog/22783.txt b/.changelog/22783.txt new file mode 100644 index 00000000000..708224b486f --- /dev/null +++ b/.changelog/22783.txt @@ -0,0 +1,3 @@ +```release-note:note +resource/aws_ecs_cluster: The `capacity_providers` and `default_capacity_provider_strategy` arguments have been deprecated. Use the `aws_ecs_cluster_capacity_providers` resource instead +``` diff --git a/.changelog/22850.txt b/.changelog/22850.txt new file mode 100644 index 00000000000..c19e2ea16c0 --- /dev/null +++ b/.changelog/22850.txt @@ -0,0 +1,11 @@ +```release-note:note +data-source/aws_s3_bucket_object: The data source has been renamed. Use `aws_s3_object` instead +``` + +```release-note:note +data-source/aws_s3_bucket_objects: The data source has been renamed. Use `aws_s3_objects` instead +``` + +```release-note:note +resource/aws_s3_bucket_object: The resource has been renamed.
Use `aws_s3_object` instead +``` diff --git a/.changelog/5055.txt b/.changelog/5055.txt new file mode 100644 index 00000000000..da9fa294d9c --- /dev/null +++ b/.changelog/5055.txt @@ -0,0 +1,3 @@ +```release-note:note +data-source/aws_instances: If no instances match the specified criteria an empty list is returned (previously an error was raised) +``` \ No newline at end of file diff --git a/.changelog/5132.txt b/.changelog/5132.txt new file mode 100644 index 00000000000..897a24e95aa --- /dev/null +++ b/.changelog/5132.txt @@ -0,0 +1,3 @@ +```release-note:new-resource +aws_s3_bucket_versioning +``` \ No newline at end of file diff --git a/.changelog/7537.txt b/.changelog/7537.txt new file mode 100644 index 00000000000..1cd879dd11e --- /dev/null +++ b/.changelog/7537.txt @@ -0,0 +1,3 @@ +```release-note:new-data-source +aws_eips +``` \ No newline at end of file diff --git a/.semgrep.yml b/.semgrep.yml index b9b3814753c..076d19b86e9 100644 --- a/.semgrep.yml +++ b/.semgrep.yml @@ -568,7 +568,7 @@ rules: - internal/service/mq/forge_test.go - internal/service/route53/sweep.go - internal/service/s3/bucket_test.go - - internal/service/s3/bucket_object_test.go + - internal/service/s3/object_test.go - internal/service/storagegateway/cached_iscsi_volume.go - internal/service/storagegateway/cached_iscsi_volume_test.go - internal/service/storagegateway/stored_iscsi_volume_test.go diff --git a/CHANGELOG.md b/CHANGELOG.md index dda76730756..9229b42ff04 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,13 +1,50 @@ -## 3.75.0 (Unreleased) +## 4.0.0 (Unreleased) + +NOTES: + +* data-source/aws_cognito_user_pools: The type of the `ids` and `arns` attributes has changed from Set to List. If no user pools match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_db_event_categories: The type of the `ids` attribute has changed from Set to List.
If no event categories match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_ebs_volumes: The type of the `ids` attribute has changed from Set to List. If no volumes match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_ec2_coip_pools: The type of the `pool_ids` attribute has changed from Set to List. If no COIP pools match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_ec2_local_gateway_route_tables: The type of the `ids` attribute has changed from Set to List. If no local gateway route tables match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_ec2_local_gateway_virtual_interface_groups: The type of the `ids` and `local_gateway_virtual_interface_ids` attributes has changed from Set to List. If no local gateway virtual interface groups match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_ec2_local_gateways: The type of the `ids` attribute has changed from Set to List. If no local gateways match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_ec2_transit_gateway_route_tables: The type of the `ids` attribute has changed from Set to List. 
If no transit gateway route tables match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_efs_access_points: The type of the `ids` and `arns` attributes has changed from Set to List. If no access points match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_emr_release_labels: The type of the `ids` attribute has changed from Set to List. If no release labels match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_inspector_rules_packages: If no rules packages match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_instances: If no instances match the specified criteria an empty list is returned (previously an error was raised) ([#5055](https://github.com/hashicorp/terraform-provider-aws/issues/5055)) +* data-source/aws_ip_ranges: If no ranges match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_network_acls: The type of the `ids` attribute has changed from Set to List. If no NACLs match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_network_interfaces: The type of the `ids` attribute has changed from Set to List. 
If no network interfaces match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_route_tables: The type of the `ids` attribute has changed from Set to List. If no route tables match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_s3_bucket_object: The data source has been renamed. Use `aws_s3_object` instead ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) +* data-source/aws_s3_bucket_objects: The data source has been renamed. Use `aws_s3_objects` instead ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) +* data-source/aws_security_groups: If no security groups match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_ssoadmin_instances: The type of the `identity_store_ids` and `arns` attributes has changed from Set to List. If no instances match the specified criteria an empty list is returned (previously an error was raised) ([#21219](https://github.com/hashicorp/terraform-provider-aws/issues/21219)) +* data-source/aws_vpcs: The type of the `ids` attribute has changed from Set to List. If no VPCs match the specified criteria an empty list is returned (previously an error was raised) ([#22253](https://github.com/hashicorp/terraform-provider-aws/issues/22253)) +* resource/aws_default_subnet: If no default subnet exists in the specified Availability Zone one is now created. The `force_destroy` argument has been added (defaults to `false`).
Setting this argument to `true` deletes the default subnet on `terraform destroy` ([#22253](https://github.com/hashicorp/terraform-provider-aws/issues/22253)) +* resource/aws_default_vpc: If no default VPC exists in the current AWS Region one is now created. The `force_destroy` argument has been added (defaults to `false`). Setting this argument to `true` deletes the default VPC on `terraform destroy` ([#22253](https://github.com/hashicorp/terraform-provider-aws/issues/22253)) +* resource/aws_ecs_cluster: The `capacity_providers` and `default_capacity_provider_strategy` arguments have been deprecated. Use the `aws_ecs_cluster_capacity_providers` resource instead ([#22783](https://github.com/hashicorp/terraform-provider-aws/issues/22783)) +* resource/aws_route: The `instance_id` argument has been deprecated. All configurations using `instance_id` should be updated to use the `network_interface_id` argument instead ([#22664](https://github.com/hashicorp/terraform-provider-aws/issues/22664)) +* resource/aws_route_table: The `instance_id` argument of the `route` configuration block has been deprecated. All configurations using `route` `instance_id` should be updated to use the `route` `network_interface_id` argument instead ([#22664](https://github.com/hashicorp/terraform-provider-aws/issues/22664)) +* resource/aws_s3_bucket_object: The resource has been renamed.
Use `aws_s3_object` instead ([#22850](https://github.com/hashicorp/terraform-provider-aws/issues/22850)) + +FEATURES: + +* **New Data Source:** `aws_eips` ([#7537](https://github.com/hashicorp/terraform-provider-aws/issues/7537)) +* **New Resource:** `aws_s3_bucket_cors_configuration` ([#12141](https://github.com/hashicorp/terraform-provider-aws/issues/12141)) +* **New Resource:** `aws_s3_bucket_versioning` ([#5132](https://github.com/hashicorp/terraform-provider-aws/issues/5132)) ENHANCEMENTS: -* data-source/aws_imagebuilder_distribution_configuration: Add `container_distribution_configuration` attribute to the `distribution` configuration block ([#22838](https://github.com/hashicorp/terraform-provider-aws/issues/22838)) -* resource/aws_imagebuilder_image_recipe: Add `parameter` argument to the `component` configuration block ([#22837](https://github.com/hashicorp/terraform-provider-aws/issues/22837)) +* data-source/aws_cloudwatch_log_group: Automatically trim `:*` suffix from `arn` attribute ([#22043](https://github.com/hashicorp/terraform-provider-aws/issues/22043)) +* resource/aws_vpn_connection: Add the ability to revert changes to unconfigured tunnel options made outside of Terraform to their [documented default values](https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html) ([#17031](https://github.com/hashicorp/terraform-provider-aws/issues/17031)) +* resource/aws_vpn_connection: Mark `customer_gateway_configuration` as [`Sensitive`](https://www.terraform.io/plugin/sdkv2/best-practices/sensitive-state#using-the-sensitive-flag) ([#15806](https://github.com/hashicorp/terraform-provider-aws/issues/15806)) BUG FIXES: -* resource/aws_cloudformation_stack: Retry resource Create and Update for IAM eventual consistency ([#22840](https://github.com/hashicorp/terraform-provider-aws/issues/22840)) +* data-source/aws_vpc_peering_connections: Return empty array instead of error when no connections found. 
([#17382](https://github.com/hashicorp/terraform-provider-aws/issues/17382)) * resource/aws_route_table_association: Handle nil 'AssociationState' in ISO regions ([#22806](https://github.com/hashicorp/terraform-provider-aws/issues/22806)) ## 3.74.0 (January 28, 2022) diff --git a/examples/s3-cross-account-access/main.tf b/examples/s3-cross-account-access/main.tf index 178f9131b00..ddae63032fa 100644 --- a/examples/s3-cross-account-access/main.tf +++ b/examples/s3-cross-account-access/main.tf @@ -34,7 +34,7 @@ resource "aws_s3_bucket" "prod" { POLICY } -resource "aws_s3_bucket_object" "prod" { +resource "aws_s3_object" "prod" { provider = aws.prod bucket = aws_s3_bucket.prod.id @@ -50,7 +50,7 @@ provider "aws" { secret_key = var.test_secret_key } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { provider = aws.test bucket = aws_s3_bucket.prod.id diff --git a/examples/sagemaker/main.tf b/examples/sagemaker/main.tf index ded75b3b90e..2babe537ee2 100644 --- a/examples/sagemaker/main.tf +++ b/examples/sagemaker/main.tf @@ -86,7 +86,7 @@ resource "aws_s3_bucket" "foo" { force_destroy = true } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.foo.bucket key = "model.tar.gz" source = "model.tar.gz" diff --git a/internal/acctest/acctest.go b/internal/acctest/acctest.go index ccf398fc60c..1e0a11e2c84 100644 --- a/internal/acctest/acctest.go +++ b/internal/acctest/acctest.go @@ -652,6 +652,15 @@ func PreCheckRegion(t *testing.T, region string) { } } +// PreCheckRegionNot checks that the test region is not one of the specified regions. +func PreCheckRegionNot(t *testing.T, regions ...string) { + for _, region := range regions { + if Region() == region { + t.Skipf("skipping tests; %s (%s) not supported", conns.EnvVarDefaultRegion, region) + } + } +} + // PreCheckPartition checks that the test partition is the specified partition. 
func PreCheckPartition(partition string, t *testing.T) { if Partition() != partition { @@ -1844,3 +1853,23 @@ func CheckCallerIdentityAccountID(n string) resource.TestCheckFunc { return nil } } + +func CheckResourceAttrGreaterThanValue(n, key, value string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[n] + if !ok { + return fmt.Errorf("Not found: %s", n) + } + + if v, ok := rs.Primary.Attributes[key]; !ok || !(v > value) { + if !ok { + return fmt.Errorf("%s: Attribute %q not found", n, key) + } + + return fmt.Errorf("%s: Attribute %q is not greater than %q, got %q", n, key, value, v) + } + + return nil + + } +} diff --git a/internal/provider/provider.go b/internal/provider/provider.go index eac45701a60..90d0a9c1d82 100644 --- a/internal/provider/provider.go +++ b/internal/provider/provider.go @@ -474,6 +474,7 @@ func Provider() *schema.Provider { "aws_ec2_transit_gateway_vpc_attachment": ec2.DataSourceTransitGatewayVPCAttachment(), "aws_ec2_transit_gateway_vpn_attachment": ec2.DataSourceTransitGatewayVPNAttachment(), "aws_eip": ec2.DataSourceEIP(), + "aws_eips": ec2.DataSourceEIPs(), "aws_instance": ec2.DataSourceInstance(), "aws_instances": ec2.DataSourceInstances(), "aws_internet_gateway": ec2.DataSourceInternetGateway(), @@ -684,8 +685,8 @@ func Provider() *schema.Provider { "aws_canonical_user_id": s3.DataSourceCanonicalUserID(), "aws_s3_bucket": s3.DataSourceBucket(), - "aws_s3_bucket_object": s3.DataSourceBucketObject(), - "aws_s3_bucket_objects": s3.DataSourceBucketObjects(), + "aws_s3_object": s3.DataSourceObject(), + "aws_s3_objects": s3.DataSourceObjects(), "aws_sagemaker_prebuilt_ecr_image": sagemaker.DataSourcePrebuiltECRImage(), @@ -1587,15 +1588,17 @@ func Provider() *schema.Provider { "aws_s3_bucket": s3.ResourceBucket(), "aws_s3_bucket_analytics_configuration": s3.ResourceBucketAnalyticsConfiguration(), + "aws_s3_bucket_cors_configuration": s3.ResourceBucketCorsConfiguration(), 
"aws_s3_bucket_intelligent_tiering_configuration": s3.ResourceBucketIntelligentTieringConfiguration(), "aws_s3_bucket_inventory": s3.ResourceBucketInventory(), "aws_s3_bucket_metric": s3.ResourceBucketMetric(), "aws_s3_bucket_notification": s3.ResourceBucketNotification(), - "aws_s3_bucket_object": s3.ResourceBucketObject(), "aws_s3_bucket_ownership_controls": s3.ResourceBucketOwnershipControls(), "aws_s3_bucket_policy": s3.ResourceBucketPolicy(), "aws_s3_bucket_public_access_block": s3.ResourceBucketPublicAccessBlock(), "aws_s3_bucket_replication_configuration": s3.ResourceBucketReplicationConfiguration(), + "aws_s3_bucket_versioning": s3.ResourceBucketVersioning(), + "aws_s3_object": s3.ResourceObject(), "aws_s3_object_copy": s3.ResourceObjectCopy(), "aws_s3_access_point": s3control.ResourceAccessPoint(), diff --git a/internal/service/apigateway/domain_name_test.go b/internal/service/apigateway/domain_name_test.go index ac8db5fac72..8534ded436e 100644 --- a/internal/service/apigateway/domain_name_test.go +++ b/internal/service/apigateway/domain_name_test.go @@ -305,7 +305,7 @@ func TestAccAPIGatewayDomainName_mutualTLSAuthentication(t *testing.T) { var v apigateway.DomainName resourceName := "aws_api_gateway_domain_name.test" acmCertificateResourceName := "aws_acm_certificate.test" - s3BucketObjectResourceName := "aws_s3_bucket_object.test" + s3ObjectResourceName := "aws_s3_object.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ @@ -322,7 +322,7 @@ func TestAccAPIGatewayDomainName_mutualTLSAuthentication(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "domain_name", acmCertificateResourceName, "domain_name"), resource.TestCheckResourceAttr(resourceName, "mutual_tls_authentication.#", "1"), resource.TestCheckResourceAttr(resourceName, "mutual_tls_authentication.0.truststore_uri", fmt.Sprintf("s3://%s/%s", rName, rName)), - resource.TestCheckResourceAttrPair(resourceName, 
"mutual_tls_authentication.0.truststore_version", s3BucketObjectResourceName, "version_id"), + resource.TestCheckResourceAttrPair(resourceName, "mutual_tls_authentication.0.truststore_version", s3ObjectResourceName, "version_id"), ), }, { @@ -647,7 +647,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = %[1]q source = "test-fixtures/apigateway-domain-name-truststore-1.pem" @@ -663,8 +663,8 @@ resource "aws_api_gateway_domain_name" "test" { } mutual_tls_authentication { - truststore_uri = "s3://${aws_s3_bucket_object.test.bucket}/${aws_s3_bucket_object.test.key}" - truststore_version = aws_s3_bucket_object.test.version_id + truststore_uri = "s3://${aws_s3_object.test.bucket}/${aws_s3_object.test.key}" + truststore_version = aws_s3_object.test.version_id } } `, rName)) diff --git a/internal/service/apigatewayv2/domain_name_test.go b/internal/service/apigatewayv2/domain_name_test.go index 1cd042d3902..933bd93dcb1 100644 --- a/internal/service/apigatewayv2/domain_name_test.go +++ b/internal/service/apigatewayv2/domain_name_test.go @@ -221,7 +221,7 @@ func TestAccAPIGatewayV2DomainName_mutualTLSAuthentication(t *testing.T) { var v apigatewayv2.GetDomainNameOutput resourceName := "aws_apigatewayv2_domain_name.test" acmCertificateResourceName := "aws_acm_certificate.test" - s3BucketObjectResourceName := "aws_s3_bucket_object.test" + s3ObjectResourceName := "aws_s3_object.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ @@ -262,7 +262,7 @@ func TestAccAPIGatewayV2DomainName_mutualTLSAuthentication(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "domain_name_configuration.0.target_domain_name"), resource.TestCheckResourceAttr(resourceName, "mutual_tls_authentication.#", "1"), resource.TestCheckResourceAttr(resourceName, "mutual_tls_authentication.0.truststore_uri", fmt.Sprintf("s3://%s/%s", 
rName, rName)), - resource.TestCheckResourceAttrPair(resourceName, "mutual_tls_authentication.0.truststore_version", s3BucketObjectResourceName, "version_id"), + resource.TestCheckResourceAttrPair(resourceName, "mutual_tls_authentication.0.truststore_version", s3ObjectResourceName, "version_id"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -280,7 +280,7 @@ func TestAccAPIGatewayV2DomainName_mutualTLSAuthentication(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "domain_name_configuration.0.target_domain_name"), resource.TestCheckResourceAttr(resourceName, "mutual_tls_authentication.#", "1"), resource.TestCheckResourceAttr(resourceName, "mutual_tls_authentication.0.truststore_uri", fmt.Sprintf("s3://%s/%s", rName, rName)), - resource.TestCheckResourceAttrPair(resourceName, "mutual_tls_authentication.0.truststore_version", s3BucketObjectResourceName, "version_id"), + resource.TestCheckResourceAttrPair(resourceName, "mutual_tls_authentication.0.truststore_version", s3ObjectResourceName, "version_id"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -469,7 +469,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = %[1]q source = "test-fixtures/%[2]s" @@ -485,7 +485,7 @@ resource "aws_apigatewayv2_domain_name" "test" { } mutual_tls_authentication { - truststore_uri = "s3://${aws_s3_bucket_object.test.bucket}/${aws_s3_bucket_object.test.key}" + truststore_uri = "s3://${aws_s3_object.test.bucket}/${aws_s3_object.test.key}" } } `, rName, pemFileName)) @@ -505,7 +505,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = %[1]q source = "test-fixtures/%[2]s" @@ -521,8 +521,8 @@ resource "aws_apigatewayv2_domain_name" "test" { } mutual_tls_authentication { - truststore_uri = 
"s3://${aws_s3_bucket_object.test.bucket}/${aws_s3_bucket_object.test.key}" - truststore_version = aws_s3_bucket_object.test.version_id + truststore_uri = "s3://${aws_s3_object.test.bucket}/${aws_s3_object.test.key}" + truststore_version = aws_s3_object.test.version_id } } `, rName, pemFileName)) diff --git a/internal/service/cloudformation/stack_set_test.go b/internal/service/cloudformation/stack_set_test.go index 1cafd48cc2d..10fb87fd5c0 100644 --- a/internal/service/cloudformation/stack_set_test.go +++ b/internal/service/cloudformation/stack_set_test.go @@ -1348,7 +1348,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { acl = "public-read" bucket = aws_s3_bucket.test.bucket @@ -1362,7 +1362,7 @@ CONTENT resource "aws_cloudformation_stack_set" "test" { administration_role_arn = aws_iam_role.test.arn name = %[1]q - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" } `, rName, testAccStackSetTemplateBodyVPC(rName+"1")) } @@ -1397,7 +1397,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { acl = "public-read" bucket = aws_s3_bucket.test.bucket @@ -1411,7 +1411,7 @@ CONTENT resource "aws_cloudformation_stack_set" "test" { administration_role_arn = aws_iam_role.test.arn name = %[1]q - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" } `, rName, testAccStackSetTemplateBodyVPC(rName+"2")) } diff --git a/internal/service/cloudformation/stack_test.go b/internal/service/cloudformation/stack_test.go index 48af97c0bb8..fab244b7513 100644 --- a/internal/service/cloudformation/stack_test.go +++ 
b/internal/service/cloudformation/stack_test.go @@ -859,7 +859,7 @@ POLICY } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.b.id key = %[2]q source = "test-fixtures/cloudformation-template.json" @@ -872,7 +872,7 @@ resource "aws_cloudformation_stack" "test" { VpcCIDR = %[3]q } - template_url = "https://${aws_s3_bucket.b.id}.s3-${data.aws_region.current.name}.${data.aws_partition.current.dns_suffix}/${aws_s3_bucket_object.object.key}" + template_url = "https://${aws_s3_bucket.b.id}.s3-${data.aws_region.current.name}.${data.aws_partition.current.dns_suffix}/${aws_s3_object.object.key}" on_failure = "DELETE" timeout_in_minutes = 1 } @@ -913,7 +913,7 @@ POLICY } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.b.id key = %[2]q source = "test-fixtures/cloudformation-template.yaml" @@ -926,7 +926,7 @@ resource "aws_cloudformation_stack" "test" { VpcCIDR = %[3]q } - template_url = "https://${aws_s3_bucket.b.id}.s3-${data.aws_region.current.name}.${data.aws_partition.current.dns_suffix}/${aws_s3_bucket_object.object.key}" + template_url = "https://${aws_s3_bucket.b.id}.s3-${data.aws_region.current.name}.${data.aws_partition.current.dns_suffix}/${aws_s3_object.object.key}" on_failure = "DELETE" timeout_in_minutes = 1 } diff --git a/internal/service/cloudformation/type_data_source_test.go b/internal/service/cloudformation/type_data_source_test.go index f646af0da16..94d3dc8483c 100644 --- a/internal/service/cloudformation/type_data_source_test.go +++ b/internal/service/cloudformation/type_data_source_test.go @@ -150,14 +150,14 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket key = "test" source = %[2]q } resource "aws_cloudformation_type" "test" { - schema_handler_package = 
"s3://${aws_s3_bucket_object.test.bucket}/${aws_s3_bucket_object.test.key}" + schema_handler_package = "s3://${aws_s3_object.test.bucket}/${aws_s3_object.test.key}" type = "RESOURCE" type_name = %[3]q } diff --git a/internal/service/cloudformation/type_test.go b/internal/service/cloudformation/type_test.go index 4f6b555f8a3..89ea941f2da 100644 --- a/internal/service/cloudformation/type_test.go +++ b/internal/service/cloudformation/type_test.go @@ -377,7 +377,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket key = "test" source = %[2]q @@ -406,7 +406,7 @@ resource "aws_iam_role" "test" { resource "aws_cloudformation_type" "test" { execution_role_arn = aws_iam_role.test.arn - schema_handler_package = "s3://${aws_s3_bucket_object.test.bucket}/${aws_s3_bucket_object.test.key}" + schema_handler_package = "s3://${aws_s3_object.test.bucket}/${aws_s3_object.test.key}" type = "RESOURCE" type_name = %[2]q } @@ -437,7 +437,7 @@ resource "aws_iam_role" "test" { } resource "aws_cloudformation_type" "test" { - schema_handler_package = "s3://${aws_s3_bucket_object.test.bucket}/${aws_s3_bucket_object.test.key}" + schema_handler_package = "s3://${aws_s3_object.test.bucket}/${aws_s3_object.test.key}" type = "RESOURCE" type_name = %[2]q @@ -454,7 +454,7 @@ func testAccCloudformationTypeConfigTypeName(rName string, zipPath string, typeN testAccCloudformationTypeConfigBase(rName, zipPath), fmt.Sprintf(` resource "aws_cloudformation_type" "test" { - schema_handler_package = "s3://${aws_s3_bucket_object.test.bucket}/${aws_s3_bucket_object.test.key}" + schema_handler_package = "s3://${aws_s3_object.test.bucket}/${aws_s3_object.test.key}" type = "RESOURCE" type_name = %[1]q } diff --git a/internal/service/cloudwatchlogs/group_data_source.go b/internal/service/cloudwatchlogs/group_data_source.go index 5432a1950c2..a3df9461871 100644 --- 
a/internal/service/cloudwatchlogs/group_data_source.go +++ b/internal/service/cloudwatchlogs/group_data_source.go @@ -3,6 +3,7 @@ package cloudwatchlogs import ( "fmt" + "github.com/aws/aws-sdk-go/aws" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" @@ -52,7 +53,7 @@ func dataSourceGroupRead(d *schema.ResourceData, meta interface{}) error { } d.SetId(name) - d.Set("arn", logGroup.Arn) + d.Set("arn", TrimLogGroupARNWildcardSuffix(aws.StringValue(logGroup.Arn))) d.Set("creation_time", logGroup.CreationTime) d.Set("retention_in_days", logGroup.RetentionInDays) d.Set("kms_key_id", logGroup.KmsKeyId) diff --git a/internal/service/cloudwatchlogs/group_data_source_test.go b/internal/service/cloudwatchlogs/group_data_source_test.go index 50698853fbc..396a7e299c0 100644 --- a/internal/service/cloudwatchlogs/group_data_source_test.go +++ b/internal/service/cloudwatchlogs/group_data_source_test.go @@ -22,10 +22,10 @@ func TestAccCloudWatchLogsGroupDataSource_basic(t *testing.T) { { Config: testAccCheckGroupDataSourceConfig(rName), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(resourceName, "name", rName), - resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "name", "aws_cloudwatch_log_group.test", "name"), + resource.TestCheckResourceAttrPair(resourceName, "arn", "aws_cloudwatch_log_group.test", "arn"), resource.TestCheckResourceAttrSet(resourceName, "creation_time"), - resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "tags", "aws_cloudwatch_log_group.test", "tags"), ), }, }, @@ -44,13 +44,10 @@ func TestAccCloudWatchLogsGroupDataSource_tags(t *testing.T) { { Config: testAccCheckGroupTagsDataSourceConfig(rName), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(resourceName, 
"name", rName), - resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "name", "aws_cloudwatch_log_group.test", "name"), + resource.TestCheckResourceAttrPair(resourceName, "arn", "aws_cloudwatch_log_group.test", "arn"), resource.TestCheckResourceAttrSet(resourceName, "creation_time"), - resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), - resource.TestCheckResourceAttr(resourceName, "tags.Environment", "Production"), - resource.TestCheckResourceAttr(resourceName, "tags.Foo", "Bar"), - resource.TestCheckResourceAttr(resourceName, "tags.Empty", ""), + resource.TestCheckResourceAttrPair(resourceName, "tags", "aws_cloudwatch_log_group.test", "tags"), ), }, }, @@ -69,11 +66,11 @@ func TestAccCloudWatchLogsGroupDataSource_kms(t *testing.T) { { Config: testAccCheckGroupKMSDataSourceConfig(rName), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(resourceName, "name", rName), - resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, "name", "aws_cloudwatch_log_group.test", "name"), + resource.TestCheckResourceAttrPair(resourceName, "arn", "aws_cloudwatch_log_group.test", "arn"), resource.TestCheckResourceAttrSet(resourceName, "creation_time"), - resource.TestCheckResourceAttrSet(resourceName, "kms_key_id"), - resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), + resource.TestCheckResourceAttrPair(resourceName, "kms_key_id", "aws_cloudwatch_log_group.test", "kms_key_id"), + resource.TestCheckResourceAttrPair(resourceName, "tags", "aws_cloudwatch_log_group.test", "tags"), ), }, }, @@ -92,11 +89,11 @@ func TestAccCloudWatchLogsGroupDataSource_retention(t *testing.T) { { Config: testAccCheckGroupRetentionDataSourceConfig(rName), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(resourceName, "name", rName), - resource.TestCheckResourceAttrSet(resourceName, "arn"), + resource.TestCheckResourceAttrPair(resourceName, 
"name", "aws_cloudwatch_log_group.test", "name"), + resource.TestCheckResourceAttrPair(resourceName, "arn", "aws_cloudwatch_log_group.test", "arn"), resource.TestCheckResourceAttrSet(resourceName, "creation_time"), - resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), - resource.TestCheckResourceAttr(resourceName, "retention_in_days", "365"), + resource.TestCheckResourceAttrPair(resourceName, "tags", "aws_cloudwatch_log_group.test", "tags"), + resource.TestCheckResourceAttrPair(resourceName, "retention_in_days", "aws_cloudwatch_log_group.test", "retention_in_days"), ), }, }, diff --git a/internal/service/codebuild/project_test.go b/internal/service/codebuild/project_test.go index 9917c2399a4..2cfa74ad4a1 100644 --- a/internal/service/codebuild/project_test.go +++ b/internal/service/codebuild/project_test.go @@ -3171,7 +3171,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket key = %[1]q content = "test" @@ -3189,7 +3189,7 @@ resource "aws_codebuild_project" "test" { compute_type = "BUILD_GENERAL1_SMALL" image = "2" type = "LINUX_CONTAINER" - certificate = "${aws_s3_bucket.test.bucket}/${aws_s3_bucket_object.test.key}" + certificate = "${aws_s3_bucket.test.bucket}/${aws_s3_object.test.key}" } source { @@ -3956,7 +3956,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket content = "test" key = "test.txt" @@ -3977,7 +3977,7 @@ resource "aws_codebuild_project" "test" { } source { - location = "${aws_s3_bucket.test.bucket}/${aws_s3_bucket_object.test.key}" + location = "${aws_s3_bucket.test.bucket}/${aws_s3_object.test.key}" type = "S3" } } diff --git a/internal/service/cognitoidp/user_pools_data_source.go b/internal/service/cognitoidp/user_pools_data_source.go index 0c29cc0e82a..30ed24abdd4 100644 --- 
a/internal/service/cognitoidp/user_pools_data_source.go +++ b/internal/service/cognitoidp/user_pools_data_source.go @@ -13,85 +13,86 @@ import ( func DataSourceUserPools() *schema.Resource { return &schema.Resource{ Read: dataSourceUserPoolsRead, + Schema: map[string]*schema.Schema{ - "name": { - Type: schema.TypeString, - Required: true, - }, - "ids": { - Type: schema.TypeSet, + "arns": { + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "arns": { - Type: schema.TypeSet, + "ids": { + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, + "name": { + Type: schema.TypeString, + Required: true, + }, }, } } func dataSourceUserPoolsRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).CognitoIDPConn - name := d.Get("name").(string) - var ids []string - var arns []string - pools, err := getAllCognitoUserPools(conn) + output, err := findUserPoolDescriptionTypes(conn) + if err != nil { - return fmt.Errorf("Error listing cognito user pools: %w", err) + return fmt.Errorf("error reading Cognito User Pools: %w", err) } - for _, pool := range pools { - if name == aws.StringValue(pool.Name) { - id := aws.StringValue(pool.Id) - arn := arn.ARN{ - Partition: meta.(*conns.AWSClient).Partition, - Service: "cognito-idp", - Region: meta.(*conns.AWSClient).Region, - AccountID: meta.(*conns.AWSClient).AccountID, - Resource: fmt.Sprintf("userpool/%s", id), - }.String() - - ids = append(ids, id) - arns = append(arns, arn) + + name := d.Get("name").(string) + var arns, userPoolIDs []string + + for _, v := range output { + if name != aws.StringValue(v.Name) { + continue } - } - if len(ids) == 0 { - return fmt.Errorf("No cognito user pool found with name: %s", name) + userPoolID := aws.StringValue(v.Id) + arn := arn.ARN{ + Partition: meta.(*conns.AWSClient).Partition, + Service: cognitoidentityprovider.ServiceName, + Region: meta.(*conns.AWSClient).Region, + AccountID: 
meta.(*conns.AWSClient).AccountID, + Resource: fmt.Sprintf("userpool/%s", userPoolID), + }.String() + + userPoolIDs = append(userPoolIDs, userPoolID) + arns = append(arns, arn) } d.SetId(name) - d.Set("ids", ids) + d.Set("ids", userPoolIDs) d.Set("arns", arns) return nil } -func getAllCognitoUserPools(conn *cognitoidentityprovider.CognitoIdentityProvider) ([]*cognitoidentityprovider.UserPoolDescriptionType, error) { - var pools []*cognitoidentityprovider.UserPoolDescriptionType - var nextToken string +func findUserPoolDescriptionTypes(conn *cognitoidentityprovider.CognitoIdentityProvider) ([]*cognitoidentityprovider.UserPoolDescriptionType, error) { + input := &cognitoidentityprovider.ListUserPoolsInput{ + MaxResults: aws.Int64(60), + } + var output []*cognitoidentityprovider.UserPoolDescriptionType - for { - input := &cognitoidentityprovider.ListUserPoolsInput{ - // MaxResults Valid Range: Minimum value of 1. Maximum value of 60 - MaxResults: aws.Int64(60), - } - if nextToken != "" { - input.NextToken = aws.String(nextToken) - } - out, err := conn.ListUserPools(input) - if err != nil { - return pools, err + err := conn.ListUserPoolsPages(input, func(page *cognitoidentityprovider.ListUserPoolsOutput, lastPage bool) bool { + if page == nil { + return !lastPage } - pools = append(pools, out.UserPools...) 
- if out.NextToken == nil { - break + for _, v := range page.UserPools { + if v != nil { + output = append(output, v) + } } - nextToken = aws.StringValue(out.NextToken) + + return !lastPage + }) + + if err != nil { + return nil, err } - return pools, nil + return output, nil } diff --git a/internal/service/cognitoidp/user_pools_data_source_test.go b/internal/service/cognitoidp/user_pools_data_source_test.go index ecce4ddc6bd..1a5eb7cbd4a 100644 --- a/internal/service/cognitoidp/user_pools_data_source_test.go +++ b/internal/service/cognitoidp/user_pools_data_source_test.go @@ -2,7 +2,6 @@ package cognitoidp_test import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/service/cognitoidentityprovider" @@ -12,44 +11,43 @@ import ( ) func TestAccCognitoIDPUserPoolsDataSource_basic(t *testing.T) { - rName := fmt.Sprintf("tf_acc_ds_cognito_user_pools_%s", sdkacctest.RandString(7)) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t); testAccPreCheckIdentityProvider(t) }, ErrorCheck: acctest.ErrorCheck(t, cognitoidentityprovider.EndpointsID), Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccUserPoolsDataSourceConfig_basic(rName), + Config: testAccUserPoolsDataSourceConfig(rName), Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr("data.aws_cognito_user_pools.selected", "ids.#", "2"), - resource.TestCheckResourceAttr("data.aws_cognito_user_pools.selected", "arns.#", "2"), + resource.TestCheckResourceAttr("data.aws_cognito_user_pools.test", "arns.#", "2"), + resource.TestCheckResourceAttr("data.aws_cognito_user_pools.test", "ids.#", "2"), + resource.TestCheckResourceAttr("data.aws_cognito_user_pools.empty", "arns.#", "0"), + resource.TestCheckResourceAttr("data.aws_cognito_user_pools.empty", "ids.#", "0"), ), }, - { - Config: testAccUserPoolsDataSourceConfig_notFound(rName), - ExpectError: regexp.MustCompile(`No cognito user 
pool found with name:`), - }, }, }) } -func testAccUserPoolsDataSourceConfig_basic(rName string) string { +func testAccUserPoolsDataSourceConfig(rName string) string { return fmt.Sprintf(` -resource "aws_cognito_user_pool" "main" { +resource "aws_cognito_user_pool" "test" { count = 2 - name = "%s" + name = %[1]q } -data "aws_cognito_user_pools" "selected" { - name = aws_cognito_user_pool.main.*.name[0] -} -`, rName) +data "aws_cognito_user_pools" "test" { + name = %[1]q + + depends_on = [aws_cognito_user_pool.test[0], aws_cognito_user_pool.test[1]] } -func testAccUserPoolsDataSourceConfig_notFound(rName string) string { - return fmt.Sprintf(` -data "aws_cognito_user_pools" "selected" { - name = "%s-not-found" +data "aws_cognito_user_pools" "empty" { + name = "not.%[1]s" + + depends_on = [aws_cognito_user_pool.test[0], aws_cognito_user_pool.test[1]] } `, rName) } diff --git a/internal/service/configservice/conformance_pack_test.go b/internal/service/configservice/conformance_pack_test.go index 28141d408bd..ba32d579170 100644 --- a/internal/service/configservice/conformance_pack_test.go +++ b/internal/service/configservice/conformance_pack_test.go @@ -657,7 +657,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = %[1]q content = < 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return output[0], nil +} + +func FindEBSVolumes(conn *ec2.EC2, input *ec2.DescribeVolumesInput) ([]*ec2.Volume, error) { + var output []*ec2.Volume + + err := conn.DescribeVolumesPages(input, func(page *ec2.DescribeVolumesOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.Volumes { + if v != nil { + output = append(output, v) + } + } + + return !lastPage + }) + + if tfawserr.ErrCodeEquals(err, ErrCodeInvalidVolumeNotFound) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: 
input, + } + } + + if err != nil { + return nil, err + } + + return output, nil +} + +func FindEBSVolume(conn *ec2.EC2, input *ec2.DescribeVolumesInput) (*ec2.Volume, error) { + output, err := FindEBSVolumes(conn, input) + + if err != nil { + return nil, err + } + + if len(output) == 0 || output[0] == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if count := len(output); count > 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return output[0], nil +} + +func FindEIPs(conn *ec2.EC2, input *ec2.DescribeAddressesInput) ([]*ec2.Address, error) { + var addresses []*ec2.Address + + output, err := conn.DescribeAddresses(input) + + if tfawserr.ErrCodeEquals(err, ErrCodeInvalidAddressNotFound, ErrCodeInvalidAllocationIDNotFound) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + for _, v := range output.Addresses { + if v != nil { + addresses = append(addresses, v) + } + } + + return addresses, nil +} + func FindHostByID(conn *ec2.EC2, id string) (*ec2.Host, error) { input := &ec2.DescribeHostsInput{ HostIds: aws.StringSlice([]string{id}), @@ -228,23 +351,211 @@ func FindHost(conn *ec2.EC2, input *ec2.DescribeHostsInput) (*ec2.Host, error) { return host, nil } -// FindInstanceByID looks up a Instance by ID. When not found, returns nil and potentially an API error. 
+func FindInstances(conn *ec2.EC2, input *ec2.DescribeInstancesInput) ([]*ec2.Instance, error) { + var output []*ec2.Instance + + err := conn.DescribeInstancesPages(input, func(page *ec2.DescribeInstancesOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.Reservations { + if v != nil { + for _, v := range v.Instances { + if v != nil { + output = append(output, v) + } + } + } + } + + return !lastPage + }) + + if tfawserr.ErrCodeEquals(err, ErrCodeInvalidInstanceIDNotFound) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + return output, nil +} + +func FindInstance(conn *ec2.EC2, input *ec2.DescribeInstancesInput) (*ec2.Instance, error) { + output, err := FindInstances(conn, input) + + if err != nil { + return nil, err + } + + if len(output) == 0 || output[0] == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if count := len(output); count > 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return output[0], nil +} + func FindInstanceByID(conn *ec2.EC2, id string) (*ec2.Instance, error) { input := &ec2.DescribeInstancesInput{ InstanceIds: aws.StringSlice([]string{id}), } - output, err := conn.DescribeInstances(input) + output, err := FindInstance(conn, input) if err != nil { return nil, err } - if output == nil || len(output.Reservations) == 0 || output.Reservations[0] == nil || len(output.Reservations[0].Instances) == 0 || output.Reservations[0].Instances[0] == nil { - return nil, nil + if state := aws.StringValue(output.State.Name); state == ec2.InstanceStateNameTerminated { + return nil, &resource.NotFoundError{ + Message: state, + LastRequest: input, + } + } + + // Eventual consistency check. 
+ if aws.StringValue(output.InstanceId) != id { + return nil, &resource.NotFoundError{ + LastRequest: input, + } } - return output.Reservations[0].Instances[0], nil + return output, nil +} + +func FindLocalGatewayRouteTables(conn *ec2.EC2, input *ec2.DescribeLocalGatewayRouteTablesInput) ([]*ec2.LocalGatewayRouteTable, error) { + var output []*ec2.LocalGatewayRouteTable + + err := conn.DescribeLocalGatewayRouteTablesPages(input, func(page *ec2.DescribeLocalGatewayRouteTablesOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.LocalGatewayRouteTables { + if v != nil { + output = append(output, v) + } + } + + return !lastPage + }) + + if err != nil { + return nil, err + } + + return output, nil +} + +func FindLocalGatewayRouteTable(conn *ec2.EC2, input *ec2.DescribeLocalGatewayRouteTablesInput) (*ec2.LocalGatewayRouteTable, error) { + output, err := FindLocalGatewayRouteTables(conn, input) + + if err != nil { + return nil, err + } + + if len(output) == 0 || output[0] == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if count := len(output); count > 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return output[0], nil +} + +func FindLocalGatewayVirtualInterfaceGroups(conn *ec2.EC2, input *ec2.DescribeLocalGatewayVirtualInterfaceGroupsInput) ([]*ec2.LocalGatewayVirtualInterfaceGroup, error) { + var output []*ec2.LocalGatewayVirtualInterfaceGroup + + err := conn.DescribeLocalGatewayVirtualInterfaceGroupsPages(input, func(page *ec2.DescribeLocalGatewayVirtualInterfaceGroupsOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.LocalGatewayVirtualInterfaceGroups { + if v != nil { + output = append(output, v) + } + } + + return !lastPage + }) + + if err != nil { + return nil, err + } + + return output, nil +} + +func FindLocalGatewayVirtualInterfaceGroup(conn *ec2.EC2, input *ec2.DescribeLocalGatewayVirtualInterfaceGroupsInput) 
(*ec2.LocalGatewayVirtualInterfaceGroup, error) { + output, err := FindLocalGatewayVirtualInterfaceGroups(conn, input) + + if err != nil { + return nil, err + } + + if len(output) == 0 || output[0] == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if count := len(output); count > 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return output[0], nil +} + +func FindLocalGateways(conn *ec2.EC2, input *ec2.DescribeLocalGatewaysInput) ([]*ec2.LocalGateway, error) { + var output []*ec2.LocalGateway + + err := conn.DescribeLocalGatewaysPages(input, func(page *ec2.DescribeLocalGatewaysOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.LocalGateways { + if v != nil { + output = append(output, v) + } + } + + return !lastPage + }) + + if err != nil { + return nil, err + } + + return output, nil +} + +func FindLocalGateway(conn *ec2.EC2, input *ec2.DescribeLocalGatewaysInput) (*ec2.LocalGateway, error) { + output, err := FindLocalGateways(conn, input) + + if err != nil { + return nil, err + } + + if len(output) == 0 || output[0] == nil { + return nil, tfresource.NewEmptyResultError(input) + } + + if count := len(output); count > 1 { + return nil, tfresource.NewTooManyResultsError(count, input) + } + + return output[0], nil } func FindNetworkACL(conn *ec2.EC2, input *ec2.DescribeNetworkAclsInput) (*ec2.NetworkAcl, error) { @@ -1740,6 +2051,37 @@ func FindTransitGatewayAttachmentByID(conn *ec2.EC2, id string) (*ec2.TransitGat return output, nil } +func FindTransitGatewayRouteTables(conn *ec2.EC2, input *ec2.DescribeTransitGatewayRouteTablesInput) ([]*ec2.TransitGatewayRouteTable, error) { + var output []*ec2.TransitGatewayRouteTable + + err := conn.DescribeTransitGatewayRouteTablesPages(input, func(page *ec2.DescribeTransitGatewayRouteTablesOutput, lastPage bool) bool { + if page == nil { + return !lastPage + } + + for _, v := range page.TransitGatewayRouteTables { + if v != nil { + 
output = append(output, v) + } + } + + return !lastPage + }) + + if tfawserr.ErrCodeEquals(err, ErrCodeInvalidRouteTableIDNotFound) { + return nil, &resource.NotFoundError{ + LastError: err, + LastRequest: input, + } + } + + if err != nil { + return nil, err + } + + return output, nil +} + func FindDHCPOptions(conn *ec2.EC2, input *ec2.DescribeDhcpOptionsInput) (*ec2.DhcpOptions, error) { output, err := FindDHCPOptionses(conn, input) diff --git a/internal/service/ec2/instance_test.go b/internal/service/ec2/instance_test.go index 23421c6b304..00a0421f6bf 100644 --- a/internal/service/ec2/instance_test.go +++ b/internal/service/ec2/instance_test.go @@ -4,7 +4,6 @@ import ( "fmt" "reflect" "regexp" - "sort" "strings" "testing" "time" @@ -3988,24 +3987,33 @@ func testAccPreCheckEC2ClassicOrHasDefaultVPCWithDefaultSubnets(t *testing.T) { } } -// hasDefaultVPC returns whether the current AWS region has a default VPC. -func hasDefaultVPC(t *testing.T) bool { +// defaultVPC returns the ID of the default VPC for the current AWS Region, or "" if none exists. 
+func defaultVPC(t *testing.T) string { conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn - resp, err := conn.DescribeAccountAttributes(&ec2.DescribeAccountAttributesInput{ + output, err := conn.DescribeAccountAttributes(&ec2.DescribeAccountAttributesInput{ AttributeNames: aws.StringSlice([]string{ec2.AccountAttributeNameDefaultVpc}), }) - if acctest.PreCheckSkipError(err) || - len(resp.AccountAttributes) == 0 || - len(resp.AccountAttributes[0].AttributeValues) == 0 || - aws.StringValue(resp.AccountAttributes[0].AttributeValues[0].AttributeValue) == "none" { - return false + + if acctest.PreCheckSkipError(err) { + return "" } + if err != nil { t.Fatalf("error describing EC2 account attributes: %s", err) } - return true + if len(output.AccountAttributes) > 0 && len(output.AccountAttributes[0].AttributeValues) > 0 { + if v := aws.StringValue(output.AccountAttributes[0].AttributeValues[0].AttributeValue); v != "none" { + return v + } + } + + return "" +} + +func hasDefaultVPC(t *testing.T) bool { + return defaultVPC(t) != "" } // defaultSubnetCount returns the number of default subnets in the current region's default VPC. 
@@ -4013,66 +4021,24 @@ func defaultSubnetCount(t *testing.T) int { conn := acctest.Provider.Meta().(*conns.AWSClient).EC2Conn input := &ec2.DescribeSubnetsInput{ - Filters: buildAttributeFilterList(map[string]string{ - "defaultForAz": "true", - }), - } - output, err := conn.DescribeSubnets(input) - if acctest.PreCheckSkipError(err) { - return 0 - } - if err != nil { - t.Fatalf("error describing default subnets: %s", err) + Filters: tfec2.BuildAttributeFilterList( + map[string]string{ + "defaultForAz": "true", + }, + ), } - return len(output.Subnets) -} + subnets, err := tfec2.FindSubnets(conn, input) -// buildAttributeFilterList takes a flat map of scalar attributes (most -// likely values extracted from a *schema.ResourceData on an EC2-querying -// data source) and produces a []*ec2.Filter representing an exact match -// for each of the given non-empty attributes. -// -// The keys of the given attributes map are the attribute names expected -// by the EC2 API, which are usually either in camelcase or with dash-separated -// words. We conventionally map these to underscore-separated identifiers -// with the same words when presenting these as data source query attributes -// in Terraform. -// -// It's the callers responsibility to transform any non-string values into -// the appropriate string serialization required by the AWS API when -// encoding the given filter. Any attributes given with empty string values -// are ignored, assuming that the user wishes to leave that attribute -// unconstrained while filtering. -// -// The purpose of this function is to create values to pass in -// for the "Filters" attribute on most of the "Describe..." API functions in -// the EC2 API, to aid in the implementation of Terraform data sources that -// retrieve data about EC2 objects. 
-func buildAttributeFilterList(attrs map[string]string) []*ec2.Filter { - var filters []*ec2.Filter - - // sort the filters by name to make the output deterministic - var names []string - for filterName := range attrs { - names = append(names, filterName) + if acctest.PreCheckSkipError(err) { + return 0 } - sort.Strings(names) - - for _, filterName := range names { - value := attrs[filterName] - if value == "" { - continue - } - - filters = append(filters, &ec2.Filter{ - Name: aws.String(filterName), - Values: []*string{aws.String(value)}, - }) + if err != nil { + t.Fatalf("error listing default subnets: %s", err) } - return filters + return len(subnets) } func testAccAvailableAZsWavelengthZonesExcludeConfig(excludeZoneIds ...string) string { diff --git a/internal/service/ec2/instance_types_data_source_test.go b/internal/service/ec2/instance_types_data_source_test.go index 4ae7ca75492..35e9d4f26d9 100644 --- a/internal/service/ec2/instance_types_data_source_test.go +++ b/internal/service/ec2/instance_types_data_source_test.go @@ -21,7 +21,7 @@ func TestAccEC2InstanceTypesDataSource_basic(t *testing.T) { { Config: testAccInstanceTypesDataSourceConfig(), Check: resource.ComposeTestCheckFunc( - testCheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", "0"), ), }, }, @@ -40,7 +40,7 @@ func TestAccEC2InstanceTypesDataSource_filter(t *testing.T) { { Config: testAccInstanceTypesDataSourceConfigFilter(), Check: resource.ComposeTestCheckFunc( - testCheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "instance_types.#", "0"), ), }, }, diff --git a/internal/service/ec2/instances_data_source.go b/internal/service/ec2/instances_data_source.go index b318933eae9..4f9d9b2e1e0 100644 --- a/internal/service/ec2/instances_data_source.go +++ b/internal/service/ec2/instances_data_source.go @@ -2,7 +2,6 
@@ package ec2 import ( "fmt" - "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" @@ -18,29 +17,21 @@ func DataSourceInstances() *schema.Resource { Read: dataSourceInstancesRead, Schema: map[string]*schema.Schema{ - "filter": DataSourceFiltersSchema(), + "filter": DataSourceFiltersSchema(), + "ids": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "instance_tags": tftags.TagsSchemaComputed(), "instance_state_names": { Type: schema.TypeSet, Optional: true, Elem: &schema.Schema{ - Type: schema.TypeString, - ValidateFunc: validation.StringInSlice([]string{ - ec2.InstanceStateNamePending, - ec2.InstanceStateNameRunning, - ec2.InstanceStateNameShuttingDown, - ec2.InstanceStateNameStopped, - ec2.InstanceStateNameStopping, - ec2.InstanceStateNameTerminated, - }, false), + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(ec2.InstanceStateName_Values(), false), }, }, - - "ids": { - Type: schema.TypeList, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - }, "private_ips": { Type: schema.TypeList, Computed: true, @@ -58,75 +49,54 @@ func DataSourceInstances() *schema.Resource { func dataSourceInstancesRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).EC2Conn - filters, filtersOk := d.GetOk("filter") - tags, tagsOk := d.GetOk("instance_tags") - - if !filtersOk && !tagsOk { - return fmt.Errorf("One of filters or instance_tags must be assigned") - } - - instanceStateNames := []*string{aws.String(ec2.InstanceStateNameRunning)} - if v, ok := d.GetOk("instance_state_names"); ok && len(v.(*schema.Set).List()) > 0 { - instanceStateNames = flex.ExpandStringSet(v.(*schema.Set)) - } - params := &ec2.DescribeInstancesInput{ - Filters: []*ec2.Filter{ - { - Name: aws.String("instance-state-name"), - Values: instanceStateNames, - }, - }, + input := &ec2.DescribeInstancesInput{} + + if v, ok := d.GetOk("instance_state_names"); ok && 
v.(*schema.Set).Len() > 0 { + input.Filters = append(input.Filters, &ec2.Filter{ + Name: aws.String("instance-state-name"), + Values: flex.ExpandStringSet(v.(*schema.Set)), + }) + } else { + input.Filters = append(input.Filters, &ec2.Filter{ + Name: aws.String("instance-state-name"), + Values: aws.StringSlice([]string{ec2.InstanceStateNameRunning}), + }) } - if filtersOk { - params.Filters = append(params.Filters, - BuildFiltersDataSource(filters.(*schema.Set))...) - } - if tagsOk { - params.Filters = append(params.Filters, BuildTagFilterList( - Tags(tftags.New(tags.(map[string]interface{}))), - )...) - } + input.Filters = append(input.Filters, BuildTagFilterList( + Tags(tftags.New(d.Get("instance_tags").(map[string]interface{}))), + )...) - log.Printf("[DEBUG] Reading EC2 instances: %s", params) - - var instanceIds, privateIps, publicIps []string - err := conn.DescribeInstancesPages(params, func(resp *ec2.DescribeInstancesOutput, lastPage bool) bool { - for _, res := range resp.Reservations { - for _, instance := range res.Instances { - instanceIds = append(instanceIds, *instance.InstanceId) - if instance.PrivateIpAddress != nil { - privateIps = append(privateIps, *instance.PrivateIpAddress) - } - if instance.PublicIpAddress != nil { - publicIps = append(publicIps, *instance.PublicIpAddress) - } - } - } - return !lastPage - }) - if err != nil { - return err - } + input.Filters = append(input.Filters, BuildFiltersDataSource( + d.Get("filter").(*schema.Set), + )...) - if len(instanceIds) < 1 { - return fmt.Errorf("Your query returned no results. 
Please change your search criteria and try again.") + if len(input.Filters) == 0 { + input.Filters = nil } - log.Printf("[DEBUG] Found %d instances via given filter", len(instanceIds)) + output, err := FindInstances(conn, input) - d.SetId(meta.(*conns.AWSClient).Region) - - err = d.Set("ids", instanceIds) if err != nil { - return err + return fmt.Errorf("error reading EC2 Instances: %w", err) } - err = d.Set("private_ips", privateIps) - if err != nil { - return err + var instanceIDs, privateIPs, publicIPs []string + + for _, v := range output { + instanceIDs = append(instanceIDs, aws.StringValue(v.InstanceId)) + if privateIP := aws.StringValue(v.PrivateIpAddress); privateIP != "" { + privateIPs = append(privateIPs, privateIP) + } + if publicIP := aws.StringValue(v.PublicIpAddress); publicIP != "" { + publicIPs = append(publicIPs, publicIP) + } } - err = d.Set("public_ips", publicIps) - return err + d.SetId(meta.(*conns.AWSClient).Region) + d.Set("ids", instanceIDs) + d.Set("private_ips", privateIPs) + d.Set("public_ips", publicIPs) + + return nil } diff --git a/internal/service/ec2/instances_data_source_test.go b/internal/service/ec2/instances_data_source_test.go index 9f4c2ca7887..9a0d7e9a6dd 100644 --- a/internal/service/ec2/instances_data_source_test.go +++ b/internal/service/ec2/instances_data_source_test.go @@ -67,6 +67,26 @@ func TestAccEC2InstancesDataSource_instanceStateNames(t *testing.T) { }) } +func TestAccEC2InstancesDataSource_empty(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccInstancesDataSourceConfig_empty(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_instances.test", "ids.#", "0"), + resource.TestCheckResourceAttr("data.aws_instances.test", 
"private_ips.#", "0"), + resource.TestCheckResourceAttr("data.aws_instances.test", "public_ips.#", "0"), + ), + }, + }, + }) +} + func testAccInstancesDataSourceConfig_ids(rName string) string { return acctest.ConfigCompose( acctest.ConfigLatestAmazonLinuxHvmEbsAmi(), @@ -78,7 +98,7 @@ resource "aws_instance" "test" { instance_type = data.aws_ec2_instance_type_offering.available.instance_type tags = { - Name = %q + Name = %[1]q } } @@ -129,7 +149,7 @@ resource "aws_instance" "test" { instance_type = data.aws_ec2_instance_type_offering.available.instance_type tags = { - Name = %q + Name = %[1]q } } @@ -143,3 +163,13 @@ data "aws_instances" "test" { } `, rName)) } + +func testAccInstancesDataSourceConfig_empty(rName string) string { + return fmt.Sprintf(` +data "aws_instances" "test" { + instance_tags = { + Name = %[1]q + } +} +`, rName) +} diff --git a/internal/service/ec2/local_gateway_route_tables_data_source.go b/internal/service/ec2/local_gateway_route_tables_data_source.go index d16d24e4ec4..4db12ee2680 100644 --- a/internal/service/ec2/local_gateway_route_tables_data_source.go +++ b/internal/service/ec2/local_gateway_route_tables_data_source.go @@ -13,17 +13,15 @@ import ( func DataSourceLocalGatewayRouteTables() *schema.Resource { return &schema.Resource{ Read: dataSourceLocalGatewayRouteTablesRead, - Schema: map[string]*schema.Schema{ - "filter": CustomFiltersSchema(), - - "tags": tftags.TagsSchemaComputed(), + Schema: map[string]*schema.Schema{ + "filter": DataSourceFiltersSchema(), "ids": { - Type: schema.TypeSet, + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, }, + "tags": tftags.TagsSchemaComputed(), }, } } @@ -31,55 +29,34 @@ func DataSourceLocalGatewayRouteTables() *schema.Resource { func dataSourceLocalGatewayRouteTablesRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).EC2Conn - req := &ec2.DescribeLocalGatewayRouteTablesInput{} + input := 
&ec2.DescribeLocalGatewayRouteTablesInput{} - req.Filters = append(req.Filters, BuildTagFilterList( + input.Filters = append(input.Filters, BuildTagFilterList( Tags(tftags.New(d.Get("tags").(map[string]interface{}))), )...) - req.Filters = append(req.Filters, BuildCustomFilterList( + input.Filters = append(input.Filters, BuildFiltersDataSource( d.Get("filter").(*schema.Set), )...) - if len(req.Filters) == 0 { - // Don't send an empty filters list; the EC2 API won't accept it. - req.Filters = nil - } - - var localGatewayRouteTables []*ec2.LocalGatewayRouteTable - err := conn.DescribeLocalGatewayRouteTablesPages(req, func(page *ec2.DescribeLocalGatewayRouteTablesOutput, lastPage bool) bool { - if page == nil { - return !lastPage - } - - localGatewayRouteTables = append(localGatewayRouteTables, page.LocalGatewayRouteTables...) + if len(input.Filters) == 0 { + input.Filters = nil + } - return !lastPage - }) + output, err := FindLocalGatewayRouteTables(conn, input) if err != nil { - return fmt.Errorf("error describing EC2 Local Gateway Route Tables: %w", err) - } - - if len(localGatewayRouteTables) == 0 { - return fmt.Errorf("no matching EC2 Local Gateway Route Tables found") + return fmt.Errorf("error reading EC2 Local Gateway Route Tables: %w", err) } - var ids []string + var routeTableIDs []string - for _, localGatewayRouteTable := range localGatewayRouteTables { - if localGatewayRouteTable == nil { - continue - } - - ids = append(ids, aws.StringValue(localGatewayRouteTable.LocalGatewayRouteTableId)) + for _, v := range output { + routeTableIDs = append(routeTableIDs, aws.StringValue(v.LocalGatewayRouteTableId)) } d.SetId(meta.(*conns.AWSClient).Region) - - if err := d.Set("ids", ids); err != nil { - return fmt.Errorf("error setting ids: %w", err) - } + d.Set("ids", routeTableIDs) return nil } diff --git a/internal/service/ec2/local_gateway_route_tables_data_source_test.go b/internal/service/ec2/local_gateway_route_tables_data_source_test.go index 
c37a6dc4f1f..f13e5fe12db 100644 --- a/internal/service/ec2/local_gateway_route_tables_data_source_test.go +++ b/internal/service/ec2/local_gateway_route_tables_data_source_test.go @@ -19,7 +19,7 @@ func TestAccEC2LocalGatewayRouteTablesDataSource_basic(t *testing.T) { { Config: testAccLocalGatewayRouteTablesDataSourceConfig(), Check: resource.ComposeTestCheckFunc( - testCheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"), ), }, }, diff --git a/internal/service/ec2/local_gateway_virtual_interface_groups_data_source.go b/internal/service/ec2/local_gateway_virtual_interface_groups_data_source.go index 7a61ecbe88d..b4481123d48 100644 --- a/internal/service/ec2/local_gateway_virtual_interface_groups_data_source.go +++ b/internal/service/ec2/local_gateway_virtual_interface_groups_data_source.go @@ -3,6 +3,7 @@ package ec2 import ( "fmt" + "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" "github.com/hashicorp/terraform-provider-aws/internal/conns" @@ -14,14 +15,14 @@ func DataSourceLocalGatewayVirtualInterfaceGroups() *schema.Resource { Read: dataSourceLocalGatewayVirtualInterfaceGroupsRead, Schema: map[string]*schema.Schema{ - "filter": CustomFiltersSchema(), + "filter": DataSourceFiltersSchema(), "ids": { - Type: schema.TypeSet, + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, "local_gateway_virtual_interface_ids": { - Type: schema.TypeSet, + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, @@ -39,55 +40,30 @@ func dataSourceLocalGatewayVirtualInterfaceGroupsRead(d *schema.ResourceData, me Tags(tftags.New(d.Get("tags").(map[string]interface{}))), )...) - input.Filters = append(input.Filters, BuildCustomFilterList( + input.Filters = append(input.Filters, BuildFiltersDataSource( d.Get("filter").(*schema.Set), )...) 
if len(input.Filters) == 0 { - // Don't send an empty filters list; the EC2 API won't accept it. input.Filters = nil } - var localGatewayVirtualInterfaceGroups []*ec2.LocalGatewayVirtualInterfaceGroup - - err := conn.DescribeLocalGatewayVirtualInterfaceGroupsPages(input, func(page *ec2.DescribeLocalGatewayVirtualInterfaceGroupsOutput, lastPage bool) bool { - if page == nil { - return !lastPage - } - - localGatewayVirtualInterfaceGroups = append(localGatewayVirtualInterfaceGroups, page.LocalGatewayVirtualInterfaceGroups...) - - return !lastPage - }) + output, err := FindLocalGatewayVirtualInterfaceGroups(conn, input) if err != nil { - return fmt.Errorf("error describing EC2 Local Gateway Virtual Interface Groups: %w", err) - } - - if len(localGatewayVirtualInterfaceGroups) == 0 { - return fmt.Errorf("no matching EC2 Local Gateway Virtual Interface Groups found") + return fmt.Errorf("error reading EC2 Local Gateway Virtual Interface Groups: %w", err) } - var ids, localGatewayVirtualInterfaceIds []*string + var groupIDs, interfaceIDs []string - for _, group := range localGatewayVirtualInterfaceGroups { - if group == nil { - continue - } - - ids = append(ids, group.LocalGatewayVirtualInterfaceGroupId) - localGatewayVirtualInterfaceIds = append(localGatewayVirtualInterfaceIds, group.LocalGatewayVirtualInterfaceIds...) + for _, v := range output { + groupIDs = append(groupIDs, aws.StringValue(v.LocalGatewayVirtualInterfaceGroupId)) + interfaceIDs = append(interfaceIDs, aws.StringValueSlice(v.LocalGatewayVirtualInterfaceIds)...) 
} d.SetId(meta.(*conns.AWSClient).Region) - - if err := d.Set("ids", ids); err != nil { - return fmt.Errorf("error setting ids: %w", err) - } - - if err := d.Set("local_gateway_virtual_interface_ids", localGatewayVirtualInterfaceIds); err != nil { - return fmt.Errorf("error setting local_gateway_virtual_interface_ids: %w", err) - } + d.Set("ids", groupIDs) + d.Set("local_gateway_virtual_interface_ids", interfaceIDs) return nil } diff --git a/internal/service/ec2/local_gateways_data_source.go b/internal/service/ec2/local_gateways_data_source.go index cf46eae5255..9a4ae075086 100644 --- a/internal/service/ec2/local_gateways_data_source.go +++ b/internal/service/ec2/local_gateways_data_source.go @@ -14,16 +14,13 @@ func DataSourceLocalGateways() *schema.Resource { return &schema.Resource{ Read: dataSourceLocalGatewaysRead, Schema: map[string]*schema.Schema{ - "filter": CustomFiltersSchema(), - - "tags": tftags.TagsSchemaComputed(), - + "filter": DataSourceFiltersSchema(), "ids": { - Type: schema.TypeSet, + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, }, + "tags": tftags.TagsSchemaComputed(), }, } } @@ -31,59 +28,34 @@ func DataSourceLocalGateways() *schema.Resource { func dataSourceLocalGatewaysRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).EC2Conn - req := &ec2.DescribeLocalGatewaysInput{} + input := &ec2.DescribeLocalGatewaysInput{} - if tags, tagsOk := d.GetOk("tags"); tagsOk { - req.Filters = append(req.Filters, BuildTagFilterList( - Tags(tftags.New(tags.(map[string]interface{}))), - )...) - } - - if filters, filtersOk := d.GetOk("filter"); filtersOk { - req.Filters = append(req.Filters, BuildCustomFilterList( - filters.(*schema.Set), - )...) - } - if len(req.Filters) == 0 { - // Don't send an empty filters list; the EC2 API won't accept it. 
- req.Filters = nil - } + input.Filters = append(input.Filters, BuildTagFilterList( + Tags(tftags.New(d.Get("tags").(map[string]interface{}))), + )...) - var localGateways []*ec2.LocalGateway + input.Filters = append(input.Filters, BuildFiltersDataSource( + d.Get("filter").(*schema.Set), + )...) - err := conn.DescribeLocalGatewaysPages(req, func(page *ec2.DescribeLocalGatewaysOutput, lastPage bool) bool { - if page == nil { - return !lastPage - } - - localGateways = append(localGateways, page.LocalGateways...) + if len(input.Filters) == 0 { + input.Filters = nil + } - return !lastPage - }) + output, err := FindLocalGateways(conn, input) if err != nil { - return fmt.Errorf("error describing EC2 Local Gateways: %w", err) - } - - if len(localGateways) == 0 { - return fmt.Errorf("no matching EC2 Local Gateways found") + return fmt.Errorf("error reading EC2 Local Gateways: %w", err) } - var ids []string + var gatewayIDs []string - for _, localGateway := range localGateways { - if localGateway == nil { - continue - } - - ids = append(ids, aws.StringValue(localGateway.LocalGatewayId)) + for _, v := range output { + gatewayIDs = append(gatewayIDs, aws.StringValue(v.LocalGatewayId)) } d.SetId(meta.(*conns.AWSClient).Region) - - if err := d.Set("ids", ids); err != nil { - return fmt.Errorf("error setting ids: %w", err) - } + d.Set("ids", gatewayIDs) return nil } diff --git a/internal/service/ec2/local_gateways_data_source_test.go b/internal/service/ec2/local_gateways_data_source_test.go index a26c4a03d72..0fee22703c1 100644 --- a/internal/service/ec2/local_gateways_data_source_test.go +++ b/internal/service/ec2/local_gateways_data_source_test.go @@ -19,7 +19,7 @@ func TestAccEC2LocalGatewaysDataSource_basic(t *testing.T) { { Config: testAccLocalGatewaysDataSourceConfig(), Check: resource.ComposeTestCheckFunc( - testCheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"), ), }, }, diff --git 
a/internal/service/ec2/network_acls_data_source.go b/internal/service/ec2/network_acls_data_source.go index 26bd24308a3..7de247c6ace 100644 --- a/internal/service/ec2/network_acls_data_source.go +++ b/internal/service/ec2/network_acls_data_source.go @@ -1,9 +1,7 @@ package ec2 import ( - "errors" "fmt" - "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" @@ -16,21 +14,17 @@ func DataSourceNetworkACLs() *schema.Resource { return &schema.Resource{ Read: dataSourceNetworkACLsRead, Schema: map[string]*schema.Schema{ - "filter": CustomFiltersSchema(), - + "filter": DataSourceFiltersSchema(), + "ids": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "tags": tftags.TagsSchemaComputed(), - "vpc_id": { Type: schema.TypeString, Optional: true, }, - - "ids": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, - }, }, } } @@ -38,57 +32,42 @@ func DataSourceNetworkACLs() *schema.Resource { func dataSourceNetworkACLsRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).EC2Conn - req := &ec2.DescribeNetworkAclsInput{} + input := &ec2.DescribeNetworkAclsInput{} if v, ok := d.GetOk("vpc_id"); ok { - req.Filters = BuildAttributeFilterList( + input.Filters = append(input.Filters, BuildAttributeFilterList( map[string]string{ "vpc-id": v.(string), }, - ) + )...) } - filters, filtersOk := d.GetOk("filter") - tags, tagsOk := d.GetOk("tags") + input.Filters = append(input.Filters, BuildTagFilterList( + Tags(tftags.New(d.Get("tags").(map[string]interface{}))), + )...) - if tagsOk { - req.Filters = append(req.Filters, BuildTagFilterList( - Tags(tftags.New(tags.(map[string]interface{}))), - )...) - } + input.Filters = append(input.Filters, BuildFiltersDataSource( + d.Get("filter").(*schema.Set), + )...) - if filtersOk { - req.Filters = append(req.Filters, BuildCustomFilterList( - filters.(*schema.Set), - )...) 
+ if len(input.Filters) == 0 { + input.Filters = nil } - if len(req.Filters) == 0 { - // Don't send an empty filters list; the EC2 API won't accept it. - req.Filters = nil - } + output, err := FindNetworkACLs(conn, input) - log.Printf("[DEBUG] DescribeNetworkAcls %s\n", req) - resp, err := conn.DescribeNetworkAcls(req) if err != nil { - return err - } - - if resp == nil || len(resp.NetworkAcls) == 0 { - return errors.New("no matching network ACLs found") + return fmt.Errorf("error reading EC2 Network ACLs: %w", err) } - networkAcls := make([]string, 0) + var naclIDs []string - for _, networkAcl := range resp.NetworkAcls { - networkAcls = append(networkAcls, aws.StringValue(networkAcl.NetworkAclId)) + for _, v := range output { + naclIDs = append(naclIDs, aws.StringValue(v.NetworkAclId)) } d.SetId(meta.(*conns.AWSClient).Region) - - if err := d.Set("ids", networkAcls); err != nil { - return fmt.Errorf("Error setting network ACL ids: %w", err) - } + d.Set("ids", naclIDs) return nil } diff --git a/internal/service/ec2/network_acls_data_source_test.go b/internal/service/ec2/network_acls_data_source_test.go index 59c76a62494..692413f3ca4 100644 --- a/internal/service/ec2/network_acls_data_source_test.go +++ b/internal/service/ec2/network_acls_data_source_test.go @@ -2,7 +2,6 @@ package ec2_test import ( "fmt" - "regexp" "testing" "github.com/aws/aws-sdk-go/service/ec2" @@ -12,7 +11,7 @@ import ( ) func TestAccEC2NetworkACLsDataSource_basic(t *testing.T) { - rName := sdkacctest.RandString(5) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) dataSourceName := "data.aws_network_acls.test" resource.ParallelTest(t, resource.TestCase{ @@ -21,15 +20,10 @@ func TestAccEC2NetworkACLsDataSource_basic(t *testing.T) { Providers: acctest.Providers, CheckDestroy: testAccCheckVpcDestroy, Steps: []resource.TestStep{ - { - // Ensure at least 1 network ACL exists. We cannot use depends_on. 
- Config: testAccNetworkACLsDataSourceConfig_Base(rName), - }, { Config: testAccNetworkACLsDataSourceConfig_basic(rName), Check: resource.ComposeTestCheckFunc( - // At least 1 - resource.TestMatchResourceAttr(dataSourceName, "ids.#", regexp.MustCompile(`^[1-9][0-9]*`)), + acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "1"), ), }, }, @@ -37,7 +31,7 @@ func TestAccEC2NetworkACLsDataSource_basic(t *testing.T) { } func TestAccEC2NetworkACLsDataSource_filter(t *testing.T) { - rName := sdkacctest.RandString(5) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) dataSourceName := "data.aws_network_acls.test" resource.ParallelTest(t, resource.TestCase{ @@ -57,7 +51,7 @@ func TestAccEC2NetworkACLsDataSource_filter(t *testing.T) { } func TestAccEC2NetworkACLsDataSource_tags(t *testing.T) { - rName := sdkacctest.RandString(5) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) dataSourceName := "data.aws_network_acls.test" resource.ParallelTest(t, resource.TestCase{ @@ -77,7 +71,7 @@ func TestAccEC2NetworkACLsDataSource_tags(t *testing.T) { } func TestAccEC2NetworkACLsDataSource_vpcID(t *testing.T) { - rName := sdkacctest.RandString(5) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) dataSourceName := "data.aws_network_acls.test" resource.ParallelTest(t, resource.TestCase{ @@ -97,59 +91,97 @@ func TestAccEC2NetworkACLsDataSource_vpcID(t *testing.T) { }) } +func TestAccEC2NetworkACLsDataSource_empty(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_network_acls.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccNetworkACLsDataSourceConfig_Empty(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, 
"ids.#", "0"), + ), + }, + }, + }) +} + func testAccNetworkACLsDataSourceConfig_Base(rName string) string { return fmt.Sprintf(` resource "aws_vpc" "test" { cidr_block = "10.0.0.0/16" tags = { - Name = "testacc-acl-%[1]s" + Name = %[1]q } } -resource "aws_network_acl" "acl" { +resource "aws_network_acl" "test" { count = 2 vpc_id = aws_vpc.test.id tags = { - Name = "testacc-acl-%[1]s" + Name = %[1]q } } `, rName) } func testAccNetworkACLsDataSourceConfig_basic(rName string) string { - return testAccNetworkACLsDataSourceConfig_Base(rName) + ` -data "aws_network_acls" "test" {} -` + return acctest.ConfigCompose(testAccNetworkACLsDataSourceConfig_Base(rName), ` +data "aws_network_acls" "test" { + depends_on = [aws_network_acl.test[0], aws_network_acl.test[1]] +} +`) } func testAccNetworkACLsDataSourceConfig_Filter(rName string) string { - return testAccNetworkACLsDataSourceConfig_Base(rName) + ` + return acctest.ConfigCompose(testAccNetworkACLsDataSourceConfig_Base(rName), ` data "aws_network_acls" "test" { filter { name = "network-acl-id" - values = [aws_network_acl.acl[0].id] + values = [aws_network_acl.test[0].id] } + + depends_on = [aws_network_acl.test[0], aws_network_acl.test[1]] } -` +`) } func testAccNetworkACLsDataSourceConfig_Tags(rName string) string { - return testAccNetworkACLsDataSourceConfig_Base(rName) + ` + return acctest.ConfigCompose(testAccNetworkACLsDataSourceConfig_Base(rName), ` data "aws_network_acls" "test" { tags = { - Name = aws_network_acl.acl[0].tags.Name + Name = aws_network_acl.test[0].tags.Name } + + depends_on = [aws_network_acl.test[0], aws_network_acl.test[1]] } -` +`) } func testAccNetworkACLsDataSourceConfig_VPCID(rName string) string { - return testAccNetworkACLsDataSourceConfig_Base(rName) + ` + return acctest.ConfigCompose(testAccNetworkACLsDataSourceConfig_Base(rName), ` data "aws_network_acls" "test" { - vpc_id = aws_network_acl.acl[0].vpc_id + vpc_id = aws_network_acl.test[0].vpc_id + + depends_on = [aws_network_acl.test[0], 
aws_network_acl.test[1]] +} +`) } -` + +func testAccNetworkACLsDataSourceConfig_Empty(rName string) string { + return fmt.Sprintf(` +data "aws_network_acls" "test" { + tags = { + Name = %[1]q + } +} +`, rName) } diff --git a/internal/service/ec2/network_interfaces_data_source.go b/internal/service/ec2/network_interfaces_data_source.go index 129d92a48f5..fcbe3e8ecfb 100644 --- a/internal/service/ec2/network_interfaces_data_source.go +++ b/internal/service/ec2/network_interfaces_data_source.go @@ -1,7 +1,6 @@ package ec2 import ( - "errors" "fmt" "github.com/aws/aws-sdk-go/aws" @@ -14,13 +13,13 @@ import ( func DataSourceNetworkInterfaces() *schema.Resource { return &schema.Resource{ Read: dataSourceNetworkInterfacesRead, + Schema: map[string]*schema.Schema{ - "filter": CustomFiltersSchema(), + "filter": DataSourceFiltersSchema(), "ids": { - Type: schema.TypeSet, + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, }, "tags": tftags.TagsSchemaComputed(), }, @@ -32,17 +31,13 @@ func dataSourceNetworkInterfacesRead(d *schema.ResourceData, meta interface{}) e input := &ec2.DescribeNetworkInterfacesInput{} - if v, ok := d.GetOk("tags"); ok { - input.Filters = BuildTagFilterList( - Tags(tftags.New(v.(map[string]interface{}))), - ) - } + input.Filters = append(input.Filters, BuildTagFilterList( + Tags(tftags.New(d.Get("tags").(map[string]interface{}))), + )...) - if v, ok := d.GetOk("filter"); ok { - input.Filters = append(input.Filters, BuildCustomFilterList( - v.(*schema.Set), - )...) - } + input.Filters = append(input.Filters, BuildFiltersDataSource( + d.Get("filter").(*schema.Set), + )...) 
if len(input.Filters) == 0 { input.Filters = nil @@ -56,19 +51,12 @@ func dataSourceNetworkInterfacesRead(d *schema.ResourceData, meta interface{}) e return fmt.Errorf("error reading EC2 Network Interfaces: %w", err) } - if len(output) == 0 { - return errors.New("no matching network interfaces found") - } - for _, v := range output { networkInterfaceIDs = append(networkInterfaceIDs, aws.StringValue(v.NetworkInterfaceId)) } d.SetId(meta.(*conns.AWSClient).Region) - - if err := d.Set("ids", networkInterfaceIDs); err != nil { - return fmt.Errorf("error setting ids: %w", err) - } + d.Set("ids", networkInterfaceIDs) return nil } diff --git a/internal/service/ec2/network_interfaces_data_source_test.go b/internal/service/ec2/network_interfaces_data_source_test.go index 81f7759999d..cb65c331c3b 100644 --- a/internal/service/ec2/network_interfaces_data_source_test.go +++ b/internal/service/ec2/network_interfaces_data_source_test.go @@ -48,6 +48,25 @@ func TestAccEC2NetworkInterfacesDataSource_tags(t *testing.T) { }) } +func TestAccEC2NetworkInterfacesDataSource_empty(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckVpcDestroy, + Steps: []resource.TestStep{ + { + Config: testAccNetworkInterfacesDataSourceConfig_Empty(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_network_interfaces.test", "ids.#", "0"), + ), + }, + }, + }) +} + func testAccNetworkInterfacesDataSourceConfig_Base(rName string) string { return fmt.Sprintf(` resource "aws_vpc" "test" { @@ -105,3 +124,13 @@ data "aws_network_interfaces" "test" { } `) } + +func testAccNetworkInterfacesDataSourceConfig_Empty(rName string) string { + return fmt.Sprintf(` +data "aws_network_interfaces" "test" { + tags = { + Name = %[1]q + } +} +`, rName) +} 
diff --git a/internal/service/ec2/route.go b/internal/service/ec2/route.go index 91cc643f8f7..445f5b5c720 100644 --- a/internal/service/ec2/route.go +++ b/internal/service/ec2/route.go @@ -110,6 +110,7 @@ func ResourceRoute() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, + Deprecated: "Use network_interface_id instead", ExactlyOneOf: routeValidTargets, }, "local_gateway_id": { diff --git a/internal/service/ec2/route_table.go b/internal/service/ec2/route_table.go index fd40abb9f7c..d479c1a5b72 100644 --- a/internal/service/ec2/route_table.go +++ b/internal/service/ec2/route_table.go @@ -121,8 +121,9 @@ func ResourceRouteTable() *schema.Resource { Optional: true, }, "instance_id": { - Type: schema.TypeString, - Optional: true, + Type: schema.TypeString, + Optional: true, + Deprecated: "Use network_interface_id instead", }, "local_gateway_id": { Type: schema.TypeString, diff --git a/internal/service/ec2/route_tables_data_source.go b/internal/service/ec2/route_tables_data_source.go index f82bf0763ce..1779aa1e74b 100644 --- a/internal/service/ec2/route_tables_data_source.go +++ b/internal/service/ec2/route_tables_data_source.go @@ -2,7 +2,6 @@ package ec2 import ( "fmt" - "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" @@ -14,23 +13,19 @@ import ( func DataSourceRouteTables() *schema.Resource { return &schema.Resource{ Read: dataSourceRouteTablesRead, - Schema: map[string]*schema.Schema{ - - "filter": CustomFiltersSchema(), + Schema: map[string]*schema.Schema{ + "filter": DataSourceFiltersSchema(), + "ids": { + Type: schema.TypeList, + Computed: true, + Elem: &schema.Schema{Type: schema.TypeString}, + }, "tags": tftags.TagsSchemaComputed(), - "vpc_id": { Type: schema.TypeString, Optional: true, }, - - "ids": { - Type: schema.TypeSet, - Computed: true, - Elem: &schema.Schema{Type: schema.TypeString}, - Set: schema.HashString, - }, }, } } @@ -38,45 +33,42 @@ func DataSourceRouteTables() *schema.Resource { func 
dataSourceRouteTablesRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).EC2Conn - req := &ec2.DescribeRouteTablesInput{} + input := &ec2.DescribeRouteTablesInput{} if v, ok := d.GetOk("vpc_id"); ok { - req.Filters = BuildAttributeFilterList( + input.Filters = append(input.Filters, BuildAttributeFilterList( map[string]string{ "vpc-id": v.(string), }, - ) + )...) } - req.Filters = append(req.Filters, BuildTagFilterList( + input.Filters = append(input.Filters, BuildTagFilterList( Tags(tftags.New(d.Get("tags").(map[string]interface{}))), )...) - req.Filters = append(req.Filters, BuildCustomFilterList( + input.Filters = append(input.Filters, BuildFiltersDataSource( d.Get("filter").(*schema.Set), )...) - log.Printf("[DEBUG] DescribeRouteTables %s\n", req) - resp, err := conn.DescribeRouteTables(req) - if err != nil { - return err + if len(input.Filters) == 0 { + input.Filters = nil } - if resp == nil || len(resp.RouteTables) == 0 { - return fmt.Errorf("no matching route tables found for vpc with id %s", d.Get("vpc_id").(string)) + output, err := FindRouteTables(conn, input) + + if err != nil { + return fmt.Errorf("error reading EC2 Route Tables: %w", err) } - routeTables := make([]string, 0) + var routeTableIDs []string - for _, routeTable := range resp.RouteTables { - routeTables = append(routeTables, aws.StringValue(routeTable.RouteTableId)) + for _, v := range output { + routeTableIDs = append(routeTableIDs, aws.StringValue(v.RouteTableId)) } d.SetId(meta.(*conns.AWSClient).Region) - - if err = d.Set("ids", routeTables); err != nil { - return fmt.Errorf("error setting ids: %w", err) - } + d.Set("ids", routeTableIDs) return nil } diff --git a/internal/service/ec2/route_tables_data_source_test.go b/internal/service/ec2/route_tables_data_source_test.go index 49a36f2c6bb..5fad885e1ab 100644 --- a/internal/service/ec2/route_tables_data_source_test.go +++ b/internal/service/ec2/route_tables_data_source_test.go @@ -11,7 +11,8 @@ import ( ) 
func TestAccEC2RouteTablesDataSource_basic(t *testing.T) { - rInt := sdkacctest.RandIntRange(0, 256) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), @@ -19,154 +20,107 @@ func TestAccEC2RouteTablesDataSource_basic(t *testing.T) { CheckDestroy: testAccCheckVpcDestroy, Steps: []resource.TestStep{ { - Config: testAccRouteTablesDataSourceConfig(rInt), - }, - { - Config: testAccRouteTablesWithDataSourceDataSourceConfig(rInt), - Check: resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr("data.aws_route_tables.test", "ids.#", "5"), - resource.TestCheckResourceAttr("data.aws_route_tables.private", "ids.#", "3"), - resource.TestCheckResourceAttr("data.aws_route_tables.test2", "ids.#", "1"), - resource.TestCheckResourceAttr("data.aws_route_tables.filter_test", "ids.#", "2"), + Config: testAccRouteTablesDataSourceConfig(rName), + Check: resource.ComposeAggregateTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_route_tables.by_vpc_id", "ids.#", "2"), // Add the default route table. + resource.TestCheckResourceAttr("data.aws_route_tables.by_tags", "ids.#", "2"), + resource.TestCheckResourceAttr("data.aws_route_tables.by_filter", "ids.#", "6"), // Add the default route tables. 
+ resource.TestCheckResourceAttr("data.aws_route_tables.empty", "ids.#", "0"), ), }, }, }) } -func testAccRouteTablesWithDataSourceDataSourceConfig(rInt int) string { +func testAccRouteTablesDataSourceConfig(rName string) string { return fmt.Sprintf(` -resource "aws_vpc" "test" { - cidr_block = "172.%d.0.0/16" +resource "aws_vpc" "test1" { + cidr_block = "172.16.0.0/16" tags = { - Name = "terraform-testacc-route-tables-data-source" + Name = %[1]q } } resource "aws_vpc" "test2" { - cidr_block = "172.%d.0.0/16" + cidr_block = "172.16.0.0/16" tags = { - Name = "terraform-test2acc-route-tables-data-source" + Name = %[1]q } } -resource "aws_route_table" "test_public_a" { - vpc_id = aws_vpc.test.id +resource "aws_route_table" "test1_public" { + vpc_id = aws_vpc.test1.id tags = { - Name = "tf-acc-route-tables-data-source-public-a" + Name = %[1]q Tier = "Public" Component = "Frontend" } } -resource "aws_route_table" "test_private_a" { - vpc_id = aws_vpc.test.id +resource "aws_route_table" "test1_private1" { + vpc_id = aws_vpc.test1.id tags = { - Name = "tf-acc-route-tables-data-source-private-a" + Name = %[1]q Tier = "Private" Component = "Database" } } -resource "aws_route_table" "test_private_b" { - vpc_id = aws_vpc.test.id +resource "aws_route_table" "test1_private2" { + vpc_id = aws_vpc.test1.id tags = { - Name = "tf-acc-route-tables-data-source-private-b" + Name = %[1]q Tier = "Private" - Component = "Backend-1" + Component = "AppServer" } } -resource "aws_route_table" "test_private_c" { - vpc_id = aws_vpc.test.id - - tags = { - Name = "tf-acc-route-tables-data-source-private-c" - Tier = "Private" - Component = "Backend-2" - } -} - -data "aws_route_tables" "test" { - vpc_id = aws_vpc.test.id -} - -data "aws_route_tables" "test2" { +resource "aws_route_table" "test2_public" { vpc_id = aws_vpc.test2.id -} - -data "aws_route_tables" "private" { - vpc_id = aws_vpc.test.id tags = { - Tier = "Private" + Name = %[1]q + Tier = "Public" + Component = "Frontend" } } -data 
"aws_route_tables" "filter_test" { - vpc_id = aws_vpc.test.id +data "aws_route_tables" "by_vpc_id" { + vpc_id = aws_vpc.test2.id - filter { - name = "tag:Component" - values = ["Backend*"] - } + depends_on = [aws_route_table.test1_public, aws_route_table.test1_private1, aws_route_table.test1_private2, aws_route_table.test2_public] } -`, rInt, rInt) -} - -func testAccRouteTablesDataSourceConfig(rInt int) string { - return fmt.Sprintf(` -resource "aws_vpc" "test" { - cidr_block = "172.%d.0.0/16" +data "aws_route_tables" "by_tags" { tags = { - Name = "terraform-testacc-route-tables-data-source" + Tier = "Public" } -} - -resource "aws_route_table" "test_public_a" { - vpc_id = aws_vpc.test.id - tags = { - Name = "tf-acc-route-tables-data-source-public-a" - Tier = "Public" - Component = "Frontend" - } + depends_on = [aws_route_table.test1_public, aws_route_table.test1_private1, aws_route_table.test1_private2, aws_route_table.test2_public] } -resource "aws_route_table" "test_private_a" { - vpc_id = aws_vpc.test.id - - tags = { - Name = "tf-acc-route-tables-data-source-private-a" - Tier = "Private" - Component = "Database" +data "aws_route_tables" "by_filter" { + filter { + name = "vpc-id" + values = [aws_vpc.test1.id, aws_vpc.test2.id] } -} - -resource "aws_route_table" "test_private_b" { - vpc_id = aws_vpc.test.id - tags = { - Name = "tf-acc-route-tables-data-source-private-b" - Tier = "Private" - Component = "Backend-1" - } + depends_on = [aws_route_table.test1_public, aws_route_table.test1_private1, aws_route_table.test1_private2, aws_route_table.test2_public] } -resource "aws_route_table" "test_private_c" { - vpc_id = aws_vpc.test.id +data "aws_route_tables" "empty" { + vpc_id = aws_vpc.test2.id tags = { - Name = "tf-acc-route-tables-data-source-private-c" - Tier = "Private" - Component = "Backend-2" + Tier = "Private" } + + depends_on = [aws_route_table.test1_public, aws_route_table.test1_private1, aws_route_table.test1_private2, aws_route_table.test2_public] } -`, 
rInt) +`, rName) } diff --git a/internal/service/ec2/security_groups_data_source.go b/internal/service/ec2/security_groups_data_source.go index f651645efb0..8dc1cc5e8c5 100644 --- a/internal/service/ec2/security_groups_data_source.go +++ b/internal/service/ec2/security_groups_data_source.go @@ -2,7 +2,6 @@ package ec2 import ( "fmt" - "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/aws/arn" @@ -17,20 +16,19 @@ func DataSourceSecurityGroups() *schema.Resource { Read: dataSourceSecurityGroupsRead, Schema: map[string]*schema.Schema{ - "filter": DataSourceFiltersSchema(), - "tags": tftags.TagsSchemaComputed(), - - "ids": { + "arns": { Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "vpc_ids": { + "filter": DataSourceFiltersSchema(), + "ids": { Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, - "arns": { + "tags": tftags.TagsSchemaComputed(), + "vpc_ids": { Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, @@ -41,75 +39,46 @@ func DataSourceSecurityGroups() *schema.Resource { func dataSourceSecurityGroupsRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).EC2Conn - req := &ec2.DescribeSecurityGroupsInput{} - filters, filtersOk := d.GetOk("filter") - tags, tagsOk := d.GetOk("tags") + input := &ec2.DescribeSecurityGroupsInput{} - if !filtersOk && !tagsOk { - return fmt.Errorf("One of filters or tags must be assigned") - } + input.Filters = append(input.Filters, BuildTagFilterList( + Tags(tftags.New(d.Get("tags").(map[string]interface{}))), + )...) - if filtersOk { - req.Filters = append(req.Filters, - BuildFiltersDataSource(filters.(*schema.Set))...) - } - if tagsOk { - req.Filters = append(req.Filters, BuildTagFilterList( - Tags(tftags.New(tags.(map[string]interface{}))), - )...) - } + input.Filters = append(input.Filters, BuildFiltersDataSource( + d.Get("filter").(*schema.Set), + )...) 
- log.Printf("[DEBUG] Reading Security Groups with request: %s", req) - - var ids, vpcIds, arns []string - for { - resp, err := conn.DescribeSecurityGroups(req) - if err != nil { - return fmt.Errorf("error reading security groups: %w", err) - } - - for _, sg := range resp.SecurityGroups { - ids = append(ids, aws.StringValue(sg.GroupId)) - vpcIds = append(vpcIds, aws.StringValue(sg.VpcId)) - - arn := arn.ARN{ - Partition: meta.(*conns.AWSClient).Partition, - Service: ec2.ServiceName, - Region: meta.(*conns.AWSClient).Region, - AccountID: aws.StringValue(sg.OwnerId), - Resource: fmt.Sprintf("security-group/%s", aws.StringValue(sg.GroupId)), - }.String() - - arns = append(arns, arn) - } - - if resp.NextToken == nil { - break - } - req.NextToken = resp.NextToken + if len(input.Filters) == 0 { + input.Filters = nil } - if len(ids) < 1 { - return fmt.Errorf("Your query returned no results. Please change your search criteria and try again.") - } + output, err := FindSecurityGroups(conn, input) - log.Printf("[DEBUG] Found %d security groups via given filter: %s", len(ids), req) - - d.SetId(meta.(*conns.AWSClient).Region) - - err := d.Set("ids", ids) if err != nil { - return err + return fmt.Errorf("error reading EC2 Security Groups: %w", err) } - if err = d.Set("vpc_ids", vpcIds); err != nil { - return fmt.Errorf("error setting vpc_ids: %s", err) + var arns, securityGroupIDs, vpcIDs []string + + for _, v := range output { + arn := arn.ARN{ + Partition: meta.(*conns.AWSClient).Partition, + Service: ec2.ServiceName, + Region: meta.(*conns.AWSClient).Region, + AccountID: aws.StringValue(v.OwnerId), + Resource: fmt.Sprintf("security-group/%s", aws.StringValue(v.GroupId)), + }.String() + arns = append(arns, arn) + securityGroupIDs = append(securityGroupIDs, aws.StringValue(v.GroupId)) + vpcIDs = append(vpcIDs, aws.StringValue(v.VpcId)) } - if err = d.Set("arns", arns); err != nil { - return fmt.Errorf("error setting arns: %s", err) - } + d.SetId(meta.(*conns.AWSClient).Region) 
+ d.Set("arns", arns) + d.Set("ids", securityGroupIDs) + d.Set("vpc_ids", vpcIDs) return nil } diff --git a/internal/service/ec2/security_groups_data_source_test.go b/internal/service/ec2/security_groups_data_source_test.go index fba42a1bf1e..803b3139f5d 100644 --- a/internal/service/ec2/security_groups_data_source_test.go +++ b/internal/service/ec2/security_groups_data_source_test.go @@ -11,19 +11,20 @@ import ( ) func TestAccEC2SecurityGroupsDataSource_tag(t *testing.T) { - rInt := sdkacctest.RandInt() - dataSourceName := "data.aws_security_groups.by_tag" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_security_groups.test" + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccSecurityGroupsDataSourceConfig_tag(rInt), + Config: testAccSecurityGroupsDataSourceConfig_tag(rName), Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "arns.#", "3"), resource.TestCheckResourceAttr(dataSourceName, "ids.#", "3"), resource.TestCheckResourceAttr(dataSourceName, "vpc_ids.#", "3"), - resource.TestCheckResourceAttr(dataSourceName, "arns.#", "3"), ), }, }, @@ -31,83 +32,116 @@ func TestAccEC2SecurityGroupsDataSource_tag(t *testing.T) { } func TestAccEC2SecurityGroupsDataSource_filter(t *testing.T) { - rInt := sdkacctest.RandInt() - dataSourceName := "data.aws_security_groups.by_filter" + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_security_groups.test" + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccSecurityGroupsDataSourceConfig_filter(rInt), + Config: testAccSecurityGroupsDataSourceConfig_filter(rName), Check: 
resource.ComposeTestCheckFunc( - resource.TestCheckResourceAttr(dataSourceName, "ids.#", "3"), - resource.TestCheckResourceAttr(dataSourceName, "vpc_ids.#", "3"), - resource.TestCheckResourceAttr(dataSourceName, "arns.#", "3"), + resource.TestCheckResourceAttr(dataSourceName, "arns.#", "1"), + resource.TestCheckResourceAttr(dataSourceName, "ids.#", "1"), + resource.TestCheckResourceAttr(dataSourceName, "vpc_ids.#", "1"), ), }, }, }) } -func testAccSecurityGroupsDataSourceConfig_tag(rInt int) string { +func TestAccEC2SecurityGroupsDataSource_empty(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + dataSourceName := "data.aws_security_groups.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccDataSourceAwsSecurityGroupsConfig_empty(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "arns.#", "0"), + resource.TestCheckResourceAttr(dataSourceName, "ids.#", "0"), + resource.TestCheckResourceAttr(dataSourceName, "vpc_ids.#", "0"), + ), + }, + }, + }) +} + +func testAccSecurityGroupsDataSourceConfig_tag(rName string) string { return fmt.Sprintf(` -resource "aws_vpc" "test_tag" { +resource "aws_vpc" "test" { cidr_block = "172.16.0.0/16" tags = { - Name = "terraform-testacc-security-group-data-source" + Name = %[1]q } } resource "aws_security_group" "test" { count = 3 - vpc_id = aws_vpc.test_tag.id - name = "tf-%[1]d-${count.index}" + vpc_id = aws_vpc.test.id + name = "%[1]s-${count.index}" tags = { - Seed = "%[1]d" + Name = %[1]q } } -data "aws_security_groups" "by_tag" { +data "aws_security_groups" "test" { tags = { - Seed = aws_security_group.test[0].tags["Seed"] + Name = %[1]q } + + depends_on = [aws_security_group.test[0], aws_security_group.test[1], aws_security_group.test[2]] } -`, rInt) +`, rName) } -func 
testAccSecurityGroupsDataSourceConfig_filter(rInt int) string { +func testAccSecurityGroupsDataSourceConfig_filter(rName string) string { return fmt.Sprintf(` -resource "aws_vpc" "test_filter" { +resource "aws_vpc" "test" { cidr_block = "172.16.0.0/16" tags = { - Name = "terraform-testacc-security-group-data-source" + Name = %[1]q } } resource "aws_security_group" "test" { - count = 3 - vpc_id = aws_vpc.test_filter.id - name = "tf-%[1]d-${count.index}" + vpc_id = aws_vpc.test.id + name = %[1]q tags = { - Seed = "%[1]d" + Name = %[1]q } } -data "aws_security_groups" "by_filter" { +data "aws_security_groups" "test" { filter { name = "vpc-id" - values = [aws_vpc.test_filter.id] + values = [aws_vpc.test.id] } filter { name = "group-name" - values = ["tf-${aws_security_group.test[0].tags["Seed"]}-*"] + values = [aws_security_group.test.name] + } +} +`, rName) +} + +func testAccDataSourceAwsSecurityGroupsConfig_empty(rName string) string { + return fmt.Sprintf(` +data "aws_security_groups" "test" { + tags = { + Name = %[1]q } } -`, rInt) +`, rName) } diff --git a/internal/service/ec2/spot_instance_request.go b/internal/service/ec2/spot_instance_request.go index 410975543e6..a4d2d71998d 100644 --- a/internal/service/ec2/spot_instance_request.go +++ b/internal/service/ec2/spot_instance_request.go @@ -1,7 +1,6 @@ package ec2 import ( - "context" "fmt" "log" "math/big" @@ -101,23 +100,12 @@ func ResourceSpotInstanceRequest() *schema.Resource { ForceNew: true, ValidateFunc: validation.IntDivisibleBy(60), } - s["instance_interruption_behaviour"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - Computed: true, - ForceNew: true, - ValidateFunc: validation.StringInSlice(ec2.InstanceInterruptionBehavior_Values(), false), - Deprecated: "Use the parameter \"instance_interruption_behavior\" instead.", - ConflictsWith: []string{"instance_interruption_behavior"}, - } s["instance_interruption_behavior"] = &schema.Schema{ - Type: schema.TypeString, - Optional: true, - 
Computed: true, // Only during `instance_interruption_behaviour` deprecation period - // Default: ec2.InstanceInterruptionBehaviorTerminate, - ForceNew: true, - ValidateFunc: validation.StringInSlice(ec2.InstanceInterruptionBehavior_Values(), false), - ConflictsWith: []string{"instance_interruption_behaviour"}, + Type: schema.TypeString, + Optional: true, + Default: ec2.InstanceInterruptionBehaviorTerminate, + ForceNew: true, + ValidateFunc: validation.StringInSlice(ec2.InstanceInterruptionBehavior_Values(), false), } s["valid_from"] = &schema.Schema{ Type: schema.TypeString, @@ -138,21 +126,6 @@ func ResourceSpotInstanceRequest() *schema.Resource { CustomizeDiff: customdiff.All( verify.SetTagsDiff, - // This function exists to apply a default value to `instance_interruption_behavior` while - // accounting for the deprecated parameter `instance_interruption_behaviour`. It can be removed - // in favor of setting a `Default` on the parameter once `instance_interruption_behaviour` is removed. 
- // https://github.com/hashicorp/terraform-provider-aws/issues/20101 - func(_ context.Context, diff *schema.ResourceDiff, meta interface{}) error { - if v, ok := diff.GetOk("instance_interruption_behavior"); ok && v != "" { - return nil - } - if v, ok := diff.GetOk("instance_interruption_behaviour"); ok && v != "" { - diff.SetNew("instance_interruption_behavior", v) - return nil - } - diff.SetNew("instance_interruption_behavior", ec2.InstanceInterruptionBehaviorTerminate) - return nil - }, ), } } @@ -373,7 +346,6 @@ func resourceSpotInstanceRequestRead(d *schema.ResourceData, meta interface{}) e } d.Set("instance_interruption_behavior", request.InstanceInterruptionBehavior) - d.Set("instance_interruption_behaviour", request.InstanceInterruptionBehavior) d.Set("valid_from", aws.TimeValue(request.ValidFrom).Format(time.RFC3339)) d.Set("valid_until", aws.TimeValue(request.ValidUntil).Format(time.RFC3339)) d.Set("spot_type", request.Type) diff --git a/internal/service/ec2/spot_instance_request_test.go b/internal/service/ec2/spot_instance_request_test.go index 33a122c7c50..ed176ee1144 100644 --- a/internal/service/ec2/spot_instance_request_test.go +++ b/internal/service/ec2/spot_instance_request_test.go @@ -34,7 +34,6 @@ func TestAccEC2SpotInstanceRequest_basic(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "spot_bid_status", "fulfilled"), resource.TestCheckResourceAttr(resourceName, "spot_request_state", "active"), resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "terminate"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "terminate"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, @@ -616,7 +615,6 @@ func TestAccEC2SpotInstanceRequest_interruptStop(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "spot_bid_status", "fulfilled"), resource.TestCheckResourceAttr(resourceName, "spot_request_state", "active"), resource.TestCheckResourceAttr(resourceName, 
"instance_interruption_behavior", "stop"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "stop"), ), }, { @@ -646,7 +644,6 @@ func TestAccEC2SpotInstanceRequest_interruptHibernate(t *testing.T) { resource.TestCheckResourceAttr(resourceName, "spot_bid_status", "fulfilled"), resource.TestCheckResourceAttr(resourceName, "spot_request_state", "active"), resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "hibernate"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "hibernate"), ), }, { @@ -674,7 +671,6 @@ func TestAccEC2SpotInstanceRequest_interruptUpdate(t *testing.T) { Check: resource.ComposeAggregateTestCheckFunc( testAccCheckSpotInstanceRequestExists(resourceName, &sir1), resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "hibernate"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "hibernate"), ), }, { @@ -683,99 +679,6 @@ func TestAccEC2SpotInstanceRequest_interruptUpdate(t *testing.T) { testAccCheckSpotInstanceRequestExists(resourceName, &sir2), testAccCheckSpotInstanceRequestRecreated(&sir1, &sir2), resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "terminate"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "terminate"), - ), - }, - }, - }) -} - -func TestAccEC2SpotInstanceRequest_interruptDeprecated(t *testing.T) { - var sir ec2.SpotInstanceRequest - resourceName := "aws_spot_instance_request.test" - - resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), - Providers: acctest.Providers, - CheckDestroy: testAccCheckSpotInstanceRequestDestroy, - Steps: []resource.TestStep{ - { - Config: testAccSpotInstanceRequestInterruptConfig_Deprecated("hibernate"), - Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckSpotInstanceRequestExists(resourceName, 
&sir), - resource.TestCheckResourceAttr(resourceName, "spot_bid_status", "fulfilled"), - resource.TestCheckResourceAttr(resourceName, "spot_request_state", "active"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "hibernate"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "hibernate"), - ), - }, - { - ResourceName: resourceName, - ImportState: true, - ImportStateVerify: true, - ImportStateVerifyIgnore: []string{"wait_for_fulfillment"}, - }, - }, - }) -} - -func TestAccEC2SpotInstanceRequest_interruptFixDeprecated(t *testing.T) { - var sir1, sir2 ec2.SpotInstanceRequest - resourceName := "aws_spot_instance_request.test" - - resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), - Providers: acctest.Providers, - CheckDestroy: testAccCheckSpotInstanceRequestDestroy, - Steps: []resource.TestStep{ - { - Config: testAccSpotInstanceRequestInterruptConfig_Deprecated("hibernate"), - Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckSpotInstanceRequestExists(resourceName, &sir1), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "hibernate"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "hibernate"), - ), - }, - { - Config: testAccSpotInstanceRequestInterruptConfig("hibernate"), - Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckSpotInstanceRequestExists(resourceName, &sir2), - testAccCheckSpotInstanceRequestNotRecreated(&sir1, &sir2), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "hibernate"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "hibernate"), - ), - }, - }, - }) -} - -func TestAccEC2SpotInstanceRequest_interruptUpdateFromDeprecated(t *testing.T) { - var sir1, sir2 ec2.SpotInstanceRequest - resourceName := "aws_spot_instance_request.test" - - 
resource.ParallelTest(t, resource.TestCase{ - PreCheck: func() { acctest.PreCheck(t) }, - ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), - Providers: acctest.Providers, - CheckDestroy: testAccCheckSpotInstanceRequestDestroy, - Steps: []resource.TestStep{ - { - Config: testAccSpotInstanceRequestInterruptConfig_Deprecated("hibernate"), - Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckSpotInstanceRequestExists(resourceName, &sir1), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "hibernate"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "hibernate"), - ), - }, - { - Config: testAccSpotInstanceRequestInterruptConfig("stop"), - Check: resource.ComposeAggregateTestCheckFunc( - testAccCheckSpotInstanceRequestExists(resourceName, &sir2), - testAccCheckSpotInstanceRequestRecreated(&sir1, &sir2), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behavior", "stop"), - resource.TestCheckResourceAttr(resourceName, "instance_interruption_behaviour", "stop"), ), }, }, @@ -792,16 +695,6 @@ func testAccCheckSpotInstanceRequestRecreated(before, after *ec2.SpotInstanceReq } } -func testAccCheckSpotInstanceRequestNotRecreated(before, after *ec2.SpotInstanceRequest) resource.TestCheckFunc { - return func(s *terraform.State) error { - if before, after := aws.StringValue(before.InstanceId), aws.StringValue(after.InstanceId); before != after { - return fmt.Errorf("Spot Instance (%s/%s) recreated", before, after) - } - - return nil - } -} - func testAccSpotInstanceRequestConfig() string { return acctest.ConfigCompose( acctest.ConfigLatestAmazonLinuxHvmEbsAmi(), @@ -1079,18 +972,3 @@ resource "aws_spot_instance_request" "test" { } `, interruptionBehavior)) } - -func testAccSpotInstanceRequestInterruptConfig_Deprecated(interruptionBehavior string) string { - return acctest.ConfigCompose( - acctest.ConfigLatestAmazonLinuxHvmEbsAmi(), - 
acctest.AvailableEC2InstanceTypeForRegion("c5.large", "c4.large"), - fmt.Sprintf(` -resource "aws_spot_instance_request" "test" { - ami = data.aws_ami.amzn-ami-minimal-hvm-ebs.id - instance_type = data.aws_ec2_instance_type_offering.available.instance_type - spot_price = "0.07" - wait_for_fulfillment = true - instance_interruption_behaviour = %[1]q -} -`, interruptionBehavior)) -} diff --git a/internal/service/ec2/status.go b/internal/service/ec2/status.go index c79acfc5a5b..ef7ed687207 100644 --- a/internal/service/ec2/status.go +++ b/internal/service/ec2/status.go @@ -218,7 +218,7 @@ func StatusInstanceIAMInstanceProfile(conn *ec2.EC2, id string) resource.StateRe return func() (interface{}, string, error) { instance, err := FindInstanceByID(conn, id) - if tfawserr.ErrCodeEquals(err, ErrCodeInvalidInstanceIDNotFound) { + if tfresource.NotFound(err) { return nil, "", nil } @@ -226,10 +226,6 @@ func StatusInstanceIAMInstanceProfile(conn *ec2.EC2, id string) resource.StateRe return nil, "", err } - if instance == nil { - return nil, "", nil - } - if instance.IamInstanceProfile == nil || instance.IamInstanceProfile.Arn == nil { return instance, "", nil } @@ -376,7 +372,7 @@ func StatusSubnetIPv6CIDRBlockAssociationState(conn *ec2.EC2, id string) resourc } } -func StatusSubnetMapCustomerOwnedIPOnLaunch(conn *ec2.EC2, id string) resource.StateRefreshFunc { +func StatusSubnetAssignIpv6AddressOnCreation(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { output, err := FindSubnetByID(conn, id) @@ -388,7 +384,7 @@ func StatusSubnetMapCustomerOwnedIPOnLaunch(conn *ec2.EC2, id string) resource.S return nil, "", err } - return output, strconv.FormatBool(aws.BoolValue(output.MapCustomerOwnedIpOnLaunch)), nil + return output, strconv.FormatBool(aws.BoolValue(output.AssignIpv6AddressOnCreation)), nil } } @@ -440,6 +436,22 @@ func StatusSubnetEnableResourceNameDnsARecordOnLaunch(conn *ec2.EC2, id string) } } +func 
StatusSubnetMapCustomerOwnedIPOnLaunch(conn *ec2.EC2, id string) resource.StateRefreshFunc { + return func() (interface{}, string, error) { + output, err := FindSubnetByID(conn, id) + + if tfresource.NotFound(err) { + return nil, "", nil + } + + if err != nil { + return nil, "", err + } + + return output, strconv.FormatBool(aws.BoolValue(output.MapCustomerOwnedIpOnLaunch)), nil + } +} + func StatusSubnetMapPublicIPOnLaunch(conn *ec2.EC2, id string) resource.StateRefreshFunc { return func() (interface{}, string, error) { output, err := FindSubnetByID(conn, id) diff --git a/internal/service/ec2/subnet.go b/internal/service/ec2/subnet.go index 8d85d88578e..66182c22684 100644 --- a/internal/service/ec2/subnet.go +++ b/internal/service/ec2/subnet.go @@ -37,6 +37,8 @@ func ResourceSubnet() *schema.Resource { SchemaVersion: 1, MigrateState: SubnetMigrateState, + // Keep in sync with aws_default_subnet's schema. + // See notes in default_subnet.go. Schema: map[string]*schema.Schema{ "arn": { Type: schema.TypeString, @@ -182,128 +184,26 @@ func resourceSubnetCreate(d *schema.ResourceData, meta interface{}) error { d.SetId(aws.StringValue(output.Subnet.SubnetId)) - _, err = WaitSubnetAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)) + subnet, err := WaitSubnetAvailable(conn, d.Id(), d.Timeout(schema.TimeoutCreate)) if err != nil { - return fmt.Errorf("error waiting for EC2 Subnet (%s) to become available: %w", d.Id(), err) + return fmt.Errorf("error waiting for EC2 Subnet (%s) create: %w", d.Id(), err) } - // You cannot modify multiple subnet attributes in the same request, - // except CustomerOwnedIpv4Pool and MapCustomerOwnedIpOnLaunch. 
- // Reference: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifySubnetAttribute.html - - if d.Get("assign_ipv6_address_on_creation").(bool) { - input := &ec2.ModifySubnetAttributeInput{ - AssignIpv6AddressOnCreation: &ec2.AttributeBooleanValue{ - Value: aws.Bool(true), - }, - SubnetId: aws.String(d.Id()), - } - - if _, err := conn.ModifySubnetAttribute(input); err != nil { - return fmt.Errorf("error setting EC2 Subnet (%s) AssignIpv6AddressOnCreation: %w", d.Id(), err) - } - } - - if v, ok := d.GetOk("customer_owned_ipv4_pool"); ok { - input := &ec2.ModifySubnetAttributeInput{ - CustomerOwnedIpv4Pool: aws.String(v.(string)), - MapCustomerOwnedIpOnLaunch: &ec2.AttributeBooleanValue{ - Value: aws.Bool(d.Get("map_customer_owned_ip_on_launch").(bool)), - }, - SubnetId: aws.String(d.Id()), - } - - if _, err := conn.ModifySubnetAttribute(input); err != nil { - return fmt.Errorf("error setting EC2 Subnet (%s) CustomerOwnedIpv4Pool/MapCustomerOwnedIpOnLaunch: %w", d.Id(), err) - } - - if _, err := WaitSubnetMapCustomerOwnedIPOnLaunchUpdated(conn, d.Id(), d.Get("map_customer_owned_ip_on_launch").(bool)); err != nil { - return fmt.Errorf("error waiting for EC2 Subnet (%s) MapCustomerOwnedIpOnLaunch update: %w", d.Id(), err) - } - } - - if d.Get("enable_dns64").(bool) { - input := &ec2.ModifySubnetAttributeInput{ - EnableDns64: &ec2.AttributeBooleanValue{ - Value: aws.Bool(true), - }, - SubnetId: aws.String(d.Id()), - } - - if _, err := conn.ModifySubnetAttribute(input); err != nil { - return fmt.Errorf("error setting EC2 Subnet (%s) EnableDns64: %w", d.Id(), err) - } - - if _, err := WaitSubnetEnableDns64Updated(conn, d.Id(), d.Get("enable_dns64").(bool)); err != nil { - return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableDns64 update: %w", d.Id(), err) - } - } - - if d.Get("enable_resource_name_dns_aaaa_record_on_launch").(bool) { - input := &ec2.ModifySubnetAttributeInput{ - EnableResourceNameDnsAAAARecordOnLaunch: &ec2.AttributeBooleanValue{ - 
Value: aws.Bool(true), - }, - SubnetId: aws.String(d.Id()), - } - - if _, err := conn.ModifySubnetAttribute(input); err != nil { - return fmt.Errorf("error setting EC2 Subnet (%s) EnableResourceNameDnsAAAARecordOnLaunch: %w", d.Id(), err) - } - - if _, err := WaitSubnetEnableResourceNameDnsAAAARecordOnLaunchUpdated(conn, d.Id(), d.Get("enable_resource_name_dns_aaaa_record_on_launch").(bool)); err != nil { - return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableResourceNameDnsAAAARecordOnLaunch update: %w", d.Id(), err) - } - } - - if d.Get("enable_resource_name_dns_a_record_on_launch").(bool) { - input := &ec2.ModifySubnetAttributeInput{ - EnableResourceNameDnsARecordOnLaunch: &ec2.AttributeBooleanValue{ - Value: aws.Bool(true), - }, - SubnetId: aws.String(d.Id()), - } + for _, v := range subnet.Ipv6CidrBlockAssociationSet { + if aws.StringValue(v.Ipv6CidrBlockState.State) == ec2.SubnetCidrBlockStateCodeAssociating { //we can only ever have 1 IPv6 block associated at once + associationID := aws.StringValue(v.AssociationId) - if _, err := conn.ModifySubnetAttribute(input); err != nil { - return fmt.Errorf("error setting EC2 Subnet (%s) EnableResourceNameDnsARecordOnLaunch: %w", d.Id(), err) - } + _, err = WaitSubnetIPv6CIDRBlockAssociationCreated(conn, associationID) - if _, err := WaitSubnetEnableResourceNameDnsARecordOnLaunchUpdated(conn, d.Id(), d.Get("enable_resource_name_dns_a_record_on_launch").(bool)); err != nil { - return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableResourceNameDnsARecordOnLaunch update: %w", d.Id(), err) + if err != nil { + return fmt.Errorf("error waiting for EC2 Subnet (%s) IPv6 CIDR block (%s) to become associated: %w", d.Id(), associationID, err) + } } } - if d.Get("map_public_ip_on_launch").(bool) { - input := &ec2.ModifySubnetAttributeInput{ - MapPublicIpOnLaunch: &ec2.AttributeBooleanValue{ - Value: aws.Bool(true), - }, - SubnetId: aws.String(d.Id()), - } - - if _, err := conn.ModifySubnetAttribute(input); err != nil { - 
return fmt.Errorf("error setting EC2 Subnet (%s) MapPublicIpOnLaunch: %w", d.Id(), err) - } - - if _, err := WaitSubnetMapPublicIPOnLaunchUpdated(conn, d.Id(), d.Get("map_public_ip_on_launch").(bool)); err != nil { - return fmt.Errorf("error waiting for EC2 Subnet (%s) MapPublicIpOnLaunch update: %w", d.Id(), err) - } - } - - if v, ok := d.GetOk("private_dns_hostname_type_on_launch"); ok { - input := &ec2.ModifySubnetAttributeInput{ - PrivateDnsHostnameTypeOnLaunch: aws.String(v.(string)), - SubnetId: aws.String(d.Id()), - } - - if _, err := conn.ModifySubnetAttribute(input); err != nil { - return fmt.Errorf("error setting EC2 Subnet (%s) PrivateDnsHostnameTypeOnLaunch: %w", d.Id(), err) - } - - if _, err := WaitSubnetPrivateDNSHostnameTypeOnLaunchUpdated(conn, d.Id(), d.Get("private_dns_hostname_type_on_launch").(string)); err != nil { - return fmt.Errorf("error waiting for EC2 Subnet (%s) PrivateDnsHostnameTypeOnLaunch update: %w", d.Id(), err) - } + if err := modifySubnetAttributesOnCreate(conn, d, subnet, false); err != nil { + return err } return resourceSubnetRead(d, meta) @@ -319,13 +219,13 @@ func resourceSubnetRead(d *schema.ResourceData, meta interface{}) error { }, d.IsNewResource()) if !d.IsNewResource() && tfresource.NotFound(err) { - log.Printf("[WARN] Subnet (%s) not found, removing from state", d.Id()) + log.Printf("[WARN] EC2 Subnet (%s) not found, removing from state", d.Id()) d.SetId("") return nil } if err != nil { - return fmt.Errorf("error reading Subnet (%s): %w", d.Id(), err) + return fmt.Errorf("error reading EC2 Subnet (%s): %w", d.Id(), err) } subnet := outputRaw.(*ec2.Subnet) @@ -396,209 +296,355 @@ func resourceSubnetUpdate(d *schema.ResourceData, meta interface{}) error { // Reference: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifySubnetAttribute.html if d.HasChanges("customer_owned_ipv4_pool", "map_customer_owned_ip_on_launch") { - input := &ec2.ModifySubnetAttributeInput{ - MapCustomerOwnedIpOnLaunch: 
 			&ec2.AttributeBooleanValue{
-				Value: aws.Bool(d.Get("map_customer_owned_ip_on_launch").(bool)),
-			},
-			SubnetId: aws.String(d.Id()),
+		if err := modifySubnetOutpostRackAttributes(conn, d.Id(), d.Get("customer_owned_ipv4_pool").(string), d.Get("map_customer_owned_ip_on_launch").(bool)); err != nil {
+			return err
 		}
+	}

-		if v, ok := d.GetOk("customer_owned_ipv4_pool"); ok {
-			input.CustomerOwnedIpv4Pool = aws.String(v.(string))
+	if d.HasChange("enable_dns64") {
+		if err := modifySubnetEnableDns64(conn, d.Id(), d.Get("enable_dns64").(bool)); err != nil {
+			return err
 		}
+	}

-		if _, err := conn.ModifySubnetAttribute(input); err != nil {
-			return fmt.Errorf("error setting EC2 Subnet (%s) CustomerOwnedIpv4Pool/MapCustomerOwnedIpOnLaunch: %w", d.Id(), err)
+	if d.HasChange("enable_resource_name_dns_aaaa_record_on_launch") {
+		if err := modifySubnetEnableResourceNameDnsAAAARecordOnLaunch(conn, d.Id(), d.Get("enable_resource_name_dns_aaaa_record_on_launch").(bool)); err != nil {
+			return err
 		}
+	}

-		if _, err := WaitSubnetMapCustomerOwnedIPOnLaunchUpdated(conn, d.Id(), d.Get("map_customer_owned_ip_on_launch").(bool)); err != nil {
-			return fmt.Errorf("error waiting for EC2 Subnet (%s) MapCustomerOwnedIpOnLaunch update: %w", d.Id(), err)
+	if d.HasChange("enable_resource_name_dns_a_record_on_launch") {
+		if err := modifySubnetEnableResourceNameDnsARecordOnLaunch(conn, d.Id(), d.Get("enable_resource_name_dns_a_record_on_launch").(bool)); err != nil {
+			return err
 		}
 	}

-	if d.HasChange("enable_dns64") {
-		input := &ec2.ModifySubnetAttributeInput{
-			EnableDns64: &ec2.AttributeBooleanValue{
-				Value: aws.Bool(d.Get("enable_dns64").(bool)),
-			},
-			SubnetId: aws.String(d.Id()),
+	if d.HasChange("map_public_ip_on_launch") {
+		if err := modifySubnetMapPublicIpOnLaunch(conn, d.Id(), d.Get("map_public_ip_on_launch").(bool)); err != nil {
+			return err
 		}
+	}

-		if _, err := conn.ModifySubnetAttribute(input); err != nil {
-			return fmt.Errorf("error setting EC2 Subnet (%s) EnableDns64: %w", d.Id(), err)
+	if d.HasChange("private_dns_hostname_type_on_launch") {
+		if err := modifySubnetPrivateDnsHostnameTypeOnLaunch(conn, d.Id(), d.Get("private_dns_hostname_type_on_launch").(string)); err != nil {
+			return err
 		}
+	}

-		if _, err := WaitSubnetEnableDns64Updated(conn, d.Id(), d.Get("enable_dns64").(bool)); err != nil {
-			return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableDns64 update: %w", d.Id(), err)
+	// If we're disabling IPv6 assignment for new ENIs, do that before modifying the IPv6 CIDR block.
+	if d.HasChange("assign_ipv6_address_on_creation") && !d.Get("assign_ipv6_address_on_creation").(bool) {
+		if err := modifySubnetAssignIpv6AddressOnCreation(conn, d.Id(), false); err != nil {
+			return err
 		}
 	}

-	if d.HasChange("enable_resource_name_dns_aaaa_record_on_launch") {
-		input := &ec2.ModifySubnetAttributeInput{
-			EnableResourceNameDnsAAAARecordOnLaunch: &ec2.AttributeBooleanValue{
-				Value: aws.Bool(d.Get("enable_resource_name_dns_aaaa_record_on_launch").(bool)),
-			},
-			SubnetId: aws.String(d.Id()),
+	if d.HasChange("ipv6_cidr_block") {
+		if err := modifySubnetIPv6CIDRBlockAssociation(conn, d.Id(), d.Get("ipv6_cidr_block_association_id").(string), d.Get("ipv6_cidr_block").(string)); err != nil {
+			return err
 		}
+	}

-		if _, err := conn.ModifySubnetAttribute(input); err != nil {
-			return fmt.Errorf("error setting EC2 Subnet (%s) EnableResourceNameDnsAAAARecordOnLaunch: %w", d.Id(), err)
+	// If we're enabling IPv6 assignment for new ENIs, do that after modifying the IPv6 CIDR block.
+	if d.HasChange("assign_ipv6_address_on_creation") && d.Get("assign_ipv6_address_on_creation").(bool) {
+		if err := modifySubnetAssignIpv6AddressOnCreation(conn, d.Id(), true); err != nil {
+			return err
 		}
+	}

-		if _, err := WaitSubnetEnableResourceNameDnsAAAARecordOnLaunchUpdated(conn, d.Id(), d.Get("enable_resource_name_dns_aaaa_record_on_launch").(bool)); err != nil {
-			return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableResourceNameDnsAAAARecordOnLaunch update: %w", d.Id(), err)
-		}
+	return resourceSubnetRead(d, meta)
+}
+
+func resourceSubnetDelete(d *schema.ResourceData, meta interface{}) error {
+	conn := meta.(*conns.AWSClient).EC2Conn
+
+	log.Printf("[INFO] Deleting EC2 Subnet: %s", d.Id())
+
+	if err := deleteLingeringLambdaENIs(conn, "subnet-id", d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil {
+		return fmt.Errorf("error deleting Lambda ENIs for EC2 Subnet (%s): %w", d.Id(), err)
 	}

-	if d.HasChange("enable_resource_name_dns_a_record_on_launch") {
-		input := &ec2.ModifySubnetAttributeInput{
-			EnableResourceNameDnsARecordOnLaunch: &ec2.AttributeBooleanValue{
-				Value: aws.Bool(d.Get("enable_resource_name_dns_a_record_on_launch").(bool)),
-			},
+	_, err := tfresource.RetryWhenAWSErrCodeEquals(d.Timeout(schema.TimeoutDelete), func() (interface{}, error) {
+		return conn.DeleteSubnet(&ec2.DeleteSubnetInput{
 			SubnetId: aws.String(d.Id()),
+		})
+	}, ErrCodeDependencyViolation)
+
+	if tfawserr.ErrCodeEquals(err, ErrCodeInvalidSubnetIDNotFound) {
+		return nil
+	}
+
+	if err != nil {
+		return fmt.Errorf("error deleting EC2 Subnet (%s): %w", d.Id(), err)
+	}
+
+	return nil
+}
+
+// modifySubnetAttributesOnCreate sets subnet attributes on resource Create.
+// Called after new subnet creation or existing default subnet adoption.
+func modifySubnetAttributesOnCreate(conn *ec2.EC2, d *schema.ResourceData, subnet *ec2.Subnet, computedIPv6CidrBlock bool) error {
+	// If we're disabling IPv6 assignment for new ENIs, do that before modifying the IPv6 CIDR block.
+	if new, old := d.Get("assign_ipv6_address_on_creation").(bool), aws.BoolValue(subnet.AssignIpv6AddressOnCreation); old != new && !new {
+		if err := modifySubnetAssignIpv6AddressOnCreation(conn, d.Id(), false); err != nil {
+			return err
 		}
+	}

-		if _, err := conn.ModifySubnetAttribute(input); err != nil {
-			return fmt.Errorf("error setting EC2 Subnet (%s) EnableResourceNameDnsARecordOnLaunch: %w", d.Id(), err)
+	// If we're disabling DNS64, do that before modifying the IPv6 CIDR block.
+	if new, old := d.Get("enable_dns64").(bool), aws.BoolValue(subnet.EnableDns64); old != new && !new {
+		if err := modifySubnetEnableDns64(conn, d.Id(), false); err != nil {
+			return err
 		}
+	}
+
+	// Creating a new IPv6-native default subnet assigns a computed IPv6 CIDR block.
+	// Don't attempt to do anything with it.
+	if !computedIPv6CidrBlock {
+		var oldAssociationID, oldIPv6CIDRBlock string
+		for _, v := range subnet.Ipv6CidrBlockAssociationSet {
+			if aws.StringValue(v.Ipv6CidrBlockState.State) == ec2.SubnetCidrBlockStateCodeAssociated { //we can only ever have 1 IPv6 block associated at once
+				oldAssociationID = aws.StringValue(v.AssociationId)
+				oldIPv6CIDRBlock = aws.StringValue(v.Ipv6CidrBlock)

-		if _, err := WaitSubnetEnableResourceNameDnsARecordOnLaunchUpdated(conn, d.Id(), d.Get("enable_resource_name_dns_a_record_on_launch").(bool)); err != nil {
-			return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableResourceNameDnsARecordOnLaunch update: %w", d.Id(), err)
+				break
+			}
+		}
+
+		if new := d.Get("ipv6_cidr_block").(string); oldIPv6CIDRBlock != new {
+			if err := modifySubnetIPv6CIDRBlockAssociation(conn, d.Id(), oldAssociationID, new); err != nil {
+				return err
+			}
 		}
 	}

-	if d.HasChange("map_public_ip_on_launch") {
-		input := &ec2.ModifySubnetAttributeInput{
-			MapPublicIpOnLaunch: &ec2.AttributeBooleanValue{
-				Value: aws.Bool(d.Get("map_public_ip_on_launch").(bool)),
-			},
-			SubnetId: aws.String(d.Id()),
+	// If we're enabling IPv6 assignment for new ENIs, do that after modifying the IPv6 CIDR block.
+	if new, old := d.Get("assign_ipv6_address_on_creation").(bool), aws.BoolValue(subnet.AssignIpv6AddressOnCreation); old != new && new {
+		if err := modifySubnetAssignIpv6AddressOnCreation(conn, d.Id(), true); err != nil {
+			return err
 		}
+	}

-		if _, err := conn.ModifySubnetAttribute(input); err != nil {
-			return fmt.Errorf("error setting EC2 Subnet (%s) MapPublicIpOnLaunch: %w", d.Id(), err)
+	if newCustomerOwnedIPOnLaunch, oldCustomerOwnedIPOnLaunch, newMapCustomerOwnedIPOnLaunch, oldMapCustomerOwnedIPOnLaunch :=
+		d.Get("customer_owned_ipv4_pool").(string), aws.StringValue(subnet.CustomerOwnedIpv4Pool), d.Get("map_customer_owned_ip_on_launch").(bool), aws.BoolValue(subnet.MapCustomerOwnedIpOnLaunch); oldCustomerOwnedIPOnLaunch != newCustomerOwnedIPOnLaunch || oldMapCustomerOwnedIPOnLaunch != newMapCustomerOwnedIPOnLaunch {
+		if err := modifySubnetOutpostRackAttributes(conn, d.Id(), newCustomerOwnedIPOnLaunch, newMapCustomerOwnedIPOnLaunch); err != nil {
+			return err
 		}
+	}

-		if _, err := WaitSubnetMapPublicIPOnLaunchUpdated(conn, d.Id(), d.Get("map_public_ip_on_launch").(bool)); err != nil {
-			return fmt.Errorf("error waiting for EC2 Subnet (%s) MapPublicIpOnLaunch update: %w", d.Id(), err)
+	// If we're enabling DNS64, do that after modifying the IPv6 CIDR block.
+	if new, old := d.Get("enable_dns64").(bool), aws.BoolValue(subnet.EnableDns64); old != new && new {
+		if err := modifySubnetEnableDns64(conn, d.Id(), true); err != nil {
+			return err
 		}
 	}

-	if d.HasChange("private_dns_hostname_type_on_launch") {
-		input := &ec2.ModifySubnetAttributeInput{
-			PrivateDnsHostnameTypeOnLaunch: aws.String(d.Get("private_dns_hostname_type_on_launch").(string)),
-			SubnetId: aws.String(d.Id()),
+	if subnet.PrivateDnsNameOptionsOnLaunch != nil {
+		if new, old := d.Get("enable_resource_name_dns_aaaa_record_on_launch").(bool), aws.BoolValue(subnet.PrivateDnsNameOptionsOnLaunch.EnableResourceNameDnsAAAARecord); old != new {
+			if err := modifySubnetEnableResourceNameDnsAAAARecordOnLaunch(conn, d.Id(), new); err != nil {
+				return err
+			}
 		}

-		if _, err := conn.ModifySubnetAttribute(input); err != nil {
-			return fmt.Errorf("error setting EC2 Subnet (%s) PrivateDnsHostnameTypeOnLaunch: %w", d.Id(), err)
+		if new, old := d.Get("enable_resource_name_dns_a_record_on_launch").(bool), aws.BoolValue(subnet.PrivateDnsNameOptionsOnLaunch.EnableResourceNameDnsARecord); old != new {
+			if err := modifySubnetEnableResourceNameDnsARecordOnLaunch(conn, d.Id(), new); err != nil {
+				return err
+			}
 		}

-		if _, err := WaitSubnetPrivateDNSHostnameTypeOnLaunchUpdated(conn, d.Id(), d.Get("private_dns_hostname_type_on_launch").(string)); err != nil {
-			return fmt.Errorf("error waiting for EC2 Subnet (%s) PrivateDnsHostnameTypeOnLaunch update: %w", d.Id(), err)
+		// private_dns_hostname_type_on_launch is Computed, so only modify if the new value is set.
+		if new, old := d.Get("private_dns_hostname_type_on_launch").(string), aws.StringValue(subnet.PrivateDnsNameOptionsOnLaunch.HostnameType); old != new && new != "" {
+			if err := modifySubnetPrivateDnsHostnameTypeOnLaunch(conn, d.Id(), new); err != nil {
+				return err
+			}
 		}
 	}

-	if d.HasChange("ipv6_cidr_block") {
-		// We need to handle that we disassociate the IPv6 CIDR block before we try to associate the new one
-		// This could be an issue as, we could error out when we try to add the new one
-		// We may need to roll back the state and reattach the old one if this is the case
-		if v, ok := d.GetOk("ipv6_cidr_block_association_id"); ok {
-			if !d.Get("assign_ipv6_address_on_creation").(bool) {
-				input := &ec2.ModifySubnetAttributeInput{
-					AssignIpv6AddressOnCreation: &ec2.AttributeBooleanValue{
-						Value: aws.Bool(false),
-					},
-					SubnetId: aws.String(d.Id()),
-				}
-
-				if _, err := conn.ModifySubnetAttribute(input); err != nil {
-					return fmt.Errorf("error setting EC2 Subnet (%s) AssignIpv6AddressOnCreation: %w", d.Id(), err)
-				}
-			}
+	if new, old := d.Get("map_public_ip_on_launch").(bool), aws.BoolValue(subnet.MapPublicIpOnLaunch); old != new {
+		if err := modifySubnetMapPublicIpOnLaunch(conn, d.Id(), new); err != nil {
+			return err
+		}
+	}

-			associationID := v.(string)
+	return nil
+}

-			//Firstly we have to disassociate the old IPv6 CIDR Block
-			input := &ec2.DisassociateSubnetCidrBlockInput{
-				AssociationId: aws.String(associationID),
-			}
+func modifySubnetAssignIpv6AddressOnCreation(conn *ec2.EC2, subnetID string, v bool) error {
+	input := &ec2.ModifySubnetAttributeInput{
+		AssignIpv6AddressOnCreation: &ec2.AttributeBooleanValue{
+			Value: aws.Bool(v),
+		},
+		SubnetId: aws.String(subnetID),
+	}

-			_, err := conn.DisassociateSubnetCidrBlock(input)
+	if _, err := conn.ModifySubnetAttribute(input); err != nil {
+		return fmt.Errorf("error setting EC2 Subnet (%s) AssignIpv6AddressOnCreation: %w", subnetID, err)
+	}

-			if err != nil {
-				return fmt.Errorf("error disassociating EC2 Subnet (%s) CIDR block (%s): %w", d.Id(), associationID, err)
-			}
+	if _, err := WaitSubnetAssignIpv6AddressOnCreationUpdated(conn, subnetID, v); err != nil {
+		return fmt.Errorf("error waiting for EC2 Subnet (%s) AssignIpv6AddressOnCreation update: %w", subnetID, err)
+	}

-			_, err = WaitSubnetIPv6CIDRBlockAssociationDeleted(conn, associationID)
+	return nil
+}

-			if err != nil {
-				return fmt.Errorf("error waiting for EC2 Subnet (%s) CIDR block (%s) to become disassociated: %w", d.Id(), associationID, err)
-			}
-		}
+func modifySubnetEnableDns64(conn *ec2.EC2, subnetID string, v bool) error {
+	input := &ec2.ModifySubnetAttributeInput{
+		EnableDns64: &ec2.AttributeBooleanValue{
+			Value: aws.Bool(v),
+		},
+		SubnetId: aws.String(subnetID),
+	}

-		if newIpv6 := d.Get("ipv6_cidr_block").(string); newIpv6 != "" {
-			//Now we need to try to associate the new CIDR block
-			input := &ec2.AssociateSubnetCidrBlockInput{
-				Ipv6CidrBlock: aws.String(newIpv6),
-				SubnetId:      aws.String(d.Id()),
-			}
+	if _, err := conn.ModifySubnetAttribute(input); err != nil {
+		return fmt.Errorf("error modifying EC2 Subnet (%s) EnableDns64: %w", subnetID, err)
+	}

-			output, err := conn.AssociateSubnetCidrBlock(input)
+	if _, err := WaitSubnetEnableDns64Updated(conn, subnetID, v); err != nil {
+		return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableDns64 update: %w", subnetID, err)
+	}

-			if err != nil {
-				//The big question here is, do we want to try to reassociate the old one??
-				//If we have a failure here, then we may be in a situation that we have nothing associated
-				return fmt.Errorf("error associating EC2 Subnet (%s) CIDR block (%s): %w", d.Id(), newIpv6, err)
-			}
+	return nil
+}

-			associationID := aws.StringValue(output.Ipv6CidrBlockAssociation.AssociationId)
+func modifySubnetEnableResourceNameDnsAAAARecordOnLaunch(conn *ec2.EC2, subnetID string, v bool) error {
+	input := &ec2.ModifySubnetAttributeInput{
+		EnableResourceNameDnsAAAARecordOnLaunch: &ec2.AttributeBooleanValue{
+			Value: aws.Bool(v),
+		},
+		SubnetId: aws.String(subnetID),
+	}

-			_, err = WaitSubnetIPv6CIDRBlockAssociationCreated(conn, associationID)
+	if _, err := conn.ModifySubnetAttribute(input); err != nil {
+		return fmt.Errorf("error modifying EC2 Subnet (%s) EnableResourceNameDnsAAAARecordOnLaunch: %w", subnetID, err)
+	}

-			if err != nil {
-				return fmt.Errorf("error waiting for EC2 Subnet (%s) CIDR block (%s) to become associated: %w", d.Id(), associationID, err)
-			}
+	if _, err := WaitSubnetEnableResourceNameDnsAAAARecordOnLaunchUpdated(conn, subnetID, v); err != nil {
+		return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableResourceNameDnsAAAARecordOnLaunch update: %w", subnetID, err)
+	}
+
+	return nil
+}
+
+func modifySubnetEnableResourceNameDnsARecordOnLaunch(conn *ec2.EC2, subnetID string, v bool) error {
+	input := &ec2.ModifySubnetAttributeInput{
+		EnableResourceNameDnsARecordOnLaunch: &ec2.AttributeBooleanValue{
+			Value: aws.Bool(v),
+		},
+		SubnetId: aws.String(subnetID),
+	}
+
+	if _, err := conn.ModifySubnetAttribute(input); err != nil {
+		return fmt.Errorf("error modifying EC2 Subnet (%s) EnableResourceNameDnsARecordOnLaunch: %w", subnetID, err)
+	}
+
+	if _, err := WaitSubnetEnableResourceNameDnsARecordOnLaunchUpdated(conn, subnetID, v); err != nil {
+		return fmt.Errorf("error waiting for EC2 Subnet (%s) EnableResourceNameDnsARecordOnLaunch update: %w", subnetID, err)
+	}
+
+	return nil
+}
+
+func modifySubnetIPv6CIDRBlockAssociation(conn *ec2.EC2, subnetID, associationID, cidrBlock string) error {
+	// We need to handle that we disassociate the IPv6 CIDR block before we try to associate the new one
+	// This could be an issue as, we could error out when we try to add the new one
+	// We may need to roll back the state and reattach the old one if this is the case
+	if associationID != "" {
+		input := &ec2.DisassociateSubnetCidrBlockInput{
+			AssociationId: aws.String(associationID),
+		}
+
+		_, err := conn.DisassociateSubnetCidrBlock(input)
+
+		if err != nil {
+			return fmt.Errorf("error disassociating EC2 Subnet (%s) IPv6 CIDR block (%s): %w", subnetID, associationID, err)
+		}
+
+		_, err = WaitSubnetIPv6CIDRBlockAssociationDeleted(conn, associationID)
+
+		if err != nil {
+			return fmt.Errorf("error waiting for EC2 Subnet (%s) IPv6 CIDR block (%s) to become disassociated: %w", subnetID, associationID, err)
 		}
 	}

-	if d.HasChange("assign_ipv6_address_on_creation") {
-		input := &ec2.ModifySubnetAttributeInput{
-			AssignIpv6AddressOnCreation: &ec2.AttributeBooleanValue{
-				Value: aws.Bool(d.Get("assign_ipv6_address_on_creation").(bool)),
-			},
-			SubnetId: aws.String(d.Id()),
+	if cidrBlock != "" {
+		input := &ec2.AssociateSubnetCidrBlockInput{
+			Ipv6CidrBlock: aws.String(cidrBlock),
+			SubnetId:      aws.String(subnetID),
 		}

-		if _, err := conn.ModifySubnetAttribute(input); err != nil {
-			return fmt.Errorf("error enabling EC2 Subnet (%s) AssignIpv6AddressOnCreation: %w", d.Id(), err)
+		output, err := conn.AssociateSubnetCidrBlock(input)
+
+		if err != nil {
+			//The big question here is, do we want to try to reassociate the old one??
+			//If we have a failure here, then we may be in a situation that we have nothing associated
+			return fmt.Errorf("error associating EC2 Subnet (%s) IPv6 CIDR block (%s): %w", subnetID, cidrBlock, err)
+		}
+
+		associationID := aws.StringValue(output.Ipv6CidrBlockAssociation.AssociationId)
+
+		_, err = WaitSubnetIPv6CIDRBlockAssociationCreated(conn, associationID)
+
+		if err != nil {
+			return fmt.Errorf("error waiting for EC2 Subnet (%s) IPv6 CIDR block (%s) to become associated: %w", subnetID, associationID, err)
 		}
 	}

-	return resourceSubnetRead(d, meta)
+	return nil
 }

-func resourceSubnetDelete(d *schema.ResourceData, meta interface{}) error {
-	conn := meta.(*conns.AWSClient).EC2Conn
+func modifySubnetMapPublicIpOnLaunch(conn *ec2.EC2, subnetID string, v bool) error {
+	input := &ec2.ModifySubnetAttributeInput{
+		MapPublicIpOnLaunch: &ec2.AttributeBooleanValue{
+			Value: aws.Bool(v),
+		},
+		SubnetId: aws.String(subnetID),
+	}

-	log.Printf("[INFO] Deleting EC2 Subnet: %s", d.Id())
+	if _, err := conn.ModifySubnetAttribute(input); err != nil {
+		return fmt.Errorf("error modifying EC2 Subnet (%s) MapPublicIpOnLaunch: %w", subnetID, err)
+	}

-	if err := deleteLingeringLambdaENIs(conn, "subnet-id", d.Id(), d.Timeout(schema.TimeoutDelete)); err != nil {
-		return fmt.Errorf("error deleting Lambda ENIs for EC2 Subnet (%s): %w", d.Id(), err)
+	if _, err := WaitSubnetMapPublicIPOnLaunchUpdated(conn, subnetID, v); err != nil {
+		return fmt.Errorf("error waiting for EC2 Subnet (%s) MapPublicIpOnLaunch update: %w", subnetID, err)
 	}

-	_, err := tfresource.RetryWhenAWSErrCodeEquals(d.Timeout(schema.TimeoutDelete), func() (interface{}, error) {
-		return conn.DeleteSubnet(&ec2.DeleteSubnetInput{
-			SubnetId: aws.String(d.Id()),
-		})
-	}, ErrCodeDependencyViolation)
+	return nil
+}

-	if tfawserr.ErrCodeEquals(err, ErrCodeInvalidSubnetIDNotFound) {
-		return nil
+func modifySubnetOutpostRackAttributes(conn *ec2.EC2, subnetID string, customerOwnedIPv4Pool string, mapCustomerOwnedIPOnLaunch bool) error {
+	input := &ec2.ModifySubnetAttributeInput{
+		MapCustomerOwnedIpOnLaunch: &ec2.AttributeBooleanValue{
+			Value: aws.Bool(mapCustomerOwnedIPOnLaunch),
+		},
+		SubnetId: aws.String(subnetID),
 	}

-	if err != nil {
-		return fmt.Errorf("error deleting EC2 Subnet (%s): %w", d.Id(), err)
+	if customerOwnedIPv4Pool != "" {
+		input.CustomerOwnedIpv4Pool = aws.String(customerOwnedIPv4Pool)
+	}
+
+	if _, err := conn.ModifySubnetAttribute(input); err != nil {
+		return fmt.Errorf("error modifying EC2 Subnet (%s) CustomerOwnedIpv4Pool/MapCustomerOwnedIpOnLaunch: %w", subnetID, err)
+	}
+
+	if _, err := WaitSubnetMapCustomerOwnedIPOnLaunchUpdated(conn, subnetID, mapCustomerOwnedIPOnLaunch); err != nil {
+		return fmt.Errorf("error waiting for EC2 Subnet (%s) MapCustomerOwnedIpOnLaunch update: %w", subnetID, err)
+	}
+
+	return nil
+}
+
+func modifySubnetPrivateDnsHostnameTypeOnLaunch(conn *ec2.EC2, subnetID string, v string) error {
+	input := &ec2.ModifySubnetAttributeInput{
+		PrivateDnsHostnameTypeOnLaunch: aws.String(v),
+		SubnetId:                       aws.String(subnetID),
+	}
+
+	if _, err := conn.ModifySubnetAttribute(input); err != nil {
+		return fmt.Errorf("error modifying EC2 Subnet (%s) PrivateDnsHostnameTypeOnLaunch: %w", subnetID, err)
+	}
+
+	if _, err := WaitSubnetPrivateDNSHostnameTypeOnLaunchUpdated(conn, subnetID, v); err != nil {
+		return fmt.Errorf("error waiting for EC2 Subnet (%s) PrivateDnsHostnameTypeOnLaunch update: %w", subnetID, err)
 	}

 	return nil
diff --git a/internal/service/ec2/subnets_data_source_test.go b/internal/service/ec2/subnets_data_source_test.go
index 6c030aba54c..c2f901cf743 100644
--- a/internal/service/ec2/subnets_data_source_test.go
+++ b/internal/service/ec2/subnets_data_source_test.go
@@ -26,7 +26,7 @@ func TestAccEC2SubnetsDataSource_basic(t *testing.T) {
 			Check: resource.ComposeTestCheckFunc(
 				resource.TestCheckResourceAttr("data.aws_subnets.selected", "ids.#", "4"),
 				resource.TestCheckResourceAttr("data.aws_subnets.private", "ids.#", "2"),
-				testCheckResourceAttrGreaterThanValue("data.aws_subnets.all", "ids.#", "0"),
+				acctest.CheckResourceAttrGreaterThanValue("data.aws_subnets.all", "ids.#", "0"),
 				resource.TestCheckResourceAttr("data.aws_subnets.none", "ids.#", "0"),
 			),
 		},
diff --git a/internal/service/ec2/sweep.go b/internal/service/ec2/sweep.go
index adf591dab62..ead759e8877 100644
--- a/internal/service/ec2/sweep.go
+++ b/internal/service/ec2/sweep.go
@@ -1497,37 +1497,27 @@ func sweepSpotFleetRequests(region string) error {
 func sweepSubnets(region string) error {
 	client, err := sweep.SharedRegionalSweepClient(region)
-
 	if err != nil {
 		return fmt.Errorf("error getting client: %w", err)
 	}
-
 	conn := client.(*conns.AWSClient).EC2Conn
-	sweepResources := make([]*sweep.SweepResource, 0)
-	var errs *multierror.Error
-
 	input := &ec2.DescribeSubnetsInput{}
+	sweepResources := make([]*sweep.SweepResource, 0)

 	err = conn.DescribeSubnetsPages(input, func(page *ec2.DescribeSubnetsOutput, lastPage bool) bool {
 		if page == nil {
 			return !lastPage
 		}

-		for _, subnet := range page.Subnets {
-			if subnet == nil {
-				continue
-			}
-
-			id := aws.StringValue(subnet.SubnetId)
-
-			if aws.BoolValue(subnet.DefaultForAz) {
-				log.Printf("[DEBUG] Skipping default EC2 Subnet: %s", id)
+		for _, v := range page.Subnets {
+			// Skip default subnets.
+			if aws.BoolValue(v.DefaultForAz) {
 				continue
 			}

 			r := ResourceSubnet()
 			d := r.Data(nil)
-			d.SetId(id)
+			d.SetId(aws.StringValue(v.SubnetId))

 			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
 		}

@@ -1535,20 +1525,22 @@ func sweepSubnets(region string) error {
 		return !lastPage
 	})

-	if err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("error describing EC2 Subnets for %s: %w", region, err))
+	if sweep.SkipSweepError(err) {
+		log.Printf("[WARN] Skipping EC2 Subnet sweep for %s: %s", region, err)
+		return nil
 	}

-	if err = sweep.SweepOrchestrator(sweepResources); err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("error sweeping EC2 Subnets for %s: %w", region, err))
+	if err != nil {
+		return fmt.Errorf("error listing EC2 Subnets (%s): %w", region, err)
 	}

-	if sweep.SkipSweepError(errs.ErrorOrNil()) {
-		log.Printf("[WARN] Skipping EC2 Subnet sweep for %s: %s", region, errs)
-		return nil
+	err = sweep.SweepOrchestrator(sweepResources)
+
+	if err != nil {
+		return fmt.Errorf("error sweeping EC2 Subnets (%s): %w", region, err)
 	}

-	return errs.ErrorOrNil()
+	return nil
 }

 func sweepTransitGatewayPeeringAttachments(region string) error {
@@ -2007,37 +1999,27 @@ func sweepVPCPeeringConnections(region string) error {
 func sweepVPCs(region string) error {
 	client, err := sweep.SharedRegionalSweepClient(region)
-
 	if err != nil {
 		return fmt.Errorf("error getting client: %s", err)
 	}
-
 	conn := client.(*conns.AWSClient).EC2Conn
-	sweepResources := make([]*sweep.SweepResource, 0)
-	var errs *multierror.Error
-
 	input := &ec2.DescribeVpcsInput{}
+	sweepResources := make([]*sweep.SweepResource, 0)

 	err = conn.DescribeVpcsPages(input, func(page *ec2.DescribeVpcsOutput, lastPage bool) bool {
 		if page == nil {
 			return !lastPage
 		}

-		for _, vpc := range page.Vpcs {
-			if vpc == nil {
-				continue
-			}
-
-			id := aws.StringValue(vpc.VpcId)
-
-			if aws.BoolValue(vpc.IsDefault) {
-				log.Printf("[DEBUG] Skipping default EC2 VPC: %s", id)
+		for _, v := range page.Vpcs {
+			// Skip default VPCs.
+			if aws.BoolValue(v.IsDefault) {
 				continue
 			}

 			r := ResourceVPC()
 			d := r.Data(nil)
-			d.SetId(id)
+			d.SetId(aws.StringValue(v.VpcId))

 			sweepResources = append(sweepResources, sweep.NewSweepResource(r, d, client))
 		}

@@ -2045,20 +2027,22 @@ func sweepVPCs(region string) error {
 		return !lastPage
 	})

-	if err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("error describing EC2 VPCs for %s: %w", region, err))
+	if sweep.SkipSweepError(err) {
+		log.Printf("[WARN] Skipping EC2 VPC sweep for %s: %s", region, err)
+		return nil
 	}

-	if err = sweep.SweepOrchestrator(sweepResources); err != nil {
-		errs = multierror.Append(errs, fmt.Errorf("error sweeping EC2 VPCs for %s: %w", region, err))
+	if err != nil {
+		return fmt.Errorf("error listing EC2 VPCs (%s): %w", region, err)
 	}

-	if sweep.SkipSweepError(errs.ErrorOrNil()) {
-		log.Printf("[WARN] Skipping EC2 VPCs sweep for %s: %s", region, errs)
-		return nil
+	err = sweep.SweepOrchestrator(sweepResources)
+
+	if err != nil {
+		return fmt.Errorf("error sweeping EC2 VPCs (%s): %w", region, err)
 	}

-	return errs.ErrorOrNil()
+	return nil
 }

 func sweepVPNConnections(region string) error {
diff --git a/internal/service/ec2/transit_gateway_data_source_test.go b/internal/service/ec2/transit_gateway_data_source_test.go
index 9b3d5c800aa..8cafb8d3e57 100644
--- a/internal/service/ec2/transit_gateway_data_source_test.go
+++ b/internal/service/ec2/transit_gateway_data_source_test.go
@@ -31,7 +31,9 @@ func TestAccEC2TransitGatewayDataSource_serial(t *testing.T) {
 		},
 		"RouteTables": {
 			"basic":  testAccTransitGatewayRouteTablesDataSource_basic,
-			"Filter": testAccTransitGatewayRouteTablesDataSource_Filter,
+			"Filter": testAccTransitGatewayRouteTablesDataSource_filter,
+			"Tags":   testAccTransitGatewayRouteTablesDataSource_tags,
+			"Empty":  testAccTransitGatewayRouteTablesDataSource_empty,
 		},
 		"VpcAttachment": {
 			"Filter": testAccTransitGatewayVPCAttachmentDataSource_Filter,
diff --git a/internal/service/ec2/transit_gateway_route_tables_data_source.go b/internal/service/ec2/transit_gateway_route_tables_data_source.go
index 294050898b8..be60da44063 100644
--- a/internal/service/ec2/transit_gateway_route_tables_data_source.go
+++ b/internal/service/ec2/transit_gateway_route_tables_data_source.go
@@ -15,15 +15,12 @@ func DataSourceTransitGatewayRouteTables() *schema.Resource {
 		Read: dataSourceTransitGatewayRouteTablesRead,

 		Schema: map[string]*schema.Schema{
-			"filter": CustomFiltersSchema(),
-
+			"filter": DataSourceFiltersSchema(),
 			"ids": {
-				Type:     schema.TypeSet,
+				Type:     schema.TypeList,
 				Computed: true,
 				Elem:     &schema.Schema{Type: schema.TypeString},
-				Set:      schema.HashString,
 			},
-
 			"tags": tftags.TagsSchemaComputed(),
 		},
 	}
@@ -38,50 +35,28 @@ func dataSourceTransitGatewayRouteTablesRead(d *schema.ResourceData, meta interf
 		Tags(tftags.New(d.Get("tags").(map[string]interface{}))),
 	)...)

-	input.Filters = append(input.Filters, BuildCustomFilterList(
+	input.Filters = append(input.Filters, BuildFiltersDataSource(
 		d.Get("filter").(*schema.Set),
 	)...)

 	if len(input.Filters) == 0 {
-		// Don't send an empty filters list; the EC2 API won't accept it.
 		input.Filters = nil
 	}

-	var transitGatewayRouteTables []*ec2.TransitGatewayRouteTable
-
-	err := conn.DescribeTransitGatewayRouteTablesPages(input, func(page *ec2.DescribeTransitGatewayRouteTablesOutput, lastPage bool) bool {
-		if page == nil {
-			return !lastPage
-		}
-
-		transitGatewayRouteTables = append(transitGatewayRouteTables, page.TransitGatewayRouteTables...)
-
-		return !lastPage
-	})
+	output, err := FindTransitGatewayRouteTables(conn, input)

 	if err != nil {
-		return fmt.Errorf("error describing EC2 Transit Gateway Route Tables: %w", err)
+		return fmt.Errorf("error reading EC2 Transit Gateway Route Tables: %w", err)
 	}

-	if len(transitGatewayRouteTables) == 0 {
-		return fmt.Errorf("no matching EC2 Transit Gateway Route Tables found")
-	}
-
-	var ids []string
-
-	for _, transitGatewayRouteTable := range transitGatewayRouteTables {
-		if transitGatewayRouteTable == nil {
-			continue
-		}
+	var routeTableIDs []string

-		ids = append(ids, aws.StringValue(transitGatewayRouteTable.TransitGatewayRouteTableId))
+	for _, v := range output {
+		routeTableIDs = append(routeTableIDs, aws.StringValue(v.TransitGatewayRouteTableId))
 	}

 	d.SetId(meta.(*conns.AWSClient).Region)
-
-	if err = d.Set("ids", ids); err != nil {
-		return fmt.Errorf("error setting ids: %w", err)
-	}
+	d.Set("ids", routeTableIDs)

 	return nil
 }
diff --git a/internal/service/ec2/transit_gateway_route_tables_data_source_test.go b/internal/service/ec2/transit_gateway_route_tables_data_source_test.go
index 4cd8ab8a19b..3e3bdfff874 100644
--- a/internal/service/ec2/transit_gateway_route_tables_data_source_test.go
+++ b/internal/service/ec2/transit_gateway_route_tables_data_source_test.go
@@ -1,14 +1,17 @@
 package ec2_test

 import (
+	"fmt"
 	"testing"

 	"github.com/aws/aws-sdk-go/service/ec2"
+	sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
 	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
 	"github.com/hashicorp/terraform-provider-aws/internal/acctest"
 )

 func testAccTransitGatewayRouteTablesDataSource_basic(t *testing.T) {
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	dataSourceName := "data.aws_ec2_transit_gateway_route_tables.test"

 	resource.Test(t, resource.TestCase{
@@ -17,16 +20,17 @@ func testAccTransitGatewayRouteTablesDataSource_basic(t *testing.T) {
 		Providers:  acctest.Providers,
 		Steps: []resource.TestStep{
 			{
-				Config: testAccTransitGatewayRouteTablesDataSourceConfig,
+				Config: testAccTransitGatewayRouteTablesDataSourceConfig(rName),
 				Check: resource.ComposeTestCheckFunc(
-					testCheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"),
+					acctest.CheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"),
 				),
 			},
 		},
 	})
 }

-func testAccTransitGatewayRouteTablesDataSource_Filter(t *testing.T) {
+func testAccTransitGatewayRouteTablesDataSource_filter(t *testing.T) {
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
 	dataSourceName := "data.aws_ec2_transit_gateway_route_tables.test"

 	resource.Test(t, resource.TestCase{
@@ -35,32 +39,89 @@ func testAccTransitGatewayRouteTablesDataSource_Filter(t *testing.T) {
 		Providers:  acctest.Providers,
 		Steps: []resource.TestStep{
 			{
-				Config: testAccTransitGatewayRouteTablesTransitGatewayFilterDataSource,
+				Config: testAccTransitGatewayRouteTablesTransitGatewayFilterDataSource(rName),
 				Check: resource.ComposeTestCheckFunc(
-					testCheckResourceAttrGreaterThanValue(dataSourceName, "ids.#", "0"),
+					resource.TestCheckResourceAttr(dataSourceName, "ids.#", "2"),
 				),
 			},
 		},
 	})
 }

-const testAccTransitGatewayRouteTablesDataSourceConfig = `
-resource "aws_ec2_transit_gateway" "test" {}
+func testAccTransitGatewayRouteTablesDataSource_tags(t *testing.T) {
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+	dataSourceName := "data.aws_ec2_transit_gateway_route_tables.test"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:   func() { acctest.PreCheck(t); testAccPreCheckTransitGateway(t) },
+		ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID),
+		Providers:  acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccTransitGatewayRouteTablesTransitGatewayTagsDataSource(rName),
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr(dataSourceName, "ids.#", "1"),
+				),
+			},
+		},
+	})
+}
+
+func testAccTransitGatewayRouteTablesDataSource_empty(t *testing.T) {
+	rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
+	dataSourceName := "data.aws_ec2_transit_gateway_route_tables.test"
+
+	resource.Test(t, resource.TestCase{
+		PreCheck:   func() { acctest.PreCheck(t); testAccPreCheckTransitGateway(t) },
+		ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID),
+		Providers:  acctest.Providers,
+		Steps: []resource.TestStep{
+			{
+				Config: testAccTransitGatewayRouteTablesTransitGatewayEmptyDataSource(rName),
+				Check: resource.ComposeTestCheckFunc(
+					resource.TestCheckResourceAttr(dataSourceName, "ids.#", "0"),
+				),
+			},
+		},
+	})
+}
+
+func testAccTransitGatewayRouteTablesDataSourceConfig(rName string) string {
+	return fmt.Sprintf(`
+resource "aws_ec2_transit_gateway" "test" {
+  tags = {
+    Name = %[1]q
+  }
+}

 resource "aws_ec2_transit_gateway_route_table" "test" {
   transit_gateway_id = aws_ec2_transit_gateway.test.id
+
+  tags = {
+    Name = %[1]q
+  }
 }

 data "aws_ec2_transit_gateway_route_tables" "test" {
   depends_on = [aws_ec2_transit_gateway_route_table.test]
 }
-`
+`, rName)
+}

-const testAccTransitGatewayRouteTablesTransitGatewayFilterDataSource = `
-resource "aws_ec2_transit_gateway" "test" {}
+func testAccTransitGatewayRouteTablesTransitGatewayFilterDataSource(rName string) string {
+	return fmt.Sprintf(`
+resource "aws_ec2_transit_gateway" "test" {
+  tags = {
+    Name = %[1]q
+  }
+}

 resource "aws_ec2_transit_gateway_route_table" "test" {
   transit_gateway_id = aws_ec2_transit_gateway.test.id
+
+  tags = {
+    Name = %[1]q
+  }
 }

 data "aws_ec2_transit_gateway_route_tables" "test" {
@@ -71,4 +132,41 @@ data "aws_ec2_transit_gateway_route_tables" "test" {
   depends_on = [aws_ec2_transit_gateway_route_table.test]
 }
-`
+`, rName)
+}
+
+func testAccTransitGatewayRouteTablesTransitGatewayTagsDataSource(rName string) string {
+	return fmt.Sprintf(`
+resource "aws_ec2_transit_gateway" "test" {
+  tags = {
+    Name = %[1]q
+  }
+}
+
+resource "aws_ec2_transit_gateway_route_table" "test" {
+  transit_gateway_id = aws_ec2_transit_gateway.test.id
+
+  tags = {
+    Name = %[1]q
+  }
+}
+
+data "aws_ec2_transit_gateway_route_tables" "test" {
+  tags = {
+    Name = %[1]q
+  }
+
+  depends_on = [aws_ec2_transit_gateway_route_table.test]
+}
+`, rName)
+}
+
+func testAccTransitGatewayRouteTablesTransitGatewayEmptyDataSource(rName string) string {
+	return fmt.Sprintf(`
+data "aws_ec2_transit_gateway_route_tables" "test" {
+  tags = {
+    Name = %[1]q
+  }
+}
+`, rName)
+}
diff --git a/internal/service/ec2/vpc.go b/internal/service/ec2/vpc.go
index 7a2a5255101..ce91cf44317 100644
--- a/internal/service/ec2/vpc.go
+++ b/internal/service/ec2/vpc.go
@@ -45,6 +45,8 @@ func ResourceVPC() *schema.Resource {
 		SchemaVersion: 1,
 		MigrateState:  VPCMigrateState,

+		// Keep in sync with aws_default_vpc's schema.
+		// See notes in default_vpc.go.
 		Schema: map[string]*schema.Schema{
 			"arn": {
 				Type:     schema.TypeString,
@@ -341,25 +343,8 @@ func resourceVPCRead(d *schema.ResourceData, meta interface{}) error {
 		d.Set("ipv6_ipam_pool_id", nil)
 		d.Set("ipv6_netmask_length", nil)

-	// Try and find IPv6 CIDR block information, first by any stored association ID.
-	// Then if no IPv6 CIDR block information is available, use the first associated IPv6 CIDR block.
- var ipv6CIDRBlockAssociation *ec2.VpcIpv6CidrBlockAssociation - if associationID := d.Get("ipv6_association_id").(string); associationID != "" { - for _, v := range vpc.Ipv6CidrBlockAssociationSet { - if state := aws.StringValue(v.Ipv6CidrBlockState.State); state == ec2.VpcCidrBlockStateCodeAssociated && aws.StringValue(v.AssociationId) == associationID { - ipv6CIDRBlockAssociation = v + ipv6CIDRBlockAssociation := defaultIPv6CIDRBlockAssociation(vpc, d.Get("ipv6_association_id").(string)) - break - } - } - } - if ipv6CIDRBlockAssociation == nil { - for _, v := range vpc.Ipv6CidrBlockAssociationSet { - if aws.StringValue(v.Ipv6CidrBlockState.State) == ec2.VpcCidrBlockStateCodeAssociated { - ipv6CIDRBlockAssociation = v - } - } - } if ipv6CIDRBlockAssociation == nil { d.Set("ipv6_association_id", nil) } else { @@ -525,6 +510,33 @@ func resourceVPCCustomizeDiff(_ context.Context, diff *schema.ResourceDiff, v in return nil } +// defaultIPv6CIDRBlockAssociation returns the "default" IPv6 CIDR block association for the specified VPC. +// It first tries to find an associated IPv6 CIDR block matching the stored association ID. +// If no match is found, it falls back to an associated IPv6 CIDR block from the association set.
+func defaultIPv6CIDRBlockAssociation(vpc *ec2.Vpc, associationID string) *ec2.VpcIpv6CidrBlockAssociation { + var ipv6CIDRBlockAssociation *ec2.VpcIpv6CidrBlockAssociation + + if associationID != "" { + for _, v := range vpc.Ipv6CidrBlockAssociationSet { + if state := aws.StringValue(v.Ipv6CidrBlockState.State); state == ec2.VpcCidrBlockStateCodeAssociated && aws.StringValue(v.AssociationId) == associationID { + ipv6CIDRBlockAssociation = v + + break + } + } + } + + if ipv6CIDRBlockAssociation == nil { + for _, v := range vpc.Ipv6CidrBlockAssociationSet { + if aws.StringValue(v.Ipv6CidrBlockState.State) == ec2.VpcCidrBlockStateCodeAssociated { + ipv6CIDRBlockAssociation = v + } + } + } + + return ipv6CIDRBlockAssociation +} + type vpcInfo struct { vpc *ec2.Vpc enableClassicLink bool @@ -656,7 +668,7 @@ func modifyVPCIPv6CIDRBlockAssociation(conn *ec2.EC2, vpcID, associationID strin _, err := conn.DisassociateVpcCidrBlock(input) if err != nil { - return "", fmt.Errorf("error disassociating EC2 VPC (%s) CIDR block (%s): %w", vpcID, associationID, err) + return "", fmt.Errorf("error disassociating EC2 VPC (%s) IPv6 CIDR block (%s): %w", vpcID, associationID, err) } _, err = WaitVPCIPv6CIDRBlockAssociationDeleted(conn, associationID, vpcIPv6CIDRBlockAssociationDeletedTimeout) diff --git a/internal/service/ec2/vpc_peering_connections_data_source.go b/internal/service/ec2/vpc_peering_connections_data_source.go index 948a0b296de..70ac96734ef 100644 --- a/internal/service/ec2/vpc_peering_connections_data_source.go +++ b/internal/service/ec2/vpc_peering_connections_data_source.go @@ -47,8 +47,8 @@ func dataSourceVPCPeeringConnectionsRead(d *schema.ResourceData, meta interface{ if err != nil { return err } - if resp == nil || len(resp.VpcPeeringConnections) == 0 { - return fmt.Errorf("no matching VPC peering connections found") + if resp == nil { + return fmt.Errorf("error reading EC2 VPC Peering Connections: empty response") } var ids []string diff --git 
a/internal/service/ec2/vpc_peering_connections_data_source_test.go b/internal/service/ec2/vpc_peering_connections_data_source_test.go index c61efa8b67c..67454597835 100644 --- a/internal/service/ec2/vpc_peering_connections_data_source_test.go +++ b/internal/service/ec2/vpc_peering_connections_data_source_test.go @@ -24,6 +24,22 @@ func TestAccEC2VPCPeeringConnectionsDataSource_basic(t *testing.T) { }) } +func TestAccEC2VPCPeeringConnectionsDataSource_NoMatches(t *testing.T) { + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + Steps: []resource.TestStep{ + { + Config: testAccVPCPeeringConnectionsDataSourceConfig_NoMatches, + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_vpc_peering_connections.test", "ids.#", "0"), + ), + }, + }, + }) +} + const testAccVPCPeeringConnectionsDataSourceConfig = ` resource "aws_vpc" "foo" { cidr_block = "10.1.0.0/16" @@ -81,3 +97,11 @@ data "aws_vpc_peering_connections" "test_by_filters" { } } ` + +const testAccVPCPeeringConnectionsDataSourceConfig_NoMatches = ` +data "aws_vpc_peering_connections" "test" { + tags = { + Name = "Non-Existent" + } +} +` diff --git a/internal/service/ec2/vpcs_data_source.go b/internal/service/ec2/vpcs_data_source.go index b9aaf5df3e9..6f00ce37845 100644 --- a/internal/service/ec2/vpcs_data_source.go +++ b/internal/service/ec2/vpcs_data_source.go @@ -2,7 +2,6 @@ package ec2 import ( "fmt" - "log" "github.com/aws/aws-sdk-go/aws" "github.com/aws/aws-sdk-go/service/ec2" @@ -15,16 +14,13 @@ func DataSourceVPCs() *schema.Resource { return &schema.Resource{ Read: dataSourceVPCsRead, Schema: map[string]*schema.Schema{ - "filter": CustomFiltersSchema(), - - "tags": tftags.TagsSchemaComputed(), - + "filter": DataSourceFiltersSchema(), "ids": { - Type: schema.TypeSet, + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: 
schema.TypeString}, - Set: schema.HashString, }, + "tags": tftags.TagsSchemaComputed(), }, } } @@ -32,45 +28,37 @@ func DataSourceVPCs() *schema.Resource { func dataSourceVPCsRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).EC2Conn - req := &ec2.DescribeVpcsInput{} + input := &ec2.DescribeVpcsInput{} if tags, tagsOk := d.GetOk("tags"); tagsOk { - req.Filters = append(req.Filters, BuildTagFilterList( + input.Filters = append(input.Filters, BuildTagFilterList( Tags(tftags.New(tags.(map[string]interface{}))), )...) } if filters, filtersOk := d.GetOk("filter"); filtersOk { - req.Filters = append(req.Filters, BuildCustomFilterList( - filters.(*schema.Set), - )...) - } - if len(req.Filters) == 0 { - // Don't send an empty filters list; the EC2 API won't accept it. - req.Filters = nil + input.Filters = append(input.Filters, + BuildFiltersDataSource(filters.(*schema.Set))...) } - log.Printf("[DEBUG] DescribeVpcs %s\n", req) - resp, err := conn.DescribeVpcs(req) - if err != nil { - return err + if len(input.Filters) == 0 { + input.Filters = nil } - if resp == nil || len(resp.Vpcs) == 0 { - return fmt.Errorf("no matching VPC found") + output, err := FindVPCs(conn, input) + + if err != nil { + return fmt.Errorf("error reading EC2 VPCs: %w", err) } - vpcs := make([]string, 0) + var vpcIDs []string - for _, vpc := range resp.Vpcs { - vpcs = append(vpcs, aws.StringValue(vpc.VpcId)) + for _, v := range output { + vpcIDs = append(vpcIDs, aws.StringValue(v.VpcId)) } d.SetId(meta.(*conns.AWSClient).Region) - - if err := d.Set("ids", vpcs); err != nil { - return fmt.Errorf("error setting vpc ids: %w", err) - } + d.Set("ids", vpcIDs) return nil } diff --git a/internal/service/ec2/vpcs_data_source_test.go b/internal/service/ec2/vpcs_data_source_test.go index 465acf80e85..b618c63e87b 100644 --- a/internal/service/ec2/vpcs_data_source_test.go +++ b/internal/service/ec2/vpcs_data_source_test.go @@ -7,20 +7,21 @@ import ( 
"github.com/aws/aws-sdk-go/service/ec2" sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" ) func TestAccEC2VPCsDataSource_basic(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), Providers: acctest.Providers, Steps: []resource.TestStep{ { - Config: testAccVPCsDataSourceConfig(), + Config: testAccVPCsDataSourceConfig(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckVPCsExistsDataSource("data.aws_vpcs.all"), + acctest.CheckResourceAttrGreaterThanValue("data.aws_vpcs.test", "ids.#", "0"), ), }, }, @@ -28,7 +29,8 @@ func TestAccEC2VPCsDataSource_basic(t *testing.T) { } func TestAccEC2VPCsDataSource_tags(t *testing.T) { - rName := sdkacctest.RandString(5) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), @@ -37,8 +39,7 @@ func TestAccEC2VPCsDataSource_tags(t *testing.T) { { Config: testAccVPCsDataSourceConfig_tags(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckVPCsExistsDataSource("data.aws_vpcs.selected"), - resource.TestCheckResourceAttr("data.aws_vpcs.selected", "ids.#", "1"), + resource.TestCheckResourceAttr("data.aws_vpcs.test", "ids.#", "1"), ), }, }, @@ -46,7 +47,8 @@ func TestAccEC2VPCsDataSource_tags(t *testing.T) { } func TestAccEC2VPCsDataSource_filters(t *testing.T) { - rName := sdkacctest.RandString(5) + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), @@ -55,102 
+57,89 @@ func TestAccEC2VPCsDataSource_filters(t *testing.T) { { Config: testAccVPCsDataSourceConfig_filters(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckVPCsExistsDataSource("data.aws_vpcs.selected"), - testCheckResourceAttrGreaterThanValue("data.aws_vpcs.selected", "ids.#", "0"), + acctest.CheckResourceAttrGreaterThanValue("data.aws_vpcs.test", "ids.#", "0"), ), }, }, }) } -func testCheckResourceAttrGreaterThanValue(name, key, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - ms := s.RootModule() - rs, ok := ms.Resources[name] - if !ok { - return fmt.Errorf("Not found: %s in %s", name, ms.Path) - } - - is := rs.Primary - if is == nil { - return fmt.Errorf("No primary instance: %s in %s", name, ms.Path) - } - - if v, ok := is.Attributes[key]; !ok || !(v > value) { - if !ok { - return fmt.Errorf("%s: Attribute '%s' not found", name, key) - } - - return fmt.Errorf( - "%s: Attribute '%s' is not greater than %#v, got %#v", - name, - key, - value, - v) - } - return nil - - } -} +func TestAccEC2VPCsDataSource_empty(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) -func testAccCheckVPCsExistsDataSource(n string) resource.TestCheckFunc { - return func(s *terraform.State) error { - rs, ok := s.RootModule().Resources[n] - if !ok { - return fmt.Errorf("Can't find aws_vpcs data source: %s", n) - } - - if rs.Primary.ID == "" { - return fmt.Errorf("aws_vpcs data source ID not set") - } - return nil - } + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, ec2.EndpointsID), + Providers: acctest.Providers, + Steps: []resource.TestStep{ + { + Config: testAccVPCsDataSourceConfig_empty(rName), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr("data.aws_vpcs.test", "ids.#", "0"), + ), + }, + }, + }) } -func testAccVPCsDataSourceConfig() string { - return ` -resource "aws_vpc" "test-vpc" { +func 
testAccVPCsDataSourceConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_vpc" "test" { cidr_block = "10.0.0.0/24" + + tags = { + Name = %[1]q + } } -data "aws_vpcs" "all" {} -` +data "aws_vpcs" "test" {} +`, rName) } func testAccVPCsDataSourceConfig_tags(rName string) string { return fmt.Sprintf(` -resource "aws_vpc" "test-vpc" { +resource "aws_vpc" "test" { cidr_block = "10.0.0.0/24" tags = { - Name = "testacc-vpc-%s" + Name = %[1]q Service = "testacc-test" } } -data "aws_vpcs" "selected" { +data "aws_vpcs" "test" { tags = { - Name = "testacc-vpc-%s" - Service = aws_vpc.test-vpc.tags["Service"] + Name = %[1]q + Service = aws_vpc.test.tags["Service"] } } -`, rName, rName) +`, rName) } func testAccVPCsDataSourceConfig_filters(rName string) string { return fmt.Sprintf(` -resource "aws_vpc" "test-vpc" { +resource "aws_vpc" "test" { cidr_block = "192.168.0.0/25" tags = { - Name = "testacc-vpc-%s" + Name = %[1]q } } -data "aws_vpcs" "selected" { +data "aws_vpcs" "test" { filter { name = "cidr" - values = [aws_vpc.test-vpc.cidr_block] + values = [aws_vpc.test.cidr_block] + } +} +`, rName) +} + +func testAccVPCsDataSourceConfig_empty(rName string) string { + return fmt.Sprintf(` +data "aws_vpcs" "test" { + tags = { + Name = %[1]q } } `, rName) diff --git a/internal/service/ec2/vpn_connection.go b/internal/service/ec2/vpn_connection.go index abce1fb2765..ceaf071d9bb 100644 --- a/internal/service/ec2/vpn_connection.go +++ b/internal/service/ec2/vpn_connection.go @@ -7,6 +7,7 @@ import ( "net" "regexp" "sort" + "strconv" "time" "github.com/aws/aws-sdk-go/aws" @@ -37,8 +38,9 @@ func ResourceVPNConnection() *schema.Resource { Computed: true, }, "customer_gateway_configuration": { - Type: schema.TypeString, - Computed: true, + Type: schema.TypeString, + Sensitive: true, + Computed: true, }, "customer_gateway_id": { Type: schema.TypeString, @@ -55,26 +57,26 @@ func ResourceVPNConnection() *schema.Resource { Type: schema.TypeString, Optional: true, Computed: true, 
- ValidateFunc: validateLocalIpv4NetworkCidr(), + ValidateFunc: validation.IsCIDRNetwork(0, 32), }, "local_ipv6_network_cidr": { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: validateLocalIpv6NetworkCidr(), + ValidateFunc: validation.IsCIDRNetwork(0, 128), RequiredWith: []string{"transit_gateway_id"}, }, "remote_ipv4_network_cidr": { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: validateLocalIpv4NetworkCidr(), + ValidateFunc: validation.IsCIDRNetwork(0, 32), }, "remote_ipv6_network_cidr": { Type: schema.TypeString, Optional: true, Computed: true, - ValidateFunc: validateLocalIpv6NetworkCidr(), + ValidateFunc: validation.IsCIDRNetwork(0, 128), RequiredWith: []string{"transit_gateway_id"}, }, "routes": { @@ -140,17 +142,32 @@ func ResourceVPNConnection() *schema.Resource { "tunnel1_dpd_timeout_action": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateVpnConnectionTunnelDpdTimeoutAction(), + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsDPDTimeoutAction_Values(), false), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == defaultVpnTunnelOptionsDPDTimeoutAction && new == "" { + return true + } + return false + }, }, "tunnel1_dpd_timeout_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelDpdTimeoutSeconds(), + ValidateFunc: validation.IntAtLeast(30), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsDPDTimeoutSeconds) && new == "0" { + return true + } + return false + }, }, "tunnel1_ike_versions": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsIKEVersion_Values(), false), + }, }, "tunnel1_inside_cidr": { Type: schema.TypeString, @@ -175,17 +192,29 @@ func ResourceVPNConnection() 
*schema.Resource { "tunnel1_phase1_encryption_algorithms": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsPhase1EncryptionAlgorithm_Values(), false), + }, }, "tunnel1_phase1_integrity_algorithms": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsPhase1IntegrityAlgorithm_Values(), false), + }, }, "tunnel1_phase1_lifetime_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelPhase1LifetimeSeconds(), + ValidateFunc: validation.IntBetween(900, 28800), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsPhase1LifetimeSeconds) && new == "0" { + return true + } + return false + }, }, "tunnel1_phase2_dh_group_numbers": { Type: schema.TypeSet, @@ -195,17 +224,29 @@ func ResourceVPNConnection() *schema.Resource { "tunnel1_phase2_encryption_algorithms": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsPhase2EncryptionAlgorithm_Values(), false), + }, }, "tunnel1_phase2_integrity_algorithms": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsPhase2IntegrityAlgorithm_Values(), false), + }, }, "tunnel1_phase2_lifetime_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelPhase2LifetimeSeconds(), + ValidateFunc: validation.IntBetween(900, 3600), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == 
strconv.Itoa(defaultVpnTunnelOptionsPhase2LifetimeSeconds) && new == "0" { + return true + } + return false + }, }, "tunnel1_preshared_key": { Type: schema.TypeString, @@ -217,22 +258,46 @@ func ResourceVPNConnection() *schema.Resource { "tunnel1_rekey_fuzz_percentage": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelRekeyFuzzPercentage(), + ValidateFunc: validation.IntBetween(0, 100), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsRekeyFuzzPercentage) && new == "0" { + return true + } + return false + }, }, "tunnel1_rekey_margin_time_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelRekeyMarginTimeSeconds(), + ValidateFunc: validation.IntBetween(60, 1800), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsRekeyMarginTimeSeconds) && new == "0" { + return true + } + return false + }, }, "tunnel1_replay_window_size": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelReplayWindowSize(), + ValidateFunc: validation.IntBetween(64, 2048), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsReplayWindowSize) && new == "0" { + return true + } + return false + }, }, "tunnel1_startup_action": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateVpnConnectionTunnelStartupAction(), + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsStartupAction_Values(), false), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == defaultVpnTunnelOptionsStartupAction && new == "" { + return true + } + return false + }, }, "tunnel1_vgw_inside_address": { Type: schema.TypeString, @@ -257,17 +322,32 @@ func ResourceVPNConnection() *schema.Resource { "tunnel2_dpd_timeout_action": { Type: schema.TypeString, Optional: 
true, - ValidateFunc: validateVpnConnectionTunnelDpdTimeoutAction(), + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsDPDTimeoutAction_Values(), false), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == defaultVpnTunnelOptionsDPDTimeoutAction && new == "" { + return true + } + return false + }, }, "tunnel2_dpd_timeout_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelDpdTimeoutSeconds(), + ValidateFunc: validation.IntAtLeast(30), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsDPDTimeoutSeconds) && new == "0" { + return true + } + return false + }, }, "tunnel2_ike_versions": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsIKEVersion_Values(), false), + }, }, "tunnel2_inside_cidr": { Type: schema.TypeString, @@ -292,17 +372,29 @@ func ResourceVPNConnection() *schema.Resource { "tunnel2_phase1_encryption_algorithms": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsPhase1EncryptionAlgorithm_Values(), false), + }, }, "tunnel2_phase1_integrity_algorithms": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsPhase1IntegrityAlgorithm_Values(), false), + }, }, "tunnel2_phase1_lifetime_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelPhase1LifetimeSeconds(), + ValidateFunc: validation.IntBetween(900, 28800), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == 
strconv.Itoa(defaultVpnTunnelOptionsPhase1LifetimeSeconds) && new == "0" { + return true + } + return false + }, }, "tunnel2_phase2_dh_group_numbers": { Type: schema.TypeSet, @@ -312,17 +404,29 @@ func ResourceVPNConnection() *schema.Resource { "tunnel2_phase2_encryption_algorithms": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsPhase2EncryptionAlgorithm_Values(), false), + }, }, "tunnel2_phase2_integrity_algorithms": { Type: schema.TypeSet, Optional: true, - Elem: &schema.Schema{Type: schema.TypeString}, + Elem: &schema.Schema{ + Type: schema.TypeString, + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsPhase2IntegrityAlgorithm_Values(), false), + }, }, "tunnel2_phase2_lifetime_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelPhase2LifetimeSeconds(), + ValidateFunc: validation.IntBetween(900, 3600), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsPhase2LifetimeSeconds) && new == "0" { + return true + } + return false + }, }, "tunnel2_preshared_key": { Type: schema.TypeString, @@ -334,22 +438,46 @@ func ResourceVPNConnection() *schema.Resource { "tunnel2_rekey_fuzz_percentage": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelRekeyFuzzPercentage(), + ValidateFunc: validation.IntBetween(0, 100), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsRekeyFuzzPercentage) && new == "0" { + return true + } + return false + }, }, "tunnel2_rekey_margin_time_seconds": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelRekeyMarginTimeSeconds(), + ValidateFunc: validation.IntBetween(60, 1800), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { 
+ if old == strconv.Itoa(defaultVpnTunnelOptionsRekeyMarginTimeSeconds) && new == "0" { + return true + } + return false + }, }, "tunnel2_replay_window_size": { Type: schema.TypeInt, Optional: true, - ValidateFunc: validateVpnConnectionTunnelReplayWindowSize(), + ValidateFunc: validation.IntBetween(64, 2048), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == strconv.Itoa(defaultVpnTunnelOptionsReplayWindowSize) && new == "0" { + return true + } + return false + }, }, "tunnel2_startup_action": { Type: schema.TypeString, Optional: true, - ValidateFunc: validateVpnConnectionTunnelStartupAction(), + ValidateFunc: validation.StringInSlice(VpnTunnelOptionsStartupAction_Values(), false), + DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool { + if old == defaultVpnTunnelOptionsStartupAction && new == "" { + return true + } + return false + }, }, "tunnel2_vgw_inside_address": { Type: schema.TypeString, @@ -404,6 +532,45 @@ func ResourceVPNConnection() *schema.Resource { } } +// https://docs.aws.amazon.com/vpn/latest/s2svpn/VPNTunnels.html. 
+var ( + defaultVpnTunnelOptionsDPDTimeoutAction = VpnTunnelOptionsDPDTimeoutActionClear + defaultVpnTunnelOptionsDPDTimeoutSeconds = 30 + defaultVpnTunnelOptionsIKEVersions = []string{VpnTunnelOptionsIKEVersion1, VpnTunnelOptionsIKEVersion2} + defaultVpnTunnelOptionsPhase1DHGroupNumbers = []int{2, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24} + defaultVpnTunnelOptionsPhase1EncryptionAlgorithms = []string{ + VpnTunnelOptionsPhase1EncryptionAlgorithmAES128, + VpnTunnelOptionsPhase1EncryptionAlgorithmAES256, + VpnTunnelOptionsPhase1EncryptionAlgorithmAES128_GCM_16, + VpnTunnelOptionsPhase1EncryptionAlgorithmAES256_GCM_16, + } + defaultVpnTunnelOptionsPhase1IntegrityAlgorithms = []string{ + VpnTunnelOptionsPhase1IntegrityAlgorithmSHA1, + VpnTunnelOptionsPhase1IntegrityAlgorithmSHA2_256, + VpnTunnelOptionsPhase1IntegrityAlgorithmSHA2_384, + VpnTunnelOptionsPhase1IntegrityAlgorithmSHA2_512, + } + defaultVpnTunnelOptionsPhase1LifetimeSeconds = 28800 + defaultVpnTunnelOptionsPhase2DHGroupNumbers = []int{2, 5, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24} + defaultVpnTunnelOptionsPhase2EncryptionAlgorithms = []string{ + VpnTunnelOptionsPhase2EncryptionAlgorithmAES128, + VpnTunnelOptionsPhase2EncryptionAlgorithmAES256, + VpnTunnelOptionsPhase2EncryptionAlgorithmAES128_GCM_16, + VpnTunnelOptionsPhase2EncryptionAlgorithmAES256_GCM_16, + } + defaultVpnTunnelOptionsPhase2IntegrityAlgorithms = []string{ + VpnTunnelOptionsPhase2IntegrityAlgorithmSHA1, + VpnTunnelOptionsPhase2IntegrityAlgorithmSHA2_256, + VpnTunnelOptionsPhase2IntegrityAlgorithmSHA2_384, + VpnTunnelOptionsPhase2IntegrityAlgorithmSHA2_512, + } + defaultVpnTunnelOptionsPhase2LifetimeSeconds = 3600 + defaultVpnTunnelOptionsRekeyFuzzPercentage = 100 + defaultVpnTunnelOptionsRekeyMarginTimeSeconds = 540 + defaultVpnTunnelOptionsReplayWindowSize = 1024 + defaultVpnTunnelOptionsStartupAction = VpnTunnelOptionsStartupActionAdd +) + func resourceVPNConnectionCreate(d *schema.ResourceData, meta interface{}) error { conn := 
meta.(*conns.AWSClient).EC2Conn defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig @@ -836,81 +1003,139 @@ func expandModifyVpnTunnelOptionsSpecification(d *schema.ResourceData, prefix st hasChange := false if key := prefix + "dpd_timeout_action"; d.HasChange(key) { - apiObject.DPDTimeoutAction = aws.String(d.Get(key).(string)) + if v, ok := d.GetOk(key); ok { + apiObject.DPDTimeoutAction = aws.String(v.(string)) + } else { + apiObject.DPDTimeoutAction = aws.String(defaultVpnTunnelOptionsDPDTimeoutAction) + } hasChange = true } if key := prefix + "dpd_timeout_seconds"; d.HasChange(key) { - apiObject.DPDTimeoutSeconds = aws.Int64(int64(d.Get(key).(int))) + if v, ok := d.GetOk(key); ok { + apiObject.DPDTimeoutSeconds = aws.Int64(int64(v.(int))) + } else { + apiObject.DPDTimeoutSeconds = aws.Int64(int64(defaultVpnTunnelOptionsDPDTimeoutSeconds)) + } hasChange = true } if key := prefix + "ike_versions"; d.HasChange(key) { - for _, v := range d.Get(key).(*schema.Set).List() { - apiObject.IKEVersions = append(apiObject.IKEVersions, &ec2.IKEVersionsRequestListValue{Value: aws.String(v.(string))}) + if v, ok := d.GetOk(key); ok && v.(*schema.Set).Len() > 0 { + for _, v := range d.Get(key).(*schema.Set).List() { + apiObject.IKEVersions = append(apiObject.IKEVersions, &ec2.IKEVersionsRequestListValue{Value: aws.String(v.(string))}) + } + } else { + for _, v := range defaultVpnTunnelOptionsIKEVersions { + apiObject.IKEVersions = append(apiObject.IKEVersions, &ec2.IKEVersionsRequestListValue{Value: aws.String(v)}) + } } hasChange = true } if key := prefix + "phase1_dh_group_numbers"; d.HasChange(key) { - for _, v := range d.Get(key).(*schema.Set).List() { - apiObject.Phase1DHGroupNumbers = append(apiObject.Phase1DHGroupNumbers, &ec2.Phase1DHGroupNumbersRequestListValue{Value: aws.Int64(int64(v.(int)))}) + if v, ok := d.GetOk(key); ok && v.(*schema.Set).Len() > 0 { + for _, v := range d.Get(key).(*schema.Set).List() { + apiObject.Phase1DHGroupNumbers = 
append(apiObject.Phase1DHGroupNumbers, &ec2.Phase1DHGroupNumbersRequestListValue{Value: aws.Int64(int64(v.(int)))}) + } + } else { + for _, v := range defaultVpnTunnelOptionsPhase1DHGroupNumbers { + apiObject.Phase1DHGroupNumbers = append(apiObject.Phase1DHGroupNumbers, &ec2.Phase1DHGroupNumbersRequestListValue{Value: aws.Int64(int64(v))}) + } } hasChange = true } if key := prefix + "phase1_encryption_algorithms"; d.HasChange(key) { - for _, v := range d.Get(key).(*schema.Set).List() { - apiObject.Phase1EncryptionAlgorithms = append(apiObject.Phase1EncryptionAlgorithms, &ec2.Phase1EncryptionAlgorithmsRequestListValue{Value: aws.String(v.(string))}) + if v, ok := d.GetOk(key); ok && v.(*schema.Set).Len() > 0 { + for _, v := range d.Get(key).(*schema.Set).List() { + apiObject.Phase1EncryptionAlgorithms = append(apiObject.Phase1EncryptionAlgorithms, &ec2.Phase1EncryptionAlgorithmsRequestListValue{Value: aws.String(v.(string))}) + } + } else { + for _, v := range defaultVpnTunnelOptionsPhase1EncryptionAlgorithms { + apiObject.Phase1EncryptionAlgorithms = append(apiObject.Phase1EncryptionAlgorithms, &ec2.Phase1EncryptionAlgorithmsRequestListValue{Value: aws.String(v)}) + } } hasChange = true } if key := prefix + "phase1_integrity_algorithms"; d.HasChange(key) { - for _, v := range d.Get(key).(*schema.Set).List() { - apiObject.Phase1IntegrityAlgorithms = append(apiObject.Phase1IntegrityAlgorithms, &ec2.Phase1IntegrityAlgorithmsRequestListValue{Value: aws.String(v.(string))}) + if v, ok := d.GetOk(key); ok && v.(*schema.Set).Len() > 0 { + for _, v := range d.Get(key).(*schema.Set).List() { + apiObject.Phase1IntegrityAlgorithms = append(apiObject.Phase1IntegrityAlgorithms, &ec2.Phase1IntegrityAlgorithmsRequestListValue{Value: aws.String(v.(string))}) + } + } else { + for _, v := range defaultVpnTunnelOptionsPhase1IntegrityAlgorithms { + apiObject.Phase1IntegrityAlgorithms = append(apiObject.Phase1IntegrityAlgorithms, &ec2.Phase1IntegrityAlgorithmsRequestListValue{Value: 
aws.String(v)}) + } } hasChange = true } if key := prefix + "phase1_lifetime_seconds"; d.HasChange(key) { - apiObject.Phase1LifetimeSeconds = aws.Int64(int64(d.Get(key).(int))) + if v, ok := d.GetOk(key); ok { + apiObject.Phase1LifetimeSeconds = aws.Int64(int64(v.(int))) + } else { + apiObject.Phase1LifetimeSeconds = aws.Int64(int64(defaultVpnTunnelOptionsPhase1LifetimeSeconds)) + } hasChange = true } if key := prefix + "phase2_dh_group_numbers"; d.HasChange(key) { - for _, v := range d.Get(key).(*schema.Set).List() { - apiObject.Phase2DHGroupNumbers = append(apiObject.Phase2DHGroupNumbers, &ec2.Phase2DHGroupNumbersRequestListValue{Value: aws.Int64(int64(v.(int)))}) + if v, ok := d.GetOk(key); ok && v.(*schema.Set).Len() > 0 { + for _, v := range d.Get(key).(*schema.Set).List() { + apiObject.Phase2DHGroupNumbers = append(apiObject.Phase2DHGroupNumbers, &ec2.Phase2DHGroupNumbersRequestListValue{Value: aws.Int64(int64(v.(int)))}) + } + } else { + for _, v := range defaultVpnTunnelOptionsPhase2DHGroupNumbers { + apiObject.Phase2DHGroupNumbers = append(apiObject.Phase2DHGroupNumbers, &ec2.Phase2DHGroupNumbersRequestListValue{Value: aws.Int64(int64(v))}) + } } hasChange = true } if key := prefix + "phase2_encryption_algorithms"; d.HasChange(key) { - for _, v := range d.Get(key).(*schema.Set).List() { - apiObject.Phase2EncryptionAlgorithms = append(apiObject.Phase2EncryptionAlgorithms, &ec2.Phase2EncryptionAlgorithmsRequestListValue{Value: aws.String(v.(string))}) + if v, ok := d.GetOk(key); ok && v.(*schema.Set).Len() > 0 { + for _, v := range d.Get(key).(*schema.Set).List() { + apiObject.Phase2EncryptionAlgorithms = append(apiObject.Phase2EncryptionAlgorithms, &ec2.Phase2EncryptionAlgorithmsRequestListValue{Value: aws.String(v.(string))}) + } + } else { + for _, v := range defaultVpnTunnelOptionsPhase2EncryptionAlgorithms { + apiObject.Phase2EncryptionAlgorithms = append(apiObject.Phase2EncryptionAlgorithms, &ec2.Phase2EncryptionAlgorithmsRequestListValue{Value: 
aws.String(v)}) + } } hasChange = true } if key := prefix + "phase2_integrity_algorithms"; d.HasChange(key) { - for _, v := range d.Get(key).(*schema.Set).List() { - apiObject.Phase2IntegrityAlgorithms = append(apiObject.Phase2IntegrityAlgorithms, &ec2.Phase2IntegrityAlgorithmsRequestListValue{Value: aws.String(v.(string))}) + if v, ok := d.GetOk(key); ok && v.(*schema.Set).Len() > 0 { + for _, v := range d.Get(key).(*schema.Set).List() { + apiObject.Phase2IntegrityAlgorithms = append(apiObject.Phase2IntegrityAlgorithms, &ec2.Phase2IntegrityAlgorithmsRequestListValue{Value: aws.String(v.(string))}) + } + } else { + for _, v := range defaultVpnTunnelOptionsPhase2IntegrityAlgorithms { + apiObject.Phase2IntegrityAlgorithms = append(apiObject.Phase2IntegrityAlgorithms, &ec2.Phase2IntegrityAlgorithmsRequestListValue{Value: aws.String(v)}) + } } hasChange = true } if key := prefix + "phase2_lifetime_seconds"; d.HasChange(key) { - apiObject.Phase2LifetimeSeconds = aws.Int64(int64(d.Get(key).(int))) + if v, ok := d.GetOk(key); ok { + apiObject.Phase2LifetimeSeconds = aws.Int64(int64(v.(int))) + } else { + apiObject.Phase2LifetimeSeconds = aws.Int64(int64(defaultVpnTunnelOptionsPhase2LifetimeSeconds)) + } hasChange = true } @@ -922,25 +1147,41 @@ func expandModifyVpnTunnelOptionsSpecification(d *schema.ResourceData, prefix st } if key := prefix + "rekey_fuzz_percentage"; d.HasChange(key) { - apiObject.RekeyFuzzPercentage = aws.Int64(int64(d.Get(key).(int))) + if v, ok := d.GetOk(key); ok { + apiObject.RekeyFuzzPercentage = aws.Int64(int64(v.(int))) + } else { + apiObject.RekeyFuzzPercentage = aws.Int64(int64(defaultVpnTunnelOptionsRekeyFuzzPercentage)) + } hasChange = true } if key := prefix + "rekey_margin_time_seconds"; d.HasChange(key) { - apiObject.RekeyMarginTimeSeconds = aws.Int64(int64(d.Get(key).(int))) + if v, ok := d.GetOk(key); ok { + apiObject.RekeyMarginTimeSeconds = aws.Int64(int64(v.(int))) + } else { + apiObject.RekeyMarginTimeSeconds = 
aws.Int64(int64(defaultVpnTunnelOptionsRekeyMarginTimeSeconds)) + } hasChange = true } if key := prefix + "replay_window_size"; d.HasChange(key) { - apiObject.ReplayWindowSize = aws.Int64(int64(d.Get(key).(int))) + if v, ok := d.GetOk(key); ok { + apiObject.ReplayWindowSize = aws.Int64(int64(v.(int))) + } else { + apiObject.ReplayWindowSize = aws.Int64(int64(defaultVpnTunnelOptionsReplayWindowSize)) + } hasChange = true } if key := prefix + "startup_action"; d.HasChange(key) { - apiObject.StartupAction = aws.String(d.Get(key).(string)) + if v, ok := d.GetOk(key); ok { + apiObject.StartupAction = aws.String(v.(string)) + } else { + apiObject.StartupAction = aws.String(defaultVpnTunnelOptionsStartupAction) + } hasChange = true } @@ -1241,75 +1482,3 @@ func validateVpnConnectionTunnelInsideIpv6CIDR() schema.SchemaValidateFunc { validation.StringMatch(regexp.MustCompile(`^fd00:`), "must be within fd00::/8"), ) } - -func validateLocalIpv4NetworkCidr() schema.SchemaValidateFunc { - return validation.All( - validation.IsCIDRNetwork(0, 32), - ) -} - -func validateLocalIpv6NetworkCidr() schema.SchemaValidateFunc { - return validation.All( - validation.IsCIDRNetwork(0, 128), - ) -} - -func validateVpnConnectionTunnelDpdTimeoutAction() schema.SchemaValidateFunc { - allowedDpdTimeoutActions := []string{ - "clear", - "none", - "restart", - } - - return validation.All( - validation.StringInSlice(allowedDpdTimeoutActions, false), - ) -} - -func validateVpnConnectionTunnelDpdTimeoutSeconds() schema.SchemaValidateFunc { - return validation.All( - //validation.IntBetween(0, 30) - validation.IntAtLeast(30), // Must be 30 or higher - ) -} - -func validateVpnConnectionTunnelPhase1LifetimeSeconds() schema.SchemaValidateFunc { - return validation.All( - validation.IntBetween(900, 28800), - ) -} - -func validateVpnConnectionTunnelPhase2LifetimeSeconds() schema.SchemaValidateFunc { - return validation.All( - validation.IntBetween(900, 3600), - ) -} - -func 
validateVpnConnectionTunnelRekeyFuzzPercentage() schema.SchemaValidateFunc { - return validation.All( - validation.IntBetween(0, 100), - ) -} - -func validateVpnConnectionTunnelRekeyMarginTimeSeconds() schema.SchemaValidateFunc { - return validation.All( - validation.IntBetween(60, 1800), - ) -} - -func validateVpnConnectionTunnelReplayWindowSize() schema.SchemaValidateFunc { - return validation.All( - validation.IntBetween(64, 2048), - ) -} - -func validateVpnConnectionTunnelStartupAction() schema.SchemaValidateFunc { - allowedStartupAction := []string{ - "add", - "start", - } - - return validation.All( - validation.StringInSlice(allowedStartupAction, false), - ) -} diff --git a/internal/service/ec2/vpn_connection_test.go b/internal/service/ec2/vpn_connection_test.go index 366afd7a441..085e898b506 100644 --- a/internal/service/ec2/vpn_connection_test.go +++ b/internal/service/ec2/vpn_connection_test.go @@ -402,7 +402,6 @@ func TestAccEC2VPNConnection_tunnelOptions(t *testing.T) { Providers: acctest.Providers, CheckDestroy: testAccVPNConnectionDestroy, Steps: []resource.TestStep{ - // Checking CIDR blocks { Config: testAccVPNConnectionSingleTunnelOptionsConfig(rName, rBgpAsn, "12345678", "not-a-cidr"), ExpectError: regexp.MustCompile(`invalid CIDR address: not-a-cidr`), @@ -443,8 +442,6 @@ func TestAccEC2VPNConnection_tunnelOptions(t *testing.T) { Config: testAccVPNConnectionSingleTunnelOptionsConfig(rName, rBgpAsn, "12345678", "169.254.169.252/30"), ExpectError: badCidrRangeErr, }, - - // Checking PreShared Key { Config: testAccVPNConnectionSingleTunnelOptionsConfig(rName, rBgpAsn, "1234567", "169.254.254.0/30"), ExpectError: regexp.MustCompile(`expected length of \w+ to be in the range \(8 - 64\)`), @@ -461,25 +458,6 @@ func TestAccEC2VPNConnection_tunnelOptions(t *testing.T) { Config: testAccVPNConnectionSingleTunnelOptionsConfig(rName, rBgpAsn, "1234567!", "169.254.254.0/30"), ExpectError: regexp.MustCompile(`can only contain alphanumeric, period and underscore 
characters`), }, - - // Should pre-check: - // - local_ipv4_network_cidr - // - local_ipv6_network_cidr - // - remote_ipv4_network_cidr - // - remote_ipv6_network_cidr - // - tunnel_inside_ip_version - // - tunnel1_dpd_timeout_action - // - tunnel1_dpd_timeout_seconds - // - tunnel1_phase1_lifetime_seconds - // - tunnel1_phase2_lifetime_seconds - // - tunnel1_rekey_fuzz_percentage - // - tunnel1_rekey_margin_time_seconds - // - tunnel1_replay_window_size - // - tunnel1_startup_action - // - tunnel1_inside_cidr - // - tunnel1_inside_ipv6_cidr - - //Try actual building { Config: testAccVPNConnectionTunnelOptionsConfig(rName, rBgpAsn, "192.168.1.1/32", "192.168.1.2/32", tunnel1, tunnel2), Check: resource.ComposeTestCheckFunc( @@ -507,7 +485,7 @@ func TestAccEC2VPNConnection_tunnelOptionsLesser(t *testing.T) { rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rBgpAsn := sdkacctest.RandIntRange(64512, 65534) resourceName := "aws_vpn_connection.test" - var vpn1, vpn2, vpn3, vpn4 ec2.VpnConnection + var vpn1, vpn2, vpn3, vpn4, vpn5 ec2.VpnConnection tunnel1 := TunnelOptions{ psk: "12345678", @@ -988,6 +966,66 @@ func TestAccEC2VPNConnection_tunnelOptionsLesser(t *testing.T) { resource.TestCheckResourceAttrSet(resourceName, "tunnel2_vgw_inside_address"), ), }, + // Test resetting to defaults. + // [local|remote]_ipv[4|6]_network_cidr, tunnel[1|2]_inside_[ipv6_]cidr and tunnel[1|2]_preshared_key are Computed so no diffs will be detected. 
+ { + Config: testAccVPNConnectionConfig(rName, rBgpAsn), + Check: resource.ComposeAggregateTestCheckFunc( + testAccVPNConnectionExists(resourceName, &vpn5), + testAccCheckVPNConnectionNotRecreated(&vpn4, &vpn5), + resource.TestCheckResourceAttrSet(resourceName, "tunnel1_address"), + resource.TestCheckResourceAttrSet(resourceName, "tunnel1_bgp_asn"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_bgp_holdtime", "30"), + resource.TestCheckResourceAttrSet(resourceName, "tunnel1_cgw_inside_address"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_dpd_timeout_action", "clear"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_dpd_timeout_seconds", "30"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_ike_versions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_inside_cidr", "169.254.8.0/30"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_inside_ipv6_cidr", ""), + resource.TestCheckNoResourceAttr(resourceName, "tunnel1_phase1_dh_group_numbers"), + resource.TestCheckNoResourceAttr(resourceName, "tunnel1_phase1_encryption_algorithms"), + resource.TestCheckNoResourceAttr(resourceName, "tunnel1_phase1_integrity_algorithms"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_phase1_lifetime_seconds", "28800"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_phase1_dh_group_numbers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_phase1_encryption_algorithms.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_phase1_integrity_algorithms.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_phase2_dh_group_numbers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_phase2_encryption_algorithms.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_phase2_integrity_algorithms.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_phase2_lifetime_seconds", "3600"), + resource.TestCheckResourceAttr(resourceName, 
"tunnel1_preshared_key", "12345678"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_rekey_fuzz_percentage", "100"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_rekey_margin_time_seconds", "540"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_replay_window_size", "1024"), + resource.TestCheckResourceAttr(resourceName, "tunnel1_startup_action", "add"), + resource.TestCheckResourceAttrSet(resourceName, "tunnel1_vgw_inside_address"), + resource.TestCheckResourceAttrSet(resourceName, "tunnel2_address"), + resource.TestCheckResourceAttrSet(resourceName, "tunnel2_bgp_asn"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_bgp_holdtime", "30"), + resource.TestCheckResourceAttrSet(resourceName, "tunnel2_cgw_inside_address"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_dpd_timeout_action", "clear"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_dpd_timeout_seconds", "30"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_ike_versions.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_inside_cidr", "169.254.9.0/30"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_inside_ipv6_cidr", ""), + resource.TestCheckResourceAttr(resourceName, "tunnel2_phase1_dh_group_numbers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_phase1_encryption_algorithms.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_phase1_integrity_algorithms.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_phase2_dh_group_numbers.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_phase2_encryption_algorithms.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_phase2_integrity_algorithms.#", "0"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_phase2_lifetime_seconds", "3600"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_preshared_key", "abcdefgh"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_rekey_fuzz_percentage", 
"100"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_rekey_margin_time_seconds", "540"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_replay_window_size", "1024"), + resource.TestCheckResourceAttr(resourceName, "tunnel2_startup_action", "add"), + resource.TestCheckResourceAttrSet(resourceName, "tunnel2_vgw_inside_address"), + resource.TestCheckResourceAttr(resourceName, "tunnel_inside_ip_version", "ipv4"), + resource.TestCheckResourceAttr(resourceName, "vgw_telemetry.#", "2"), + ), + ExpectNonEmptyPlan: true, + }, }, }) } diff --git a/internal/service/ec2/wait.go b/internal/service/ec2/wait.go index 490b2c0798c..5e8967e08bb 100644 --- a/internal/service/ec2/wait.go +++ b/internal/service/ec2/wait.go @@ -521,6 +521,24 @@ func WaitSubnetIPv6CIDRBlockAssociationDeleted(conn *ec2.EC2, id string) (*ec2.S return nil, err } +func WaitSubnetAssignIpv6AddressOnCreationUpdated(conn *ec2.EC2, subnetID string, expectedValue bool) (*ec2.Subnet, error) { + stateConf := &resource.StateChangeConf{ + Target: []string{strconv.FormatBool(expectedValue)}, + Refresh: StatusSubnetAssignIpv6AddressOnCreation(conn, subnetID), + Timeout: SubnetAttributePropagationTimeout, + Delay: 10 * time.Second, + MinTimeout: 3 * time.Second, + } + + outputRaw, err := stateConf.WaitForState() + + if output, ok := outputRaw.(*ec2.Subnet); ok { + return output, err + } + + return nil, err +} + func WaitSubnetEnableDns64Updated(conn *ec2.EC2, subnetID string, expectedValue bool) (*ec2.Subnet, error) { stateConf := &resource.StateChangeConf{ Target: []string{strconv.FormatBool(expectedValue)}, diff --git a/internal/service/ecs/cluster.go b/internal/service/ecs/cluster.go index aa27d680d47..af87c05084d 100644 --- a/internal/service/ecs/cluster.go +++ b/internal/service/ecs/cluster.go @@ -50,9 +50,10 @@ func ResourceCluster() *schema.Resource { Computed: true, }, "capacity_providers": { - Type: schema.TypeSet, - Optional: true, - Computed: true, + Type: schema.TypeSet, + Optional: true, 
+ Computed: true, + Deprecated: "Use the aws_ecs_cluster_capacity_providers resource instead", Elem: &schema.Schema{ Type: schema.TypeString, }, @@ -114,9 +115,10 @@ func ResourceCluster() *schema.Resource { }, }, "default_capacity_provider_strategy": { - Type: schema.TypeSet, - Optional: true, - Computed: true, + Type: schema.TypeSet, + Optional: true, + Computed: true, + Deprecated: "Use the aws_ecs_cluster_capacity_providers resource instead", Elem: &schema.Resource{ Schema: map[string]*schema.Schema{ "base": { diff --git a/internal/service/efs/access_points_data_source.go b/internal/service/efs/access_points_data_source.go index 3fc36a150d1..44b4abf0516 100644 --- a/internal/service/efs/access_points_data_source.go +++ b/internal/service/efs/access_points_data_source.go @@ -16,7 +16,7 @@ func DataSourceAccessPoints() *schema.Resource { Schema: map[string]*schema.Schema{ "arns": { - Type: schema.TypeSet, + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, @@ -26,7 +26,7 @@ func DataSourceAccessPoints() *schema.Resource { ValidateFunc: validation.StringIsNotEmpty, }, "ids": { - Type: schema.TypeSet, + Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, }, @@ -37,47 +37,51 @@ func DataSourceAccessPoints() *schema.Resource { func dataSourceAccessPointsRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).EFSConn - fileSystemId := d.Get("file_system_id").(string) + fileSystemID := d.Get("file_system_id").(string) input := &efs.DescribeAccessPointsInput{ - FileSystemId: aws.String(fileSystemId), + FileSystemId: aws.String(fileSystemID), } - var accessPoints []*efs.AccessPointDescription + output, err := findAccessPointDescriptions(conn, input) + + if err != nil { + return fmt.Errorf("error reading EFS Access Points: %w", err) + } + + var accessPointIDs, arns []string + + for _, v := range output { + accessPointIDs = append(accessPointIDs, 
aws.StringValue(v.AccessPointId)) + arns = append(arns, aws.StringValue(v.AccessPointArn)) + } + + d.SetId(fileSystemID) + d.Set("arns", arns) + d.Set("ids", accessPointIDs) + + return nil +} + +func findAccessPointDescriptions(conn *efs.EFS, input *efs.DescribeAccessPointsInput) ([]*efs.AccessPointDescription, error) { + var output []*efs.AccessPointDescription err := conn.DescribeAccessPointsPages(input, func(page *efs.DescribeAccessPointsOutput, lastPage bool) bool { if page == nil { return !lastPage } - accessPoints = append(accessPoints, page.AccessPoints...) + for _, v := range page.AccessPoints { + if v != nil { + output = append(output, v) + } + } return !lastPage }) if err != nil { - return fmt.Errorf("error reading EFS Access Points for File System (%s): %w", fileSystemId, err) + return nil, err } - if len(accessPoints) == 0 { - return fmt.Errorf("no matching EFS Access Points for File System (%s) found", fileSystemId) - } - - d.SetId(fileSystemId) - - var arns, ids []string - - for _, accessPoint := range accessPoints { - arns = append(arns, aws.StringValue(accessPoint.AccessPointArn)) - ids = append(ids, aws.StringValue(accessPoint.AccessPointId)) - } - - if err := d.Set("arns", arns); err != nil { - return fmt.Errorf("error setting arns: %w", err) - } - - if err := d.Set("ids", ids); err != nil { - return fmt.Errorf("error setting ids: %w", err) - } - - return nil + return output, nil } diff --git a/internal/service/efs/access_points_data_source_test.go b/internal/service/efs/access_points_data_source_test.go index 9e87e146a34..9ae9a1b3f4d 100644 --- a/internal/service/efs/access_points_data_source_test.go +++ b/internal/service/efs/access_points_data_source_test.go @@ -28,6 +28,26 @@ func TestAccEFSAccessPointsDataSource_basic(t *testing.T) { }) } +func TestAccEFSAccessPointsDataSource_empty(t *testing.T) { + dataSourceName := "data.aws_efs_access_points.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, 
+ ErrorCheck: acctest.ErrorCheck(t, efs.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckEfsAccessPointDestroy, + Steps: []resource.TestStep{ + { + Config: testAccAccessPointsEmptyDataSourceConfig(), + Check: resource.ComposeTestCheckFunc( + resource.TestCheckResourceAttr(dataSourceName, "arns.#", "0"), + resource.TestCheckResourceAttr(dataSourceName, "ids.#", "0"), + ), + }, + }, + }) +} + func testAccAccessPointsDataSourceConfig() string { return ` resource "aws_efs_file_system" "test" {} @@ -41,3 +61,13 @@ data "aws_efs_access_points" "test" { } ` } + +func testAccAccessPointsEmptyDataSourceConfig() string { + return ` +resource "aws_efs_file_system" "test" {} + +data "aws_efs_access_points" "test" { + file_system_id = aws_efs_file_system.test.id +} +` +} diff --git a/internal/service/eks/cluster_data_source_test.go b/internal/service/eks/cluster_data_source_test.go index 5e044e50f7a..999fc01ca4d 100644 --- a/internal/service/eks/cluster_data_source_test.go +++ b/internal/service/eks/cluster_data_source_test.go @@ -1,14 +1,12 @@ package eks_test import ( - "fmt" "regexp" "testing" "github.com/aws/aws-sdk-go/service/eks" sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" - "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" "github.com/hashicorp/terraform-provider-aws/internal/acctest" ) @@ -66,33 +64,3 @@ data "aws_eks_cluster" "test" { } `) } - -func testCheckResourceAttrGreaterThanValue(name, key, value string) resource.TestCheckFunc { - return func(s *terraform.State) error { - ms := s.RootModule() - rs, ok := ms.Resources[name] - if !ok { - return fmt.Errorf("Not found: %s in %s", name, ms.Path) - } - - is := rs.Primary - if is == nil { - return fmt.Errorf("No primary instance: %s in %s", name, ms.Path) - } - - if v, ok := is.Attributes[key]; !ok || !(v > value) { - if !ok { - return fmt.Errorf("%s: Attribute '%s' not found", name, key) - } - - 
return fmt.Errorf( - "%s: Attribute '%s' is not greater than %#v, got %#v", - name, - key, - value, - v) - } - return nil - - } -} diff --git a/internal/service/eks/clusters_data_source_test.go b/internal/service/eks/clusters_data_source_test.go index 70357deb036..535d778966a 100644 --- a/internal/service/eks/clusters_data_source_test.go +++ b/internal/service/eks/clusters_data_source_test.go @@ -22,7 +22,7 @@ func TestAccEKSClustersDataSource_basic(t *testing.T) { { Config: testAccClustersDataSourceConfig_Basic(rName), Check: resource.ComposeTestCheckFunc( - testCheckResourceAttrGreaterThanValue(dataSourceResourceName, "names.#", "0"), + acctest.CheckResourceAttrGreaterThanValue(dataSourceResourceName, "names.#", "0"), ), }, }, diff --git a/internal/service/elasticache/global_replication_group.go b/internal/service/elasticache/global_replication_group.go index 49fd9306052..6ec61b556c2 100644 --- a/internal/service/elasticache/global_replication_group.go +++ b/internal/service/elasticache/global_replication_group.go @@ -77,11 +77,6 @@ func ResourceGlobalReplicationGroup() *schema.Resource { Type: schema.TypeString, Computed: true, }, - "actual_engine_version": { - Type: schema.TypeString, - Computed: true, - Deprecated: "Use engine_version_actual instead", - }, "global_replication_group_id": { Type: schema.TypeString, Computed: true, @@ -200,7 +195,6 @@ func resourceGlobalReplicationGroupRead(d *schema.ResourceData, meta interface{} d.Set("cluster_enabled", globalReplicationGroup.ClusterEnabled) d.Set("engine", globalReplicationGroup.Engine) d.Set("engine_version_actual", globalReplicationGroup.EngineVersion) - d.Set("actual_engine_version", globalReplicationGroup.EngineVersion) d.Set("global_replication_group_description", globalReplicationGroup.GlobalReplicationGroupDescription) d.Set("global_replication_group_id", globalReplicationGroup.GlobalReplicationGroupId) d.Set("transit_encryption_enabled", globalReplicationGroup.TransitEncryptionEnabled) diff --git 
a/internal/service/elasticache/global_replication_group_test.go b/internal/service/elasticache/global_replication_group_test.go index 8af6895e985..14334fa3fe5 100644 --- a/internal/service/elasticache/global_replication_group_test.go +++ b/internal/service/elasticache/global_replication_group_test.go @@ -46,7 +46,6 @@ func TestAccElastiCacheGlobalReplicationGroup_basic(t *testing.T) { resource.TestCheckResourceAttrPair(resourceName, "cluster_enabled", primaryReplicationGroupResourceName, "cluster_enabled"), resource.TestCheckResourceAttrPair(resourceName, "engine", primaryReplicationGroupResourceName, "engine"), resource.TestCheckResourceAttrPair(resourceName, "engine_version_actual", primaryReplicationGroupResourceName, "engine_version"), - resource.TestCheckResourceAttrPair(resourceName, "actual_engine_version", primaryReplicationGroupResourceName, "engine_version"), resource.TestCheckResourceAttr(resourceName, "global_replication_group_id_suffix", rName), resource.TestMatchResourceAttr(resourceName, "global_replication_group_id", regexp.MustCompile(tfelasticache.GlobalReplicationGroupRegionPrefixFormat+rName)), resource.TestCheckResourceAttr(resourceName, "global_replication_group_description", tfelasticache.EmptyDescription), diff --git a/internal/service/elasticbeanstalk/application_version_test.go b/internal/service/elasticbeanstalk/application_version_test.go index f1dbbe0d6ae..19a6195896f 100644 --- a/internal/service/elasticbeanstalk/application_version_test.go +++ b/internal/service/elasticbeanstalk/application_version_test.go @@ -177,7 +177,7 @@ resource "aws_s3_bucket" "default" { bucket = "tftest.applicationversion.bucket-%d" } -resource "aws_s3_bucket_object" "default" { +resource "aws_s3_object" "default" { bucket = aws_s3_bucket.default.id key = "beanstalk/python-v1.zip" source = "test-fixtures/python-v1.zip" @@ -192,7 +192,7 @@ resource "aws_elastic_beanstalk_application_version" "default" { application = 
aws_elastic_beanstalk_application.default.name name = "tf-test-version-label-%d" bucket = aws_s3_bucket.default.id - key = aws_s3_bucket_object.default.id + key = aws_s3_object.default.id } `, randInt, randInt, randInt) } @@ -203,7 +203,7 @@ resource "aws_s3_bucket" "default" { bucket = "tftest.applicationversion.bucket-%d" } -resource "aws_s3_bucket_object" "default" { +resource "aws_s3_object" "default" { bucket = aws_s3_bucket.default.id key = "beanstalk/python-v1.zip" source = "test-fixtures/python-v1.zip" @@ -218,7 +218,7 @@ resource "aws_elastic_beanstalk_application_version" "first" { application = aws_elastic_beanstalk_application.first.name name = "tf-test-version-label-%d" bucket = aws_s3_bucket.default.id - key = aws_s3_bucket_object.default.id + key = aws_s3_object.default.id } resource "aws_elastic_beanstalk_application" "second" { @@ -230,7 +230,7 @@ resource "aws_elastic_beanstalk_application_version" "second" { application = aws_elastic_beanstalk_application.second.name name = "tf-test-version-label-%d" bucket = aws_s3_bucket.default.id - key = aws_s3_bucket_object.default.id + key = aws_s3_object.default.id } `, randInt, randInt, randInt, randInt, randInt) } @@ -241,7 +241,7 @@ resource "aws_s3_bucket" "default" { bucket = "tftest.applicationversion.bucket-%[1]d" } -resource "aws_s3_bucket_object" "default" { +resource "aws_s3_object" "default" { bucket = aws_s3_bucket.default.id key = "beanstalk/python-v1.zip" source = "test-fixtures/python-v1.zip" @@ -256,7 +256,7 @@ resource "aws_elastic_beanstalk_application_version" "default" { application = aws_elastic_beanstalk_application.default.name name = "tf-test-version-label-%[1]d" bucket = aws_s3_bucket.default.id - key = aws_s3_bucket_object.default.id + key = aws_s3_object.default.id tags = { firstTag = "%[2]s" @@ -272,7 +272,7 @@ resource "aws_s3_bucket" "default" { bucket = "tftest.applicationversion.bucket-%[1]d" } -resource "aws_s3_bucket_object" "default" { +resource "aws_s3_object" "default" 
{ bucket = aws_s3_bucket.default.id key = "beanstalk/python-v1.zip" source = "test-fixtures/python-v1.zip" @@ -287,7 +287,7 @@ resource "aws_elastic_beanstalk_application_version" "default" { application = aws_elastic_beanstalk_application.default.name name = "tf-test-version-label-%[1]d" bucket = aws_s3_bucket.default.id - key = aws_s3_bucket_object.default.id + key = aws_s3_object.default.id tags = { firstTag = "%[2]s" diff --git a/internal/service/elasticbeanstalk/environment_test.go b/internal/service/elasticbeanstalk/environment_test.go index f091135c4d4..e3980fa0c35 100644 --- a/internal/service/elasticbeanstalk/environment_test.go +++ b/internal/service/elasticbeanstalk/environment_test.go @@ -1376,7 +1376,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = "python-v1.zip" source = "test-fixtures/python-v1.zip" @@ -1385,7 +1385,7 @@ resource "aws_s3_bucket_object" "test" { resource "aws_elastic_beanstalk_application_version" "test" { application = aws_elastic_beanstalk_application.test.name bucket = aws_s3_bucket.test.id - key = aws_s3_bucket_object.test.id + key = aws_s3_object.test.id name = "%[1]s-1" } @@ -1440,7 +1440,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = "python-v1.zip" source = "test-fixtures/python-v1.zip" @@ -1449,7 +1449,7 @@ resource "aws_s3_bucket_object" "test" { resource "aws_elastic_beanstalk_application_version" "test" { application = aws_elastic_beanstalk_application.test.name bucket = aws_s3_bucket.test.id - key = aws_s3_bucket_object.test.id + key = aws_s3_object.test.id name = "%[1]s-2" } diff --git a/internal/service/emr/cluster_test.go b/internal/service/emr/cluster_test.go index 38d3a686c61..d9ebc175730 100644 --- a/internal/service/emr/cluster_test.go +++ 
b/internal/service/emr/cluster_test.go @@ -1894,7 +1894,7 @@ resource "aws_s3_bucket" "tester" { acl = "public-read" } -resource "aws_s3_bucket_object" "testobject" { +resource "aws_s3_object" "testobject" { bucket = aws_s3_bucket.tester.bucket key = "testscript.sh" content = < 0 { + rule.AllowedHeaders = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["allowed_methods"].(*schema.Set); ok && v.Len() > 0 { + rule.AllowedMethods = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["allowed_origins"].(*schema.Set); ok && v.Len() > 0 { + rule.AllowedOrigins = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["expose_headers"].(*schema.Set); ok && v.Len() > 0 { + rule.ExposeHeaders = flex.ExpandStringSet(v) + } + + if v, ok := tfMap["id"].(string); ok && v != "" { + rule.ID = aws.String(v) + } + + if v, ok := tfMap["max_age_seconds"].(int); ok { + rule.MaxAgeSeconds = aws.Int64(int64(v)) + } + + rules = append(rules, rule) + } + + return rules +} + +func flattenBucketCorsConfigurationCorsRules(rules []*s3.CORSRule) []interface{} { + var results []interface{} + + for _, rule := range rules { + if rule == nil { + continue + } + + m := make(map[string]interface{}) + + if len(rule.AllowedHeaders) > 0 { + m["allowed_headers"] = flex.FlattenStringSet(rule.AllowedHeaders) + } + + if len(rule.AllowedMethods) > 0 { + m["allowed_methods"] = flex.FlattenStringSet(rule.AllowedMethods) + } + + if len(rule.AllowedOrigins) > 0 { + m["allowed_origins"] = flex.FlattenStringSet(rule.AllowedOrigins) + } + + if len(rule.ExposeHeaders) > 0 { + m["expose_headers"] = flex.FlattenStringSet(rule.ExposeHeaders) + } + + if rule.ID != nil { + m["id"] = aws.StringValue(rule.ID) + } + + if rule.MaxAgeSeconds != nil { + m["max_age_seconds"] = aws.Int64Value(rule.MaxAgeSeconds) + } + + results = append(results, m) + } + + return results +} diff --git a/internal/service/s3/bucket_cors_configuration_test.go b/internal/service/s3/bucket_cors_configuration_test.go new file mode 100644 index 
00000000000..0f0e30adee3 --- /dev/null +++ b/internal/service/s3/bucket_cors_configuration_test.go @@ -0,0 +1,377 @@ +package s3_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" +) + +func TestAccS3BucketCorsConfiguration_basic(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_cors_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketCorsConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketCorsConfigurationBasicConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", "aws_s3_bucket.test", "id"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_methods.#": "1", + "allowed_origins.#": "1", + }), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_methods.*", "PUT"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_origins.*", "https://www.example.com"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccS3BucketCorsConfiguration_disappears(t *testing.T) 
{ + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_cors_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketCorsConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketCorsConfigurationBasicConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfs3.ResourceBucketCorsConfiguration(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccS3BucketCorsConfiguration_update(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_cors_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketCorsConfigurationDestroy, + Steps: []resource.TestStep{ + + { + Config: testAccBucketCorsConfigurationCompleteConfig_SingleRule(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", "aws_s3_bucket.test", "id"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_headers.#": "1", + "allowed_methods.#": "3", + "allowed_origins.#": "1", + "expose_headers.#": "1", + "id": rName, + "max_age_seconds": "3000", + }), + ), + }, + { + Config: testAccBucketCorsConfigurationConfig_MultipleRules(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", 
"aws_s3_bucket.test", "id"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "2"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_headers.#": "1", + "allowed_methods.#": "3", + "allowed_origins.#": "1", + }), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_methods.#": "1", + "allowed_origins.#": "1", + }), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccBucketCorsConfigurationBasicConfig(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_methods.#": "1", + "allowed_origins.#": "1", + }), + ), + }, + }, + }) +} + +func TestAccS3BucketCorsConfiguration_SingleRule(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_cors_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketCorsConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketCorsConfigurationCompleteConfig_SingleRule(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", "aws_s3_bucket.test", "id"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "1"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_headers.#": "1", + "allowed_methods.#": "3", + "allowed_origins.#": "1", + "expose_headers.#": "1", + "id": rName, + "max_age_seconds": "3000", + }), + 
resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_headers.*", "*"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_methods.*", "DELETE"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_methods.*", "POST"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_methods.*", "PUT"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_origins.*", "https://www.example.com"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.expose_headers.*", "ETag"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccS3BucketCorsConfiguration_MultipleRules(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_cors_configuration.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + Providers: acctest.Providers, + CheckDestroy: testAccCheckBucketCorsConfigurationDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketCorsConfigurationConfig_MultipleRules(rName), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketCorsConfigurationExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", "aws_s3_bucket.test", "id"), + resource.TestCheckResourceAttr(resourceName, "cors_rule.#", "2"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_headers.#": "1", + "allowed_methods.#": "3", + "allowed_origins.#": "1", + }), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_headers.*", "*"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_methods.*", "DELETE"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_methods.*", "POST"), + resource.TestCheckTypeSetElemAttr(resourceName, 
"cors_rule.*.allowed_methods.*", "PUT"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_origins.*", "https://www.example.com"), + resource.TestCheckTypeSetElemNestedAttrs(resourceName, "cors_rule.*", map[string]string{ + "allowed_methods.#": "1", + "allowed_origins.#": "1", + }), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_methods.*", "GET"), + resource.TestCheckTypeSetElemAttr(resourceName, "cors_rule.*.allowed_origins.*", "*"), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckBucketCorsConfigurationDestroy(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_s3_bucket_cors_configuration" { + continue + } + + bucket, expectedBucketOwner, err := tfs3.ParseResourceID(rs.Primary.ID) + if err != nil { + return err + } + + input := &s3.GetBucketCorsInput{ + Bucket: aws.String(bucket), + } + + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + output, err := conn.GetBucketCors(input) + + if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket, tfs3.ErrCodeNoSuchCORSConfiguration) { + continue + } + + if err != nil { + return fmt.Errorf("error getting S3 Bucket CORS configuration (%s): %w", rs.Primary.ID, err) + } + + if output != nil { + return fmt.Errorf("S3 Bucket CORS configuration (%s) still exists", rs.Primary.ID) + } + } + + return nil +} + +func testAccCheckBucketCorsConfigurationExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("Resource (%s) ID not set", resourceName) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + bucket, expectedBucketOwner, err := 
tfs3.ParseResourceID(rs.Primary.ID) + if err != nil { + return err + } + + input := &s3.GetBucketCorsInput{ + Bucket: aws.String(bucket), + } + + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + output, err := conn.GetBucketCors(input) + + if err != nil { + return fmt.Errorf("error getting S3 Bucket CORS configuration (%s): %w", rs.Primary.ID, err) + } + + if output == nil || len(output.CORSRules) == 0 { + return fmt.Errorf("S3 Bucket CORS configuration (%s) not found", rs.Primary.ID) + } + + return nil + } +} + +func testAccBucketCorsConfigurationBasicConfig(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + lifecycle { + ignore_changes = [ + cors_rule + ] + } +} + +resource "aws_s3_bucket_cors_configuration" "test" { + bucket = aws_s3_bucket.test.id + + cors_rule { + allowed_methods = ["PUT"] + allowed_origins = ["https://www.example.com"] + } +} +`, rName) +} + +func testAccBucketCorsConfigurationCompleteConfig_SingleRule(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + lifecycle { + ignore_changes = [ + cors_rule + ] + } +} + +resource "aws_s3_bucket_cors_configuration" "test" { + bucket = aws_s3_bucket.test.id + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST", "DELETE"] + allowed_origins = ["https://www.example.com"] + expose_headers = ["ETag"] + id = %[1]q + max_age_seconds = 3000 + } +} +`, rName) +} + +func testAccBucketCorsConfigurationConfig_MultipleRules(rName string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + + lifecycle { + ignore_changes = [ + cors_rule + ] + } +} + +resource "aws_s3_bucket_cors_configuration" "test" { + bucket = aws_s3_bucket.test.id + + cors_rule { + allowed_headers = ["*"] + allowed_methods = ["PUT", "POST", "DELETE"] + allowed_origins = ["https://www.example.com"] + } + + cors_rule { + allowed_methods = 
["GET"] + allowed_origins = ["*"] + } +} +`, rName) +} diff --git a/internal/service/s3/bucket_test.go b/internal/service/s3/bucket_test.go index 491029b2bd0..a80e3f076e1 100644 --- a/internal/service/s3/bucket_test.go +++ b/internal/service/s3/bucket_test.go @@ -2528,7 +2528,7 @@ func TestAccS3Bucket_Manage_objectLock(t *testing.T) { CheckDestroy: testAccCheckBucketDestroy, Steps: []resource.TestStep{ { - Config: testAccBucketObjectLockEnabledNoDefaultRetention(bucketName), + Config: testAccObjectLockEnabledNoDefaultRetention(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckBucketExists(resourceName), resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.#", "1"), @@ -2543,7 +2543,7 @@ func TestAccS3Bucket_Manage_objectLock(t *testing.T) { ImportStateVerifyIgnore: []string{"force_destroy", "acl"}, }, { - Config: testAccBucketObjectLockEnabledWithDefaultRetention(bucketName), + Config: testAccObjectLockEnabledWithDefaultRetention(bucketName), Check: resource.ComposeTestCheckFunc( testAccCheckBucketExists(resourceName), resource.TestCheckResourceAttr(resourceName, "object_lock_configuration.#", "1"), @@ -2580,7 +2580,7 @@ func TestAccS3Bucket_Basic_forceDestroy(t *testing.T) { // By default, the AWS Go SDK cleans up URIs by removing extra slashes // when the service API requests use the URI as part of making a request. -// While the aws_s3_bucket_object resource automatically cleans the key +// While the aws_s3_object resource automatically cleans the key // to not contain these extra slashes, out-of-band handling and other AWS // services may create keys with extra slashes (empty "directory" prefixes). 
func TestAccS3Bucket_Basic_forceDestroyWithEmptyPrefixes(t *testing.T) { @@ -5042,7 +5042,7 @@ resource "aws_s3_bucket" "bucket" { `, randInt) } -func testAccBucketObjectLockEnabledNoDefaultRetention(bucketName string) string { +func testAccObjectLockEnabledNoDefaultRetention(bucketName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "arbitrary" { bucket = %[1]q @@ -5054,7 +5054,7 @@ resource "aws_s3_bucket" "arbitrary" { `, bucketName) } -func testAccBucketObjectLockEnabledWithDefaultRetention(bucketName string) string { +func testAccObjectLockEnabledWithDefaultRetention(bucketName string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "arbitrary" { bucket = %[1]q diff --git a/internal/service/s3/bucket_versioning.go b/internal/service/s3/bucket_versioning.go new file mode 100644 index 00000000000..cfd797c3337 --- /dev/null +++ b/internal/service/s3/bucket_versioning.go @@ -0,0 +1,249 @@ +package s3 + +import ( + "context" + "fmt" + "log" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + "github.com/hashicorp/terraform-plugin-sdk/v2/diag" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + "github.com/hashicorp/terraform-provider-aws/internal/verify" +) + +func ResourceBucketVersioning() *schema.Resource { + return &schema.Resource{ + CreateContext: resourceBucketVersioningCreate, + ReadContext: resourceBucketVersioningRead, + UpdateContext: resourceBucketVersioningUpdate, + DeleteContext: resourceBucketVersioningDelete, + Importer: &schema.ResourceImporter{ + StateContext: schema.ImportStatePassthroughContext, + }, + + Schema: map[string]*schema.Schema{ + "bucket": { + Type: schema.TypeString, + Required: true, + ForceNew: true, + ValidateFunc: validation.StringLenBetween(1, 63), + }, + "expected_bucket_owner": { + Type: 
schema.TypeString, + Optional: true, + ForceNew: true, + ValidateFunc: verify.ValidAccountID, + }, + "mfa": { + Type: schema.TypeString, + Optional: true, + }, + "versioning_configuration": { + Type: schema.TypeList, + Required: true, + MaxItems: 1, + Elem: &schema.Resource{ + Schema: map[string]*schema.Schema{ + "mfa_delete": { + Type: schema.TypeString, + Optional: true, + ValidateFunc: validation.StringInSlice(s3.MFADelete_Values(), false), + }, + "status": { + Type: schema.TypeString, + Required: true, + ValidateFunc: validation.StringInSlice(s3.BucketVersioningStatus_Values(), false), + }, + }, + }, + }, + }, + } +} + +func resourceBucketVersioningCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).S3Conn + + bucket := d.Get("bucket").(string) + expectedBucketOwner := d.Get("expected_bucket_owner").(string) + + input := &s3.PutBucketVersioningInput{ + Bucket: aws.String(bucket), + VersioningConfiguration: expandBucketVersioningConfiguration(d.Get("versioning_configuration").([]interface{})), + } + + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + if v, ok := d.GetOk("mfa"); ok { + input.MFA = aws.String(v.(string)) + } + + _, err := verify.RetryOnAWSCode(s3.ErrCodeNoSuchBucket, func() (interface{}, error) { + return conn.PutBucketVersioningWithContext(ctx, input) + }) + + if err != nil { + return diag.FromErr(fmt.Errorf("error creating S3 bucket versioning for %s: %w", bucket, err)) + } + + d.SetId(CreateResourceID(bucket, expectedBucketOwner)) + + return resourceBucketVersioningRead(ctx, d, meta) +} + +func resourceBucketVersioningRead(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).S3Conn + + bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + input := &s3.GetBucketVersioningInput{ + Bucket: 
aws.String(bucket), + } + + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + output, err := conn.GetBucketVersioningWithContext(ctx, input) + + if !d.IsNewResource() && tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + log.Printf("[WARN] S3 Bucket Versioning (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + if err != nil { + return diag.FromErr(fmt.Errorf("error getting S3 bucket versioning (%s): %w", d.Id(), err)) + } + + if output == nil { + if d.IsNewResource() { + return diag.FromErr(fmt.Errorf("error getting S3 bucket versioning (%s): empty output", d.Id())) + } + log.Printf("[WARN] S3 Bucket Versioning (%s) not found, removing from state", d.Id()) + d.SetId("") + return nil + } + + d.Set("bucket", bucket) + d.Set("expected_bucket_owner", expectedBucketOwner) + if err := d.Set("versioning_configuration", flattenBucketVersioningConfiguration(output)); err != nil { + return diag.FromErr(fmt.Errorf("error setting versioning_configuration: %w", err)) + } + + return nil +} + +func resourceBucketVersioningUpdate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).S3Conn + + bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + input := &s3.PutBucketVersioningInput{ + Bucket: aws.String(bucket), + VersioningConfiguration: expandBucketVersioningConfiguration(d.Get("versioning_configuration").([]interface{})), + } + + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + if v, ok := d.GetOk("mfa"); ok { + input.MFA = aws.String(v.(string)) + } + + _, err = conn.PutBucketVersioningWithContext(ctx, input) + + if err != nil { + return diag.FromErr(fmt.Errorf("error updating S3 bucket versioning (%s): %w", d.Id(), err)) + } + + return resourceBucketVersioningRead(ctx, d, meta) +} + +func 
resourceBucketVersioningDelete(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics { + conn := meta.(*conns.AWSClient).S3Conn + + bucket, expectedBucketOwner, err := ParseResourceID(d.Id()) + if err != nil { + return diag.FromErr(err) + } + + input := &s3.PutBucketVersioningInput{ + Bucket: aws.String(bucket), + VersioningConfiguration: &s3.VersioningConfiguration{ + // Status must be provided thus to "remove" this resource, + // we suspend versioning + Status: aws.String(s3.BucketVersioningStatusSuspended), + }, + } + + if expectedBucketOwner != "" { + input.ExpectedBucketOwner = aws.String(expectedBucketOwner) + } + + _, err = conn.PutBucketVersioningWithContext(ctx, input) + + if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + return nil + } + + if err != nil { + return diag.FromErr(fmt.Errorf("error deleting S3 bucket versioning (%s): %w", d.Id(), err)) + } + + return nil +} + +func expandBucketVersioningConfiguration(l []interface{}) *s3.VersioningConfiguration { + if len(l) == 0 || l[0] == nil { + return nil + } + + tfMap, ok := l[0].(map[string]interface{}) + if !ok { + return nil + } + + result := &s3.VersioningConfiguration{} + + if v, ok := tfMap["mfa_delete"].(string); ok && v != "" { + result.MFADelete = aws.String(v) + } + + if v, ok := tfMap["status"].(string); ok && v != "" { + result.Status = aws.String(v) + } + + return result +} + +func flattenBucketVersioningConfiguration(config *s3.GetBucketVersioningOutput) []interface{} { + if config == nil { + return []interface{}{} + } + + m := make(map[string]interface{}) + + if config.MFADelete != nil { + m["mfa_delete"] = aws.StringValue(config.MFADelete) + } + + if config.Status != nil { + m["status"] = aws.StringValue(config.Status) + } + + return []interface{}{m} +} diff --git a/internal/service/s3/bucket_versioning_test.go b/internal/service/s3/bucket_versioning_test.go new file mode 100644 index 00000000000..f0391f42b83 --- /dev/null +++ 
b/internal/service/s3/bucket_versioning_test.go @@ -0,0 +1,229 @@ +package s3_test + +import ( + "fmt" + "testing" + + "github.com/aws/aws-sdk-go/aws" + "github.com/aws/aws-sdk-go/service/s3" + "github.com/hashicorp/aws-sdk-go-base/tfawserr" + sdkacctest "github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest" + "github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource" + "github.com/hashicorp/terraform-plugin-sdk/v2/terraform" + "github.com/hashicorp/terraform-provider-aws/internal/acctest" + "github.com/hashicorp/terraform-provider-aws/internal/conns" + tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" +) + +func TestAccS3BucketVersioning_basic(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_versioning.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketVersioningDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketVersioningBasicConfig(rName, s3.BucketVersioningStatusEnabled), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketVersioningExists(resourceName), + resource.TestCheckResourceAttrPair(resourceName, "bucket", "aws_s3_bucket.test", "id"), + resource.TestCheckResourceAttr(resourceName, "versioning_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "versioning_configuration.0.status", s3.BucketVersioningStatusEnabled), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func TestAccS3BucketVersioning_disappears(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_versioning.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + 
ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketVersioningDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketVersioningBasicConfig(rName, s3.BucketVersioningStatusEnabled), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketVersioningExists(resourceName), + acctest.CheckResourceDisappears(acctest.Provider, tfs3.ResourceBucketVersioning(), resourceName), + ), + ExpectNonEmptyPlan: true, + }, + }, + }) +} + +func TestAccS3BucketVersioning_update(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_versioning.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketVersioningDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketVersioningBasicConfig(rName, s3.BucketVersioningStatusEnabled), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketVersioningExists(resourceName), + ), + }, + { + Config: testAccBucketVersioningBasicConfig(rName, s3.BucketVersioningStatusSuspended), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketVersioningExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "versioning_configuration.0.status", s3.BucketVersioningStatusSuspended), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + { + Config: testAccBucketVersioningBasicConfig(rName, s3.BucketVersioningStatusEnabled), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketVersioningExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "versioning_configuration.0.status", s3.BucketVersioningStatusEnabled), + ), + }, + }, + }) +} + +// TestAccS3BucketVersioning_MFADelete can only test for a "Disabled" +// mfa_delete configuration as the "mfa" argument is required if it's enabled +func
TestAccS3BucketVersioning_MFADelete(t *testing.T) { + rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) + resourceName := "aws_s3_bucket_versioning.test" + + resource.ParallelTest(t, resource.TestCase{ + PreCheck: func() { acctest.PreCheck(t) }, + ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), + ProviderFactories: acctest.ProviderFactories, + CheckDestroy: testAccCheckBucketVersioningDestroy, + Steps: []resource.TestStep{ + { + Config: testAccBucketVersioningConfig_MFADelete(rName, s3.MFADeleteDisabled), + Check: resource.ComposeTestCheckFunc( + testAccCheckBucketVersioningExists(resourceName), + resource.TestCheckResourceAttr(resourceName, "versioning_configuration.#", "1"), + resource.TestCheckResourceAttr(resourceName, "versioning_configuration.0.mfa_delete", s3.MFADeleteDisabled), + resource.TestCheckResourceAttr(resourceName, "versioning_configuration.0.status", s3.BucketVersioningStatusEnabled), + ), + }, + { + ResourceName: resourceName, + ImportState: true, + ImportStateVerify: true, + }, + }, + }) +} + +func testAccCheckBucketVersioningDestroy(s *terraform.State) error { + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + for _, rs := range s.RootModule().Resources { + if rs.Type != "aws_s3_bucket_versioning" { + continue + } + + input := &s3.GetBucketVersioningInput{ + Bucket: aws.String(rs.Primary.ID), + } + + output, err := conn.GetBucketVersioning(input) + + if tfawserr.ErrCodeEquals(err, s3.ErrCodeNoSuchBucket) { + continue + } + + if err != nil { + return fmt.Errorf("error getting S3 bucket versioning (%s): %w", rs.Primary.ID, err) + } + + if output != nil && aws.StringValue(output.Status) != s3.BucketVersioningStatusSuspended { + return fmt.Errorf("S3 bucket versioning (%s) still exists", rs.Primary.ID) + } + } + + return nil +} + +func testAccCheckBucketVersioningExists(resourceName string) resource.TestCheckFunc { + return func(s *terraform.State) error { + rs, ok := s.RootModule().Resources[resourceName] + if !ok { + 
return fmt.Errorf("Not found: %s", resourceName) + } + + if rs.Primary.ID == "" { + return fmt.Errorf("Resource (%s) ID not set", resourceName) + } + + conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn + + input := &s3.GetBucketVersioningInput{ + Bucket: aws.String(rs.Primary.ID), + } + + output, err := conn.GetBucketVersioning(input) + + if err != nil { + return fmt.Errorf("error getting S3 bucket versioning (%s): %w", rs.Primary.ID, err) + } + + if output == nil { + return fmt.Errorf("S3 Bucket versioning (%s) not found", rs.Primary.ID) + } + + return nil + } +} + +func testAccBucketVersioningBasicConfig(rName, status string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + acl = "private" +} + +resource "aws_s3_bucket_versioning" "test" { + bucket = aws_s3_bucket.test.id + versioning_configuration { + status = %[2]q + } +} +`, rName, status) +} + +func testAccBucketVersioningConfig_MFADelete(rName, mfaDelete string) string { + return fmt.Sprintf(` +resource "aws_s3_bucket" "test" { + bucket = %[1]q + acl = "private" +} + +resource "aws_s3_bucket_versioning" "test" { + bucket = aws_s3_bucket.test.id + versioning_configuration { + mfa_delete = %[2]q + status = "Enabled" + } +} +`, rName, mfaDelete) +} diff --git a/internal/service/s3/errors.go b/internal/service/s3/errors.go index bfbbb273ba7..425080a8817 100644 --- a/internal/service/s3/errors.go +++ b/internal/service/s3/errors.go @@ -5,6 +5,7 @@ package s3 const ( ErrCodeNoSuchConfiguration = "NoSuchConfiguration" + ErrCodeNoSuchCORSConfiguration = "NoSuchCORSConfiguration" ErrCodeNoSuchPublicAccessBlockConfiguration = "NoSuchPublicAccessBlockConfiguration" ErrCodeOperationAborted = "OperationAborted" ) diff --git a/internal/service/s3/id.go b/internal/service/s3/id.go new file mode 100644 index 00000000000..23e67789579 --- /dev/null +++ b/internal/service/s3/id.go @@ -0,0 +1,41 @@ +package s3 + +import ( + "fmt" + "strings" +) + +const resourceIDSeparator = "," + 
+// CreateResourceID is a generic method for creating an ID string for a bucket-related resource e.g. aws_s3_bucket_versioning. +// The method expects a bucket name and an optional accountID. +func CreateResourceID(bucket, expectedBucketOwner string) string { + if expectedBucketOwner == "" { + return bucket + } + + parts := []string{bucket, expectedBucketOwner} + id := strings.Join(parts, resourceIDSeparator) + + return id +} + +// ParseResourceID is a generic method for parsing an ID string +// for a bucket name and accountID if provided. +func ParseResourceID(id string) (bucket, expectedBucketOwner string, err error) { + parts := strings.Split(id, resourceIDSeparator) + + if len(parts) == 1 && parts[0] != "" { + bucket = parts[0] + return + } + + if len(parts) == 2 && parts[0] != "" && parts[1] != "" { + bucket = parts[0] + expectedBucketOwner = parts[1] + return + } + + err = fmt.Errorf("unexpected format for ID (%s), expected BUCKET or BUCKET%sEXPECTED_BUCKET_OWNER", id, resourceIDSeparator) + return +} diff --git a/internal/service/s3/id_test.go b/internal/service/s3/id_test.go new file mode 100644 index 00000000000..295d58164ea --- /dev/null +++ b/internal/service/s3/id_test.go @@ -0,0 +1,62 @@ +package s3_test + +import ( + "testing" + + tfs3 "github.com/hashicorp/terraform-provider-aws/internal/service/s3" +) + +func TestParseResourceID(t *testing.T) { + testCases := []struct { + TestName string + InputID string + ExpectError bool + ExpectedBucket string + ExpectedBucketOwner string + }{ + { + TestName: "empty ID", + InputID: "", + ExpectError: true, + }, + { + TestName: "incorrect format", + InputID: "test,example,123456789012", + ExpectError: true, + }, + { + TestName: "valid ID with bucket", + InputID: tfs3.CreateResourceID("example", ""), + ExpectedBucket: "example", + ExpectedBucketOwner: "", + }, + { + TestName: "valid ID with bucket and bucket owner", + InputID: tfs3.CreateResourceID("example", "123456789012"), + ExpectedBucket: "example", + 
ExpectedBucketOwner: "123456789012", + }, + } + + for _, testCase := range testCases { + t.Run(testCase.TestName, func(t *testing.T) { + gotBucket, gotExpectedBucketOwner, err := tfs3.ParseResourceID(testCase.InputID) + + if err == nil && testCase.ExpectError { + t.Fatalf("expected error") + } + + if err != nil && !testCase.ExpectError { + t.Fatalf("unexpected error") + } + + if gotBucket != testCase.ExpectedBucket { + t.Errorf("got bucket %s, expected %s", gotBucket, testCase.ExpectedBucket) + } + + if gotExpectedBucketOwner != testCase.ExpectedBucketOwner { + t.Errorf("got ExpectedBucketOwner %s, expected %s", gotExpectedBucketOwner, testCase.ExpectedBucketOwner) + } + }) + } +} diff --git a/internal/service/s3/bucket_object.go b/internal/service/s3/object.go similarity index 91% rename from internal/service/s3/bucket_object.go rename to internal/service/s3/object.go index 8dca8fc1625..c9ea297dd73 100644 --- a/internal/service/s3/bucket_object.go +++ b/internal/service/s3/object.go @@ -30,21 +30,21 @@ import ( "github.com/mitchellh/go-homedir" ) -const s3BucketObjectCreationTimeout = 2 * time.Minute +const s3ObjectCreationTimeout = 2 * time.Minute -func ResourceBucketObject() *schema.Resource { +func ResourceObject() *schema.Resource { return &schema.Resource{ - Create: resourceBucketObjectCreate, - Read: resourceBucketObjectRead, - Update: resourceBucketObjectUpdate, - Delete: resourceBucketObjectDelete, + Create: resourceObjectCreate, + Read: resourceObjectRead, + Update: resourceObjectUpdate, + Delete: resourceObjectDelete, Importer: &schema.ResourceImporter{ - State: resourceBucketObjectImport, + State: resourceObjectImport, }, CustomizeDiff: customdiff.Sequence( - resourceBucketObjectCustomizeDiff, + resourceObjectCustomizeDiff, verify.SetTagsDiff, ), @@ -186,11 +186,11 @@ func ResourceBucketObject() *schema.Resource { } } -func resourceBucketObjectCreate(d *schema.ResourceData, meta interface{}) error { - return resourceBucketObjectUpload(d, meta) +func 
resourceObjectCreate(d *schema.ResourceData, meta interface{}) error { + return resourceObjectUpload(d, meta) } -func resourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error { +func resourceObjectRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).S3Conn defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig ignoreTagsConfig := meta.(*conns.AWSClient).IgnoreTagsConfig @@ -205,7 +205,7 @@ func resourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error { var resp *s3.HeadObjectOutput - err := resource.Retry(s3BucketObjectCreationTimeout, func() *resource.RetryError { + err := resource.Retry(s3ObjectCreationTimeout, func() *resource.RetryError { var err error resp, err = conn.HeadObject(input) @@ -235,7 +235,7 @@ func resourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("error reading S3 Object (%s): %w", d.Id(), err) } - log.Printf("[DEBUG] Reading S3 Bucket Object meta: %s", resp) + log.Printf("[DEBUG] Reading S3 Object meta: %s", resp) d.Set("bucket_key_enabled", resp.BucketKeyEnabled) d.Set("cache_control", resp.CacheControl) @@ -261,8 +261,8 @@ func resourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error { d.Set("object_lock_mode", resp.ObjectLockMode) d.Set("object_lock_retain_until_date", flattenS3ObjectDate(resp.ObjectLockRetainUntilDate)) - if err := resourceBucketObjectSetKMS(d, meta, resp.SSEKMSKeyId); err != nil { - return fmt.Errorf("bucket object KMS: %w", err) + if err := resourceObjectSetKMS(d, meta, resp.SSEKMSKeyId); err != nil { + return fmt.Errorf("object KMS: %w", err) } // See https://forums.aws.amazon.com/thread.jspa?threadID=44003 @@ -304,9 +304,9 @@ func resourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error { return nil } -func resourceBucketObjectUpdate(d *schema.ResourceData, meta interface{}) error { - if hasS3BucketObjectContentChanges(d) { - return resourceBucketObjectUpload(d, meta) +func 
resourceObjectUpdate(d *schema.ResourceData, meta interface{}) error { + if hasS3ObjectContentChanges(d) { + return resourceObjectUpload(d, meta) } conn := meta.(*conns.AWSClient).S3Conn @@ -372,10 +372,10 @@ func resourceBucketObjectUpdate(d *schema.ResourceData, meta interface{}) error } } - return resourceBucketObjectRead(d, meta) + return resourceObjectRead(d, meta) } -func resourceBucketObjectDelete(d *schema.ResourceData, meta interface{}) error { +func resourceObjectDelete(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).S3Conn bucket := d.Get("bucket").(string) @@ -399,7 +399,7 @@ func resourceBucketObjectDelete(d *schema.ResourceData, meta interface{}) error return nil } -func resourceBucketObjectImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { +func resourceObjectImport(d *schema.ResourceData, meta interface{}) ([]*schema.ResourceData, error) { id := d.Id() id = strings.TrimPrefix(id, "s3://") parts := strings.Split(id, "/") @@ -418,7 +418,7 @@ func resourceBucketObjectImport(d *schema.ResourceData, meta interface{}) ([]*sc return []*schema.ResourceData{d}, nil } -func resourceBucketObjectUpload(d *schema.ResourceData, meta interface{}) error { +func resourceObjectUpload(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).S3Conn uploader := s3manager.NewUploaderWithClient(conn) defaultTagsConfig := meta.(*conns.AWSClient).DefaultTagsConfig @@ -434,14 +434,14 @@ func resourceBucketObjectUpload(d *schema.ResourceData, meta interface{}) error } file, err := os.Open(path) if err != nil { - return fmt.Errorf("Error opening S3 bucket object source (%s): %s", path, err) + return fmt.Errorf("Error opening S3 object source (%s): %s", path, err) } body = file defer func() { err := file.Close() if err != nil { - log.Printf("[WARN] Error closing S3 bucket object source (%s): %s", path, err) + log.Printf("[WARN] Error closing S3 object source (%s): %s", path, err) } }() } 
else if v, ok := d.GetOk("content"); ok { @@ -538,10 +538,10 @@ func resourceBucketObjectUpload(d *schema.ResourceData, meta interface{}) error d.SetId(key) - return resourceBucketObjectRead(d, meta) + return resourceObjectRead(d, meta) } -func resourceBucketObjectSetKMS(d *schema.ResourceData, meta interface{}, sseKMSKeyId *string) error { +func resourceObjectSetKMS(d *schema.ResourceData, meta interface{}, sseKMSKeyId *string) error { // Only set non-default KMS key ID (one that doesn't match default) if sseKMSKeyId != nil { // retrieve S3 KMS Default Master Key @@ -572,8 +572,8 @@ func validateMetadataIsLowerCase(v interface{}, k string) (ws []string, errors [ return } -func resourceBucketObjectCustomizeDiff(_ context.Context, d *schema.ResourceDiff, meta interface{}) error { - if hasS3BucketObjectContentChanges(d) { +func resourceObjectCustomizeDiff(_ context.Context, d *schema.ResourceDiff, meta interface{}) error { + if hasS3ObjectContentChanges(d) { return d.SetNewComputed("version_id") } @@ -585,7 +585,7 @@ func resourceBucketObjectCustomizeDiff(_ context.Context, d *schema.ResourceDiff return nil } -func hasS3BucketObjectContentChanges(d verify.ResourceDiffer) bool { +func hasS3ObjectContentChanges(d verify.ResourceDiffer) bool { for _, key := range []string{ "bucket_key_enabled", "cache_control", @@ -749,7 +749,7 @@ func DeleteAllObjectVersions(conn *s3.S3, bucketName, key string, force, ignoreO return nil } -// deleteS3ObjectVersion deletes a specific bucket object version. +// deleteS3ObjectVersion deletes a specific object version. // Set force to true to override any S3 object lock protections. 
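The `hasS3ObjectContentChanges` helper renamed in the hunk above iterates a fixed list of content-affecting attributes (`bucket_key_enabled`, `cache_control`, etc.) and reports whether any of them changed, which both gates the re-upload in `resourceObjectUpdate` and forces `version_id` to be recomputed in the customize-diff. A dependency-free sketch of the same idea follows; the `changed` map is a stand-in for the SDK's `verify.ResourceDiffer`/`HasChange` interface, and the attribute list here is an illustrative subset, not the provider's full list:

```go
package main

import "fmt"

// contentAttrs is an illustrative subset of the attributes
// hasS3ObjectContentChanges checks in the provider.
var contentAttrs = []string{
	"bucket_key_enabled",
	"cache_control",
	"content",
	"etag",
	"source",
	"source_hash",
}

// hasContentChanges reports whether any content-affecting attribute is in the
// set of changed keys (a stand-in for d.HasChange on each attribute).
func hasContentChanges(changed map[string]bool) bool {
	for _, key := range contentAttrs {
		if changed[key] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasContentChanges(map[string]bool{"etag": true})) // content change: re-upload
	fmt.Println(hasContentChanges(map[string]bool{"acl": true}))  // metadata-only: update in place
}
```

Splitting the update path this way means a tags-only or ACL-only change never re-sends the object body, while any change that could alter the stored bytes triggers a fresh upload and a new `version_id`.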
func deleteS3ObjectVersion(conn *s3.S3, b, k, v string, force bool) error { input := &s3.DeleteObjectInput{ diff --git a/internal/service/s3/object_copy.go b/internal/service/s3/object_copy.go index 5db14b97e91..e010d45731a 100644 --- a/internal/service/s3/object_copy.go +++ b/internal/service/s3/object_copy.go @@ -323,7 +323,7 @@ func resourceObjectCopyRead(d *schema.ResourceData, meta interface{}) error { return fmt.Errorf("error reading S3 Object (%s): empty response", d.Id()) } - log.Printf("[DEBUG] Reading S3 Bucket Object meta: %s", resp) + log.Printf("[DEBUG] Reading S3 Object meta: %s", resp) d.Set("bucket_key_enabled", resp.BucketKeyEnabled) d.Set("cache_control", resp.CacheControl) @@ -349,8 +349,8 @@ func resourceObjectCopyRead(d *schema.ResourceData, meta interface{}) error { d.Set("object_lock_mode", resp.ObjectLockMode) d.Set("object_lock_retain_until_date", flattenS3ObjectDate(resp.ObjectLockRetainUntilDate)) - if err := resourceBucketObjectSetKMS(d, meta, resp.SSEKMSKeyId); err != nil { - return fmt.Errorf("bucket object KMS: %w", err) + if err := resourceObjectSetKMS(d, meta, resp.SSEKMSKeyId); err != nil { + return fmt.Errorf("object KMS: %w", err) } // See https://forums.aws.amazon.com/thread.jspa?threadID=44003 @@ -647,7 +647,7 @@ func resourceObjectCopyDoCopy(d *schema.ResourceData, meta interface{}) error { d.Set("version_id", output.VersionId) d.SetId(d.Get("key").(string)) - return resourceBucketObjectRead(d, meta) + return resourceObjectRead(d, meta) } type s3Grants struct { diff --git a/internal/service/s3/object_copy_test.go b/internal/service/s3/object_copy_test.go index 97a18348325..73f4efb790d 100644 --- a/internal/service/s3/object_copy_test.go +++ b/internal/service/s3/object_copy_test.go @@ -17,7 +17,7 @@ func TestAccS3ObjectCopy_basic(t *testing.T) { rName1 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) rName2 := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resourceName := "aws_s3_object_copy.test" - sourceName := 
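Per the comment in the hunk above, `deleteS3ObjectVersion` takes a `force` flag to "override any S3 object lock protections". In the S3 API that override corresponds to setting `BypassGovernanceRetention` on the delete request. The sketch below models that decision with a plain struct rather than the real `s3.DeleteObjectInput`, since the full function body is not shown in this hunk; `deleteVersionInput` and `newDeleteVersionInput` are hypothetical stand-ins:

```go
package main

import "fmt"

// deleteVersionInput models the fields a versioned delete request would carry;
// it is a plain-struct stand-in for s3.DeleteObjectInput.
type deleteVersionInput struct {
	Bucket                    string
	Key                       string
	VersionID                 string
	BypassGovernanceRetention bool
}

// newDeleteVersionInput sketches the force handling: when force is true the
// request also bypasses governance-mode object lock retention.
func newDeleteVersionInput(bucket, key, version string, force bool) deleteVersionInput {
	in := deleteVersionInput{Bucket: bucket, Key: key, VersionID: version}
	if force {
		in.BypassGovernanceRetention = true
	}
	return in
}

func main() {
	in := newDeleteVersionInput("my-bucket", "my-key", "v1", true)
	fmt.Println(in.BypassGovernanceRetention) // true
}
```

Note that bypassing retention only applies to governance mode; compliance-mode retention cannot be overridden this way, so a forced delete can still fail on compliance-locked versions.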
"aws_s3_bucket_object.source" + sourceName := "aws_s3_object.source" key := "HundBegraven" sourceKey := "WshngtnNtnls" @@ -112,7 +112,7 @@ func testAccCheckObjectCopyExists(n string) resource.TestCheckFunc { } if rs.Primary.ID == "" { - return fmt.Errorf("No S3 Bucket Object ID is set") + return fmt.Errorf("No S3 Object ID is set") } conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -123,7 +123,7 @@ func testAccCheckObjectCopyExists(n string) resource.TestCheckFunc { IfMatch: aws.String(rs.Primary.Attributes["etag"]), }) if err != nil { - return fmt.Errorf("S3Bucket Object error: %s", err) + return fmt.Errorf("S3 Object error: %s", err) } return nil @@ -136,7 +136,7 @@ resource "aws_s3_bucket" "source" { bucket = %[1]q } -resource "aws_s3_bucket_object" "source" { +resource "aws_s3_object" "source" { bucket = aws_s3_bucket.source.bucket key = %[2]q content = "Ingen ko på isen" @@ -149,7 +149,7 @@ resource "aws_s3_bucket" "target" { resource "aws_s3_object_copy" "test" { bucket = aws_s3_bucket.target.bucket key = %[4]q - source = "${aws_s3_bucket.source.bucket}/${aws_s3_bucket_object.source.key}" + source = "${aws_s3_bucket.source.bucket}/${aws_s3_object.source.key}" grant { uri = "http://acs.amazonaws.com/groups/global/AllUsers" @@ -163,7 +163,7 @@ resource "aws_s3_object_copy" "test" { func testAccObjectCopyConfig_BucketKeyEnabled_Bucket(rName string) string { return fmt.Sprintf(` resource "aws_kms_key" "test" { - description = "Encrypts test bucket objects" + description = "Encrypts test objects" deletion_window_in_days = 7 } @@ -171,7 +171,7 @@ resource "aws_s3_bucket" "source" { bucket = "%[1]s-source" } -resource "aws_s3_bucket_object" "source" { +resource "aws_s3_object" "source" { bucket = aws_s3_bucket.source.bucket content = "Ingen ko på isen" key = "test" @@ -194,7 +194,7 @@ resource "aws_s3_bucket" "target" { resource "aws_s3_object_copy" "test" { bucket = aws_s3_bucket.target.bucket key = "test" - source = 
"${aws_s3_bucket.source.bucket}/${aws_s3_bucket_object.source.key}" + source = "${aws_s3_bucket.source.bucket}/${aws_s3_object.source.key}" } `, rName) } @@ -202,7 +202,7 @@ resource "aws_s3_object_copy" "test" { func testAccObjectCopyConfig_BucketKeyEnabled_Object(rName string) string { return fmt.Sprintf(` resource "aws_kms_key" "test" { - description = "Encrypts test bucket objects" + description = "Encrypts test objects" deletion_window_in_days = 7 } @@ -210,7 +210,7 @@ resource "aws_s3_bucket" "source" { bucket = "%[1]s-source" } -resource "aws_s3_bucket_object" "source" { +resource "aws_s3_object" "source" { bucket = aws_s3_bucket.source.bucket content = "Ingen ko på isen" key = "test" @@ -225,7 +225,7 @@ resource "aws_s3_object_copy" "test" { bucket_key_enabled = true key = "test" kms_key_id = aws_kms_key.test.arn - source = "${aws_s3_bucket.source.bucket}/${aws_s3_bucket_object.source.key}" + source = "${aws_s3_bucket.source.bucket}/${aws_s3_object.source.key}" } `, rName) } diff --git a/internal/service/s3/bucket_object_data_source.go b/internal/service/s3/object_data_source.go similarity index 96% rename from internal/service/s3/bucket_object_data_source.go rename to internal/service/s3/object_data_source.go index 0f548d308ca..767aca9a499 100644 --- a/internal/service/s3/bucket_object_data_source.go +++ b/internal/service/s3/object_data_source.go @@ -16,9 +16,9 @@ import ( tftags "github.com/hashicorp/terraform-provider-aws/internal/tags" ) -func DataSourceBucketObject() *schema.Resource { +func DataSourceObject() *schema.Resource { return &schema.Resource{ - Read: dataSourceBucketObjectRead, + Read: dataSourceObjectRead, Schema: map[string]*schema.Schema{ "body": { @@ -125,7 +125,7 @@ func DataSourceBucketObject() *schema.Resource { } } -func dataSourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error { +func dataSourceObjectRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).S3Conn ignoreTagsConfig := 
meta.(*conns.AWSClient).IgnoreTagsConfig @@ -150,7 +150,7 @@ func dataSourceBucketObjectRead(d *schema.ResourceData, meta interface{}) error uniqueId += "@" + v.(string) } - log.Printf("[DEBUG] Reading S3 Bucket Object: %s", input) + log.Printf("[DEBUG] Reading S3 Object: %s", input) out, err := conn.HeadObject(&input) if err != nil { return fmt.Errorf("failed getting S3 Bucket (%s) Object (%s): %w", bucket, key, err) diff --git a/internal/service/s3/bucket_object_data_source_test.go b/internal/service/s3/object_data_source_test.go similarity index 85% rename from internal/service/s3/bucket_object_data_source_test.go rename to internal/service/s3/object_data_source_test.go index 733e97b054f..821a21d273a 100644 --- a/internal/service/s3/bucket_object_data_source_test.go +++ b/internal/service/s3/object_data_source_test.go @@ -17,14 +17,14 @@ import ( const rfc1123RegexPattern = `^[a-zA-Z]{3}, [0-9]+ [a-zA-Z]+ [0-9]{4} [0-9:]+ [A-Z]+$` -func TestAccS3BucketObjectDataSource_basic(t *testing.T) { +func TestAccS3ObjectDataSource_basic(t *testing.T) { rInt := sdkacctest.RandInt() var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" - dataSourceName := "data.aws_s3_bucket_object.obj" + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, @@ -35,7 +35,7 @@ func TestAccS3BucketObjectDataSource_basic(t *testing.T) { { Config: testAccObjectDataSourceConfig_basic(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), resource.TestCheckResourceAttr(dataSourceName, "content_length", "11"), resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), @@ -51,12 +51,12 @@ func TestAccS3BucketObjectDataSource_basic(t *testing.T) 
{ }) } -func TestAccS3BucketObjectDataSource_basicViaAccessPoint(t *testing.T) { +func TestAccS3ObjectDataSource_basicViaAccessPoint(t *testing.T) { var dsObj, rObj s3.GetObjectOutput rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - dataSourceName := "data.aws_s3_bucket_object.test" - resourceName := "aws_s3_bucket_object.test" + dataSourceName := "data.aws_s3_object.test" + resourceName := "aws_s3_object.test" accessPointResourceName := "aws_s3_access_point.test" resource.ParallelTest(t, resource.TestCase{ @@ -67,9 +67,9 @@ func TestAccS3BucketObjectDataSource_basicViaAccessPoint(t *testing.T) { { Config: testAccObjectDataSourceConfig_basicViaAccessPoint(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), resource.TestCheckResourceAttrPair(dataSourceName, "bucket", accessPointResourceName, "arn"), resource.TestCheckResourceAttrPair(dataSourceName, "key", resourceName, "key"), ), @@ -78,14 +78,14 @@ func TestAccS3BucketObjectDataSource_basicViaAccessPoint(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_readableBody(t *testing.T) { +func TestAccS3ObjectDataSource_readableBody(t *testing.T) { rInt := sdkacctest.RandInt() var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" - dataSourceName := "data.aws_s3_bucket_object.obj" + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, @@ -96,7 +96,7 @@ func TestAccS3BucketObjectDataSource_readableBody(t *testing.T) { { Config: testAccObjectDataSourceConfig_readableBody(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + 
testAccCheckObjectExists(resourceName, &rObj), testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), resource.TestCheckResourceAttr(dataSourceName, "content_length", "3"), resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), @@ -112,14 +112,14 @@ func TestAccS3BucketObjectDataSource_readableBody(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_kmsEncrypted(t *testing.T) { +func TestAccS3ObjectDataSource_kmsEncrypted(t *testing.T) { rInt := sdkacctest.RandInt() var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" - dataSourceName := "data.aws_s3_bucket_object.obj" + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, @@ -130,7 +130,7 @@ func TestAccS3BucketObjectDataSource_kmsEncrypted(t *testing.T) { { Config: testAccObjectDataSourceConfig_kmsEncrypted(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), resource.TestCheckResourceAttr(dataSourceName, "content_length", "22"), resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), @@ -148,14 +148,14 @@ func TestAccS3BucketObjectDataSource_kmsEncrypted(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_bucketKeyEnabled(t *testing.T) { +func TestAccS3ObjectDataSource_bucketKeyEnabled(t *testing.T) { rInt := sdkacctest.RandInt() var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" - dataSourceName := "data.aws_s3_bucket_object.obj" + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, @@ -166,7 +166,7 @@ func 
TestAccS3BucketObjectDataSource_bucketKeyEnabled(t *testing.T) { { Config: testAccObjectDataSourceConfig_bucketKeyEnabled(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), resource.TestCheckResourceAttr(dataSourceName, "content_length", "22"), resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), @@ -185,14 +185,14 @@ func TestAccS3BucketObjectDataSource_bucketKeyEnabled(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_allParams(t *testing.T) { +func TestAccS3ObjectDataSource_allParams(t *testing.T) { rInt := sdkacctest.RandInt() var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" - dataSourceName := "data.aws_s3_bucket_object.obj" + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, @@ -203,7 +203,7 @@ func TestAccS3BucketObjectDataSource_allParams(t *testing.T) { { Config: testAccObjectDataSourceConfig_allParams(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), resource.TestCheckResourceAttr(dataSourceName, "content_length", "25"), resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), @@ -222,7 +222,7 @@ func TestAccS3BucketObjectDataSource_allParams(t *testing.T) { // Supported, but difficult to reproduce in short testing time resource.TestCheckResourceAttrPair(dataSourceName, "storage_class", resourceName, "storage_class"), resource.TestCheckResourceAttr(dataSourceName, "expiration", ""), - // Currently unsupported in aws_s3_bucket_object resource + // Currently unsupported in 
aws_s3_object resource resource.TestCheckResourceAttr(dataSourceName, "expires", ""), resource.TestCheckResourceAttrPair(dataSourceName, "website_redirect_location", resourceName, "website_redirect"), resource.TestCheckResourceAttr(dataSourceName, "metadata.%", "0"), @@ -236,14 +236,14 @@ func TestAccS3BucketObjectDataSource_allParams(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_objectLockLegalHoldOff(t *testing.T) { +func TestAccS3ObjectDataSource_objectLockLegalHoldOff(t *testing.T) { rInt := sdkacctest.RandInt() var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" - dataSourceName := "data.aws_s3_bucket_object.obj" + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, @@ -254,7 +254,7 @@ func TestAccS3BucketObjectDataSource_objectLockLegalHoldOff(t *testing.T) { { Config: testAccObjectDataSourceConfig_objectLockLegalHoldOff(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), resource.TestCheckResourceAttr(dataSourceName, "content_length", "11"), resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), @@ -270,15 +270,15 @@ func TestAccS3BucketObjectDataSource_objectLockLegalHoldOff(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_objectLockLegalHoldOn(t *testing.T) { +func TestAccS3ObjectDataSource_objectLockLegalHoldOn(t *testing.T) { rInt := sdkacctest.RandInt() retainUntilDate := time.Now().UTC().AddDate(0, 0, 10).Format(time.RFC3339) var rObj s3.GetObjectOutput var dsObj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" - dataSourceName := "data.aws_s3_bucket_object.obj" + resourceName := "aws_s3_object.object" + dataSourceName := "data.aws_s3_object.obj" 
resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, @@ -289,7 +289,7 @@ func TestAccS3BucketObjectDataSource_objectLockLegalHoldOn(t *testing.T) { { Config: testAccObjectDataSourceConfig_objectLockLegalHoldOn(rInt, retainUntilDate), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), testAccCheckObjectExistsDataSource(dataSourceName, &dsObj), resource.TestCheckResourceAttr(dataSourceName, "content_length", "11"), resource.TestCheckResourceAttrPair(dataSourceName, "content_type", resourceName, "content_type"), @@ -305,14 +305,14 @@ func TestAccS3BucketObjectDataSource_objectLockLegalHoldOn(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_leadingSlash(t *testing.T) { +func TestAccS3ObjectDataSource_leadingSlash(t *testing.T) { var rObj s3.GetObjectOutput var dsObj1, dsObj2, dsObj3 s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" - dataSourceName1 := "data.aws_s3_bucket_object.obj1" - dataSourceName2 := "data.aws_s3_bucket_object.obj2" - dataSourceName3 := "data.aws_s3_bucket_object.obj3" + resourceName := "aws_s3_object.object" + dataSourceName1 := "data.aws_s3_object.obj1" + dataSourceName2 := "data.aws_s3_object.obj2" + dataSourceName3 := "data.aws_s3_object.obj3" rInt := sdkacctest.RandInt() resourceOnlyConf, conf := testAccObjectDataSourceConfig_leadingSlash(rInt) @@ -326,7 +326,7 @@ func TestAccS3BucketObjectDataSource_leadingSlash(t *testing.T) { { Config: resourceOnlyConf, Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &rObj), + testAccCheckObjectExists(resourceName, &rObj), ), }, { @@ -358,15 +358,15 @@ func TestAccS3BucketObjectDataSource_leadingSlash(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_multipleSlashes(t *testing.T) { +func TestAccS3ObjectDataSource_multipleSlashes(t *testing.T) { var rObj1, rObj2 s3.GetObjectOutput var dsObj1, dsObj2, dsObj3 
s3.GetObjectOutput - resourceName1 := "aws_s3_bucket_object.object1" - resourceName2 := "aws_s3_bucket_object.object2" - dataSourceName1 := "data.aws_s3_bucket_object.obj1" - dataSourceName2 := "data.aws_s3_bucket_object.obj2" - dataSourceName3 := "data.aws_s3_bucket_object.obj3" + resourceName1 := "aws_s3_object.object1" + resourceName2 := "aws_s3_object.object2" + dataSourceName1 := "data.aws_s3_object.obj1" + dataSourceName2 := "data.aws_s3_object.obj2" + dataSourceName3 := "data.aws_s3_object.obj3" rInt := sdkacctest.RandInt() resourceOnlyConf, conf := testAccObjectDataSourceConfig_multipleSlashes(rInt) @@ -380,8 +380,8 @@ func TestAccS3BucketObjectDataSource_multipleSlashes(t *testing.T) { { Config: resourceOnlyConf, Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName1, &rObj1), - testAccCheckBucketObjectExists(resourceName2, &rObj2), + testAccCheckObjectExists(resourceName1, &rObj1), + testAccCheckObjectExists(resourceName2, &rObj2), ), }, { @@ -408,9 +408,9 @@ func TestAccS3BucketObjectDataSource_multipleSlashes(t *testing.T) { }) } -func TestAccS3BucketObjectDataSource_singleSlashAsKey(t *testing.T) { +func TestAccS3ObjectDataSource_singleSlashAsKey(t *testing.T) { var dsObj s3.GetObjectOutput - dataSourceName := "data.aws_s3_bucket_object.test" + dataSourceName := "data.aws_s3_object.test" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ @@ -463,15 +463,15 @@ resource "aws_s3_bucket" "object_bucket" { bucket = "tf-object-test-bucket-%[1]d" } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket.bucket key = "tf-testing-obj-%[1]d" content = "Hello World" } -data "aws_s3_bucket_object" "obj" { +data "aws_s3_object" "obj" { bucket = aws_s3_bucket.object_bucket.bucket - key = aws_s3_bucket_object.object.key + key = aws_s3_object.object.key } `, randInt) } @@ -487,15 +487,15 @@ resource "aws_s3_access_point" 
"test" { name = %[1]q } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket key = %[1]q content = "Hello World" } -data "aws_s3_bucket_object" "test" { +data "aws_s3_object" "test" { bucket = aws_s3_access_point.test.arn - key = aws_s3_bucket_object.test.key + key = aws_s3_object.test.key } `, rName) } @@ -506,16 +506,16 @@ resource "aws_s3_bucket" "object_bucket" { bucket = "tf-object-test-bucket-%[1]d" } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket.bucket key = "tf-testing-obj-%[1]d-readable" content = "yes" content_type = "text/plain" } -data "aws_s3_bucket_object" "obj" { +data "aws_s3_object" "obj" { bucket = aws_s3_bucket.object_bucket.bucket - key = aws_s3_bucket_object.object.key + key = aws_s3_object.object.key } `, randInt) } @@ -531,7 +531,7 @@ resource "aws_kms_key" "example" { deletion_window_in_days = 7 } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket.bucket key = "tf-testing-obj-%[1]d-encrypted" content = "Keep Calm and Carry On" @@ -539,9 +539,9 @@ resource "aws_s3_bucket_object" "object" { kms_key_id = aws_kms_key.example.arn } -data "aws_s3_bucket_object" "obj" { +data "aws_s3_object" "obj" { bucket = aws_s3_bucket.object_bucket.bucket - key = aws_s3_bucket_object.object.key + key = aws_s3_object.object.key } `, randInt) } @@ -557,7 +557,7 @@ resource "aws_kms_key" "example" { deletion_window_in_days = 7 } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket.bucket key = "tf-testing-obj-%[1]d-encrypted" content = "Keep Calm and Carry On" @@ -566,9 +566,9 @@ resource "aws_s3_bucket_object" "object" { bucket_key_enabled = true } -data "aws_s3_bucket_object" "obj" { +data "aws_s3_object" "obj" { bucket = aws_s3_bucket.object_bucket.bucket - key = aws_s3_bucket_object.object.key + key = 
aws_s3_object.object.key } `, randInt) } @@ -583,7 +583,7 @@ resource "aws_s3_bucket" "object_bucket" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket.bucket key = "tf-testing-obj-%[1]d-all-params" @@ -603,9 +603,9 @@ CONTENT } } -data "aws_s3_bucket_object" "obj" { +data "aws_s3_object" "obj" { bucket = aws_s3_bucket.object_bucket.bucket - key = aws_s3_bucket_object.object.key + key = aws_s3_object.object.key } `, randInt) } @@ -624,16 +624,16 @@ resource "aws_s3_bucket" "object_bucket" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket.bucket key = "tf-testing-obj-%[1]d" content = "Hello World" object_lock_legal_hold_status = "OFF" } -data "aws_s3_bucket_object" "obj" { +data "aws_s3_object" "obj" { bucket = aws_s3_bucket.object_bucket.bucket - key = aws_s3_bucket_object.object.key + key = aws_s3_object.object.key } `, randInt) } @@ -652,7 +652,7 @@ resource "aws_s3_bucket" "object_bucket" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket.bucket key = "tf-testing-obj-%[1]d" content = "Hello World" @@ -662,9 +662,9 @@ resource "aws_s3_bucket_object" "object" { object_lock_retain_until_date = "%[2]s" } -data "aws_s3_bucket_object" "obj" { +data "aws_s3_object" "obj" { bucket = aws_s3_bucket.object_bucket.bucket - key = aws_s3_bucket_object.object.key + key = aws_s3_object.object.key } `, randInt, retainUntilDate) } @@ -675,7 +675,7 @@ resource "aws_s3_bucket" "object_bucket" { bucket = "tf-object-test-bucket-%[1]d" } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket.bucket key = "//tf-testing-obj-%[1]d-readable" content = "yes" @@ -686,17 +686,17 @@ resource "aws_s3_bucket_object" "object" { both := fmt.Sprintf(` %[1]s -data "aws_s3_bucket_object" "obj1" { +data "aws_s3_object" 
"obj1" { bucket = aws_s3_bucket.object_bucket.bucket key = "tf-testing-obj-%[2]d-readable" } -data "aws_s3_bucket_object" "obj2" { +data "aws_s3_object" "obj2" { bucket = aws_s3_bucket.object_bucket.bucket key = "/tf-testing-obj-%[2]d-readable" } -data "aws_s3_bucket_object" "obj3" { +data "aws_s3_object" "obj3" { bucket = aws_s3_bucket.object_bucket.bucket key = "//tf-testing-obj-%[2]d-readable" } @@ -711,7 +711,7 @@ resource "aws_s3_bucket" "object_bucket" { bucket = "tf-object-test-bucket-%[1]d" } -resource "aws_s3_bucket_object" "object1" { +resource "aws_s3_object" "object1" { bucket = aws_s3_bucket.object_bucket.bucket key = "first//second///third//" content = "yes" @@ -719,7 +719,7 @@ resource "aws_s3_bucket_object" "object1" { } # Without a trailing slash. -resource "aws_s3_bucket_object" "object2" { +resource "aws_s3_object" "object2" { bucket = aws_s3_bucket.object_bucket.bucket key = "/first////second/third" content = "no" @@ -730,17 +730,17 @@ resource "aws_s3_bucket_object" "object2" { both := fmt.Sprintf(` %s -data "aws_s3_bucket_object" "obj1" { +data "aws_s3_object" "obj1" { bucket = aws_s3_bucket.object_bucket.bucket key = "first/second/third/" } -data "aws_s3_bucket_object" "obj2" { +data "aws_s3_object" "obj2" { bucket = aws_s3_bucket.object_bucket.bucket key = "first//second///third//" } -data "aws_s3_bucket_object" "obj3" { +data "aws_s3_object" "obj3" { bucket = aws_s3_bucket.object_bucket.bucket key = "first/second/third" } @@ -755,7 +755,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -data "aws_s3_bucket_object" "test" { +data "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket key = "/" } diff --git a/internal/service/s3/bucket_object_test.go b/internal/service/s3/object_test.go similarity index 64% rename from internal/service/s3/bucket_object_test.go rename to internal/service/s3/object_test.go index e46d9314eb0..12e95f71c99 100644 --- a/internal/service/s3/bucket_object_test.go +++ b/internal/service/s3/object_test.go @@ 
-25,7 +25,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/tfresource" ) -func TestAccS3BucketObject_noNameNoKey(t *testing.T) { +func TestAccS3Object_noNameNoKey(t *testing.T) { bucketError := regexp.MustCompile(`bucket must not be empty`) keyError := regexp.MustCompile(`key must not be empty`) @@ -33,39 +33,39 @@ func TestAccS3BucketObject_noNameNoKey(t *testing.T) { PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { PreConfig: func() {}, - Config: testAccBucketObjectBasicConfig("", "a key"), + Config: testAccObjectBasicConfig("", "a key"), ExpectError: bucketError, }, { PreConfig: func() {}, - Config: testAccBucketObjectBasicConfig("a name", ""), + Config: testAccObjectBasicConfig("a name", ""), ExpectError: keyError, }, }, }) } -func TestAccS3BucketObject_empty(t *testing.T) { +func TestAccS3Object_empty(t *testing.T) { var obj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { PreConfig: func() {}, - Config: testAccBucketObjectEmptyConfig(rName), + Config: testAccObjectEmptyConfig(rName), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj), - testAccCheckBucketObjectBody(&obj, ""), + testAccCheckObjectExists(resourceName, &obj), + testAccCheckObjectBody(&obj, ""), ), }, { @@ -79,25 +79,25 @@ func TestAccS3BucketObject_empty(t *testing.T) { }) } -func TestAccS3BucketObject_source(t *testing.T) { +func 
TestAccS3Object_source(t *testing.T) {
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  source := testAccBucketObjectCreateTempFile(t, "{anything will do }")
+  source := testAccObjectCreateTempFile(t, "{anything will do }")
   defer os.Remove(source)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectSourceConfig(rName, source),
+        Config: testAccObjectSourceConfig(rName, source),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
-          testAccCheckBucketObjectBody(&obj, "{anything will do }"),
+          testAccCheckObjectExists(resourceName, &obj),
+          testAccCheckObjectBody(&obj, "{anything will do }"),
         ),
       },
       {
@@ -111,23 +111,23 @@ func TestAccS3BucketObject_source(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_content(t *testing.T) {
+func TestAccS3Object_content(t *testing.T) {
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectContentConfig(rName, "some_bucket_content"),
+        Config:    testAccObjectContentConfig(rName, "some_bucket_content"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
-          testAccCheckBucketObjectBody(&obj, "some_bucket_content"),
+          testAccCheckObjectExists(resourceName, &obj),
+          testAccCheckObjectBody(&obj, "some_bucket_content"),
         ),
       },
       {
@@ -141,25 +141,25 @@ func TestAccS3BucketObject_content(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_etagEncryption(t *testing.T) {
+func TestAccS3Object_etagEncryption(t *testing.T) {
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  source := testAccBucketObjectCreateTempFile(t, "{anything will do }")
+  source := testAccObjectCreateTempFile(t, "{anything will do }")
   defer os.Remove(source)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectEtagEncryption(rName, source),
+        Config:    testAccObjectEtagEncryption(rName, source),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
-          testAccCheckBucketObjectBody(&obj, "{anything will do }"),
+          testAccCheckObjectExists(resourceName, &obj),
+          testAccCheckObjectBody(&obj, "{anything will do }"),
           resource.TestCheckResourceAttr(resourceName, "etag", "7b006ff4d70f68cc65061acf2f802e6f"),
         ),
       },
@@ -174,38 +174,38 @@ func TestAccS3BucketObject_etagEncryption(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_contentBase64(t *testing.T) {
+func TestAccS3Object_contentBase64(t *testing.T) {
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectContentBase64Config(rName, base64.StdEncoding.EncodeToString([]byte("some_bucket_content"))),
+        Config:    testAccObjectContentBase64Config(rName, base64.StdEncoding.EncodeToString([]byte("some_bucket_content"))),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
-          testAccCheckBucketObjectBody(&obj, "some_bucket_content"),
+          testAccCheckObjectExists(resourceName, &obj),
+          testAccCheckObjectBody(&obj, "some_bucket_content"),
         ),
       },
     },
   })
 }

-func TestAccS3BucketObject_sourceHashTrigger(t *testing.T) {
+func TestAccS3Object_sourceHashTrigger(t *testing.T) {
   var obj, updated_obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)

   startingData := "Ebben!"
   changingData := "Ne andrò lontana"

-  filename := testAccBucketObjectCreateTempFile(t, startingData)
+  filename := testAccObjectCreateTempFile(t, startingData)
   defer os.Remove(filename)

   rewriteFile := func(*terraform.State) error {
@@ -220,14 +220,14 @@ func TestAccS3BucketObject_sourceHashTrigger(t *testing.T) {
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_sourceHashTrigger(rName, filename),
+        Config:    testAccObjectConfig_sourceHashTrigger(rName, filename),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
-          testAccCheckBucketObjectBody(&obj, "Ebben!"),
+          testAccCheckObjectExists(resourceName, &obj),
+          testAccCheckObjectBody(&obj, "Ebben!"),
           resource.TestCheckResourceAttr(resourceName, "source_hash", "7c7e02a79f28968882bb1426c8f8bfc6"),
           rewriteFile,
         ),
@@ -235,10 +235,10 @@ func TestAccS3BucketObject_sourceHashTrigger(t *testing.T) {
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_sourceHashTrigger(rName, filename),
+        Config:    testAccObjectConfig_sourceHashTrigger(rName, filename),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &updated_obj),
-          testAccCheckBucketObjectBody(&updated_obj, "Ne andrò lontana"),
+          testAccCheckObjectExists(resourceName, &updated_obj),
+          testAccCheckObjectBody(&updated_obj, "Ne andrò lontana"),
           resource.TestCheckResourceAttr(resourceName, "source_hash", "cffc5e20de2d21764145b1124c9b337b"),
         ),
       },
@@ -253,25 +253,25 @@ func TestAccS3BucketObject_sourceHashTrigger(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_withContentCharacteristics(t *testing.T) {
+func TestAccS3Object_withContentCharacteristics(t *testing.T) {
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  source := testAccBucketObjectCreateTempFile(t, "{anything will do }")
+  source := testAccObjectCreateTempFile(t, "{anything will do }")
   defer os.Remove(source)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectConfig_withContentCharacteristics(rName, source),
+        Config: testAccObjectConfig_withContentCharacteristics(rName, source),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
-          testAccCheckBucketObjectBody(&obj, "{anything will do }"),
+          testAccCheckObjectExists(resourceName, &obj),
+          testAccCheckObjectBody(&obj, "{anything will do }"),
           resource.TestCheckResourceAttr(resourceName, "content_type", "binary/octet-stream"),
           resource.TestCheckResourceAttr(resourceName, "website_redirect", "http://google.com"),
         ),
@@ -280,24 +280,24 @@ func TestAccS3BucketObject_withContentCharacteristics(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_nonVersioned(t *testing.T) {
-  sourceInitial := testAccBucketObjectCreateTempFile(t, "initial object state")
+func TestAccS3Object_nonVersioned(t *testing.T) {
+  sourceInitial := testAccObjectCreateTempFile(t, "initial object state")
   defer os.Remove(sourceInitial)
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
   var originalObj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t); acctest.PreCheckAssumeRoleARN(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectConfig_nonVersioned(rName, sourceInitial),
+        Config: testAccObjectConfig_nonVersioned(rName, sourceInitial),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &originalObj),
-          testAccCheckBucketObjectBody(&originalObj, "initial object state"),
+          testAccCheckObjectExists(resourceName, &originalObj),
+          testAccCheckObjectBody(&originalObj, "initial object state"),
           resource.TestCheckResourceAttr(resourceName, "version_id", ""),
         ),
       },
@@ -312,27 +312,27 @@ func TestAccS3BucketObject_nonVersioned(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_updates(t *testing.T) {
+func TestAccS3Object_updates(t *testing.T) {
   var originalObj, modifiedObj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  sourceInitial := testAccBucketObjectCreateTempFile(t, "initial object state")
+  sourceInitial := testAccObjectCreateTempFile(t, "initial object state")
   defer os.Remove(sourceInitial)
-  sourceModified := testAccBucketObjectCreateTempFile(t, "modified object")
+  sourceModified := testAccObjectCreateTempFile(t, "modified object")
   defer os.Remove(sourceInitial)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectConfig_updateable(rName, false, sourceInitial),
+        Config: testAccObjectConfig_updateable(rName, false, sourceInitial),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &originalObj),
-          testAccCheckBucketObjectBody(&originalObj, "initial object state"),
+          testAccCheckObjectExists(resourceName, &originalObj),
+          testAccCheckObjectBody(&originalObj, "initial object state"),
           resource.TestCheckResourceAttr(resourceName, "etag", "647d1d58e1011c743ec67d5e8af87b53"),
           resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""),
           resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""),
@@ -340,10 +340,10 @@ func TestAccS3BucketObject_updates(t *testing.T) {
         ),
       },
       {
-        Config: testAccBucketObjectConfig_updateable(rName, false, sourceModified),
+        Config: testAccObjectConfig_updateable(rName, false, sourceModified),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &modifiedObj),
-          testAccCheckBucketObjectBody(&modifiedObj, "modified object"),
+          testAccCheckObjectExists(resourceName, &modifiedObj),
+          testAccCheckObjectBody(&modifiedObj, "modified object"),
           resource.TestCheckResourceAttr(resourceName, "etag", "1c7fd13df1515c2a13ad9eb068931f09"),
           resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""),
           resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""),
@@ -361,15 +361,15 @@ func TestAccS3BucketObject_updates(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_updateSameFile(t *testing.T) {
+func TestAccS3Object_updateSameFile(t *testing.T) {
   var originalObj, modifiedObj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)

   startingData := "lane 8"
   changingData := "chicane"

-  filename := testAccBucketObjectCreateTempFile(t, startingData)
+  filename := testAccObjectCreateTempFile(t, startingData)
   defer os.Remove(filename)

   rewriteFile := func(*terraform.State) error {
@@ -384,23 +384,23 @@ func TestAccS3BucketObject_updateSameFile(t *testing.T) {
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectConfig_updateable(rName, false, filename),
+        Config: testAccObjectConfig_updateable(rName, false, filename),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &originalObj),
-          testAccCheckBucketObjectBody(&originalObj, startingData),
+          testAccCheckObjectExists(resourceName, &originalObj),
+          testAccCheckObjectBody(&originalObj, startingData),
           resource.TestCheckResourceAttr(resourceName, "etag", "aa48b42f36a2652cbee40c30a5df7d25"),
           rewriteFile,
         ),
         ExpectNonEmptyPlan: true,
       },
       {
-        Config: testAccBucketObjectConfig_updateable(rName, false, filename),
+        Config: testAccObjectConfig_updateable(rName, false, filename),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &modifiedObj),
-          testAccCheckBucketObjectBody(&modifiedObj, changingData),
+          testAccCheckObjectExists(resourceName, &modifiedObj),
+          testAccCheckObjectBody(&modifiedObj, changingData),
           resource.TestCheckResourceAttr(resourceName, "etag", "fafc05f8c4da0266a99154681ab86e8c"),
         ),
       },
@@ -408,37 +408,37 @@ func TestAccS3BucketObject_updateSameFile(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_updatesWithVersioning(t *testing.T) {
+func TestAccS3Object_updatesWithVersioning(t *testing.T) {
   var originalObj, modifiedObj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  sourceInitial := testAccBucketObjectCreateTempFile(t, "initial versioned object state")
+  sourceInitial := testAccObjectCreateTempFile(t, "initial versioned object state")
   defer os.Remove(sourceInitial)
-  sourceModified := testAccBucketObjectCreateTempFile(t, "modified versioned object")
+  sourceModified := testAccObjectCreateTempFile(t, "modified versioned object")
   defer os.Remove(sourceInitial)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectConfig_updateable(rName, true, sourceInitial),
+        Config: testAccObjectConfig_updateable(rName, true, sourceInitial),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &originalObj),
-          testAccCheckBucketObjectBody(&originalObj, "initial versioned object state"),
+          testAccCheckObjectExists(resourceName, &originalObj),
+          testAccCheckObjectBody(&originalObj, "initial versioned object state"),
           resource.TestCheckResourceAttr(resourceName, "etag", "cee4407fa91906284e2a5e5e03e86b1b"),
         ),
       },
       {
-        Config: testAccBucketObjectConfig_updateable(rName, true, sourceModified),
+        Config: testAccObjectConfig_updateable(rName, true, sourceModified),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &modifiedObj),
-          testAccCheckBucketObjectBody(&modifiedObj, "modified versioned object"),
+          testAccCheckObjectExists(resourceName, &modifiedObj),
+          testAccCheckObjectBody(&modifiedObj, "modified versioned object"),
           resource.TestCheckResourceAttr(resourceName, "etag", "00b8c73b1b50e7cc932362c7225b8e29"),
-          testAccCheckBucketObjectVersionIdDiffers(&modifiedObj, &originalObj),
+          testAccCheckObjectVersionIdDiffers(&modifiedObj, &originalObj),
         ),
       },
      {
@@ -452,66 +452,66 @@ func TestAccS3BucketObject_updatesWithVersioning(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_updatesWithVersioningViaAccessPoint(t *testing.T) {
+func TestAccS3Object_updatesWithVersioningViaAccessPoint(t *testing.T) {
   var originalObj, modifiedObj s3.GetObjectOutput
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  resourceName := "aws_s3_bucket_object.test"
+  resourceName := "aws_s3_object.test"
   accessPointResourceName := "aws_s3_access_point.test"

-  sourceInitial := testAccBucketObjectCreateTempFile(t, "initial versioned object state")
+  sourceInitial := testAccObjectCreateTempFile(t, "initial versioned object state")
   defer os.Remove(sourceInitial)
-  sourceModified := testAccBucketObjectCreateTempFile(t, "modified versioned object")
+  sourceModified := testAccObjectCreateTempFile(t, "modified versioned object")
   defer os.Remove(sourceInitial)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectConfig_updateableViaAccessPoint(rName, true, sourceInitial),
+        Config: testAccObjectConfig_updateableViaAccessPoint(rName, true, sourceInitial),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &originalObj),
-          testAccCheckBucketObjectBody(&originalObj, "initial versioned object state"),
+          testAccCheckObjectExists(resourceName, &originalObj),
+          testAccCheckObjectBody(&originalObj, "initial versioned object state"),
           resource.TestCheckResourceAttrPair(resourceName, "bucket", accessPointResourceName, "arn"),
           resource.TestCheckResourceAttr(resourceName, "etag", "cee4407fa91906284e2a5e5e03e86b1b"),
         ),
       },
       {
-        Config: testAccBucketObjectConfig_updateableViaAccessPoint(rName, true, sourceModified),
+        Config: testAccObjectConfig_updateableViaAccessPoint(rName, true, sourceModified),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &modifiedObj),
-          testAccCheckBucketObjectBody(&modifiedObj, "modified versioned object"),
+          testAccCheckObjectExists(resourceName, &modifiedObj),
+          testAccCheckObjectBody(&modifiedObj, "modified versioned object"),
           resource.TestCheckResourceAttr(resourceName, "etag", "00b8c73b1b50e7cc932362c7225b8e29"),
-          testAccCheckBucketObjectVersionIdDiffers(&modifiedObj, &originalObj),
+          testAccCheckObjectVersionIdDiffers(&modifiedObj, &originalObj),
        ),
       },
     },
   })
 }

-func TestAccS3BucketObject_kms(t *testing.T) {
+func TestAccS3Object_kms(t *testing.T) {
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  source := testAccBucketObjectCreateTempFile(t, "{anything will do }")
+  source := testAccObjectCreateTempFile(t, "{anything will do }")
   defer os.Remove(source)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withKMSID(rName, source),
+        Config:    testAccObjectConfig_withKMSID(rName, source),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
-          testAccCheckBucketObjectSSE(resourceName, "aws:kms"),
-          testAccCheckBucketObjectBody(&obj, "{anything will do }"),
+          testAccCheckObjectExists(resourceName, &obj),
+          testAccCheckObjectSSE(resourceName, "aws:kms"),
+          testAccCheckObjectBody(&obj, "{anything will do }"),
         ),
       },
       {
@@ -525,27 +525,27 @@ func TestAccS3BucketObject_kms(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_sse(t *testing.T) {
+func TestAccS3Object_sse(t *testing.T) {
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  source := testAccBucketObjectCreateTempFile(t, "{anything will do }")
+  source := testAccObjectCreateTempFile(t, "{anything will do }")
   defer os.Remove(source)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withSSE(rName, source),
+        Config:    testAccObjectConfig_withSSE(rName, source),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
-          testAccCheckBucketObjectSSE(resourceName, "AES256"),
-          testAccCheckBucketObjectBody(&obj, "{anything will do }"),
+          testAccCheckObjectExists(resourceName, &obj),
+          testAccCheckObjectSSE(resourceName, "AES256"),
+          testAccCheckObjectBody(&obj, "{anything will do }"),
         ),
       },
       {
@@ -559,44 +559,44 @@ func TestAccS3BucketObject_sse(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_acl(t *testing.T) {
+func TestAccS3Object_acl(t *testing.T) {
   var obj1, obj2, obj3 s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectConfig_acl(rName, "some_bucket_content", "private"),
+        Config: testAccObjectConfig_acl(rName, "some_bucket_content", "private"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj1),
-          testAccCheckBucketObjectBody(&obj1, "some_bucket_content"),
+          testAccCheckObjectExists(resourceName, &obj1),
+          testAccCheckObjectBody(&obj1, "some_bucket_content"),
           resource.TestCheckResourceAttr(resourceName, "acl", "private"),
-          testAccCheckBucketObjectACL(resourceName, []string{"FULL_CONTROL"}),
+          testAccCheckObjectACL(resourceName, []string{"FULL_CONTROL"}),
         ),
       },
       {
-        Config: testAccBucketObjectConfig_acl(rName, "some_bucket_content", "public-read"),
+        Config: testAccObjectConfig_acl(rName, "some_bucket_content", "public-read"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj2),
-          testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1),
-          testAccCheckBucketObjectBody(&obj2, "some_bucket_content"),
+          testAccCheckObjectExists(resourceName, &obj2),
+          testAccCheckObjectVersionIdEquals(&obj2, &obj1),
+          testAccCheckObjectBody(&obj2, "some_bucket_content"),
           resource.TestCheckResourceAttr(resourceName, "acl", "public-read"),
-          testAccCheckBucketObjectACL(resourceName, []string{"FULL_CONTROL", "READ"}),
+          testAccCheckObjectACL(resourceName, []string{"FULL_CONTROL", "READ"}),
         ),
       },
       {
-        Config: testAccBucketObjectConfig_acl(rName, "changed_some_bucket_content", "private"),
+        Config: testAccObjectConfig_acl(rName, "changed_some_bucket_content", "private"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj3),
-          testAccCheckBucketObjectVersionIdDiffers(&obj3, &obj2),
-          testAccCheckBucketObjectBody(&obj3, "changed_some_bucket_content"),
+          testAccCheckObjectExists(resourceName, &obj3),
+          testAccCheckObjectVersionIdDiffers(&obj3, &obj2),
+          testAccCheckObjectBody(&obj3, "changed_some_bucket_content"),
           resource.TestCheckResourceAttr(resourceName, "acl", "private"),
-          testAccCheckBucketObjectACL(resourceName, []string{"FULL_CONTROL"}),
+          testAccCheckObjectACL(resourceName, []string{"FULL_CONTROL"}),
         ),
       },
       {
@@ -610,39 +610,39 @@ func TestAccS3BucketObject_acl(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_metadata(t *testing.T) {
+func TestAccS3Object_metadata(t *testing.T) {
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
-        Config: testAccBucketObjectConfig_withMetadata(rName, "key1", "value1", "key2", "value2"),
+        Config: testAccObjectConfig_withMetadata(rName, "key1", "value1", "key2", "value2"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
+          testAccCheckObjectExists(resourceName, &obj),
           resource.TestCheckResourceAttr(resourceName, "metadata.%", "2"),
           resource.TestCheckResourceAttr(resourceName, "metadata.key1", "value1"),
           resource.TestCheckResourceAttr(resourceName, "metadata.key2", "value2"),
         ),
       },
       {
-        Config: testAccBucketObjectConfig_withMetadata(rName, "key1", "value1updated", "key3", "value3"),
+        Config: testAccObjectConfig_withMetadata(rName, "key1", "value1updated", "key3", "value3"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
+          testAccCheckObjectExists(resourceName, &obj),
           resource.TestCheckResourceAttr(resourceName, "metadata.%", "2"),
           resource.TestCheckResourceAttr(resourceName, "metadata.key1", "value1updated"),
           resource.TestCheckResourceAttr(resourceName, "metadata.key3", "value3"),
         ),
       },
       {
-        Config: testAccBucketObjectEmptyConfig(rName),
+        Config: testAccObjectEmptyConfig(rName),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
+          testAccCheckObjectExists(resourceName, &obj),
           resource.TestCheckResourceAttr(resourceName, "metadata.%", "0"),
         ),
       },
@@ -657,56 +657,56 @@ func TestAccS3BucketObject_metadata(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_storageClass(t *testing.T) {
+func TestAccS3Object_storageClass(t *testing.T) {
   var obj s3.GetObjectOutput
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectContentConfig(rName, "some_bucket_content"),
+        Config:    testAccObjectContentConfig(rName, "some_bucket_content"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
+          testAccCheckObjectExists(resourceName, &obj),
           resource.TestCheckResourceAttr(resourceName, "storage_class", "STANDARD"),
-          testAccCheckBucketObjectStorageClass(resourceName, "STANDARD"),
+          testAccCheckObjectStorageClass(resourceName, "STANDARD"),
         ),
       },
       {
-        Config: testAccBucketObjectConfig_storageClass(rName, "REDUCED_REDUNDANCY"),
+        Config: testAccObjectConfig_storageClass(rName, "REDUCED_REDUNDANCY"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
+          testAccCheckObjectExists(resourceName, &obj),
           resource.TestCheckResourceAttr(resourceName, "storage_class", "REDUCED_REDUNDANCY"),
-          testAccCheckBucketObjectStorageClass(resourceName, "REDUCED_REDUNDANCY"),
+          testAccCheckObjectStorageClass(resourceName, "REDUCED_REDUNDANCY"),
        ),
       },
       {
-        Config: testAccBucketObjectConfig_storageClass(rName, "GLACIER"),
+        Config: testAccObjectConfig_storageClass(rName, "GLACIER"),
         Check: resource.ComposeTestCheckFunc(
           // Can't GetObject on an object in Glacier without restoring it.
           resource.TestCheckResourceAttr(resourceName, "storage_class", "GLACIER"),
-          testAccCheckBucketObjectStorageClass(resourceName, "GLACIER"),
+          testAccCheckObjectStorageClass(resourceName, "GLACIER"),
         ),
       },
       {
-        Config: testAccBucketObjectConfig_storageClass(rName, "INTELLIGENT_TIERING"),
+        Config: testAccObjectConfig_storageClass(rName, "INTELLIGENT_TIERING"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj),
+          testAccCheckObjectExists(resourceName, &obj),
           resource.TestCheckResourceAttr(resourceName, "storage_class", "INTELLIGENT_TIERING"),
-          testAccCheckBucketObjectStorageClass(resourceName, "INTELLIGENT_TIERING"),
+          testAccCheckObjectStorageClass(resourceName, "INTELLIGENT_TIERING"),
         ),
       },
       {
-        Config: testAccBucketObjectConfig_storageClass(rName, "DEEP_ARCHIVE"),
+        Config: testAccObjectConfig_storageClass(rName, "DEEP_ARCHIVE"),
         Check: resource.ComposeTestCheckFunc(
           // Can't GetObject on an object in DEEP_ARCHIVE without restoring it.
           resource.TestCheckResourceAttr(resourceName, "storage_class", "DEEP_ARCHIVE"),
-          testAccCheckBucketObjectStorageClass(resourceName, "DEEP_ARCHIVE"),
+          testAccCheckObjectStorageClass(resourceName, "DEEP_ARCHIVE"),
         ),
       },
       {
@@ -720,24 +720,24 @@ func TestAccS3BucketObject_storageClass(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_tags(t *testing.T) {
+func TestAccS3Object_tags(t *testing.T) {
   var obj1, obj2, obj3, obj4 s3.GetObjectOutput
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   key := "test-key"

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj1),
-          testAccCheckBucketObjectBody(&obj1, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj1),
+          testAccCheckObjectBody(&obj1, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "3"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"),
@@ -746,11 +746,11 @@ func TestAccS3BucketObject_tags(t *testing.T) {
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withUpdatedTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withUpdatedTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj2),
-          testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1),
-          testAccCheckBucketObjectBody(&obj2, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj2),
+          testAccCheckObjectVersionIdEquals(&obj2, &obj1),
+          testAccCheckObjectBody(&obj2, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "4"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "B@BB"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key3", "X X"),
@@ -760,21 +760,21 @@ func TestAccS3BucketObject_tags(t *testing.T) {
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withNoTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withNoTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj3),
-          testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2),
-          testAccCheckBucketObjectBody(&obj3, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj3),
+          testAccCheckObjectVersionIdEquals(&obj3, &obj2),
+          testAccCheckObjectBody(&obj3, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),
         ),
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withTags(rName, key, "changed stuff"),
+        Config:    testAccObjectConfig_withTags(rName, key, "changed stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj4),
-          testAccCheckBucketObjectVersionIdDiffers(&obj4, &obj3),
-          testAccCheckBucketObjectBody(&obj4, "changed stuff"),
+          testAccCheckObjectExists(resourceName, &obj4),
+          testAccCheckObjectVersionIdDiffers(&obj4, &obj3),
+          testAccCheckObjectBody(&obj4, "changed stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "3"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"),
@@ -792,24 +792,24 @@ func TestAccS3BucketObject_tags(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) {
+func TestAccS3Object_tagsLeadingSingleSlash(t *testing.T) {
   var obj1, obj2, obj3, obj4 s3.GetObjectOutput
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   key := "/test-key"

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj1),
-          testAccCheckBucketObjectBody(&obj1, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj1),
+          testAccCheckObjectBody(&obj1, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "3"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"),
@@ -818,11 +818,11 @@ func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) {
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withUpdatedTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withUpdatedTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj2),
-          testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1),
-          testAccCheckBucketObjectBody(&obj2, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj2),
+          testAccCheckObjectVersionIdEquals(&obj2, &obj1),
+          testAccCheckObjectBody(&obj2, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "4"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "B@BB"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key3", "X X"),
@@ -832,21 +832,21 @@ func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) {
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withNoTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withNoTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj3),
-          testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2),
-          testAccCheckBucketObjectBody(&obj3, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj3),
+          testAccCheckObjectVersionIdEquals(&obj3, &obj2),
+          testAccCheckObjectBody(&obj3, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),
         ),
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withTags(rName, key, "changed stuff"),
+        Config:    testAccObjectConfig_withTags(rName, key, "changed stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj4),
-          testAccCheckBucketObjectVersionIdDiffers(&obj4, &obj3),
-          testAccCheckBucketObjectBody(&obj4, "changed stuff"),
+          testAccCheckObjectExists(resourceName, &obj4),
+          testAccCheckObjectVersionIdDiffers(&obj4, &obj3),
+          testAccCheckObjectBody(&obj4, "changed stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "3"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"),
@@ -864,24 +864,24 @@ func TestAccS3BucketObject_tagsLeadingSingleSlash(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) {
+func TestAccS3Object_tagsLeadingMultipleSlashes(t *testing.T) {
   var obj1, obj2, obj3, obj4 s3.GetObjectOutput
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   key := "/////test-key"

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj1),
-          testAccCheckBucketObjectBody(&obj1, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj1),
+          testAccCheckObjectBody(&obj1, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "3"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"),
@@ -890,11 +890,11 @@ func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) {
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withUpdatedTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withUpdatedTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj2),
-          testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1),
-          testAccCheckBucketObjectBody(&obj2, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj2),
+          testAccCheckObjectVersionIdEquals(&obj2, &obj1),
+          testAccCheckObjectBody(&obj2, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "4"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "B@BB"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key3", "X X"),
@@ -904,21 +904,21 @@ func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) {
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withNoTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withNoTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj3),
-          testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2),
-          testAccCheckBucketObjectBody(&obj3, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj3),
+          testAccCheckObjectVersionIdEquals(&obj3, &obj2),
+          testAccCheckObjectBody(&obj3, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "0"),
         ),
       },
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withTags(rName, key, "changed stuff"),
+        Config:    testAccObjectConfig_withTags(rName, key, "changed stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj4),
-          testAccCheckBucketObjectVersionIdDiffers(&obj4, &obj3),
-          testAccCheckBucketObjectBody(&obj4, "changed stuff"),
+          testAccCheckObjectExists(resourceName, &obj4),
+          testAccCheckObjectVersionIdDiffers(&obj4, &obj3),
+          testAccCheckObjectBody(&obj4, "changed stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "3"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"),
@@ -929,24 +929,24 @@ func TestAccS3BucketObject_tagsLeadingMultipleSlashes(t *testing.T) {
   })
 }

-func TestAccS3BucketObject_tagsMultipleSlashes(t *testing.T) {
+func TestAccS3Object_tagsMultipleSlashes(t *testing.T) {
   var obj1, obj2, obj3, obj4 s3.GetObjectOutput
   rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix)
-  resourceName := "aws_s3_bucket_object.object"
+  resourceName := "aws_s3_object.object"
   key := "first//second///third//"

   resource.ParallelTest(t, resource.TestCase{
     PreCheck:     func() { acctest.PreCheck(t) },
     ErrorCheck:   acctest.ErrorCheck(t, s3.EndpointsID),
     Providers:    acctest.Providers,
-    CheckDestroy: testAccCheckBucketObjectDestroy,
+    CheckDestroy: testAccCheckObjectDestroy,
     Steps: []resource.TestStep{
       {
         PreConfig: func() {},
-        Config:    testAccBucketObjectConfig_withTags(rName, key, "stuff"),
+        Config:    testAccObjectConfig_withTags(rName, key, "stuff"),
         Check: resource.ComposeTestCheckFunc(
-          testAccCheckBucketObjectExists(resourceName, &obj1),
-          testAccCheckBucketObjectBody(&obj1, "stuff"),
+          testAccCheckObjectExists(resourceName, &obj1),
+          testAccCheckObjectBody(&obj1, "stuff"),
           resource.TestCheckResourceAttr(resourceName, "tags.%", "3"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"),
           resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"),
@@ -955,11 +955,11 @@ func TestAccS3BucketObject_tagsMultipleSlashes(t
*testing.T) { }, { PreConfig: func() {}, - Config: testAccBucketObjectConfig_withUpdatedTags(rName, key, "stuff"), + Config: testAccObjectConfig_withUpdatedTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectExists(resourceName, &obj2), + testAccCheckObjectVersionIdEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "4"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "B@BB"), resource.TestCheckResourceAttr(resourceName, "tags.Key3", "X X"), @@ -969,21 +969,21 @@ func TestAccS3BucketObject_tagsMultipleSlashes(t *testing.T) { }, { PreConfig: func() {}, - Config: testAccBucketObjectConfig_withNoTags(rName, key, "stuff"), + Config: testAccObjectConfig_withNoTags(rName, key, "stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj3), - testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "stuff"), + testAccCheckObjectExists(resourceName, &obj3), + testAccCheckObjectVersionIdEquals(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), ), }, { PreConfig: func() {}, - Config: testAccBucketObjectConfig_withTags(rName, key, "changed stuff"), + Config: testAccObjectConfig_withTags(rName, key, "changed stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj4), - testAccCheckBucketObjectVersionIdDiffers(&obj4, &obj3), - testAccCheckBucketObjectBody(&obj4, "changed stuff"), + testAccCheckObjectExists(resourceName, &obj4), + testAccCheckObjectVersionIdDiffers(&obj4, &obj3), + testAccCheckObjectBody(&obj4, "changed stuff"), resource.TestCheckResourceAttr(resourceName, "tags.%", "3"), resource.TestCheckResourceAttr(resourceName, 
"tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), @@ -994,33 +994,33 @@ func TestAccS3BucketObject_tagsMultipleSlashes(t *testing.T) { }) } -func TestAccS3BucketObject_objectLockLegalHoldStartWithNone(t *testing.T) { +func TestAccS3Object_objectLockLegalHoldStartWithNone(t *testing.T) { var obj1, obj2, obj3 s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_noObjectLockLegalHold(rName, "stuff"), + Config: testAccObjectConfig_noObjectLockLegalHold(rName, "stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectExists(resourceName, &obj1), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), ), }, { - Config: testAccBucketObjectConfig_withObjectLockLegalHold(rName, "stuff", "ON"), + Config: testAccObjectConfig_withObjectLockLegalHold(rName, "stuff", "ON"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectExists(resourceName, &obj2), + testAccCheckObjectVersionIdEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, 
"object_lock_legal_hold_status", "ON"), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1028,11 +1028,11 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithNone(t *testing.T) { }, // Remove legal hold but create a new object version to test force_destroy { - Config: testAccBucketObjectConfig_withObjectLockLegalHold(rName, "changed stuff", "OFF"), + Config: testAccObjectConfig_withObjectLockLegalHold(rName, "changed stuff", "OFF"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj3), - testAccCheckBucketObjectVersionIdDiffers(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "changed stuff"), + testAccCheckObjectExists(resourceName, &obj3), + testAccCheckObjectVersionIdDiffers(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "changed stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", "OFF"), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1042,33 +1042,33 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithNone(t *testing.T) { }) } -func TestAccS3BucketObject_objectLockLegalHoldStartWithOn(t *testing.T) { +func TestAccS3Object_objectLockLegalHoldStartWithOn(t *testing.T) { var obj1, obj2 s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_withObjectLockLegalHold(rName, "stuff", "ON"), + Config: 
testAccObjectConfig_withObjectLockLegalHold(rName, "stuff", "ON"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectExists(resourceName, &obj1), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", "ON"), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), ), }, { - Config: testAccBucketObjectConfig_withObjectLockLegalHold(rName, "stuff", "OFF"), + Config: testAccObjectConfig_withObjectLockLegalHold(rName, "stuff", "OFF"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectExists(resourceName, &obj2), + testAccCheckObjectVersionIdEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", "OFF"), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1078,9 +1078,9 @@ func TestAccS3BucketObject_objectLockLegalHoldStartWithOn(t *testing.T) { }) } -func TestAccS3BucketObject_objectLockRetentionStartWithNone(t *testing.T) { +func TestAccS3Object_objectLockRetentionStartWithNone(t *testing.T) { var obj1, obj2, obj3 s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) retainUntilDate := time.Now().UTC().AddDate(0, 0, 10).Format(time.RFC3339) @@ -1088,24 +1088,24 @@ func TestAccS3BucketObject_objectLockRetentionStartWithNone(t *testing.T) { PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), 
Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_noObjectLockRetention(rName, "stuff"), + Config: testAccObjectConfig_noObjectLockRetention(rName, "stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectExists(resourceName, &obj1), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), ), }, { - Config: testAccBucketObjectConfig_withObjectLockRetention(rName, "stuff", retainUntilDate), + Config: testAccObjectConfig_withObjectLockRetention(rName, "stuff", retainUntilDate), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectExists(resourceName, &obj2), + testAccCheckObjectVersionIdEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", "GOVERNANCE"), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", retainUntilDate), @@ -1113,11 +1113,11 @@ func TestAccS3BucketObject_objectLockRetentionStartWithNone(t *testing.T) { }, // Remove retention period but create a new object version to test force_destroy { - Config: testAccBucketObjectConfig_noObjectLockRetention(rName, "changed stuff"), + Config: testAccObjectConfig_noObjectLockRetention(rName, "changed stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj3), - 
testAccCheckBucketObjectVersionIdDiffers(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "changed stuff"), + testAccCheckObjectExists(resourceName, &obj3), + testAccCheckObjectVersionIdDiffers(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "changed stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1127,9 +1127,9 @@ func TestAccS3BucketObject_objectLockRetentionStartWithNone(t *testing.T) { }) } -func TestAccS3BucketObject_objectLockRetentionStartWithSet(t *testing.T) { +func TestAccS3Object_objectLockRetentionStartWithSet(t *testing.T) { var obj1, obj2, obj3, obj4 s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) retainUntilDate1 := time.Now().UTC().AddDate(0, 0, 20).Format(time.RFC3339) retainUntilDate2 := time.Now().UTC().AddDate(0, 0, 30).Format(time.RFC3339) @@ -1139,46 +1139,46 @@ func TestAccS3BucketObject_objectLockRetentionStartWithSet(t *testing.T) { PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_withObjectLockRetention(rName, "stuff", retainUntilDate1), + Config: testAccObjectConfig_withObjectLockRetention(rName, "stuff", retainUntilDate1), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectExists(resourceName, &obj1), + testAccCheckObjectBody(&obj1, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, 
"object_lock_mode", "GOVERNANCE"), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", retainUntilDate1), ), }, { - Config: testAccBucketObjectConfig_withObjectLockRetention(rName, "stuff", retainUntilDate2), + Config: testAccObjectConfig_withObjectLockRetention(rName, "stuff", retainUntilDate2), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj2), - testAccCheckBucketObjectVersionIdEquals(&obj2, &obj1), - testAccCheckBucketObjectBody(&obj2, "stuff"), + testAccCheckObjectExists(resourceName, &obj2), + testAccCheckObjectVersionIdEquals(&obj2, &obj1), + testAccCheckObjectBody(&obj2, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", "GOVERNANCE"), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", retainUntilDate2), ), }, { - Config: testAccBucketObjectConfig_withObjectLockRetention(rName, "stuff", retainUntilDate3), + Config: testAccObjectConfig_withObjectLockRetention(rName, "stuff", retainUntilDate3), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj3), - testAccCheckBucketObjectVersionIdEquals(&obj3, &obj2), - testAccCheckBucketObjectBody(&obj3, "stuff"), + testAccCheckObjectExists(resourceName, &obj3), + testAccCheckObjectVersionIdEquals(&obj3, &obj2), + testAccCheckObjectBody(&obj3, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", "GOVERNANCE"), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", retainUntilDate3), ), }, { - Config: testAccBucketObjectConfig_noObjectLockRetention(rName, "stuff"), + Config: testAccObjectConfig_noObjectLockRetention(rName, "stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj4), - 
testAccCheckBucketObjectVersionIdEquals(&obj4, &obj3), - testAccCheckBucketObjectBody(&obj4, "stuff"), + testAccCheckObjectExists(resourceName, &obj4), + testAccCheckObjectVersionIdEquals(&obj4, &obj3), + testAccCheckObjectBody(&obj4, "stuff"), resource.TestCheckResourceAttr(resourceName, "object_lock_legal_hold_status", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_mode", ""), resource.TestCheckResourceAttr(resourceName, "object_lock_retain_until_date", ""), @@ -1188,22 +1188,22 @@ func TestAccS3BucketObject_objectLockRetentionStartWithSet(t *testing.T) { }) } -func TestAccS3BucketObject_objectBucketKeyEnabled(t *testing.T) { +func TestAccS3Object_objectBucketKeyEnabled(t *testing.T) { var obj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_objectBucketKeyEnabled(rName, "stuff"), + Config: testAccObjectConfig_objectBucketKeyEnabled(rName, "stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "stuff"), + testAccCheckObjectExists(resourceName, &obj), + testAccCheckObjectBody(&obj, "stuff"), resource.TestCheckResourceAttr(resourceName, "bucket_key_enabled", "true"), ), }, @@ -1211,22 +1211,22 @@ func TestAccS3BucketObject_objectBucketKeyEnabled(t *testing.T) { }) } -func TestAccS3BucketObject_bucketBucketKeyEnabled(t *testing.T) { +func TestAccS3Object_bucketBucketKeyEnabled(t *testing.T) { var obj s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" rName := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_bucketBucketKeyEnabled(rName, "stuff"), + Config: testAccObjectConfig_bucketBucketKeyEnabled(rName, "stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "stuff"), + testAccCheckObjectExists(resourceName, &obj), + testAccCheckObjectBody(&obj, "stuff"), resource.TestCheckResourceAttr(resourceName, "bucket_key_enabled", "true"), ), }, @@ -1234,32 +1234,32 @@ func TestAccS3BucketObject_bucketBucketKeyEnabled(t *testing.T) { }) } -func TestAccS3BucketObject_defaultBucketSSE(t *testing.T) { +func TestAccS3Object_defaultBucketSSE(t *testing.T) { var obj1 s3.GetObjectOutput - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" rName := sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) resource.ParallelTest(t, resource.TestCase{ PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), Providers: acctest.Providers, - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { - Config: testAccBucketObjectConfig_defaultBucketSSE(rName, "stuff"), + Config: testAccObjectConfig_defaultBucketSSE(rName, "stuff"), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj1), - testAccCheckBucketObjectBody(&obj1, "stuff"), + testAccCheckObjectExists(resourceName, &obj1), + testAccCheckObjectBody(&obj1, "stuff"), ), }, }, }) } -func TestAccS3BucketObject_ignoreTags(t *testing.T) { +func TestAccS3Object_ignoreTags(t *testing.T) { var obj s3.GetObjectOutput rName := 
sdkacctest.RandomWithPrefix(acctest.ResourcePrefix) - resourceName := "aws_s3_bucket_object.object" + resourceName := "aws_s3_object.object" key := "test-key" var providers []*schema.Provider @@ -1267,19 +1267,19 @@ func TestAccS3BucketObject_ignoreTags(t *testing.T) { PreCheck: func() { acctest.PreCheck(t) }, ErrorCheck: acctest.ErrorCheck(t, s3.EndpointsID), ProviderFactories: acctest.FactoriesInternal(&providers), - CheckDestroy: testAccCheckBucketObjectDestroy, + CheckDestroy: testAccCheckObjectDestroy, Steps: []resource.TestStep{ { PreConfig: func() {}, Config: acctest.ConfigCompose( acctest.ConfigIgnoreTagsKeyPrefixes1("ignorekey"), - testAccBucketObjectConfig_withNoTags(rName, key, "stuff")), + testAccObjectConfig_withNoTags(rName, key, "stuff")), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "stuff"), - testAccCheckBucketObjectUpdateTags(resourceName, nil, map[string]string{"ignorekey1": "ignorevalue1"}), + testAccCheckObjectExists(resourceName, &obj), + testAccCheckObjectBody(&obj, "stuff"), + testAccCheckObjectUpdateTags(resourceName, nil, map[string]string{"ignorekey1": "ignorevalue1"}), resource.TestCheckResourceAttr(resourceName, "tags.%", "0"), - testAccCheckBucketObjectCheckTags(resourceName, map[string]string{ + testAccCheckObjectCheckTags(resourceName, map[string]string{ "ignorekey1": "ignorevalue1", }), ), @@ -1288,15 +1288,15 @@ func TestAccS3BucketObject_ignoreTags(t *testing.T) { PreConfig: func() {}, Config: acctest.ConfigCompose( acctest.ConfigIgnoreTagsKeyPrefixes1("ignorekey"), - testAccBucketObjectConfig_withTags(rName, key, "stuff")), + testAccObjectConfig_withTags(rName, key, "stuff")), Check: resource.ComposeTestCheckFunc( - testAccCheckBucketObjectExists(resourceName, &obj), - testAccCheckBucketObjectBody(&obj, "stuff"), + testAccCheckObjectExists(resourceName, &obj), + testAccCheckObjectBody(&obj, "stuff"), resource.TestCheckResourceAttr(resourceName, 
"tags.%", "3"), resource.TestCheckResourceAttr(resourceName, "tags.Key1", "A@AA"), resource.TestCheckResourceAttr(resourceName, "tags.Key2", "BBB"), resource.TestCheckResourceAttr(resourceName, "tags.Key3", "CCC"), - testAccCheckBucketObjectCheckTags(resourceName, map[string]string{ + testAccCheckObjectCheckTags(resourceName, map[string]string{ "ignorekey1": "ignorevalue1", "Key1": "A@AA", "Key2": "BBB", @@ -1308,7 +1308,7 @@ func TestAccS3BucketObject_ignoreTags(t *testing.T) { }) } -func testAccCheckBucketObjectVersionIdDiffers(first, second *s3.GetObjectOutput) resource.TestCheckFunc { +func testAccCheckObjectVersionIdDiffers(first, second *s3.GetObjectOutput) resource.TestCheckFunc { return func(s *terraform.State) error { if first.VersionId == nil { return fmt.Errorf("Expected first object to have VersionId: %s", first) @@ -1325,7 +1325,7 @@ func testAccCheckBucketObjectVersionIdDiffers(first, second *s3.GetObjectOutput) } } -func testAccCheckBucketObjectVersionIdEquals(first, second *s3.GetObjectOutput) resource.TestCheckFunc { +func testAccCheckObjectVersionIdEquals(first, second *s3.GetObjectOutput) resource.TestCheckFunc { return func(s *terraform.State) error { if first.VersionId == nil { return fmt.Errorf("Expected first object to have VersionId: %s", first) @@ -1342,11 +1342,11 @@ func testAccCheckBucketObjectVersionIdEquals(first, second *s3.GetObjectOutput) } } -func testAccCheckBucketObjectDestroy(s *terraform.State) error { +func testAccCheckObjectDestroy(s *terraform.State) error { conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn for _, rs := range s.RootModule().Resources { - if rs.Type != "aws_s3_bucket_object" { + if rs.Type != "aws_s3_object" { continue } @@ -1363,7 +1363,7 @@ func testAccCheckBucketObjectDestroy(s *terraform.State) error { return nil } -func testAccCheckBucketObjectExists(n string, obj *s3.GetObjectOutput) resource.TestCheckFunc { +func testAccCheckObjectExists(n string, obj *s3.GetObjectOutput) resource.TestCheckFunc 
{ return func(s *terraform.State) error { rs, ok := s.RootModule().Resources[n] if !ok { @@ -1371,7 +1371,7 @@ func testAccCheckBucketObjectExists(n string, obj *s3.GetObjectOutput) resource. } if rs.Primary.ID == "" { - return fmt.Errorf("No S3 Bucket Object ID is set") + return fmt.Errorf("No S3 Object ID is set") } conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -1405,7 +1405,7 @@ func testAccCheckBucketObjectExists(n string, obj *s3.GetObjectOutput) resource. } if err != nil { - return fmt.Errorf("S3Bucket Object error: %s", err) + return fmt.Errorf("S3 Object error: %s", err) } *obj = *out @@ -1414,7 +1414,7 @@ func testAccCheckBucketObjectExists(n string, obj *s3.GetObjectOutput) resource. } } -func testAccCheckBucketObjectBody(obj *s3.GetObjectOutput, want string) resource.TestCheckFunc { +func testAccCheckObjectBody(obj *s3.GetObjectOutput, want string) resource.TestCheckFunc { return func(s *terraform.State) error { body, err := io.ReadAll(obj.Body) if err != nil { @@ -1430,7 +1430,7 @@ func testAccCheckBucketObjectBody(obj *s3.GetObjectOutput, want string) resource } } -func testAccCheckBucketObjectACL(n string, expectedPerms []string) resource.TestCheckFunc { +func testAccCheckObjectACL(n string, expectedPerms []string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -1458,7 +1458,7 @@ func testAccCheckBucketObjectACL(n string, expectedPerms []string) resource.Test } } -func testAccCheckBucketObjectStorageClass(n, expectedClass string) resource.TestCheckFunc { +func testAccCheckObjectStorageClass(n, expectedClass string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -1488,7 +1488,7 @@ func testAccCheckBucketObjectStorageClass(n, expectedClass string) resource.Test } } -func testAccCheckBucketObjectSSE(n, expectedSSE 
string) resource.TestCheckFunc { +func testAccCheckObjectSSE(n, expectedSSE string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -1516,7 +1516,7 @@ func testAccCheckBucketObjectSSE(n, expectedSSE string) resource.TestCheckFunc { } } -func testAccBucketObjectCreateTempFile(t *testing.T, data string) string { +func testAccObjectCreateTempFile(t *testing.T, data string) string { tmpFile, err := os.CreateTemp("", "tf-acc-s3-obj") if err != nil { t.Fatal(err) @@ -1532,7 +1532,7 @@ func testAccBucketObjectCreateTempFile(t *testing.T, data string) string { return filename } -func testAccCheckBucketObjectUpdateTags(n string, oldTags, newTags map[string]string) resource.TestCheckFunc { +func testAccCheckObjectUpdateTags(n string, oldTags, newTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -1541,7 +1541,7 @@ func testAccCheckBucketObjectUpdateTags(n string, oldTags, newTags map[string]st } } -func testAccCheckBucketObjectCheckTags(n string, expectedTags map[string]string) resource.TestCheckFunc { +func testAccCheckObjectCheckTags(n string, expectedTags map[string]string) resource.TestCheckFunc { return func(s *terraform.State) error { rs := s.RootModule().Resources[n] conn := acctest.Provider.Meta().(*conns.AWSClient).S3Conn @@ -1560,35 +1560,35 @@ func testAccCheckBucketObjectCheckTags(n string, expectedTags map[string]string) } } -func testAccBucketObjectBasicConfig(bucket, key string) string { +func testAccObjectBasicConfig(bucket, key string) string { return fmt.Sprintf(` -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = %[1]q key = %[2]q } `, bucket, key) } -func testAccBucketObjectEmptyConfig(rName string) string { +func testAccObjectEmptyConfig(rName string) string { return 
fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" } `, rName) } -func testAccBucketObjectSourceConfig(rName string, source string) string { +func testAccObjectSourceConfig(rName string, source string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" source = %[2]q @@ -1597,13 +1597,13 @@ resource "aws_s3_bucket_object" "object" { `, rName, source) } -func testAccBucketObjectConfig_withContentCharacteristics(rName string, source string) string { +func testAccObjectConfig_withContentCharacteristics(rName string, source string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" source = %[2]q @@ -1614,13 +1614,13 @@ resource "aws_s3_bucket_object" "object" { `, rName, source) } -func testAccBucketObjectContentConfig(rName string, content string) string { +func testAccObjectContentConfig(rName string, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %[2]q @@ -1628,13 +1628,13 @@ resource "aws_s3_bucket_object" "object" { `, rName, content) } -func testAccBucketObjectEtagEncryption(rName string, source string) string { +func testAccObjectEtagEncryption(rName string, source string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" server_side_encryption = 
"AES256" @@ -1644,13 +1644,13 @@ resource "aws_s3_bucket_object" "object" { `, rName, source) } -func testAccBucketObjectContentBase64Config(rName string, contentBase64 string) string { +func testAccObjectContentBase64Config(rName string, contentBase64 string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content_base64 = %[2]q @@ -1658,13 +1658,13 @@ resource "aws_s3_bucket_object" "object" { `, rName, contentBase64) } -func testAccBucketObjectConfig_sourceHashTrigger(rName string, source string) string { +func testAccObjectConfig_sourceHashTrigger(rName string, source string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" source = %[2]q @@ -1673,7 +1673,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, source) } -func testAccBucketObjectConfig_updateable(rName string, bucketVersioning bool, source string) string { +func testAccObjectConfig_updateable(rName string, bucketVersioning bool, source string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "object_bucket_3" { bucket = %[1]q @@ -1683,7 +1683,7 @@ resource "aws_s3_bucket" "object_bucket_3" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket_3.bucket key = "updateable-key" source = %[3]q @@ -1692,7 +1692,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, bucketVersioning, source) } -func testAccBucketObjectConfig_updateableViaAccessPoint(rName string, bucketVersioning bool, source string) string { +func testAccObjectConfig_updateableViaAccessPoint(rName string, bucketVersioning bool, source string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1707,7 
+1707,7 @@ resource "aws_s3_access_point" "test" { name = %[1]q } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_access_point.test.arn key = "updateable-key" source = %[3]q @@ -1716,7 +1716,7 @@ resource "aws_s3_bucket_object" "test" { `, rName, bucketVersioning, source) } -func testAccBucketObjectConfig_withKMSID(rName string, source string) string { +func testAccObjectConfig_withKMSID(rName string, source string) string { return fmt.Sprintf(` resource "aws_kms_key" "kms_key_1" {} @@ -1724,7 +1724,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" source = %[2]q @@ -1733,13 +1733,13 @@ resource "aws_s3_bucket_object" "object" { `, rName, source) } -func testAccBucketObjectConfig_withSSE(rName string, source string) string { +func testAccObjectConfig_withSSE(rName string, source string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" source = %[2]q @@ -1748,7 +1748,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, source) } -func testAccBucketObjectConfig_acl(rName string, content, acl string) string { +func testAccObjectConfig_acl(rName string, content, acl string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1758,7 +1758,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %[2]q @@ -1767,13 +1767,13 @@ resource "aws_s3_bucket_object" "object" { `, rName, content, acl) } -func testAccBucketObjectConfig_storageClass(rName string, storage_class string) string { +func testAccObjectConfig_storageClass(rName string, storage_class string) string { 
return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = "some_bucket_content" @@ -1782,7 +1782,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, storage_class) } -func testAccBucketObjectConfig_withTags(rName, key, content string) string { +func testAccObjectConfig_withTags(rName, key, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1792,7 +1792,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = %[2]q content = %[3]q @@ -1806,7 +1806,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, key, content) } -func testAccBucketObjectConfig_withUpdatedTags(rName, key, content string) string { +func testAccObjectConfig_withUpdatedTags(rName, key, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1816,7 +1816,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = %[2]q content = %[3]q @@ -1831,7 +1831,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, key, content) } -func testAccBucketObjectConfig_withNoTags(rName, key, content string) string { +func testAccObjectConfig_withNoTags(rName, key, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1841,7 +1841,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = %[2]q content = %[3]q @@ -1849,13 +1849,13 @@ resource "aws_s3_bucket_object" "object" { `, rName, key, content) } -func testAccBucketObjectConfig_withMetadata(rName string, metadataKey1, metadataValue1, metadataKey2, 
metadataValue2 string) string { +func testAccObjectConfig_withMetadata(rName string, metadataKey1, metadataValue1, metadataKey2, metadataValue2 string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" @@ -1867,7 +1867,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, metadataKey1, metadataValue1, metadataKey2, metadataValue2) } -func testAccBucketObjectConfig_noObjectLockLegalHold(rName string, content string) string { +func testAccObjectConfig_noObjectLockLegalHold(rName string, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1881,7 +1881,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %[2]q @@ -1890,7 +1890,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, content) } -func testAccBucketObjectConfig_withObjectLockLegalHold(rName string, content, legalHoldStatus string) string { +func testAccObjectConfig_withObjectLockLegalHold(rName string, content, legalHoldStatus string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1904,7 +1904,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %[2]q @@ -1914,7 +1914,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, content, legalHoldStatus) } -func testAccBucketObjectConfig_noObjectLockRetention(rName string, content string) string { +func testAccObjectConfig_noObjectLockRetention(rName string, content string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1928,7 +1928,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" 
{ +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %[2]q @@ -1937,7 +1937,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, content) } -func testAccBucketObjectConfig_withObjectLockRetention(rName string, content, retainUntilDate string) string { +func testAccObjectConfig_withObjectLockRetention(rName string, content, retainUntilDate string) string { return fmt.Sprintf(` resource "aws_s3_bucket" "test" { bucket = %[1]q @@ -1951,7 +1951,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %[2]q @@ -1962,7 +1962,7 @@ resource "aws_s3_bucket_object" "object" { `, rName, content, retainUntilDate) } -func testAccBucketObjectConfig_nonVersioned(rName string, source string) string { +func testAccObjectConfig_nonVersioned(rName string, source string) string { policy := `{ "Version": "2012-10-17", "Statement": [ @@ -1989,7 +1989,7 @@ resource "aws_s3_bucket" "object_bucket_3" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.object_bucket_3.bucket key = "updateable-key" source = %[2]q @@ -1998,10 +1998,10 @@ resource "aws_s3_bucket_object" "object" { `, rName, source) } -func testAccBucketObjectConfig_objectBucketKeyEnabled(rName string, content string) string { +func testAccObjectConfig_objectBucketKeyEnabled(rName string, content string) string { return fmt.Sprintf(` resource "aws_kms_key" "test" { - description = "Encrypts test bucket objects" + description = "Encrypts test objects" deletion_window_in_days = 7 } @@ -2009,7 +2009,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %q @@ -2019,10 +2019,10 @@ resource "aws_s3_bucket_object" "object" { `, rName, content) 
} -func testAccBucketObjectConfig_bucketBucketKeyEnabled(rName string, content string) string { +func testAccObjectConfig_bucketBucketKeyEnabled(rName string, content string) string { return fmt.Sprintf(` resource "aws_kms_key" "test" { - description = "Encrypts test bucket objects" + description = "Encrypts test objects" deletion_window_in_days = 7 } @@ -2040,7 +2040,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %q @@ -2048,10 +2048,10 @@ resource "aws_s3_bucket_object" "object" { `, rName, content) } -func testAccBucketObjectConfig_defaultBucketSSE(rName string, content string) string { +func testAccObjectConfig_defaultBucketSSE(rName string, content string) string { return fmt.Sprintf(` resource "aws_kms_key" "test" { - description = "Encrypts test bucket objects" + description = "Encrypts test objects" deletion_window_in_days = 7 } @@ -2067,7 +2067,7 @@ resource "aws_s3_bucket" "test" { } } -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = aws_s3_bucket.test.bucket key = "test-key" content = %[2]q diff --git a/internal/service/s3/bucket_objects_data_source.go b/internal/service/s3/objects_data_source.go similarity index 95% rename from internal/service/s3/bucket_objects_data_source.go rename to internal/service/s3/objects_data_source.go index 9c9fc2d856d..5c6503829c2 100644 --- a/internal/service/s3/bucket_objects_data_source.go +++ b/internal/service/s3/objects_data_source.go @@ -11,9 +11,9 @@ import ( const keyRequestPageSize = 1000 -func DataSourceBucketObjects() *schema.Resource { +func DataSourceObjects() *schema.Resource { return &schema.Resource{ - Read: dataSourceBucketObjectsRead, + Read: dataSourceObjectsRead, Schema: map[string]*schema.Schema{ "bucket": { @@ -64,7 +64,7 @@ func DataSourceBucketObjects() *schema.Resource { } } -func dataSourceBucketObjectsRead(d 
*schema.ResourceData, meta interface{}) error { +func dataSourceObjectsRead(d *schema.ResourceData, meta interface{}) error { conn := meta.(*conns.AWSClient).S3Conn bucket := d.Get("bucket").(string) diff --git a/internal/service/s3/bucket_objects_data_source_test.go b/internal/service/s3/objects_data_source_test.go similarity index 63% rename from internal/service/s3/bucket_objects_data_source_test.go rename to internal/service/s3/objects_data_source_test.go index 54b2beb3c27..9d90aca9bd9 100644 --- a/internal/service/s3/bucket_objects_data_source_test.go +++ b/internal/service/s3/objects_data_source_test.go @@ -11,7 +11,7 @@ import ( "github.com/hashicorp/terraform-provider-aws/internal/acctest" ) -func TestAccS3BucketObjectsDataSource_basic(t *testing.T) { +func TestAccS3ObjectsDataSource_basic(t *testing.T) { rInt := sdkacctest.RandInt() resource.ParallelTest(t, resource.TestCase{ @@ -27,17 +27,17 @@ func TestAccS3BucketObjectsDataSource_basic(t *testing.T) { { Config: testAccObjectsBasicDataSourceConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckObjectsExistsDataSource("data.aws_s3_bucket_objects.yesh"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.#", "2"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.0", "arch/navajo/north_window"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.1", "arch/navajo/sand_dune"), + testAccCheckObjectsExistsDataSource("data.aws_s3_objects.yesh"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.#", "2"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.0", "arch/navajo/north_window"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.1", "arch/navajo/sand_dune"), ), }, }, }) } -func TestAccS3BucketObjectsDataSource_basicViaAccessPoint(t *testing.T) { +func TestAccS3ObjectsDataSource_basicViaAccessPoint(t *testing.T) { rInt := sdkacctest.RandInt() resource.ParallelTest(t, 
resource.TestCase{ @@ -53,17 +53,17 @@ func TestAccS3BucketObjectsDataSource_basicViaAccessPoint(t *testing.T) { { Config: testAccObjectsBasicViaAccessPointDataSourceConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckObjectsExistsDataSource("data.aws_s3_bucket_objects.yesh"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.#", "2"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.0", "arch/navajo/north_window"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.1", "arch/navajo/sand_dune"), + testAccCheckObjectsExistsDataSource("data.aws_s3_objects.yesh"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.#", "2"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.0", "arch/navajo/north_window"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.1", "arch/navajo/sand_dune"), ), }, }, }) } -func TestAccS3BucketObjectsDataSource_all(t *testing.T) { +func TestAccS3ObjectsDataSource_all(t *testing.T) { rInt := sdkacctest.RandInt() resource.ParallelTest(t, resource.TestCase{ @@ -79,22 +79,22 @@ func TestAccS3BucketObjectsDataSource_all(t *testing.T) { { Config: testAccObjectsAllDataSourceConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckObjectsExistsDataSource("data.aws_s3_bucket_objects.yesh"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.#", "7"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.0", "arch/courthouse_towers/landscape"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.1", "arch/navajo/north_window"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.2", "arch/navajo/sand_dune"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.3", "arch/partition/park_avenue"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.4", "arch/rubicon"), - 
resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.5", "arch/three_gossips/broken"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.6", "arch/three_gossips/turret"), + testAccCheckObjectsExistsDataSource("data.aws_s3_objects.yesh"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.#", "7"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.0", "arch/courthouse_towers/landscape"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.1", "arch/navajo/north_window"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.2", "arch/navajo/sand_dune"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.3", "arch/partition/park_avenue"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.4", "arch/rubicon"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.5", "arch/three_gossips/broken"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.6", "arch/three_gossips/turret"), ), }, }, }) } -func TestAccS3BucketObjectsDataSource_prefixes(t *testing.T) { +func TestAccS3ObjectsDataSource_prefixes(t *testing.T) { rInt := sdkacctest.RandInt() resource.ParallelTest(t, resource.TestCase{ @@ -110,21 +110,21 @@ func TestAccS3BucketObjectsDataSource_prefixes(t *testing.T) { { Config: testAccObjectsPrefixesDataSourceConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckObjectsExistsDataSource("data.aws_s3_bucket_objects.yesh"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.#", "1"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.0", "arch/rubicon"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "common_prefixes.#", "4"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "common_prefixes.0", "arch/courthouse_towers/"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "common_prefixes.1", 
"arch/navajo/"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "common_prefixes.2", "arch/partition/"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "common_prefixes.3", "arch/three_gossips/"), + testAccCheckObjectsExistsDataSource("data.aws_s3_objects.yesh"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.#", "1"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.0", "arch/rubicon"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "common_prefixes.#", "4"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "common_prefixes.0", "arch/courthouse_towers/"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "common_prefixes.1", "arch/navajo/"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "common_prefixes.2", "arch/partition/"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "common_prefixes.3", "arch/three_gossips/"), ), }, }, }) } -func TestAccS3BucketObjectsDataSource_encoded(t *testing.T) { +func TestAccS3ObjectsDataSource_encoded(t *testing.T) { rInt := sdkacctest.RandInt() resource.ParallelTest(t, resource.TestCase{ @@ -140,17 +140,17 @@ func TestAccS3BucketObjectsDataSource_encoded(t *testing.T) { { Config: testAccObjectsEncodedDataSourceConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckObjectsExistsDataSource("data.aws_s3_bucket_objects.yesh"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.#", "2"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.0", "arch/ru+b+ic+on"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.1", "arch/rubicon"), + testAccCheckObjectsExistsDataSource("data.aws_s3_objects.yesh"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.#", "2"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.0", "arch/ru+b+ic+on"), + 
resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.1", "arch/rubicon"), ), }, }, }) } -func TestAccS3BucketObjectsDataSource_maxKeys(t *testing.T) { +func TestAccS3ObjectsDataSource_maxKeys(t *testing.T) { rInt := sdkacctest.RandInt() resource.ParallelTest(t, resource.TestCase{ @@ -166,17 +166,17 @@ func TestAccS3BucketObjectsDataSource_maxKeys(t *testing.T) { { Config: testAccObjectsMaxKeysDataSourceConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckObjectsExistsDataSource("data.aws_s3_bucket_objects.yesh"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.#", "2"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.0", "arch/courthouse_towers/landscape"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.1", "arch/navajo/north_window"), + testAccCheckObjectsExistsDataSource("data.aws_s3_objects.yesh"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.#", "2"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.0", "arch/courthouse_towers/landscape"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.1", "arch/navajo/north_window"), ), }, }, }) } -func TestAccS3BucketObjectsDataSource_startAfter(t *testing.T) { +func TestAccS3ObjectsDataSource_startAfter(t *testing.T) { rInt := sdkacctest.RandInt() resource.ParallelTest(t, resource.TestCase{ @@ -192,16 +192,16 @@ func TestAccS3BucketObjectsDataSource_startAfter(t *testing.T) { { Config: testAccObjectsStartAfterDataSourceConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckObjectsExistsDataSource("data.aws_s3_bucket_objects.yesh"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.#", "1"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.0", "arch/three_gossips/turret"), + testAccCheckObjectsExistsDataSource("data.aws_s3_objects.yesh"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", 
"keys.#", "1"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.0", "arch/three_gossips/turret"), ), }, }, }) } -func TestAccS3BucketObjectsDataSource_fetchOwner(t *testing.T) { +func TestAccS3ObjectsDataSource_fetchOwner(t *testing.T) { rInt := sdkacctest.RandInt() resource.ParallelTest(t, resource.TestCase{ @@ -217,9 +217,9 @@ func TestAccS3BucketObjectsDataSource_fetchOwner(t *testing.T) { { Config: testAccObjectsOwnersDataSourceConfig(rInt), Check: resource.ComposeTestCheckFunc( - testAccCheckObjectsExistsDataSource("data.aws_s3_bucket_objects.yesh"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "keys.#", "2"), - resource.TestCheckResourceAttr("data.aws_s3_bucket_objects.yesh", "owners.#", "2"), + testAccCheckObjectsExistsDataSource("data.aws_s3_objects.yesh"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "keys.#", "2"), + resource.TestCheckResourceAttr("data.aws_s3_objects.yesh", "owners.#", "2"), ), }, }, @@ -247,43 +247,43 @@ resource "aws_s3_bucket" "objects_bucket" { bucket = "tf-acc-objects-test-bucket-%d" } -resource "aws_s3_bucket_object" "object1" { +resource "aws_s3_object" "object1" { bucket = aws_s3_bucket.objects_bucket.id key = "arch/three_gossips/turret" content = "Delicate" } -resource "aws_s3_bucket_object" "object2" { +resource "aws_s3_object" "object2" { bucket = aws_s3_bucket.objects_bucket.id key = "arch/three_gossips/broken" content = "Dark Angel" } -resource "aws_s3_bucket_object" "object3" { +resource "aws_s3_object" "object3" { bucket = aws_s3_bucket.objects_bucket.id key = "arch/navajo/north_window" content = "Balanced Rock" } -resource "aws_s3_bucket_object" "object4" { +resource "aws_s3_object" "object4" { bucket = aws_s3_bucket.objects_bucket.id key = "arch/navajo/sand_dune" content = "Queen Victoria Rock" } -resource "aws_s3_bucket_object" "object5" { +resource "aws_s3_object" "object5" { bucket = aws_s3_bucket.objects_bucket.id key = "arch/partition/park_avenue" content = 
"Double-O" } -resource "aws_s3_bucket_object" "object6" { +resource "aws_s3_object" "object6" { bucket = aws_s3_bucket.objects_bucket.id key = "arch/courthouse_towers/landscape" content = "Fiery Furnace" } -resource "aws_s3_bucket_object" "object7" { +resource "aws_s3_object" "object7" { bucket = aws_s3_bucket.objects_bucket.id key = "arch/rubicon" content = "Devils Garden" @@ -304,7 +304,7 @@ func testAccObjectsBasicDataSourceConfig(randInt int) string { return fmt.Sprintf(` %s -data "aws_s3_bucket_objects" "yesh" { +data "aws_s3_objects" "yesh" { bucket = aws_s3_bucket.objects_bucket.id prefix = "arch/navajo/" delimiter = "/" @@ -314,7 +314,7 @@ data "aws_s3_bucket_objects" "yesh" { func testAccObjectsBasicViaAccessPointDataSourceConfig(randInt int) string { return testAccObjectsResourcesPlusAccessPointDataSourceConfig(randInt) + ` -data "aws_s3_bucket_objects" "yesh" { +data "aws_s3_objects" "yesh" { bucket = aws_s3_access_point.test.arn prefix = "arch/navajo/" delimiter = "/" @@ -326,7 +326,7 @@ func testAccObjectsAllDataSourceConfig(randInt int) string { return fmt.Sprintf(` %s -data "aws_s3_bucket_objects" "yesh" { +data "aws_s3_objects" "yesh" { bucket = aws_s3_bucket.objects_bucket.id } `, testAccObjectsResourcesDataSourceConfig(randInt)) @@ -336,7 +336,7 @@ func testAccObjectsPrefixesDataSourceConfig(randInt int) string { return fmt.Sprintf(` %s -data "aws_s3_bucket_objects" "yesh" { +data "aws_s3_objects" "yesh" { bucket = aws_s3_bucket.objects_bucket.id prefix = "arch/" delimiter = "/" @@ -348,7 +348,7 @@ func testAccObjectsExtraResourceDataSourceConfig(randInt int) string { return fmt.Sprintf(` %s -resource "aws_s3_bucket_object" "object8" { +resource "aws_s3_object" "object8" { bucket = aws_s3_bucket.objects_bucket.id key = "arch/ru b ic on" content = "Goose Island" @@ -360,7 +360,7 @@ func testAccObjectsEncodedDataSourceConfig(randInt int) string { return fmt.Sprintf(` %s -data "aws_s3_bucket_objects" "yesh" { +data "aws_s3_objects" "yesh" { bucket = 
aws_s3_bucket.objects_bucket.id encoding_type = "url" prefix = "arch/ru" @@ -372,7 +372,7 @@ func testAccObjectsMaxKeysDataSourceConfig(randInt int) string { return fmt.Sprintf(` %s -data "aws_s3_bucket_objects" "yesh" { +data "aws_s3_objects" "yesh" { bucket = aws_s3_bucket.objects_bucket.id max_keys = 2 } @@ -383,7 +383,7 @@ func testAccObjectsStartAfterDataSourceConfig(randInt int) string { return fmt.Sprintf(` %s -data "aws_s3_bucket_objects" "yesh" { +data "aws_s3_objects" "yesh" { bucket = aws_s3_bucket.objects_bucket.id start_after = "arch/three_gossips/broken" } @@ -394,7 +394,7 @@ func testAccObjectsOwnersDataSourceConfig(randInt int) string { return fmt.Sprintf(` %s -data "aws_s3_bucket_objects" "yesh" { +data "aws_s3_objects" "yesh" { bucket = aws_s3_bucket.objects_bucket.id prefix = "arch/three_gossips/" fetch_owner = true diff --git a/internal/service/s3/sweep.go b/internal/service/s3/sweep.go index 849816cf0e0..7d4289bc336 100644 --- a/internal/service/s3/sweep.go +++ b/internal/service/s3/sweep.go @@ -22,9 +22,9 @@ import ( ) func init() { - resource.AddTestSweepers("aws_s3_bucket_object", &resource.Sweeper{ - Name: "aws_s3_bucket_object", - F: sweepBucketObjects, + resource.AddTestSweepers("aws_s3_object", &resource.Sweeper{ + Name: "aws_s3_object", + F: sweepObjects, }) resource.AddTestSweepers("aws_s3_bucket", &resource.Sweeper{ @@ -32,13 +32,13 @@ func init() { F: sweepBuckets, Dependencies: []string{ "aws_s3_access_point", - "aws_s3_bucket_object", + "aws_s3_object", "aws_s3control_multi_region_access_point", }, }) } -func sweepBucketObjects(region string) error { +func sweepObjects(region string) error { client, err := sweep.SharedRegionalSweepClient(region) if err != nil { return fmt.Errorf("error getting client: %s", err) @@ -50,16 +50,16 @@ func sweepBucketObjects(region string) error { output, err := conn.ListBuckets(input) if sweep.SkipSweepError(err) { - log.Printf("[WARN] Skipping S3 Bucket Objects sweep for %s: %s", region, err) + 
log.Printf("[WARN] Skipping S3 Objects sweep for %s: %s", region, err) return nil } if err != nil { - return fmt.Errorf("error listing S3 Bucket Objects: %s", err) + return fmt.Errorf("error listing S3 Objects: %s", err) } if len(output.Buckets) == 0 { - log.Print("[DEBUG] No S3 Bucket Objects to sweep") + log.Print("[DEBUG] No S3 Objects to sweep") return nil } @@ -93,7 +93,7 @@ func sweepBucketObjects(region string) error { continue } - objectLockEnabled, err := bucketObjectLockEnabled(conn, bucketName) + objectLockEnabled, err := objectLockEnabled(conn, bucketName) if err != nil { log.Printf("[ERROR] Error getting S3 Bucket (%s) Object Lock: %s", bucketName, err) @@ -217,7 +217,7 @@ func bucketRegion(conn *s3.S3, bucket string) (string, error) { return region, nil } -func bucketObjectLockEnabled(conn *s3.S3, bucket string) (bool, error) { +func objectLockEnabled(conn *s3.S3, bucket string) (bool, error) { input := &s3.GetObjectLockConfigurationInput{ Bucket: aws.String(bucket), } diff --git a/internal/service/sagemaker/endpoint_test.go b/internal/service/sagemaker/endpoint_test.go index 791a1bc82eb..f37366babdd 100644 --- a/internal/service/sagemaker/endpoint_test.go +++ b/internal/service/sagemaker/endpoint_test.go @@ -308,7 +308,7 @@ resource "aws_s3_bucket" "test" { bucket = %[1]q } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = "model.tar.gz" source = "test-fixtures/sagemaker-tensorflow-serving-test-model.tar.gz" @@ -325,7 +325,7 @@ resource "aws_sagemaker_model" "test" { primary_container { image = data.aws_sagemaker_prebuilt_ecr_image.test.registry_path - model_data_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + model_data_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" } depends_on = [aws_iam_role_policy.test] diff --git a/internal/service/sagemaker/model_test.go 
b/internal/service/sagemaker/model_test.go index 6ad01eae710..32eb1736ecf 100644 --- a/internal/service/sagemaker/model_test.go +++ b/internal/service/sagemaker/model_test.go @@ -514,7 +514,7 @@ resource "aws_sagemaker_model" "test" { primary_container { image = data.aws_sagemaker_prebuilt_ecr_image.test.registry_path - model_data_url = "https://s3.amazonaws.com/${aws_s3_bucket_object.test.bucket}/${aws_s3_bucket_object.test.key}" + model_data_url = "https://s3.amazonaws.com/${aws_s3_object.test.bucket}/${aws_s3_object.test.key}" } } @@ -569,7 +569,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket key = "model.tar.gz" content = "some-data" diff --git a/internal/service/sagemaker/project_test.go b/internal/service/sagemaker/project_test.go index 95ca2b247e8..3066c874590 100644 --- a/internal/service/sagemaker/project_test.go +++ b/internal/service/sagemaker/project_test.go @@ -219,7 +219,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = "%[1]s.json" @@ -264,7 +264,7 @@ resource "aws_servicecatalog_product" "test" { provisioning_artifact_parameters { disable_template_validation = true name = %[1]q - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" type = "CLOUD_FORMATION_TEMPLATE" } } diff --git a/internal/service/servicecatalog/constraint_test.go b/internal/service/servicecatalog/constraint_test.go index 71e719ab69d..d5b3139b5a8 100644 --- a/internal/service/servicecatalog/constraint_test.go +++ b/internal/service/servicecatalog/constraint_test.go @@ -161,7 +161,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" 
{ +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = "%[1]s.json" @@ -196,7 +196,7 @@ resource "aws_servicecatalog_product" "test" { provisioning_artifact_parameters { disable_template_validation = true name = %[1]q - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" type = "CLOUD_FORMATION_TEMPLATE" } diff --git a/internal/service/servicecatalog/product_test.go b/internal/service/servicecatalog/product_test.go index 03269e168ec..dea7c7060df 100644 --- a/internal/service/servicecatalog/product_test.go +++ b/internal/service/servicecatalog/product_test.go @@ -268,7 +268,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = "%[1]s.json" @@ -315,7 +315,7 @@ resource "aws_servicecatalog_product" "test" { description = "artefaktbeskrivning" disable_template_validation = true name = %[1]q - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" type = "CLOUD_FORMATION_TEMPLATE" } @@ -344,7 +344,7 @@ resource "aws_servicecatalog_product" "test" { description = "artefaktbeskrivning" disable_template_validation = true name = %[1]q - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" type = "CLOUD_FORMATION_TEMPLATE" } diff --git a/internal/service/servicecatalog/provisioned_product_test.go b/internal/service/servicecatalog/provisioned_product_test.go index aa70ee33ae5..ea0aef9440b 100644 --- a/internal/service/servicecatalog/provisioned_product_test.go +++ 
b/internal/service/servicecatalog/provisioned_product_test.go @@ -169,7 +169,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = "%[1]s.json" @@ -232,7 +232,7 @@ resource "aws_servicecatalog_product" "test" { description = "artefaktbeskrivning" disable_template_validation = true name = %[1]q - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" type = "CLOUD_FORMATION_TEMPLATE" } diff --git a/internal/service/servicecatalog/provisioning_artifact_test.go b/internal/service/servicecatalog/provisioning_artifact_test.go index 8fda5ac7b88..d38ca2b75f5 100644 --- a/internal/service/servicecatalog/provisioning_artifact_test.go +++ b/internal/service/servicecatalog/provisioning_artifact_test.go @@ -248,7 +248,7 @@ resource "aws_s3_bucket" "test" { force_destroy = true } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.id key = "%[1]s.json" @@ -289,7 +289,7 @@ resource "aws_servicecatalog_product" "test" { description = "artefaktbeskrivning" disable_template_validation = true name = %[1]q - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" type = "CLOUD_FORMATION_TEMPLATE" } @@ -310,7 +310,7 @@ resource "aws_servicecatalog_provisioning_artifact" "test" { guidance = "DEFAULT" name = "%[1]s-2" product_id = aws_servicecatalog_product.test.id - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" type = "CLOUD_FORMATION_TEMPLATE" } 
`, rName)) @@ -326,7 +326,7 @@ resource "aws_servicecatalog_provisioning_artifact" "test" { guidance = "DEPRECATED" name = "%[1]s-3" product_id = aws_servicecatalog_product.test.id - template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_bucket_object.test.key}" + template_url = "https://${aws_s3_bucket.test.bucket_regional_domain_name}/${aws_s3_object.test.key}" type = "CLOUD_FORMATION_TEMPLATE" } `, rName)) diff --git a/internal/service/signer/signing_job_data_source_test.go b/internal/service/signer/signing_job_data_source_test.go index 9d48469a0f3..04f6598b3cb 100644 --- a/internal/service/signer/signing_job_data_source_test.go +++ b/internal/service/signer/signing_job_data_source_test.go @@ -56,7 +56,7 @@ resource "aws_s3_bucket" "destination" { force_destroy = true } -resource "aws_s3_bucket_object" "source" { +resource "aws_s3_object" "source" { bucket = aws_s3_bucket.source.bucket key = "lambdatest.zip" source = "test-fixtures/lambdatest.zip" @@ -67,9 +67,9 @@ resource "aws_signer_signing_job" "test" { source { s3 { - bucket = aws_s3_bucket_object.source.bucket - key = aws_s3_bucket_object.source.key - version = aws_s3_bucket_object.source.version_id + bucket = aws_s3_object.source.bucket + key = aws_s3_object.source.key + version = aws_s3_object.source.version_id } } diff --git a/internal/service/signer/signing_job_test.go b/internal/service/signer/signing_job_test.go index d03f958e59e..426486e761b 100644 --- a/internal/service/signer/signing_job_test.go +++ b/internal/service/signer/signing_job_test.go @@ -65,7 +65,7 @@ resource "aws_s3_bucket" "destination" { force_destroy = true } -resource "aws_s3_bucket_object" "source" { +resource "aws_s3_object" "source" { bucket = aws_s3_bucket.source.bucket key = "lambdatest.zip" source = "test-fixtures/lambdatest.zip" @@ -76,9 +76,9 @@ resource "aws_signer_signing_job" "test" { source { s3 { - bucket = aws_s3_bucket_object.source.bucket - key = aws_s3_bucket_object.source.key - version 
= aws_s3_bucket_object.source.version_id + bucket = aws_s3_object.source.bucket + key = aws_s3_object.source.key + version = aws_s3_object.source.version_id } } diff --git a/internal/service/ssm/document_test.go b/internal/service/ssm/document_test.go index c6ac7b3ff4a..d837c6dee9e 100644 --- a/internal/service/ssm/document_test.go +++ b/internal/service/ssm/document_test.go @@ -1085,7 +1085,7 @@ resource "aws_s3_bucket" "test" { bucket = "tf-object-test-bucket-%[2]d" } -resource "aws_s3_bucket_object" "test" { +resource "aws_s3_object" "test" { bucket = aws_s3_bucket.test.bucket key = "test.zip" source = "test-fixtures/ssm-doc-acc-test.zip" @@ -1098,7 +1098,7 @@ resource "aws_ssm_document" "test" { attachments_source { key = "SourceUrl" - values = ["s3://${aws_s3_bucket_object.test.bucket}"] + values = ["s3://${aws_s3_object.test.bucket}"] } content = < **NOTE on `max_keys`:** Retrieving very large numbers of keys can adversely affect Terraform's performance. -The bucket-objects data source returns keys (i.e., file names) and other metadata about objects in an S3 bucket. +The objects data source returns keys (i.e., file names) and other metadata about objects in an S3 bucket. 
## Example Usage The following example retrieves a list of all object keys in an S3 bucket and creates corresponding Terraform object data sources: ```terraform -data "aws_s3_bucket_objects" "my_objects" { +data "aws_s3_objects" "my_objects" { bucket = "ourcorp" } -data "aws_s3_bucket_object" "object_info" { - count = length(data.aws_s3_bucket_objects.my_objects.keys) - key = element(data.aws_s3_bucket_objects.my_objects.keys, count.index) - bucket = data.aws_s3_bucket_objects.my_objects.bucket +data "aws_s3_object" "object_info" { + count = length(data.aws_s3_objects.my_objects.keys) + key = element(data.aws_s3_objects.my_objects.keys, count.index) + bucket = data.aws_s3_objects.my_objects.bucket } ``` diff --git a/website/docs/d/vpcs.html.markdown b/website/docs/d/vpcs.html.markdown index 56b26fcc18a..8cd42429528 100644 --- a/website/docs/d/vpcs.html.markdown +++ b/website/docs/d/vpcs.html.markdown @@ -71,4 +71,4 @@ which take the following arguments: ## Attributes Reference * `id` - AWS Region. -* `ids` - A list of all the VPC Ids found. This data source will fail if none are found. +* `ids` - A list of all the VPC IDs found. diff --git a/website/docs/r/cloudformation_type.html.markdown b/website/docs/r/cloudformation_type.html.markdown index b3819d49c16..c4dd058d23a 100644 --- a/website/docs/r/cloudformation_type.html.markdown +++ b/website/docs/r/cloudformation_type.html.markdown @@ -16,7 +16,7 @@ Manages a version of a CloudFormation Type.
```terraform resource "aws_cloudformation_type" "example" { - schema_handler_package = "s3://${aws_s3_bucket_object.example.bucket}/${aws_s3_bucket_object.example.key}" + schema_handler_package = "s3://${aws_s3_object.example.bucket}/${aws_s3_object.example.key}" type = "RESOURCE" type_name = "ExampleCompany::ExampleService::ExampleResource" diff --git a/website/docs/r/cloudtrail.html.markdown b/website/docs/r/cloudtrail.html.markdown index b9f861a5083..363aadab7d9 100644 --- a/website/docs/r/cloudtrail.html.markdown +++ b/website/docs/r/cloudtrail.html.markdown @@ -70,7 +70,7 @@ POLICY ### Data Event Logging -CloudTrail can log [Data Events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) for certain services such as S3 bucket objects and Lambda function invocations. Additional information about data event configuration can be found in the following links: +CloudTrail can log [Data Events](https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html) for certain services such as S3 objects and Lambda function invocations. Additional information about data event configuration can be found in the following links: * [CloudTrail API DataResource documentation](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_DataResource.html) (for basic event selector). * [CloudTrail API AdvancedFieldSelector documentation](https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/API_AdvancedFieldSelector.html) (for advanced event selector). 
@@ -93,7 +93,7 @@ resource "aws_cloudtrail" "example" { } ``` -#### Logging All S3 Bucket Object Events By Using Basic Event Selectors +#### Logging All S3 Object Events By Using Basic Event Selectors ```terraform resource "aws_cloudtrail" "example" { @@ -136,7 +136,7 @@ resource "aws_cloudtrail" "example" { } ``` -#### Logging All S3 Bucket Object Events Except For Two S3 Buckets By Using Advanced Event Selectors +#### Logging All S3 Object Events Except For Two S3 Buckets By Using Advanced Event Selectors ```terraform data "aws_s3_bucket" "not-important-bucket-1" { @@ -151,7 +151,7 @@ resource "aws_cloudtrail" "example" { # ... other configuration ... advanced_event_selector { - name = "Log all S3 buckets objects events except for two S3 buckets" + name = "Log all S3 object events except for two S3 buckets" field_selector { field = "eventCategory" diff --git a/website/docs/r/config_conformance_pack.html.markdown b/website/docs/r/config_conformance_pack.html.markdown index 3f135afec28..ac10f34fd48 100644 --- a/website/docs/r/config_conformance_pack.html.markdown +++ b/website/docs/r/config_conformance_pack.html.markdown @@ -53,7 +53,7 @@ EOT ```terraform resource "aws_config_conformance_pack" "example" { name = "example" - template_s3_uri = "s3://${aws_s3_bucket.example.bucket}/${aws_s3_bucket_object.example.key}" + template_s3_uri = "s3://${aws_s3_bucket.example.bucket}/${aws_s3_object.example.key}" depends_on = [aws_config_configuration_recorder.example] } @@ -62,7 +62,7 @@ resource "aws_s3_bucket" "example" { bucket = "example" } -resource "aws_s3_bucket_object" "example" { +resource "aws_s3_object" "example" { bucket = aws_s3_bucket.example.id key = "example-key" content = < **NOTE:** If you are enabling versioning on the bucket for the first time, AWS recommends that you wait for 15 minutes after enabling versioning before issuing write operations (PUT or DELETE) on objects in the bucket. 
+ +## Example Usage + +```terraform +resource "aws_s3_bucket" "example" { + bucket = "example-bucket" + acl = "private" +} + +resource "aws_s3_bucket_versioning" "versioning_example" { + bucket = aws_s3_bucket.example.id + versioning_configuration { + status = "Enabled" + } +} +``` + +## Argument Reference + +The following arguments are supported: + +* `bucket` - (Required, Forces new resource) The name of the S3 bucket. +* `versioning_configuration` - (Required) Configuration block for the versioning parameters [detailed below](#versioning_configuration). +* `expected_bucket_owner` - (Optional, Forces new resource) The account ID of the expected bucket owner. +* `mfa` - (Optional, Required if `mfa_delete` in the `versioning_configuration` block is `Enabled`) The concatenation of the authentication device's serial number, a space, and the value that is displayed on your authentication device. + +### versioning_configuration + +The `versioning_configuration` configuration block supports the following arguments: + +* `status` - (Required) The versioning state of the bucket. Valid values: `Enabled` or `Suspended`. + +* `mfa_delete` - (Optional) Specifies whether MFA delete is enabled in the bucket versioning configuration. Valid values: `Enabled` or `Disabled`. + +## Attributes Reference + +In addition to all arguments above, the following attributes are exported: + +* `id` - The `bucket`, or the `bucket` and `expected_bucket_owner` separated by a comma (`,`), if the latter is provided. + +## Import + +S3 bucket versioning can be imported using the `bucket`, e.g., + +``` +$ terraform import aws_s3_bucket_versioning.example bucket-name +``` + +In addition, S3 bucket versioning can be imported using the `bucket` and `expected_bucket_owner` separated by a comma (`,`), e.g., 
+ +``` +$ terraform import aws_s3_bucket_versioning.example bucket-name,123456789012 +``` diff --git a/website/docs/r/s3_bucket_object.html.markdown b/website/docs/r/s3_object.html.markdown similarity index 94% rename from website/docs/r/s3_bucket_object.html.markdown rename to website/docs/r/s3_object.html.markdown index b662a0ca516..fad048f3b29 100644 --- a/website/docs/r/s3_bucket_object.html.markdown +++ b/website/docs/r/s3_object.html.markdown @@ -1,21 +1,21 @@ --- subcategory: "S3" layout: "aws" -page_title: "AWS: aws_s3_bucket_object" +page_title: "AWS: aws_s3_object" description: |- - Provides a S3 bucket object resource. + Provides an S3 object resource. --- -# Resource: aws_s3_bucket_object +# Resource: aws_s3_object -Provides a S3 bucket object resource. +Provides an S3 object resource. ## Example Usage ### Uploading a file to a bucket ```terraform -resource "aws_s3_bucket_object" "object" { +resource "aws_s3_object" "object" { bucket = "your_bucket_name" key = "new_object_key" source = "path/to/file" @@ -40,7 +40,7 @@ resource "aws_s3_bucket" "examplebucket" { acl = "private" } -resource "aws_s3_bucket_object" "examplebucket_object" { +resource "aws_s3_object" "example" { key = "someobject" bucket = aws_s3_bucket.examplebucket.id source = "index.html" @@ -56,7 +56,7 @@ resource "aws_s3_bucket" "examplebucket" { acl = "private" } -resource "aws_s3_bucket_object" "examplebucket_object" { +resource "aws_s3_object" "example" { key = "someobject" bucket = aws_s3_bucket.examplebucket.id source = "index.html" @@ -72,7 +72,7 @@ resource "aws_s3_bucket" "examplebucket" { acl = "private" } -resource "aws_s3_bucket_object" "examplebucket_object" { +resource "aws_s3_object" "example" { key = "someobject" bucket = aws_s3_bucket.examplebucket.id source = "index.html" @@ -96,7 +96,7 @@ resource "aws_s3_bucket" "examplebucket" { } } -resource "aws_s3_bucket_object" "examplebucket_object" { +resource "aws_s3_object" "example" { key = "someobject" bucket = 
aws_s3_bucket.examplebucket.id source = "important.txt" @@ -161,11 +161,11 @@ In addition to all arguments above, the following attributes are exported: Objects can be imported using the `id`. The `id` is the bucket name and the key together, e.g., ``` -$ terraform import aws_s3_bucket_object.object some-bucket-name/some/key.txt +$ terraform import aws_s3_object.object some-bucket-name/some/key.txt ``` Additionally, the S3 URL syntax can be used, e.g., ``` -$ terraform import aws_s3_bucket_object.object s3://some-bucket-name/some/key.txt +$ terraform import aws_s3_object.object s3://some-bucket-name/some/key.txt ``` diff --git a/website/docs/r/servicecatalog_provisioning_artifact.html.markdown b/website/docs/r/servicecatalog_provisioning_artifact.html.markdown index c950bbe3cc4..bbe1c07a01a 100644 --- a/website/docs/r/servicecatalog_provisioning_artifact.html.markdown +++ b/website/docs/r/servicecatalog_provisioning_artifact.html.markdown @@ -25,7 +25,7 @@ resource "aws_servicecatalog_provisioning_artifact" "example" { name = "example" product_id = aws_servicecatalog_product.example.id type = "CLOUD_FORMATION_TEMPLATE" - template_url = "https://${aws_s3_bucket.example.bucket_regional_domain_name}/${aws_s3_bucket_object.example.key}" + template_url = "https://${aws_s3_bucket.example.bucket_regional_domain_name}/${aws_s3_object.example.key}" } ``` diff --git a/website/docs/r/signer_signing_job.html.markdown b/website/docs/r/signer_signing_job.html.markdown index 3022b433121..63eecd9afa4 100644 --- a/website/docs/r/signer_signing_job.html.markdown +++ b/website/docs/r/signer_signing_job.html.markdown @@ -57,7 +57,7 @@ The source configuration block supports the following arguments: The configuration block supports the following arguments: * `bucket` - (Required) Name of the S3 bucket. -* `key` - (Required) Key name of the bucket object that contains your unsigned code. +* `key` - (Required) Key name of the object that contains your unsigned code. 
* `version` - (Required) Version of your source image in your version enabled S3 bucket. ### Destination diff --git a/website/docs/r/spot_instance_request.html.markdown b/website/docs/r/spot_instance_request.html.markdown index 35f61ec88fe..ce75d2cc733 100644 --- a/website/docs/r/spot_instance_request.html.markdown +++ b/website/docs/r/spot_instance_request.html.markdown @@ -64,7 +64,6 @@ Spot Instance Requests support all the same arguments as The duration period starts as soon as your Spot instance receives its instance ID. At the end of the duration period, Amazon EC2 marks the Spot instance for termination and provides a Spot instance termination notice, which gives the instance a two-minute warning before it terminates. Note that you can't specify an Availability Zone group or a launch group if you specify a duration. * `instance_interruption_behavior` - (Optional) Indicates Spot instance behavior when it is interrupted. Valid values are `terminate`, `stop`, or `hibernate`. Default value is `terminate`. -* `instance_interruption_behaviour` - (Optional, **Deprecated**) Indicates Spot instance behavior when it is interrupted. Valid values are `terminate`, `stop`, or `hibernate`. Default value is `terminate`. Use the argument `instance_interruption_behavior` instead. * `valid_until` - (Optional) The end date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). At this point, no new Spot instance requests are placed or enabled to fulfill the request. The default end date is 7 days from the current date. * `valid_from` - (Optional) The start date and time of the request, in UTC [RFC3339](https://tools.ietf.org/html/rfc3339#section-5.8) format(for example, YYYY-MM-DDTHH:MM:SSZ). The default is to start fulfilling the request immediately. * `tags` - (Optional) A map of tags to assign to the Spot Instance Request. These tags are not automatically applied to the launched Instance. 
If configured with a provider [`default_tags` configuration block](/docs/providers/aws/index.html#default_tags-configuration-block) present, tags with matching keys will overwrite those defined at the provider-level. diff --git a/website/docs/r/storagegateway_nfs_file_share.html.markdown b/website/docs/r/storagegateway_nfs_file_share.html.markdown index ef3083836db..5e9cb2d474f 100644 --- a/website/docs/r/storagegateway_nfs_file_share.html.markdown +++ b/website/docs/r/storagegateway_nfs_file_share.html.markdown @@ -36,7 +36,7 @@ The following arguments are supported: * `kms_key_arn` - (Optional) Amazon Resource Name (ARN) for KMS key used for Amazon S3 server side encryption. This value can only be set when `kms_encrypted` is true. * `nfs_file_share_defaults` - (Optional) Nested argument with file share default values. More information below. see [NFS File Share Defaults](#nfs_file_share_defaults) for more details. * `cache_attributes` - (Optional) Refresh cache information. see [Cache Attributes](#cache_attributes) for more details. -* `object_acl` - (Optional) Access Control List permission for S3 bucket objects. Defaults to `private`. +* `object_acl` - (Optional) Access Control List permission for S3 objects. Defaults to `private`. * `read_only` - (Optional) Boolean to indicate write status of file share. File share does not accept writes if `true`. Defaults to `false`. * `requester_pays` - (Optional) Boolean who pays the cost of the request and the data download from the Amazon S3 bucket. Set this value to `true` if you want the requester to pay instead of the bucket owner. Defaults to `false`. * `squash` - (Optional) Maps a user to anonymous user. Defaults to `RootSquash`. 
Valid values: `RootSquash` (only root is mapped to anonymous user), `NoSquash` (no one is mapped to anonymous user), `AllSquash` (everyone is mapped to anonymous user) diff --git a/website/docs/r/storagegateway_smb_file_share.html.markdown b/website/docs/r/storagegateway_smb_file_share.html.markdown index e235f442230..223f9c1812e 100644 --- a/website/docs/r/storagegateway_smb_file_share.html.markdown +++ b/website/docs/r/storagegateway_smb_file_share.html.markdown @@ -56,7 +56,7 @@ The following arguments are supported: * `invalid_user_list` - (Optional) A list of users in the Active Directory that are not allowed to access the file share. Only valid if `authentication` is set to `ActiveDirectory`. * `kms_encrypted` - (Optional) Boolean value if `true` to use Amazon S3 server side encryption with your own AWS KMS key, or `false` to use a key managed by Amazon S3. Defaults to `false`. * `kms_key_arn` - (Optional) Amazon Resource Name (ARN) for KMS key used for Amazon S3 server side encryption. This value can only be set when `kms_encrypted` is true. -* `object_acl` - (Optional) Access Control List permission for S3 bucket objects. Defaults to `private`. +* `object_acl` - (Optional) Access Control List permission for S3 objects. Defaults to `private`. * `oplocks_enabled` - (Optional) Boolean to indicate Opportunistic lock (oplock) status. Defaults to `true`. * `cache_attributes` - (Optional) Refresh cache information. see [Cache Attributes](#cache_attributes) for more details. * `read_only` - (Optional) Boolean to indicate write status of file share. File share does not accept writes if `true`. Defaults to `false`.
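Taken together, the test and documentation changes above track the provider-wide rename of the `aws_s3_bucket_object` resource to `aws_s3_object` (and of the `aws_s3_bucket_objects` data source to `aws_s3_objects`). As a hedged sketch of what this rename means for an existing configuration — the bucket name, key, and file path below are illustrative, not taken from this diff — a practitioner would update their HCL like so:

```terraform
# Before: the deprecated resource type.
resource "aws_s3_bucket_object" "example" {
  bucket = "your_bucket_name" # illustrative value
  key    = "new_object_key"
  source = "path/to/file"
}

# After: the new resource type. The argument surface is unchanged;
# only the type name differs.
resource "aws_s3_object" "example" {
  bucket = "your_bucket_name"
  key    = "new_object_key"
  source = "path/to/file"
}
```

Because `terraform state mv` requires the source and destination addresses to share a resource type, migrating existing state to the new type typically means removing the old address with `terraform state rm` and then importing under the new one, using the `bucket-name/key` or `s3://` import syntax documented above.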