chore(release): 2.132.0 #29414
Commits on Mar 1, 2024
fix(sqs): redrivePermission is set to byQueue no matter what value is specified (#29130)

### Issue #29129

Closes #29129.

### Reason for this change

When `redriveAllowPolicy.redrivePermission` is specified, any value is output to the template as `byQueue`.

### Description of changes

1. Fix the evaluation order by enclosing the ternary operator in parentheses:

   ```typescript
   ?? (props.redriveAllowPolicy.sourceQueues ? RedrivePermission.BY_QUEUE : RedrivePermission.ALLOW_ALL),
   ```

2. Added a test case in `packages/aws-cdk-lib/aws-sqs/test/sqs.test.ts` for when `redrivePermission` is set to a value other than `BY_QUEUE`.
3. Added an integ test case in `packages/@aws-cdk-testing/framework-integ/test/aws-sqs/test/integ.sqs-source-queue-permission.ts`.

### Description of how you validated changes

Ran the unit test case added in `packages/aws-cdk-lib/aws-sqs/test/sqs.test.ts` and the integ test case added in `packages/@aws-cdk-testing/framework-integ/test/aws-sqs/test/integ.sqs-source-queue-permission.ts`.

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
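The precedence pitfall behind this fix can be sketched in isolation (a minimal sketch with illustrative names, not the actual CDK source): in TypeScript, `??` binds tighter than the conditional operator, so without parentheses the whole `permission ?? sourceQueues` expression becomes the ternary's condition, and any specified permission collapses to `byQueue`.

```typescript
type RedrivePermission = 'allowAll' | 'denyAll' | 'byQueue';

// Buggy shape: parses as `(permission ?? sourceQueues) ? 'byQueue' : 'allowAll'`,
// so any specified permission is truthy and always yields 'byQueue'.
function buggyResolve(permission?: RedrivePermission, sourceQueues?: string[]): RedrivePermission {
  return permission ?? sourceQueues ? 'byQueue' : 'allowAll';
}

// Fixed shape: parenthesize the ternary so `??` only supplies the default.
function fixedResolve(permission?: RedrivePermission, sourceQueues?: string[]): RedrivePermission {
  return permission ?? (sourceQueues ? 'byQueue' : 'allowAll');
}
```

With the fix, an explicit `'allowAll'` survives, while an omitted permission still defaults to `'byQueue'` when source queues are given.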
Commit: aa8484a
fix(batch): windows does not support readonlyRootFilesystem (#29145)

From the Kubernetes docs:

```
securityContext.readOnlyRootFilesystem - not possible on Windows; write access is required for registry & system processes to run inside the container
```

Closes #29140.

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
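The kind of guard such a fix typically adds can be sketched as follows (a hedged sketch with hypothetical prop names, not the aws-cdk API): reject `readonlyRootFilesystem` whenever the container targets a Windows OS family.

```typescript
// Hypothetical container props; names are illustrative only.
interface ContainerProps {
  operatingSystemFamily: 'LINUX' | 'WINDOWS_SERVER_2019_CORE' | 'WINDOWS_SERVER_2022_CORE';
  readonlyRootFilesystem?: boolean;
}

// Windows containers cannot use a read-only root filesystem
// (registry & system processes need write access), so reject the combination.
function validateContainer(props: ContainerProps): void {
  if (props.readonlyRootFilesystem && props.operatingSystemFamily !== 'LINUX') {
    throw new Error('readonlyRootFilesystem is not supported on Windows containers');
  }
}
```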
Commit: 7205143
fix(cloudwatch): allow up to 30 dimensions for metric (#29341)

### Issue # (if applicable)

Closes #29322.

### Reason for this change

[AWS::CloudWatch::Alarm](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-cloudwatch-alarm.html) allows up to 30 dimension items, while the L2 [construct](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_cloudwatch.Metric.html#dimensions) for Metric allows only up to 10.

### Description of changes

Increased the hard limit from 10 to 30.

### Description of how you validated changes

Updated the unit test and added a new integration test.

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
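A limit bump like this usually lives in a small validation check; a minimal sketch (hypothetical function and constant names, not the aws-cdk source) looks like:

```typescript
// Hypothetical guard mirroring the raised limit; illustrative only.
const MAX_DIMENSIONS = 30; // previously 10

function validateDimensions(dimensions: Record<string, string>): void {
  const count = Object.keys(dimensions).length;
  if (count > MAX_DIMENSIONS) {
    throw new Error(`The maximum number of dimensions is ${MAX_DIMENSIONS}, received ${count}`);
  }
}
```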
Commit: ebe2adf
Commit: 74f447c
Commits on Mar 2, 2024
Commit: c9d8add
fix(stepfunctions): maxConcurrency does not support JsonPath (#29330)

### Issue # (if applicable)

Relates to #20835

### Reason for this change

`MaxConcurrency` does not support `JsonPath`. This change adds `MaxConcurrencyPath` so that CDK users can specify a `JsonPath` for their `MaxConcurrency`.

_Note_: This does not invalidate JsonPaths for `MaxConcurrency`, as I'm unsure how to do so without reverting #20279. Open to suggestions.

### Description of changes

Added a new `maxConcurrencyPath` field that accepts a `JsonPath` value. Decided to go with another explicit field because that is similar to what is done for `ErrorPath` and `CausePath`, as well as most other Path fields.

### Description of how you validated changes

Added unit tests.

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
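The explicit-field approach described above can be sketched as follows (a hedged sketch: the props interface and render function are hypothetical, though `MaxConcurrency` / `MaxConcurrencyPath` are the Amazon States Language field names): exactly one of the numeric field or the path field is rendered into the state definition.

```typescript
// Hypothetical props for a Map state's concurrency; illustrative only.
interface MapConcurrencyProps {
  maxConcurrency?: number;
  maxConcurrencyPath?: string; // a JsonPath such as '$.maxConcurrency'
}

function renderConcurrency(props: MapConcurrencyProps): Record<string, number | string> {
  if (props.maxConcurrency !== undefined && props.maxConcurrencyPath !== undefined) {
    throw new Error('Specify only one of maxConcurrency and maxConcurrencyPath');
  }
  if (props.maxConcurrencyPath !== undefined) {
    return { MaxConcurrencyPath: props.maxConcurrencyPath };
  }
  if (props.maxConcurrency !== undefined) {
    return { MaxConcurrency: props.maxConcurrency };
  }
  return {};
}
```

This mirrors the pattern used by other `...Path` fields: the static value and the JsonPath variant are separate, mutually exclusive properties.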
Commit: b19f822
Commits on Mar 4, 2024
-
docs(ecs-patterns): remove references to REMOVE_DEFAULT_DESIRED_COUNT (#29344)

### Issue # (if applicable)

Closes #29325.

### Reason for this change

The `REMOVE_DEFAULT_DESIRED_COUNT` feature flag is always enabled in CDK v2 and throws build errors if explicitly set. The `ecs-patterns` docs still reference it as "opt-in", which is misleading. Ref: [list of deprecated feature flags for v2](https://github.com/aws/aws-cdk/blob/3cbad4a2164a41f5529e04aba4d15085c71b7849/packages/aws-cdk-lib/cx-api/FEATURE_FLAGS.md?plain=1#L145)

See [Issue 29325](#29325) for a sample build error when trying to follow the current example code in the docs for enabling the flag.

I did NOT remove the actual conditionals in the construct code that check the (now always true) feature flag. That is dead code that can probably be removed as a chore task; my focus here was on removing friction for developers reading the documentation.

### Description of changes

Removed the section in the README of `ecs-patterns` showing how to manually enable this flag, and updated the default cases in docstrings that referenced the flag.

### Description of how you validated changes

Doc change only, no functional changes. I double-checked that the defaults described in the docstrings (with the feature flag enabled) are still accurate.

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 704f596
feat: update L1 CloudFormation resource definitions (#29349)
Updates the L1 CloudFormation resource definitions with the latest changes from `@aws-cdk/aws-service-spec` **L1 CloudFormation resource definition changes:** ``` ├[~] service aws-amazonmq │ └ resources │ └[~] resource AWS::AmazonMQ::Broker │ └ attributes │ ├ AmqpEndpoints: (documentation changed) │ ├ IpAddresses: (documentation changed) │ ├ MqttEndpoints: (documentation changed) │ ├ OpenWireEndpoints: (documentation changed) │ ├ StompEndpoints: (documentation changed) │ └ WssEndpoints: (documentation changed) ├[~] service aws-amplify │ └ resources │ ├[~] resource AWS::Amplify::App │ │ ├ properties │ │ │ ├ AccessToken: (documentation changed) │ │ │ ├ BuildSpec: (documentation changed) │ │ │ ├ CustomHeaders: (documentation changed) │ │ │ ├ Description: (documentation changed) │ │ │ ├ IAMServiceRole: (documentation changed) │ │ │ ├ Name: (documentation changed) │ │ │ ├ OauthToken: (documentation changed) │ │ │ └ Repository: (documentation changed) │ │ └ types │ │ ├[~] type AutoBranchCreationConfig │ │ │ └ properties │ │ │ ├ BuildSpec: (documentation changed) │ │ │ └ PullRequestEnvironmentName: (documentation changed) │ │ ├[~] type BasicAuthConfig │ │ │ └ properties │ │ │ ├ Password: (documentation changed) │ │ │ └ Username: (documentation changed) │ │ ├[~] type CustomRule │ │ │ └ properties │ │ │ ├ Condition: (documentation changed) │ │ │ ├ Source: (documentation changed) │ │ │ ├ Status: (documentation changed) │ │ │ └ Target: (documentation changed) │ │ └[~] type EnvironmentVariable │ │ └ properties │ │ ├ Name: (documentation changed) │ │ └ Value: (documentation changed) │ ├[~] resource AWS::Amplify::Branch │ │ ├ properties │ │ │ ├ Backend: (documentation changed) │ │ │ ├ BranchName: (documentation changed) │ │ │ ├ BuildSpec: (documentation changed) │ │ │ ├ Description: (documentation changed) │ │ │ ├ PullRequestEnvironmentName: (documentation changed) │ │ │ └ Stage: (documentation changed) │ │ └ types │ │ ├[~] type BasicAuthConfig │ │ │ └ properties │ │ │ ├ 
Password: (documentation changed) │ │ │ └ Username: (documentation changed) │ │ └[~] type EnvironmentVariable │ │ └ properties │ │ ├ Name: (documentation changed) │ │ └ Value: (documentation changed) │ └[~] resource AWS::Amplify::Domain │ ├ - documentation: The AWS::Amplify::Domain resource allows you to connect a custom domain to your app. │ │ + documentation: Specifies the AWS::Amplify::Domain resource that enables you to connect a custom domain to your app. │ ├ properties │ │ ├ AppId: (documentation changed) │ │ ├ AutoSubDomainIAMRole: (documentation changed) │ │ ├[+] Certificate: Certificate │ │ ├[+] CertificateSettings: CertificateSettings │ │ ├ DomainName: (documentation changed) │ │ └[+] UpdateStatus: string │ ├ attributes │ │ └ AutoSubDomainCreationPatterns: (documentation changed) │ └ types │ ├[+] type Certificate │ │ ├ documentation: Describes the SSL/TLS certificate for the domain association. This can be your own custom certificate or the default certificate that Amplify provisions for you. │ │ │ If you are updating your domain to use a different certificate, `Certificate` points to the new certificate that is being created instead of the current active certificate. Otherwise, `Certificate` points to the current active certificate. │ │ │ name: Certificate │ │ └ properties │ │ ├CertificateType: string │ │ ├CertificateArn: string │ │ └CertificateVerificationDNSRecord: string │ ├[+] type CertificateSettings │ │ ├ documentation: The type of SSL/TLS certificate to use for your custom domain. If a certificate type isn't specified, Amplify uses the default `AMPLIFY_MANAGED` certificate. 
│ │ │ name: CertificateSettings │ │ └ properties │ │ ├CertificateType: string │ │ └CustomCertificateArn: string │ └[~] type SubDomainSetting │ └ properties │ └ Prefix: (documentation changed) ├[~] service aws-appstream │ └ resources │ └[~] resource AWS::AppStream::Fleet │ └ properties │ └ DisconnectTimeoutInSeconds: (documentation changed) ├[~] service aws-aps │ └ resources │ ├[~] resource AWS::APS::RuleGroupsNamespace │ │ ├ - documentation: The `AWS::APS::RuleGroupsNamespace` resource creates or updates a rule groups namespace within a Amazon Managed Service for Prometheus workspace. For more information, see [Recording rules and alerting rules](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-Ruler.html) . │ │ │ + documentation: The definition of a rule groups namespace in an Amazon Managed Service for Prometheus workspace. A rule groups namespace is associated with exactly one rules file. A workspace can have multiple rule groups namespaces. For more information about rules files, seee [Creating a rules file](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-ruler-rulesfile.html) , in the *Amazon Managed Service for Prometheus User Guide* . │ │ ├ properties │ │ │ ├ Data: (documentation changed) │ │ │ ├ Name: (documentation changed) │ │ │ ├ Tags: (documentation changed) │ │ │ └ Workspace: (documentation changed) │ │ └ attributes │ │ └ Arn: (documentation changed) │ └[~] resource AWS::APS::Workspace │ ├ - documentation: The `AWS::APS::Workspace` type specifies an Amazon Managed Service for Prometheus ( Amazon Managed Service for Prometheus ) workspace. A *workspace* is a logical and isolated Prometheus server dedicated to Prometheus resources such as metrics. You can have one or more workspaces in each Region in your account. │ │ + documentation: An Amazon Managed Service for Prometheus workspace is a logical and isolated Prometheus server dedicated to ingesting, storing, and querying your Prometheus-compatible metrics. 
│ ├ properties │ │ ├ AlertManagerDefinition: (documentation changed) │ │ ├ Alias: (documentation changed) │ │ ├ KmsKeyArn: (documentation changed) │ │ ├ LoggingConfiguration: (documentation changed) │ │ └ Tags: (documentation changed) │ ├ attributes │ │ ├ Arn: (documentation changed) │ │ ├ PrometheusEndpoint: (documentation changed) │ │ └ WorkspaceId: (documentation changed) │ └ types │ └[~] type LoggingConfiguration │ ├ - documentation: The LoggingConfiguration attribute sets the logging configuration for the workspace. │ │ + documentation: Contains information about the logging configuration for the workspace. │ └ properties │ └ LogGroupArn: (documentation changed) ├[~] service aws-b2bi │ └ resources │ └[~] resource AWS::B2BI::Transformer │ └ attributes │ └[+] ModifiedAt: string ├[~] service aws-backup │ └ resources │ ├[~] resource AWS::Backup::BackupPlan │ │ └ types │ │ └[~] type BackupRuleResourceType │ │ └ properties │ │ └ ScheduleExpressionTimezone: (documentation changed) │ ├[~] resource AWS::Backup::Framework │ │ └ types │ │ └[~] type ControlScope │ │ └ properties │ │ └ Tags: (documentation changed) │ └[~] resource AWS::Backup::RestoreTestingPlan │ └ properties │ └ Tags: (documentation changed) ├[~] service aws-batch │ └ resources │ ├[~] resource AWS::Batch::ComputeEnvironment │ │ └ types │ │ └[~] type ComputeResources │ │ └ properties │ │ ├ Ec2Configuration: (documentation changed) │ │ ├ Ec2KeyPair: (documentation changed) │ │ ├ SecurityGroupIds: (documentation changed) │ │ ├ Subnets: (documentation changed) │ │ └ Tags: (documentation changed) │ ├[~] resource AWS::Batch::JobDefinition │ │ ├ properties │ │ │ ├ ContainerProperties: (documentation changed) │ │ │ ├ EksProperties: (documentation changed) │ │ │ ├ NodeProperties: (documentation changed) │ │ │ └ Type: (documentation changed) │ │ └ types │ │ ├[~] type ContainerProperties │ │ │ └ properties │ │ │ ├ FargatePlatformConfiguration: (documentation changed) │ │ │ ├ LogConfiguration: (documentation 
changed) │ │ │ ├ Memory: (documentation changed) │ │ │ ├ NetworkConfiguration: (documentation changed) │ │ │ └ Vcpus: (documentation changed) │ │ ├[~] type EksContainer │ │ │ └ properties │ │ │ └ Args: (documentation changed) │ │ ├[~] type FargatePlatformConfiguration │ │ │ └ - documentation: The platform configuration for jobs that are running on Fargate resources. Jobs that run on EC2 resources must not specify this parameter. │ │ │ + documentation: The platform configuration for jobs that are running on Fargate resources. Jobs that run on Amazon EC2 resources must not specify this parameter. │ │ ├[~] type NetworkConfiguration │ │ │ └ - documentation: The network configuration for jobs that are running on Fargate resources. Jobs that are running on EC2 resources must not specify this parameter. │ │ │ + documentation: The network configuration for jobs that are running on Fargate resources. Jobs that are running on Amazon EC2 resources must not specify this parameter. │ │ ├[~] type NodeRangeProperty │ │ │ └ - documentation: An object that represents the properties of the node range for a multi-node parallel job. │ │ │ + documentation: This is an object that represents the properties of the node range for a multi-node parallel job. │ │ └[~] type ResourceRequirement │ │ └ properties │ │ └ Value: (documentation changed) │ └[~] resource AWS::Batch::JobQueue │ └ types │ └[~] type ComputeEnvironmentOrder │ └ - documentation: The order that compute environments are tried in for job placement within a queue. Compute environments are tried in ascending order. For example, if two compute environments are associated with a job queue, the compute environment with a lower order integer value is tried for job placement first. Compute environments must be in the `VALID` state before you can associate them with a job queue. 
All of the compute environments must be either EC2 ( `EC2` or `SPOT` ) or Fargate ( `FARGATE` or `FARGATE_SPOT` ); EC2 and Fargate compute environments can't be mixed. │ > All compute environments that are associated with a job queue must share the same architecture. AWS Batch doesn't support mixing compute environment architecture types in a single job queue. │ + documentation: The order that compute environments are tried in for job placement within a queue. Compute environments are tried in ascending order. For example, if two compute environments are associated with a job queue, the compute environment with a lower order integer value is tried for job placement first. Compute environments must be in the `VALID` state before you can associate them with a job queue. All of the compute environments must be either EC2 ( `EC2` or `SPOT` ) or Fargate ( `FARGATE` or `FARGATE_SPOT` ); Amazon EC2 and Fargate compute environments can't be mixed. │ > All compute environments that are associated with a job queue must share the same architecture. AWS Batch doesn't support mixing compute environment architecture types in a single job queue. ├[~] service aws-cloudformation │ └ resources │ └[~] resource AWS::CloudFormation::Stack │ └ attributes │ └ Outputs: (documentation changed) ├[~] service aws-cloudfront │ └ resources │ └[~] resource AWS::CloudFront::Distribution │ └ types │ └[~] type CacheBehavior │ └ - documentation: A complex type that describes how CloudFront processes requests. │ You must create at least as many cache behaviors (including the default cache behavior) as you have origins if you want CloudFront to serve objects from all of the origins. Each cache behavior specifies the one origin from which you want CloudFront to get objects. If you have two origins and only the default cache behavior, the default cache behavior will cause CloudFront to get objects from one of the origins, but the other origin is never used. 
│ For the current quota (formerly known as limit) on the number of cache behaviors that you can add to a distribution, see [Quotas](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-limits.html) in the *Amazon CloudFront Developer Guide* . │ If you don't want to specify any cache behaviors, include only an empty `CacheBehaviors` element. Don't include an empty `CacheBehavior` element because this is invalid. │ To delete all cache behaviors in an existing distribution, update the distribution configuration and include only an empty `CacheBehaviors` element. │ To add, change, or remove one or more cache behaviors, update the distribution configuration and specify all of the cache behaviors that you want to include in the updated distribution. │ For more information about cache behaviors, see [Cache Behavior Settings](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesCacheBehavior) in the *Amazon CloudFront Developer Guide* . │ + documentation: A complex type that describes how CloudFront processes requests. │ You must create at least as many cache behaviors (including the default cache behavior) as you have origins if you want CloudFront to serve objects from all of the origins. Each cache behavior specifies the one origin from which you want CloudFront to get objects. If you have two origins and only the default cache behavior, the default cache behavior will cause CloudFront to get objects from one of the origins, but the other origin is never used. │ For the current quota (formerly known as limit) on the number of cache behaviors that you can add to a distribution, see [Quotas](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/cloudfront-limits.html) in the *Amazon CloudFront Developer Guide* . │ If you don't want to specify any cache behaviors, include only an empty `CacheBehaviors` element. 
For more information, see [CacheBehaviors](https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CacheBehaviors.html) . Don't include an empty `CacheBehavior` element because this is invalid. │ To delete all cache behaviors in an existing distribution, update the distribution configuration and include only an empty `CacheBehaviors` element. │ To add, change, or remove one or more cache behaviors, update the distribution configuration and specify all of the cache behaviors that you want to include in the updated distribution. │ For more information about cache behaviors, see [Cache Behavior Settings](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-web-values-specify.html#DownloadDistValuesCacheBehavior) in the *Amazon CloudFront Developer Guide* . ├[~] service aws-cognito │ └ resources │ ├[~] resource AWS::Cognito::UserPool │ │ └ properties │ │ └ DeletionProtection: (documentation changed) │ └[~] resource AWS::Cognito::UserPoolIdentityProvider │ └ properties │ └ ProviderDetails: (documentation changed) ├[~] service aws-datasync │ └ resources │ └[~] resource AWS::DataSync::Task │ └ attributes │ ├ DestinationNetworkInterfaceArns: (documentation changed) │ └ SourceNetworkInterfaceArns: (documentation changed) ├[~] service aws-directoryservice │ └ resources │ ├[~] resource AWS::DirectoryService::MicrosoftAD │ │ └ attributes │ │ └ DnsIpAddresses: (documentation changed) │ └[~] resource AWS::DirectoryService::SimpleAD │ └ attributes │ └ DnsIpAddresses: (documentation changed) ├[~] service aws-dynamodb │ └ resources │ ├[~] resource AWS::DynamoDB::GlobalTable │ │ └ types │ │ ├[~] type AttributeDefinition │ │ │ └ - documentation: Represents an attribute for describing the key schema for the table and indexes. │ │ │ + documentation: Represents an attribute for describing the schema for the table and indexes. 
│ │ └[~] type Projection │ │ └ properties │ │ └ ProjectionType: (documentation changed) │ └[~] resource AWS::DynamoDB::Table │ └ types │ ├[~] type AttributeDefinition │ │ └ - documentation: Represents an attribute for describing the key schema for the table and indexes. │ │ + documentation: Represents an attribute for describing the schema for the table and indexes. │ └[~] type Projection │ └ properties │ └ ProjectionType: (documentation changed) ├[~] service aws-ec2 │ └ resources │ ├[~] resource AWS::EC2::EC2Fleet │ │ └ types │ │ └[~] type FleetLaunchTemplateOverridesRequest │ │ └ properties │ │ └ WeightedCapacity: (documentation changed) │ ├[~] resource AWS::EC2::NetworkInsightsAnalysis │ │ └ attributes │ │ ├ AlternatePathHints: (documentation changed) │ │ ├ Explanations: (documentation changed) │ │ ├ ForwardPathComponents: (documentation changed) │ │ ├ ReturnPathComponents: (documentation changed) │ │ └ SuggestedAccounts: (documentation changed) │ ├[~] resource AWS::EC2::NetworkInterface │ │ └ attributes │ │ └ SecondaryPrivateIpAddresses: (documentation changed) │ ├[~] resource AWS::EC2::NetworkInterfaceAttachment │ │ ├ properties │ │ │ └[+] EnaSrdSpecification: EnaSrdSpecification │ │ └ types │ │ ├[+] type EnaSrdSpecification │ │ │ ├ documentation: ENA Express uses AWS Scalable Reliable Datagram (SRD) technology to increase the maximum bandwidth used per stream and minimize tail latency of network traffic between EC2 instances. With ENA Express, you can communicate between two EC2 instances in the same subnet within the same account, or in different accounts. Both sending and receiving instances must have ENA Express enabled. │ │ │ │ To improve the reliability of network packet delivery, ENA Express reorders network packets on the receiving end by default. However, some UDP-based applications are designed to handle network packets that are out of order to reduce the overhead for packet delivery at the network layer. 
When ENA Express is enabled, you can specify whether UDP network traffic uses it. │ │ │ │ name: EnaSrdSpecification │ │ │ └ properties │ │ │ ├EnaSrdEnabled: boolean │ │ │ └EnaSrdUdpSpecification: EnaSrdUdpSpecification │ │ └[+] type EnaSrdUdpSpecification │ │ ├ documentation: ENA Express is compatible with both TCP and UDP transport protocols. When it's enabled, TCP traffic automatically uses it. However, some UDP-based applications are designed to handle network packets that are out of order, without a need for retransmission, such as live video broadcasting or other near-real-time applications. For UDP traffic, you can specify whether to use ENA Express, based on your application environment needs. │ │ │ name: EnaSrdUdpSpecification │ │ └ properties │ │ └EnaSrdUdpEnabled: boolean │ ├[~] resource AWS::EC2::VPC │ │ └ attributes │ │ ├ CidrBlockAssociations: (documentation changed) │ │ └ Ipv6CidrBlocks: (documentation changed) │ └[~] resource AWS::EC2::VPCEndpoint │ └ attributes │ ├ DnsEntries: (documentation changed) │ └ NetworkInterfaceIds: (documentation changed) ├[~] service aws-ecs │ └ resources │ └[~] resource AWS::ECS::TaskSet │ ├ - tagInformation: undefined │ │ + tagInformation: {"tagPropertyName":"Tags","variant":"standard"} │ └ properties │ └[+] Tags: Array<tag> ├[~] service aws-elasticache │ └ resources │ ├[~] resource AWS::ElastiCache::ParameterGroup │ │ └ attributes │ │ └[-] CacheParameterGroupName: string │ └[~] resource AWS::ElastiCache::ReplicationGroup │ └ attributes │ ├ ReadEndPoint.Addresses.List: (documentation changed) │ └ ReadEndPoint.Ports.List: (documentation changed) ├[~] service aws-elasticloadbalancingv2 │ └ resources │ ├[~] resource AWS::ElasticLoadBalancingV2::LoadBalancer │ │ └ attributes │ │ └ SecurityGroups: (documentation changed) │ ├[~] resource AWS::ElasticLoadBalancingV2::TargetGroup │ │ └ attributes │ │ └ LoadBalancerArns: (documentation changed) │ └[~] resource AWS::ElasticLoadBalancingV2::TrustStoreRevocation │ └ attributes │ └ 
TrustStoreRevocations: (documentation changed) ├[~] service aws-fsx │ └ resources │ └[~] resource AWS::FSx::Volume │ └ types │ └[~] type OntapConfiguration │ └ properties │ ├ SecurityStyle: (documentation changed) │ └ SizeInMegabytes: (documentation changed) ├[~] service aws-globalaccelerator │ └ resources │ └[~] resource AWS::GlobalAccelerator::Accelerator │ └ attributes │ ├ Ipv4Addresses: (documentation changed) │ └ Ipv6Addresses: (documentation changed) ├[~] service aws-iam │ └ resources │ └[~] resource AWS::IAM::Policy │ └ attributes │ └ Id: (documentation changed) ├[~] service aws-iot │ └ resources │ ├[~] resource AWS::IoT::DomainConfiguration │ │ └ attributes │ │ └ ServerCertificates: (documentation changed) │ └[~] resource AWS::IoT::TopicRule │ └ properties │ └ RuleName: (documentation changed) ├[~] service aws-iotsitewise │ └ resources │ ├[~] resource AWS::IoTSiteWise::Asset │ │ ├ properties │ │ │ └[+] AssetExternalId: string │ │ └ types │ │ ├[~] type AssetHierarchy │ │ │ └ properties │ │ │ ├[+] ExternalId: string │ │ │ ├[+] Id: string │ │ │ └ LogicalId: - string (required) │ │ │ + string │ │ └[~] type AssetProperty │ │ └ properties │ │ ├[+] ExternalId: string │ │ ├[+] Id: string │ │ └ LogicalId: - string (required) │ │ + string │ └[~] resource AWS::IoTSiteWise::AssetModel │ ├ properties │ │ ├[+] AssetModelExternalId: string │ │ └[+] AssetModelType: string (immutable) │ └ types │ ├[~] type AssetModelCompositeModel │ │ └ properties │ │ ├[+] ComposedAssetModelId: string │ │ ├[+] ExternalId: string │ │ ├[+] Id: string │ │ ├[+] ParentAssetModelCompositeModelExternalId: string │ │ └[+] Path: Array<string> │ ├[~] type AssetModelHierarchy │ │ └ properties │ │ ├[+] ExternalId: string │ │ ├[+] Id: string │ │ └ LogicalId: - string (required) │ │ + string │ ├[~] type AssetModelProperty │ │ └ properties │ │ ├[+] ExternalId: string │ │ ├[+] Id: string │ │ └ LogicalId: - string (required) │ │ + string │ ├[+] type PropertyPathDefinition │ │ ├ documentation: The definition 
for property path which is used to reference properties in transforms/metrics │ │ │ name: PropertyPathDefinition │ │ └ properties │ │ └Name: string (required) │ └[~] type VariableValue │ └ properties │ ├[+] HierarchyExternalId: string │ ├[+] HierarchyId: string │ ├[+] PropertyExternalId: string │ ├[+] PropertyId: string │ ├ PropertyLogicalId: - string (required) │ │ + string │ └[+] PropertyPath: Array<PropertyPathDefinition> ├[~] service aws-iotwireless │ └ resources │ └[~] resource AWS::IoTWireless::WirelessDevice │ └ properties │ └[+] Positioning: string ├[~] service aws-kinesisfirehose │ └ resources │ └[~] resource AWS::KinesisFirehose::DeliveryStream │ └ types │ └[~] type ExtendedS3DestinationConfiguration │ └ properties │ ├[+] CustomTimeZone: string │ └[+] FileExtension: string ├[~] service aws-mediaconnect │ └ resources │ └[~] resource AWS::MediaConnect::FlowVpcInterface │ └ attributes │ └ NetworkInterfaceIds: (documentation changed) ├[~] service aws-medialive │ └ resources │ ├[~] resource AWS::MediaLive::Channel │ │ └ attributes │ │ └ Inputs: (documentation changed) │ └[~] resource AWS::MediaLive::Input │ └ attributes │ ├ Destinations: (documentation changed) │ └ Sources: (documentation changed) ├[~] service aws-mediapackagev2 │ └ resources │ └[~] resource AWS::MediaPackageV2::Channel │ └ attributes │ └ IngestEndpoints: (documentation changed) ├[~] service aws-networkfirewall │ └ resources │ └[~] resource AWS::NetworkFirewall::Firewall │ └ attributes │ └ EndpointIds: (documentation changed) ├[~] service aws-networkmanager │ └ resources │ └[~] resource AWS::NetworkManager::CoreNetwork │ └ attributes │ ├ Edges: (documentation changed) │ └ Segments: (documentation changed) ├[~] service aws-nimblestudio │ └ resources │ └[~] resource AWS::NimbleStudio::StreamingImage │ └ attributes │ └ EulaIds: (documentation changed) ├[~] service aws-opensearchserverless │ └ resources │ └[~] resource AWS::OpenSearchServerless::Collection │ └ properties │ └ StandbyReplicas: 
(documentation changed) ├[~] service aws-osis │ └ resources │ └[~] resource AWS::OSIS::Pipeline │ └ attributes │ └ IngestEndpointUrls: (documentation changed) ├[~] service aws-quicksight │ └ resources │ ├[~] resource AWS::QuickSight::Analysis │ │ └ attributes │ │ └ DataSetArns: (documentation changed) │ ├[~] resource AWS::QuickSight::Dashboard │ │ └ properties │ │ └ LinkEntities: (documentation changed) │ └[~] resource AWS::QuickSight::VPCConnection │ └ attributes │ └ NetworkInterfaces: (documentation changed) ├[~] service aws-rds │ └ resources │ └[~] resource AWS::RDS::DBInstance │ └ properties │ └ DBClusterSnapshotIdentifier: (documentation changed) ├[~] service aws-redshift │ └ resources │ ├[~] resource AWS::Redshift::EndpointAccess │ │ └ attributes │ │ └ VpcSecurityGroups: (documentation changed) │ └[~] resource AWS::Redshift::EventSubscription │ └ attributes │ └ EventCategoriesList: (documentation changed) ├[~] service aws-redshiftserverless │ └ resources │ ├[~] resource AWS::RedshiftServerless::Namespace │ │ ├ properties │ │ │ └[+] SnapshotCopyConfigurations: Array<SnapshotCopyConfiguration> │ │ ├ attributes │ │ │ ├ Namespace.IamRoles: (documentation changed) │ │ │ └ Namespace.LogExports: (documentation changed) │ │ └ types │ │ └[+] type SnapshotCopyConfiguration │ │ ├ name: SnapshotCopyConfiguration │ │ └ properties │ │ ├DestinationRegion: string (required) │ │ ├DestinationKmsKeyId: string │ │ └SnapshotRetentionPeriod: integer │ └[~] resource AWS::RedshiftServerless::Workgroup │ ├ properties │ │ └ MaxCapacity: (documentation changed) │ ├ attributes │ │ ├ Workgroup.MaxCapacity: (documentation changed) │ │ ├ Workgroup.SecurityGroupIds: (documentation changed) │ │ └ Workgroup.SubnetIds: (documentation changed) │ └ types │ └[~] type Workgroup │ └ properties │ └ MaxCapacity: (documentation changed) ├[~] service aws-route53 │ └ resources │ └[~] resource AWS::Route53::HostedZone │ └ attributes │ └ NameServers: (documentation changed) ├[~] service 
aws-route53recoverycontrol │ └ resources │ └[~] resource AWS::Route53RecoveryControl::Cluster │ └ attributes │ └ ClusterEndpoints: (documentation changed) ├[~] service aws-route53recoveryreadiness │ └ resources │ └[~] resource AWS::Route53RecoveryReadiness::Cell │ └ attributes │ └ ParentReadinessScopes: (documentation changed) ├[~] service aws-route53resolver │ └ resources │ └[~] resource AWS::Route53Resolver::ResolverRule │ └ attributes │ └ TargetIps: (documentation changed) ├[~] service aws-s3outposts │ └ resources │ └[~] resource AWS::S3Outposts::Endpoint │ └ attributes │ └ NetworkInterfaces: (documentation changed) ├[~] service aws-sagemaker │ └ resources │ └[~] resource AWS::SageMaker::AppImageConfig │ └ types │ └[~] type JupyterLabAppImageConfig │ └ - documentation: The configuration for the file system and kernels in a SageMaker image running as a JupyterLab app. │ + documentation: The configuration for the file system and kernels in a SageMaker image running as a JupyterLab app. The `FileSystemConfig` object is not supported. ├[~] service aws-ssm │ └ resources │ ├[~] resource AWS::SSM::Association │ │ ├ properties │ │ │ ├ SyncCompliance: (documentation changed) │ │ │ └ Targets: (documentation changed) │ │ └ types │ │ └[~] type Target │ │ └ - documentation: `Target` is a property of the [AWS::SSM::Association](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-association.html) resource that specifies the targets for an SSM document in Systems Manager . You can target all instances in an AWS account by specifying the `InstanceIds` key with a value of `*` . To view a JSON and a YAML example that targets all instances, see "Create an association for all managed instances in an AWS account " on the Examples page. 
│ │ + documentation: `Target` is a property of the [AWS::SSM::Association](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-association.html) resource that specifies the targets for an SSM document in Systems Manager . You can target all instances in an AWS account by specifying the `InstanceIds` key with a value of `*` . To view a JSON and a YAML example that targets all instances, see the example "Create an association for all managed instances in an AWS account " later in this page. │ ├[~] resource AWS::SSM::Document │ │ ├ - documentation: The `AWS::SSM::Document` resource creates a Systems Manager (SSM) document in AWS Systems Manager . This document defines the actions that Systems Manager performs on your AWS resources. │ │ │ > This resource does not support AWS CloudFormation drift detection. │ │ │ + documentation: The `AWS::SSM::Document` resource creates a Systems Manager (SSM) document in AWS Systems Manager . This document defines the actions that Systems Manager performs on your AWS resources. │ │ │ > This resource does not support AWS CloudFormation drift detection. │ │ └ properties │ │ └ DocumentFormat: (documentation changed) │ ├[~] resource AWS::SSM::MaintenanceWindow │ │ └ - documentation: The `AWS::SSM::MaintenanceWindow` resource represents general information about a maintenance window for AWS Systems Manager . Maintenance Windows let you define a schedule for when to perform potentially disruptive actions on your instances, such as patching an operating system (OS), updating drivers, or installing software. Each maintenance window has a schedule, a duration, a set of registered targets, and a set of registered tasks.
│ │ For more information, see [Systems Manager Maintenance Windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-maintenance.html) in the *AWS Systems Manager User Guide* and [CreateMaintenanceWindow](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateMaintenanceWindow.html) in the *AWS Systems Manager API Reference* . │ │ + documentation: The `AWS::SSM::MaintenanceWindow` resource represents general information about a maintenance window for AWS Systems Manager . Maintenance windows let you define a schedule for when to perform potentially disruptive actions on your instances, such as patching an operating system (OS), updating drivers, or installing software. Each maintenance window has a schedule, a duration, a set of registered targets, and a set of registered tasks. │ │ For more information, see [Systems Manager Maintenance Windows](https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-maintenance.html) in the *AWS Systems Manager User Guide* and [CreateMaintenanceWindow](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_CreateMaintenanceWindow.html) in the *AWS Systems Manager API Reference* . │ ├[~] resource AWS::SSM::Parameter │ │ ├ - documentation: The `AWS::SSM::Parameter` resource creates an SSM parameter in AWS Systems Manager Parameter Store. │ │ │ > To create an SSM parameter, you must have the AWS Identity and Access Management ( IAM ) permissions `ssm:PutParameter` and `ssm:AddTagsToResource` . On stack creation, AWS CloudFormation adds the following three tags to the parameter: `aws:cloudformation:stack-name` , `aws:cloudformation:logical-id` , and `aws:cloudformation:stack-id` , in addition to any custom tags you specify. │ │ │ > │ │ │ > To add, update, or remove tags during stack update, you must have IAM permissions for both `ssm:AddTagsToResource` and `ssm:RemoveTagsFromResource` . 
For more information, see [Managing Access Using Policies](https://docs.aws.amazon.com/systems-manager/latest/userguide/security-iam.html#security_iam_access-manage) in the *AWS Systems Manager User Guide* . │ │ │ For information about valid values for parameters, see [Requirements and Constraints for Parameter Names](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-su-create.html#sysman-parameter-name-constraints) in the *AWS Systems Manager User Guide* and [PutParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutParameter.html) in the *AWS Systems Manager API Reference* . │ │ │ + documentation: The `AWS::SSM::Parameter` resource creates an SSM parameter in AWS Systems Manager Parameter Store. │ │ │ > To create an SSM parameter, you must have the AWS Identity and Access Management ( IAM ) permissions `ssm:PutParameter` and `ssm:AddTagsToResource` . On stack creation, AWS CloudFormation adds the following three tags to the parameter: `aws:cloudformation:stack-name` , `aws:cloudformation:logical-id` , and `aws:cloudformation:stack-id` , in addition to any custom tags you specify. │ │ │ > │ │ │ > To add, update, or remove tags during stack update, you must have IAM permissions for both `ssm:AddTagsToResource` and `ssm:RemoveTagsFromResource` . For more information, see [Managing Access Using Policies](https://docs.aws.amazon.com/systems-manager/latest/userguide/security-iam.html#security_iam_access-manage) in the *AWS Systems Manager User Guide* . │ │ │ For information about valid values for parameters, see [About requirements and constraints for parameter names](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-su-create.html#sysman-parameter-name-constraints) in the *AWS Systems Manager User Guide* and [PutParameter](https://docs.aws.amazon.com/systems-manager/latest/APIReference/API_PutParameter.html) in the *AWS Systems Manager API Reference* . 
│ │ └ properties │ │ ├ Name: (documentation changed) │ │ └ Type: (documentation changed) │ ├[~] resource AWS::SSM::ResourceDataSync │ │ ├ - documentation: The `AWS::SSM::ResourceDataSync` resource creates, updates, or deletes a resource data sync for AWS Systems Manager . A resource data sync helps you view data from multiple sources in a single location. Systems Manager offers two types of resource data sync: `SyncToDestination` and `SyncFromSource` . │ │ │ You can configure Systems Manager Inventory to use the `SyncToDestination` type to synchronize Inventory data from multiple AWS Regions to a single Amazon S3 bucket. │ │ │ You can configure Systems Manager Explorer to use the `SyncFromSource` type to synchronize operational work items (OpsItems) and operational data (OpsData) from multiple AWS Regions . This type can synchronize OpsItems and OpsData from multiple AWS accounts and Regions or from an `EntireOrganization` by using AWS Organizations . │ │ │ A resource data sync is an asynchronous operation that returns immediately. After a successful initial sync is completed, the system continuously syncs data. │ │ │ By default, data is not encrypted in Amazon S3 . We strongly recommend that you enable encryption in Amazon S3 to ensure secure data storage. We also recommend that you secure access to the Amazon S3 bucket by creating a restrictive bucket policy. │ │ │ For more information, see [Configuring Inventory Collection](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-configuring.html#sysman-inventory-datasync) and [Setting Up Systems Manager Explorer to Display Data from Multiple Accounts and Regions](https://docs.aws.amazon.com/systems-manager/latest/userguide/Explorer-resource-data-sync.html) in the *AWS Systems Manager User Guide* . │ │ │ Important: The following *Syntax* section shows all fields that are supported for a resource data sync. 
The *Examples* section below shows the recommended way to specify configurations for each sync type. Please see the *Examples* section when you create your resource data sync. │ │ │ + documentation: The `AWS::SSM::ResourceDataSync` resource creates, updates, or deletes a resource data sync for AWS Systems Manager . A resource data sync helps you view data from multiple sources in a single location. Systems Manager offers two types of resource data sync: `SyncToDestination` and `SyncFromSource` . │ │ │ You can configure Systems Manager Inventory to use the `SyncToDestination` type to synchronize Inventory data from multiple AWS Regions to a single Amazon S3 bucket. │ │ │ You can configure Systems Manager Explorer to use the `SyncFromSource` type to synchronize operational work items (OpsItems) and operational data (OpsData) from multiple AWS Regions . This type can synchronize OpsItems and OpsData from multiple AWS accounts and Regions or from an `EntireOrganization` by using AWS Organizations . │ │ │ A resource data sync is an asynchronous operation that returns immediately. After a successful initial sync is completed, the system continuously syncs data. │ │ │ By default, data is not encrypted in Amazon S3 . We strongly recommend that you enable encryption in Amazon S3 to ensure secure data storage. We also recommend that you secure access to the Amazon S3 bucket by creating a restrictive bucket policy. │ │ │ For more information, see [Configuring Inventory Collection](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-inventory-configuring.html#sysman-inventory-datasync) and [Setting Up Systems Manager Explorer to Display Data from Multiple Accounts and Regions](https://docs.aws.amazon.com/systems-manager/latest/userguide/Explorer-resource-data-sync.html) in the *AWS Systems Manager User Guide* . │ │ │ > The following *Syntax* section shows all fields that are supported for a resource data sync. 
The *Examples* section below shows the recommended way to specify configurations for each sync type. Refer to the *Examples* section when you create your resource data sync. │ │ └ properties │ │ └ KMSKeyArn: (documentation changed) │ └[~] resource AWS::SSM::ResourcePolicy │ └ properties │ └ ResourceArn: (documentation changed) ├[~] service aws-ssmcontacts │ └ resources │ └[~] resource AWS::SSMContacts::Contact │ └ properties │ └ Type: (documentation changed) ├[~] service aws-ssmincidents │ └ resources │ ├[~] resource AWS::SSMIncidents::ReplicationSet │ │ ├ - documentation: The `AWS::SSMIncidents::ReplicationSet` resource specifies a set of Regions that Incident Manager data is replicated to and the AWS Key Management Service ( AWS KMS ) key used to encrypt the data. │ │ │ + documentation: The `AWS::SSMIncidents::ReplicationSet` resource specifies a set of AWS Regions that Incident Manager data is replicated to and the AWS Key Management Service ( AWS KMS ) key used to encrypt the data. │ │ └ types │ │ ├[~] type RegionConfiguration │ │ │ ├ - documentation: The `RegionConfiguration` property specifies the Region and KMS key to add to the replication set. │ │ │ │ + documentation: The `RegionConfiguration` property specifies the Region and AWS Key Management Service key to add to the replication set. │ │ │ └ properties │ │ │ └ SseKmsKeyId: (documentation changed) │ │ └[~] type ReplicationRegion │ │ └ - documentation: The `ReplicationRegion` property type specifies the Region and KMS key to add to the replication set. │ │ + documentation: The `ReplicationRegion` property type specifies the Region and AWS Key Management Service key to add to the replication set. │ └[~] resource AWS::SSMIncidents::ResponsePlan │ └ types │ ├[~] type ChatChannel │ │ └ properties │ │ └ ChatbotSns: (documentation changed) │ ├[~] type DynamicSsmParameter │ │ └ - documentation: When you add a runbook to a response plan, you can specify the parameters the runbook should use at runtime.
Response plans support parameters with both static and dynamic values. For static values, you enter the value when you define the parameter in the response plan. For dynamic values, the system determines the correct parameter value by collecting information from the incident. Incident Manager supports the following dynamic parameters: │ │ *Incident ARN* │ │ When Incident Manager creates an incident, the system captures the Amazon Resource Name (ARN) of the corresponding incident record and enters it for this parameter in the runbook. │ │ > This value can only be assigned to parameters of type `String` . If assigned to a parameter of any other type, the runbook fails to run. │ │ *Involved resources* │ │ When Incident Manager creates an incident, the system captures the ARNs of the resources involved in the incident. These resource ARNs are then assigned to this parameter in the runbook. │ │ > This value can only be assigned to parameters of type `StringList` . If assigned to a parameter of any other type, the runbook fails to run. │ │ + documentation: When you add a runbook to a response plan, you can specify the parameters for the runbook to use at runtime. Response plans support parameters with both static and dynamic values. For static values, you enter the value when you define the parameter in the response plan. For dynamic values, the system determines the correct parameter value by collecting information from the incident. Incident Manager supports the following dynamic parameters: │ │ *Incident ARN* │ │ When Incident Manager creates an incident, the system captures the Amazon Resource Name (ARN) of the corresponding incident record and enters it for this parameter in the runbook. │ │ > This value can only be assigned to parameters of type `String` . If assigned to a parameter of any other type, the runbook fails to run. 
│ │ *Involved resources* │ │ When Incident Manager creates an incident, the system captures the ARNs of the resources involved in the incident. These resource ARNs are then assigned to this parameter in the runbook. │ │ > This value can only be assigned to parameters of type `StringList` . If assigned to a parameter of any other type, the runbook fails to run. │ ├[~] type IncidentTemplate │ │ └ properties │ │ └ NotificationTargets: (documentation changed) │ ├[~] type NotificationTargetItem │ │ ├ - documentation: The SNS topic that's used by AWS Chatbot to notify the incidents chat channel. │ │ │ + documentation: The Amazon SNS topic that's used by AWS Chatbot to notify the incidents chat channel. │ │ └ properties │ │ └ SnsTopicArn: (documentation changed) │ ├[~] type SsmAutomation │ │ ├ - documentation: The `SsmAutomation` property type specifies details about the Systems Manager automation document that will be used as a runbook during an incident. │ │ │ + documentation: The `SsmAutomation` property type specifies details about the Systems Manager Automation runbook that will be used as the runbook during an incident. │ │ └ properties │ │ ├ DocumentVersion: (documentation changed) │ │ └ Parameters: (documentation changed) │ └[~] type SsmParameter │ ├ - documentation: The key-value pair parameters to use when running the automation document. │ │ + documentation: The key-value pair parameters to use when running the Automation runbook. │ └ properties │ ├ Key: (documentation changed) │ └ Values: (documentation changed) ├[~] service aws-wafv2 │ └ resources │ ├[~] resource AWS::WAFv2::RuleGroup │ │ └ types │ │ ├[~] type FieldToMatch │ │ │ └ - documentation: The part of the web request that you want AWS WAF to inspect. Include the single `FieldToMatch` type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in `FieldToMatch` for each rule statement that requires it. 
To inspect more than one component of the web request, create a separate rule statement for each component. │ │ │ Example JSON for a `QueryString` field to match: │ │ │ `"FieldToMatch": { "QueryString": {} }` │ │ │ Example JSON for a `Method` field to match specification: │ │ │ `"FieldToMatch": { "Method": { "Name": "DELETE" } }` │ │ │ + documentation: Specifies a web request component to be used in a rule match statement or in a logging configuration. │ │ │ - In a rule statement, this is the part of the web request that you want AWS WAF to inspect. Include the single `FieldToMatch` type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in `FieldToMatch` for each rule statement that requires it. To inspect more than one component of the web request, create a separate rule statement for each component. │ │ │ Example JSON for a `QueryString` field to match: │ │ │ `"FieldToMatch": { "QueryString": {} }` │ │ │ Example JSON for a `Method` field to match specification: │ │ │ `"FieldToMatch": { "Method": { "Name": "DELETE" } }` │ │ │ - In a logging configuration, this is used in the `RedactedFields` property to specify a field to redact from the logging records. For this use case, note the following: │ │ │ - Even though all `FieldToMatch` settings are available, the only valid settings for field redaction are `UriPath` , `QueryString` , `SingleHeader` , and `Method` . │ │ │ - In this documentation, the descriptions of the individual fields talk about specifying the web request component to inspect, but for field redaction, you are specifying the component type to redact from the logs. │ │ ├[~] type RateBasedStatement │ │ │ └ - documentation: A rate-based rule counts incoming requests and rate limits requests when they are coming at too fast a rate. 
The rule categorizes requests according to your aggregation criteria, collects them into aggregation instances, and counts and rate limits the requests for each instance. │ │ │ You can specify individual aggregation keys, like IP address or HTTP method. You can also specify aggregation key combinations, like IP address and HTTP method, or HTTP method, query argument, and cookie. │ │ │ Each unique set of values for the aggregation keys that you specify is a separate aggregation instance, with the value from each key contributing to the aggregation instance definition. │ │ │ For example, assume the rule evaluates web requests with the following IP address and HTTP method values: │ │ │ - IP address 10.1.1.1, HTTP method POST │ │ │ - IP address 10.1.1.1, HTTP method GET │ │ │ - IP address 127.0.0.0, HTTP method POST │ │ │ - IP address 10.1.1.1, HTTP method GET │ │ │ The rule would create different aggregation instances according to your aggregation criteria, for example: │ │ │ - If the aggregation criteria is just the IP address, then each individual address is an aggregation instance, and AWS WAF counts requests separately for each. The aggregation instances and request counts for our example would be the following: │ │ │ - IP address 10.1.1.1: count 3 │ │ │ - IP address 127.0.0.0: count 1 │ │ │ - If the aggregation criteria is HTTP method, then each individual HTTP method is an aggregation instance. The aggregation instances and request counts for our example would be the following: │ │ │ - HTTP method POST: count 2 │ │ │ - HTTP method GET: count 2 │ │ │ - If the aggregation criteria is IP address and HTTP method, then each IP address and each HTTP method would contribute to the combined aggregation instance. 
The aggregation instances and request counts for our example would be the following: │ │ │ - IP address 10.1.1.1, HTTP method POST: count 1 │ │ │ - IP address 10.1.1.1, HTTP method GET: count 2 │ │ │ - IP address 127.0.0.0, HTTP method POST: count 1 │ │ │ For any n-tuple of aggregation keys, each unique combination of values for the keys defines a separate aggregation instance, which AWS WAF counts and rate-limits individually. │ │ │ You can optionally nest another statement inside the rate-based statement, to narrow the scope of the rule so that it only counts and rate limits requests that match the nested statement. You can use this nested scope-down statement in conjunction with your aggregation key specifications or you can just count and rate limit all requests that match the scope-down statement, without additional aggregation. When you choose to just manage all requests that match a scope-down statement, the aggregation instance is singular for the rule. │ │ │ You cannot nest a `RateBasedStatement` inside another statement, for example inside a `NotStatement` or `OrStatement` . You can define a `RateBasedStatement` inside a web ACL and inside a rule group. │ │ │ For additional information about the options, see [Rate limiting web requests using rate-based rules](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rate-based-rules.html) in the *AWS WAF Developer Guide* . │ │ │ If you only aggregate on the individual IP address or forwarded IP address, you can retrieve the list of IP addresses that AWS WAF is currently rate limiting for a rule through the API call `GetRateBasedStatementManagedKeys` . This option is not available for other aggregation configurations. │ │ │ AWS WAF tracks and manages web requests separately for each instance of a rate-based rule that you use. 
For example, if you provide the same rate-based rule settings in two web ACLs, each of the two rule statements represents a separate instance of the rate-based rule and gets its own tracking and management by AWS WAF . If you define a rate-based rule inside a rule group, and then use that rule group in multiple places, each use creates a separate instance of the rate-based rule that gets its own tracking and management by AWS WAF . │ │ │ + documentation: A rate-based rule counts incoming requests and rate limits requests when they are coming at too fast a rate. The rule categorizes requests according to your aggregation criteria, collects them into aggregation instances, and counts and rate limits the requests for each instance. │ │ │ > If you change any of these settings in a rule that's currently in use, the change resets the rule's rate limiting counts. This can pause the rule's rate limiting activities for up to a minute. │ │ │ You can specify individual aggregation keys, like IP address or HTTP method. You can also specify aggregation key combinations, like IP address and HTTP method, or HTTP method, query argument, and cookie. │ │ │ Each unique set of values for the aggregation keys that you specify is a separate aggregation instance, with the value from each key contributing to the aggregation instance definition. │ │ │ For example, assume the rule evaluates web requests with the following IP address and HTTP method values: │ │ │ - IP address 10.1.1.1, HTTP method POST │ │ │ - IP address 10.1.1.1, HTTP method GET │ │ │ - IP address 127.0.0.0, HTTP method POST │ │ │ - IP address 10.1.1.1, HTTP method GET │ │ │ The rule would create different aggregation instances according to your aggregation criteria, for example: │ │ │ - If the aggregation criteria is just the IP address, then each individual address is an aggregation instance, and AWS WAF counts requests separately for each. 
The aggregation instances and request counts for our example would be the following: │ │ │ - IP address 10.1.1.1: count 3 │ │ │ - IP address 127.0.0.0: count 1 │ │ │ - If the aggregation criteria is HTTP method, then each individual HTTP method is an aggregation instance. The aggregation instances and request counts for our example would be the following: │ │ │ - HTTP method POST: count 2 │ │ │ - HTTP method GET: count 2 │ │ │ - If the aggregation criteria is IP address and HTTP method, then each IP address and each HTTP method would contribute to the combined aggregation instance. The aggregation instances and request counts for our example would be the following: │ │ │ - IP address 10.1.1.1, HTTP method POST: count 1 │ │ │ - IP address 10.1.1.1, HTTP method GET: count 2 │ │ │ - IP address 127.0.0.0, HTTP method POST: count 1 │ │ │ For any n-tuple of aggregation keys, each unique combination of values for the keys defines a separate aggregation instance, which AWS WAF counts and rate-limits individually. │ │ │ You can optionally nest another statement inside the rate-based statement, to narrow the scope of the rule so that it only counts and rate limits requests that match the nested statement. You can use this nested scope-down statement in conjunction with your aggregation key specifications or you can just count and rate limit all requests that match the scope-down statement, without additional aggregation. When you choose to just manage all requests that match a scope-down statement, the aggregation instance is singular for the rule. │ │ │ You cannot nest a `RateBasedStatement` inside another statement, for example inside a `NotStatement` or `OrStatement` . You can define a `RateBasedStatement` inside a web ACL and inside a rule group. │ │ │ For additional information about the options, see [Rate limiting web requests using rate-based rules](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rate-based-rules.html) in the *AWS WAF Developer Guide* . 
│ │ │ If you only aggregate on the individual IP address or forwarded IP address, you can retrieve the list of IP addresses that AWS WAF is currently rate limiting for a rule through the API call `GetRateBasedStatementManagedKeys` . This option is not available for other aggregation configurations. │ │ │ AWS WAF tracks and manages web requests separately for each instance of a rate-based rule that you use. For example, if you provide the same rate-based rule settings in two web ACLs, each of the two rule statements represents a separate instance of the rate-based rule and gets its own tracking and management by AWS WAF . If you define a rate-based rule inside a rule group, and then use that rule group in multiple places, each use creates a separate instance of the rate-based rule that gets its own tracking and management by AWS WAF . │ │ └[~] type Statement │ │ └ properties │ │ └ RateBasedStatement: (documentation changed) │ └[~] resource AWS::WAFv2::WebACL │ └ types │ ├[~] type FieldToMatch │ │ └ - documentation: The part of the web request that you want AWS WAF to inspect. Include the single `FieldToMatch` type that you want to inspect, with additional specifications as needed, according to the type. You specify a single request component in `FieldToMatch` for each rule statement that requires it. To inspect more than one component of the web request, create a separate rule statement for each component. │ │ Example JSON for a `QueryString` field to match: │ │ `"FieldToMatch": { "QueryString": {} }` │ │ Example JSON for a `Method` field to match specification: │ │ `"FieldToMatch": { "Method": { "Name": "DELETE" } }` │ │ + documentation: Specifies a web request component to be used in a rule match statement or in a logging configuration. │ │ - In a rule statement, this is the part of the web request that you want AWS WAF to inspect. Include the single `FieldToMatch` type that you want to inspect, with additional specifications as needed, according to the type. 
You specify a single request component in `FieldToMatch` for each rule statement that requires it. To inspect more than one component of the web request, create a separate rule statement for each component.
│ │ Example JSON for a `QueryString` field to match:
│ │ `"FieldToMatch": { "QueryString": {} }`
│ │ Example JSON for a `Method` field to match specification:
│ │ `"FieldToMatch": { "Method": { "Name": "DELETE" } }`
│ │ - In a logging configuration, this is used in the `RedactedFields` property to specify a field to redact from the logging records. For this use case, note the following:
│ │ - Even though all `FieldToMatch` settings are available, the only valid settings for field redaction are `UriPath`, `QueryString`, `SingleHeader`, and `Method`.
│ │ - In this documentation, the descriptions of the individual fields talk about specifying the web request component to inspect, but for field redaction, you are specifying the component type to redact from the logs.
│ ├[~] type RateBasedStatement
│ │ └ - documentation: A rate-based rule counts incoming requests and rate limits requests when they are coming at too fast a rate. The rule categorizes requests according to your aggregation criteria, collects them into aggregation instances, and counts and rate limits the requests for each instance.
│ │ You can specify individual aggregation keys, like IP address or HTTP method. You can also specify aggregation key combinations, like IP address and HTTP method, or HTTP method, query argument, and cookie.
│ │ Each unique set of values for the aggregation keys that you specify is a separate aggregation instance, with the value from each key contributing to the aggregation instance definition.
│ │ For example, assume the rule evaluates web requests with the following IP address and HTTP method values:
│ │ - IP address 10.1.1.1, HTTP method POST
│ │ - IP address 10.1.1.1, HTTP method GET
│ │ - IP address 127.0.0.0, HTTP method POST
│ │ - IP address 10.1.1.1, HTTP method GET
│ │ The rule would create different aggregation instances according to your aggregation criteria, for example:
│ │ - If the aggregation criteria is just the IP address, then each individual address is an aggregation instance, and AWS WAF counts requests separately for each. The aggregation instances and request counts for our example would be the following:
│ │ - IP address 10.1.1.1: count 3
│ │ - IP address 127.0.0.0: count 1
│ │ - If the aggregation criteria is HTTP method, then each individual HTTP method is an aggregation instance. The aggregation instances and request counts for our example would be the following:
│ │ - HTTP method POST: count 2
│ │ - HTTP method GET: count 2
│ │ - If the aggregation criteria is IP address and HTTP method, then each IP address and each HTTP method would contribute to the combined aggregation instance. The aggregation instances and request counts for our example would be the following:
│ │ - IP address 10.1.1.1, HTTP method POST: count 1
│ │ - IP address 10.1.1.1, HTTP method GET: count 2
│ │ - IP address 127.0.0.0, HTTP method POST: count 1
│ │ For any n-tuple of aggregation keys, each unique combination of values for the keys defines a separate aggregation instance, which AWS WAF counts and rate-limits individually.
│ │ You can optionally nest another statement inside the rate-based statement, to narrow the scope of the rule so that it only counts and rate limits requests that match the nested statement. You can use this nested scope-down statement in conjunction with your aggregation key specifications or you can just count and rate limit all requests that match the scope-down statement, without additional aggregation. When you choose to just manage all requests that match a scope-down statement, the aggregation instance is singular for the rule.
│ │ You cannot nest a `RateBasedStatement` inside another statement, for example inside a `NotStatement` or `OrStatement`. You can define a `RateBasedStatement` inside a web ACL and inside a rule group.
│ │ For additional information about the options, see [Rate limiting web requests using rate-based rules](https://docs.aws.amazon.com/waf/latest/developerguide/waf-rate-based-rules.html) in the *AWS WAF Developer Guide*.
│ │ If you only aggregate on the individual IP address or forwarded IP address, you can retrieve the list of IP addresses that AWS WAF is currently rate limiting for a rule through the API call `GetRateBasedStatementManagedKeys`. This option is not available for other aggregation configurations.
│ │ AWS WAF tracks and manages web requests separately for each instance of a rate-based rule that you use. For example, if you provide the same rate-based rule settings in two web ACLs, each of the two rule statements represents a separate instance of the rate-based rule and gets its own tracking and management by AWS WAF. If you define a rate-based rule inside a rule group, and then use that rule group in multiple places, each use creates a separate instance of the rate-based rule that gets its own tracking and management by AWS WAF.
│ │ + documentation: A rate-based rule counts incoming requests and rate limits requests when they are coming at too fast a rate. The rule categorizes requests according to your aggregation criteria, collects them into aggregation instances, and counts and rate limits the requests for each instance.
│ │ > If you change any of these settings in a rule that's currently in use, the change resets the rule's rate limiting counts. This can pause the rule's rate limiting activities for up to a minute.
│ │ You can specify individual aggregation keys, like IP address or HTTP method. You can also specify aggregation key combinations, like IP address and HTTP method, or HTTP method, query argument, and cookie.
│ │ Each unique set of values for the aggregation keys that you specify is a separate aggregation instance, with the value from each key contributing to the aggregation instance definition.
│ │ For example, assume the rule evaluates web requests with the following IP address and HTTP method values:
│ │ - IP address 10.1.1.1, HTTP method POST
│ │ - IP address 10.1.1.1, HTTP method GET
│ │ - IP address 127.0.0.0, HTTP method POST
│ │ - IP address 10.1.1.1, HTTP method GET
│ │ The rule would create different aggregation instances according to your aggregation criteria, for example:
│ │ - If the aggregation criteria is just the IP address, then each individual address is an aggregation instance, and AWS WAF counts requests separately for each. The aggregation instances and request counts for our example would be the following:
│ │ - IP address 10.1.1.1: count 3
│ │ - IP address 127.0.0.0: count 1
│ │ - If the aggregation criteria is HTTP method, then each individual HTTP method is an aggregation instance. The aggregation instances and requ…
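The aggregation behavior described in the documentation above can be sketched as a small grouping function: requests are bucketed by the values of the chosen aggregation keys, and each unique combination of values becomes one aggregation instance with its own count. This is an illustrative helper, not AWS WAF or CDK code; the types and function name are invented for the example.

```typescript
// Illustrative sketch of WAF rate-based aggregation (not an actual AWS API).
interface WebRequest {
  ip: string;
  method: string;
}

// Count requests per aggregation instance, where an instance is the unique
// combination of values for the chosen aggregation keys.
function aggregate(requests: WebRequest[], keys: (keyof WebRequest)[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const req of requests) {
    const instance = keys.map(k => `${k}=${req[k]}`).join(',');
    counts.set(instance, (counts.get(instance) ?? 0) + 1);
  }
  return counts;
}

// The example request set from the documentation above.
const requests: WebRequest[] = [
  { ip: '10.1.1.1', method: 'POST' },
  { ip: '10.1.1.1', method: 'GET' },
  { ip: '127.0.0.0', method: 'POST' },
  { ip: '10.1.1.1', method: 'GET' },
];

// Aggregating on IP alone yields two instances with counts 3 and 1.
console.log(aggregate(requests, ['ip']));
// Aggregating on IP and method yields three instances with counts 1, 2, 1.
console.log(aggregate(requests, ['ip', 'method']));
```

Running this against the documentation's example data reproduces the counts it lists for each aggregation criterion.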
Commit: 8b01f45
-
fix(changelog): changelog for v2.131.0 has some errors (#29352)
---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 1b56897
-
feat(ec2): add NAT instance V2 support using AL2023 (#29013)
### Issue # (if applicable) Closes #28907 ### Reason for this change Current NAT instance image has reached EOL on Dec 31 2023. ### Description of changes If NAT instances are a better match for your use case than NAT gateways, you can create your own NAT AMI from a current version of Amazon Linux as described in [Create a NAT AMI](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html#create-nat-ami). ### Description of how you validated changes New unit and integration tests *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 7fa6bbf
-
fix(custom-resources): correctly convert values to Date type (#28398)
## Description The following issue reports an error that occurs when calling an API that takes the `Date` type as a parameter, such as the `GetMetricData` API, from a Custom Resource Lambda function, where the parameter is passed as `string` type to the AWS SDK. #27962 https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/cloudwatch/command/GetMetricDataCommand/#:~:text=Description-,EndTime,-Required To resolve this error, the `string` type must be properly converted to the `Date` type when calling the AWS SDK from Lambda. In this PR, I added the conversion to the `Date` type in the same way as the existing conversions to the `number` and `Uint8Array` types. `Uint8Array`: #27034 `number`: #27112 ## Major changes ### `update-sdkv3-parameters-model.ts` script If the type is `timestamp` in the `smithy` specification, write `d` to the state machine so that it can be converted to a `Date` type later. https://smithy.io/2.0/spec/simple-types.html#timestamp The `update-sdkv3-parameters-model.sh` script was not called from anywhere, so I called it manually and updated the JSON file. Please let me know if there is a problem. ### `sdk-v2-to-v3-adapter` module I added code to convert values marked `d` in the state machine to the `Date` type. If the conversion to the `Date` type fails, the `Date` class does not throw an exception, so the error is handled in a slightly tricky way. Also added a unit test for this process. ### `integ-tests-alpha` module Added an integ test to verify that the errors reported in the related issue have been resolved. The IAM policy added internally by the call to `addPolicyStatementFromSdkCall` looks like the following and does not allow `GetMetricData` to be called correctly, so a new policy is added explicitly with the `addToRolePolicy` method.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["monitoring:GetMetricData"],
      "Resource": ["*"],
      "Effect": "Allow"
    }
  ]
}
```
https://github.com/aws/aws-cdk/blob/1a9c30e55e58203bd0a61de82711cf10f1e04851/packages/aws-cdk-lib/custom-resources/lib/helpers-internal/sdk-v3-metadata.json#L174 fixes #27962 ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
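The "slightly tricky" failure handling mentioned above comes from the fact that `new Date(...)` never throws; it returns an `Invalid Date` object instead. A minimal TypeScript sketch of that coercion (a hypothetical helper for illustration, not the actual `sdk-v2-to-v3-adapter` code):

```typescript
// Hypothetical sketch: coerce a string parameter to a Date, detecting the
// Invalid Date case explicitly because the Date constructor does not throw.
function coerceToDate(value: string): Date {
  const date = new Date(value);
  // An invalid Date reports NaN for its time value.
  if (isNaN(date.getTime())) {
    throw new Error(`Unable to convert ${JSON.stringify(value)} to a Date`);
  }
  return date;
}

console.log(coerceToDate('2024-03-01T00:00:00Z').toISOString());
```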
Commit: 38bdb92
-
fix(ecs-patterns): resolve not being able to create ECS service in `integ.alb-ecs-service-command-entry-point` (#29333)
### Issue # (if applicable) part of #29186 (comment) ### Reason for this change The CFN stack gets stuck after `yarn integ` because the ECS service cannot be created.
```
AWS::ECS::Service | CREATE_IN_PROGRESS
```
```
$ aws ecs describe-tasks --cluster aws-ecs-integ-alb-ec2-cmd-entrypoint-Ec2ClusterEE43E89D-zBVKZa6JEBrW --tasks xxxxxxxxxxxxxxx | jq '.tasks[].stopCode'
"EssentialContainerExited"
```
### Description of changes Change the `taskImageOptions` `image`, `command`, and `entryPoint`, and add a security group. Ref: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/example_task_definitions.html#example_task_definition-webserver ### Description of how you validated changes Passed integration tests ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 6a69d5b
-
feat(autoscaling): support custom termination policy with lambda (#29340)
### Issue # (if applicable) Closes #19750. ### Reason for this change Amazon EC2 Auto Scaling supports a [custom termination policy](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lambda-custom-termination-policy.html) using a Lambda function to terminate instances, but it is not yet supported by the `aws-autoscaling` package. ### Description of changes This code change adds an enum `TerminationPolicy.CUSTOM_LAMBDA_FUNCTION` and an optional `terminationPolicyCustomLambdaFunctionArn` property to pass the Lambda ARN. The change includes logic to check, when multiple termination policies are specified, that the Lambda termination policy is [first in order](https://docs.aws.amazon.com/autoscaling/ec2/userguide/lambda-custom-termination-policy.html#lambda-custom-termination-policy-limitations). ### Description of how you validated changes Added unit tests and an integration test. Created a simple CDK application with the updated constructs and verified that it synthesized and deployed correctly. ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
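The ordering constraint described above (the custom Lambda termination policy must come first in the list) can be sketched as a small validation helper. This is an illustrative check with invented names, not the construct's actual code:

```typescript
// Illustrative sketch of the ordering rule for custom termination policies:
// if CUSTOM_LAMBDA_FUNCTION is used at all, it must be the first policy.
const CUSTOM_LAMBDA_FUNCTION = 'CUSTOM_LAMBDA_FUNCTION';

function validateTerminationPolicies(policies: string[]): void {
  const index = policies.indexOf(CUSTOM_LAMBDA_FUNCTION);
  if (index > 0) {
    throw new Error('CUSTOM_LAMBDA_FUNCTION must be specified first in the termination policies');
  }
}

// Valid: the custom policy leads the list.
validateTerminationPolicies([CUSTOM_LAMBDA_FUNCTION, 'OLDEST_INSTANCE']);
```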
Commit: 2ebb409
-
fix(events-targets): ecs:TagResource permission (#28898)
I enabled the following: `aws ecs put-account-setting-default --name tagResourceAuthorization --value on` And then confirmed the task completes successfully. Closes #28854. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 4af0dfc
Commits on Mar 5, 2024
-
feat: list stack dependencies (#28995)
### Reason for this change Existing `cdk list` functionality does not display stack dependencies. This PR introduces that functionality. For instance, existing functionality:
```
❯ cdk list
producer
consumer
```
Feature functionality:
```
❯ cdk list --show-dependencies
- id: producer
  dependencies: []
- id: consumer
  dependencies:
    - id: producer
      dependencies: []
```
### Description of changes Changes are based on internal team design discussions. * A new flag `--show-dependencies` is being introduced for the `list` cli command. * A new file `list-stacks.ts` is being added. * A `listStacks` function is added within the file for listing stacks and their dependencies using the cloud assembly from the cdk toolkit. ### Description of how you validated changes * Unit and integration testing ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ### Co-Author Co-authored-by: @SankyRed ----- > NOTE: We are currently getting it reviewed by UX too so the final display output might change. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
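The `--show-dependencies` output described above is essentially a recursive projection of each stack and the stacks it depends on. A minimal sketch of that shape (hypothetical types and input format, not the actual `list-stacks.ts` implementation):

```typescript
// Hypothetical sketch of building the `cdk list --show-dependencies` tree.
interface StackDependency {
  id: string;
  dependencies: StackDependency[];
}

// Input: map of stack id -> ids of the stacks it depends on (assumed shape).
// Note: a real implementation would also guard against dependency cycles.
function listStacks(deps: Record<string, string[]>): StackDependency[] {
  const build = (id: string): StackDependency => ({
    id,
    dependencies: (deps[id] ?? []).map(build),
  });
  return Object.keys(deps).map(build);
}

console.log(JSON.stringify(listStacks({ producer: [], consumer: ['producer'] }), undefined, 2));
```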
Commit: a7fac9d
-
chore: add new windows build image (#29358)
Adding the newest build image for windows. ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 0605a8b
-
feat(rds): enable data api for aurora cluster (#29338)
### Issue # (if applicable) Closes #28574. ### Reason for this change The Data API is supported not only for Aurora Serverless V1 clusters but also for Aurora provisioned and Serverless V2 clusters. However, enabling it for provisioned and Serverless V2 clusters is not yet supported by the CDK. ### Description of changes Add `enableDataApi` to `DatabaseClusterBaseProps` and implement `grantDataApiAccess()` on the `DatabaseClusterBase` class. ### Description of how you validated changes Added both unit and integ tests ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 82690f7
-
fix(lambda-nodejs): support bundling aws-sdk as part of the bundled code asset (#29207)
### Issue # (if applicable) #25492. ### Reason for this change The BundlingOptions in the NodejsFunction construct removes AWS SDK dependencies by default. This uses the Lambda-provided SDK in the resulting function, which has a higher cold start than a bundled function with the AWS SDK dependencies included. This happens because the Node.js runtime has to do module resolution and go through multiple files while reading dependency code in a bundled function that uses the Lambda-provided SDK. When the SDK is bundled with the function code, cold starts are lower, as the Node.js runtime has to read a single file without any module resolution. Result from reproduction: { 'NodejsFunction default (uses Lambda Provided SDK)': 1227.1435, 'NodejsFunction custom (uses Customer Deployed SDK)': 929.441 } Related to this issue: #25492 ### Description of changes While maintaining backward compatibility, a new option `useAwsSDK` was introduced to include the SDK in the code asset. ### Description of how you validated changes Added both unit and integration tests.
```
Running integration tests for failed tests...
Running in parallel across regions: us-east-1, us-east-2, us-west-2
Running test /Users/jonife/Documents/dev/lambda-tooling/cdk/aws-cdk/packages/@aws-cdk-testing/framework-integ/test/aws-lambda-nodejs/test/integ.dependencies.js in us-east-1
SUCCESS aws-lambda-nodejs/test/integ.dependencies-LambdaDependencies/DefaultTest 329.553s
AssertionResultsLambdaInvoke5050b1f640cc49956b59f2a71febe95c - success
AssertionResultsLambdaInvokee35a5227846e334cb95a90bacfbfb877 - success
AssertionResultsLambdaInvoke7d0602e4b9f40ae057f935d874b5f971 - success
Test Results: Tests: 1 passed, 1 total
✨ Done in 337.42s.
``` ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
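The effect of the new option boils down to which modules the bundler treats as external. The helper below is an illustrative sketch of that decision (the option name `useAwsSDK` comes from the PR text; the function and parameter names are invented, not the construct's real code):

```typescript
// Illustrative sketch: decide which modules the bundler should treat as
// external. When the SDK is bundled into the code asset, it is NOT external.
function externalModules(runtimeSdkPackage: string, useAwsSDK: boolean): string[] {
  return useAwsSDK ? [] : [runtimeSdkPackage];
}

// Node.js 18+ runtimes provide the v3 SDK as '@aws-sdk/*' packages.
console.log(externalModules('@aws-sdk/*', false)); // SDK left out of the bundle
console.log(externalModules('@aws-sdk/*', true));  // SDK bundled with the code
```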
Commit: 2378635
Commits on Mar 6, 2024
-
feat(rds): add ability to specify PreferredMaintenanceWindow to RDS cluster database instances (#29033)
### Issue # (if applicable) Closes [#16954](#16954) ### Reason for this change Noticed that we were able to specify preferredMaintenanceWindow for a cluster, but unable to do so for the instances created under the cluster. Instead, AWS (semi-)randomly assigns a maintenance window ([doc](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.Maintenance.html#Concepts.DBMaintenance)) for the instances, which leads to things being out of sync between the cluster and its child instances. There are some workarounds as mentioned in the issue above, but those are a little hacky (imo), and I figured adding the preferredMaintenanceWindow as an instance prop is a better long-term solution. Also, it might be hard for other developers to find the workarounds, as they are only mentioned in the above issue and aren't available through normal channels (Stack Overflow/official CDK docs). ### Description of changes Added an optional preferredMaintenanceWindow field under `InstanceProps`, and passed that field in during the creation of the `CfnDBInstance`. Also added a quick unit test. ### Description of how you validated changes Added a unit test, did not add integ tests. Ran `yarn build` and `yarn test`. Callout: I was unable to run integration tests locally, kept getting errors with `yarn integ --directory packages/aws-cdk-lib/aws-rds` and `yarn integ-runner --directory packages/aws-cdk-lib/aws-rds` - `Error: Cannot find module './integ-runner.js'`, not sure if I'm missing something. ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) No breaking changes *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
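For context, RDS maintenance windows use the `ddd:hh24:mi-ddd:hh24:mi` format (e.g. `Sun:23:45-Mon:00:15`). A hypothetical validator for that shape, for illustration only and not part of the change described above:

```typescript
// Hypothetical validator for the RDS maintenance-window format
// ddd:hh24:mi-ddd:hh24:mi, e.g. 'Sun:23:45-Mon:00:15'.
function isValidMaintenanceWindow(window: string): boolean {
  const day = '(Mon|Tue|Wed|Thu|Fri|Sat|Sun)';
  const time = '([01][0-9]|2[0-3]):[0-5][0-9]';
  const pattern = new RegExp(`^${day}:${time}-${day}:${time}$`);
  return pattern.test(window);
}

console.log(isValidMaintenanceWindow('Sun:23:45-Mon:00:15')); // valid
console.log(isValidMaintenanceWindow('23:45-00:15'));         // invalid: day names missing
```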
Commit: 9c82bca
-
fix(s3): incorrect account used for S3 event source mapping (#29365)
### Issue # (if applicable) #21628; the initial PR was closed for naming issues: #29023 ### Reason for this change A customer has a stack, containing a Lambda, intended to be deployed to account "A". A bucket (in account "B") was referenced using fromBucketAttributes in the same stack, specifying account "B" as the account attribute. When hooked to the Lambda using addEventSource, it was expected that the generated IAM configuration would specify account "B" as part of the conditional grant. However, account "A" is defined as the "source account", which is incorrect: the bucket lives in account "B" and was only referenced in the stack whose resources get deployed to "A". Today, when an S3 bucket is added as an event source to Lambda, the account for the bucket is sourced from the stack, not from the bucket configuration. CDK fails to reference the customer's bucket account and instead uses the stack account, which might not necessarily be the bucket account. ### Description of changes ### Description of how you validated changes 1. Extensive testing was conducted by creating an application and validating the generated templates 2. A unit test was also added to test the new change
```
aws-cdk-lib % yarn test aws-lambda-event-sources
yarn run v1.22.19
$ jest aws-lambda-event-sources
PASS aws-lambda-event-sources/test/sns.test.ts (56.025 s)
PASS aws-lambda-event-sources/test/s3.test.ts (56.097 s)
PASS aws-lambda-event-sources/test/api.test.ts (56.436 s)
PASS aws-lambda-event-sources/test/kinesis.test.ts (56.558 s)
PASS aws-lambda-event-sources/test/dynamo.test.ts (57.016 s)
PASS aws-lambda-event-sources/test/sqs.test.ts (56.816 s)
PASS aws-lambda-event-sources/test/kafka.test.ts (57.452 s)
A worker process has failed to exit gracefully and has been force exited. This is likely caused by tests leaking due to improper teardown. Try running with --detectOpenHandles to find leaks. Active timers can also cause this, ensure that .unref() was called on them.
``` ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
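The fix described above boils down to preferring the bucket's own account over the stack account when building the source-account condition of the Lambda permission. A minimal sketch of that resolution (an illustrative helper, not the actual aws-lambda-event-sources code):

```typescript
// Illustrative: choose the source account for the S3 -> Lambda permission.
// Prefer the account recorded on the bucket (e.g. via fromBucketAttributes);
// fall back to the stack account only when the bucket has none.
function resolveSourceAccount(bucketAccount: string | undefined, stackAccount: string): string {
  return bucketAccount ?? stackAccount;
}

console.log(resolveSourceAccount('222222222222', '111111111111')); // bucket account "B" wins
console.log(resolveSourceAccount(undefined, '111111111111'));      // falls back to stack account
```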
Commit: 61ac788
-
fix(events_targets): installing latest aws sdk fails in cn partition (#29374)
### Issue # (if applicable) Closes #29373 ### Reason for this change The AWS Log Group event target by default installs the latest AWS SDK for its custom resource, and this fails in the `aws-cn` partition. This PR exposes `installLatestAwsSdk` on the surface and allows users to optionally turn off `installLatestAwsSdk` for the CloudWatch log events target. ### Description of changes Allow users to override the value; if unset, it defaults to true, which is the same behaviour as current. ### Description of how you validated changes All tests pass. ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: f0383d6
Commits on Mar 7, 2024
-
feat(autoscaling): add support for InstanceRefresh suspended process (#29113)
Also add InstanceRefresh as a default suspended process for RollingUpdate. I have also submitted a request to update https://repost.aws/knowledge-center/auto-scaling-group-rolling-updates to conform with this change. ### Reason for this change Instance Refresh is a feature of ASG. It performs a similar function to Rolling Update. If an Instance Refresh is running at the same time as a Rolling Update, the Rolling Update will fail. It is safer to suspend the process. ### Description of changes See above. ### Description of how you validated changes Unit tests. ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: f5e7717
-
feat(stepfunctions-tasks): start build batch integration (#29296)
### Issue # (if applicable) Closes #29119. ### Reason for this change There is an optimized Step Functions service integration with CodeBuild batch builds, but it could not be used from the AWS CDK. ### Description of changes Add a CodeBuildStartBuildBatch class:
```ts
declare const project: codebuild.Project;

const buildconfig = project.enableBatchBuilds();

const startBuildBatch = new tasks.CodeBuildStartBuildBatch(this, 'buildTask', {
  project,
  integrationPattern: sfn.IntegrationPattern.REQUEST_RESPONSE,
  environmentVariablesOverride: {
    test: {
      type: codebuild.BuildEnvironmentVariableType.PLAINTEXT,
      value: 'testValue',
    },
  },
});
```
### Description of how you validated changes I've implemented both unit and integ tests. ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 4f2b757
-
chore: typo fix in cli README.md (#29390)
### Reason for this change I was reading the documentation and found a small typo to be fixed. ### Description of changes Fixed the typo ### Description of how you validated changes It's just a README ### Checklist - [X] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 54f0f43
-
fix(cli): prevent changeset diff for non-deployed stacks (#29394)
### Reason for this change When a stack does not exist in CloudFormation, creating a changeset makes an empty `REVIEW_IN_PROGRESS` stack. We then call `delete-stack` to clean up the empty stack. However, this can cause a race condition with a deploy call. ### Description of changes This change prevents changeset diffs for stacks that do not yet exist in CloudFormation. This overrides the changeset diff flag. This change also adds logic for migrate stacks in the old diff logic to represent resource imports without needing the changeset present. ### Description of how you validated changes Testing with new stacks only uses changeset diffs once the stack is deployed. Testing with new migrate stacks only uses changeset diffs once deployed. Pre-deployment the resources correctly show as imports. Note: the deleted test assumes the diff will be calculated using the mocked changeset. The new logic avoids the changeset, so the test is no longer relevant. Closes #29265. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
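The decision described above can be sketched as: only attempt a changeset-based diff when the stack already exists in CloudFormation, otherwise fall back to a template-only diff regardless of the flag. An illustrative helper, not the CLI's actual code:

```typescript
// Illustrative: pick the diff strategy the way the fix describes.
// A changeset diff against a non-existent stack would create an empty
// REVIEW_IN_PROGRESS stack, so changesets are only used for deployed stacks.
type DiffMethod = 'changeset' | 'template-only';

function chooseDiffMethod(stackExists: boolean, changeSetFlag: boolean): DiffMethod {
  if (!stackExists) {
    return 'template-only'; // overrides the changeset diff flag
  }
  return changeSetFlag ? 'changeset' : 'template-only';
}

console.log(chooseDiffMethod(false, true)); // 'template-only'
console.log(chooseDiffMethod(true, true));  // 'changeset'
```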
Commit: d33caff
-
fix(spec2cdk): use modern type when building tag type (#29389)
### Issue # (if applicable) Closes #29388 ### Reason for this change Some of the modern tags failed to run `cdk synth` due to type misconfiguration. ### Description of changes Always default to use the latest type for modern tags. ### Description of how you validated changes Fixed for failed resources. ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 3fb0254
-
Commit: ba41996
Commits on Mar 8, 2024
-
fix(glue): `PythonRayExecutableProps` has inaccurate properties (#28625)
Closes #28570.
- Added RayExecutableProps which supports s3PythonModules
- Added check to block extraPythonFiles usage for Ray jobs
- Added unit tests and integ tests
---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 7994733
-
chore(codepipeline): add missing action for EcsDeployAction (#29401)
### Issue # (if applicable) Closes #29400 ### Reason for this change Missing required action as described in the [doc](https://docs.aws.amazon.com/codepipeline/latest/userguide/security-iam.html#how-to-custom-role). ### Description of changes ### Description of how you validated changes ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 946412b
-
docs(lambda): add missing JSDoc Markdown code block (#29348)
### Reason for this change There is a missing Markdown code block in the [`EventSourceMapping` documentation](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_lambda.EventSourceMapping.html): ![image](https://github.com/aws/aws-cdk/assets/2505696/cb50ded7-a4b0-43f9-ace7-736d805b23d0) ### Description of changes Adds missing Markdown code block tags ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 1f8acc1
-
chore(cli): improve error message for cdk migrate (#29392)
### Reason for this change This change is a follow-up to a [PR](cdklabs/cdk-from-cfn#594) that improved the error message thrown by `cdk-from-cfn` when an invalid resource property was used in a CloudFormation template. This PR further improves the error message on the cli side. ### Description of changes Primarily, this PR is a verbiage change. The base error message now states that `<stack-name> could not be generated because <error-message>`. The error message itself is checked against `unreachable` because any use of `panic!`, `unreachable!`, or `unimplemented!` will cause the `cdk-from-cfn` source code to panic in-place. In the resulting wasm binary, this produces a `RuntimeError` that has an error message of `unreachable`. I've improved this slightly by stating `template and/or language inputs caused the source code to panic`. If the error message is not `unreachable`, then the error message is taken as is with `TransmuteError:` replaced. Note that we should still continue to improve our error messages in `cdk-from-cfn` by replacing `panic!`, `unreachable!`, and `unimplemented!` with more detailed error messages. ### Description of how you validated changes An existing unit test was changed based on the error message verbiage change. Additionally, a new unit test was added to validate that the expected error message would be thrown by the cli when an invalid resource property was used in a CloudFormation template. ### Checklist - [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md) ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
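The message handling described above amounts to translating the opaque `unreachable` RuntimeError from the wasm binary into something actionable. An illustrative sketch of that mapping (the strings are paraphrased from the PR description, not copied from the CLI source):

```typescript
// Illustrative: turn a raw error from the cdk-from-cfn wasm module into the
// user-facing message shape described in the PR.
function migrateErrorMessage(stackName: string, rawMessage: string): string {
  // A Rust panic!/unreachable!/unimplemented! surfaces in wasm as a
  // RuntimeError whose message is just 'unreachable', so special-case it.
  const reason = rawMessage === 'unreachable'
    ? 'template and/or language inputs caused the source code to panic'
    : rawMessage.replace('TransmuteError: ', '');
  return `${stackName} could not be generated because ${reason}`;
}

console.log(migrateErrorMessage('MyStack', 'unreachable'));
```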
Commit: 110c79f
-
fix(rds): incorrect error message for rds proxies (#29404)
Closes #29402. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Commit: 2dbb381
-
feat(elasticloadbalancingv2): health check interval greater than time…
Commit: 576d034
feat(codepipeline): `executionMode` property for Pipeline (#29148)

### Issue # (if applicable)

Closes #29147.

### Reason for this change

It would be good to add a new parameter for the execution mode. See:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-pipeline.html#cfn-codepipeline-pipeline-executionmode
https://aws.amazon.com/about-aws/whats-new/2024/02/codepipeline-trigger-filters-execution-modes

### Description of changes

Add an `executionMode` parameter to the `PipelineProps` interface.

### Description of how you validated changes

Both unit and integ tests.

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
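The new option can be sketched as a self-contained model of how the property reaches the CloudFormation resource. This is illustrative, not the actual aws-cdk-lib implementation; the enum values are the ones the CloudFormation `AWS::CodePipeline::Pipeline` resource accepts, with `SUPERSEDED` as the service default.

```typescript
// Illustrative sketch of the new prop (not the aws-cdk-lib source). The
// allowed values mirror CloudFormation's ExecutionMode property.
enum ExecutionMode {
  QUEUED = 'QUEUED',
  SUPERSEDED = 'SUPERSEDED',
  PARALLEL = 'PARALLEL',
}

interface PipelineProps {
  pipelineName: string;
  executionMode?: ExecutionMode; // the newly added optional parameter
}

// Render the property onto the CloudFormation resource, falling back to the
// service default when the new prop is omitted.
function renderPipeline(props: PipelineProps): Record<string, string> {
  return {
    Name: props.pipelineName,
    ExecutionMode: props.executionMode ?? ExecutionMode.SUPERSEDED,
  };
}
```

Keeping the prop optional preserves backward compatibility: existing stacks that never set it continue to deploy with the service default.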
Commit: 3bb2944
fix(rds): `DatabaseCluster.instanceEndpoints` doesn't include writer endpoint (#29337)

### Issue # (if applicable)

Closes #29279.

### Reason for this change

`DatabaseCluster.instanceEndpoints` should include the writer's endpoint but doesn't.

### Description of changes

Add the writer's endpoint to `DatabaseCluster.instanceEndpoints`.

### Description of how you validated changes

Added unit tests.

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
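The shape of the fix can be sketched in isolation. The names below are hypothetical stand-ins, not the aws-cdk-lib internals; the point is that the endpoint list is now built from the writer as well as the readers.

```typescript
// Illustrative sketch of the fix (hypothetical names, not aws-cdk-lib code):
// before the fix, only reader endpoints were collected, so the writer's
// endpoint was silently missing from `instanceEndpoints`.
interface Endpoint {
  hostname: string;
  port: number;
}

function collectInstanceEndpoints(writer: Endpoint, readers: Endpoint[]): Endpoint[] {
  // After the fix: the writer is included alongside every reader.
  return [writer, ...readers];
}
```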
Commit: ca59616
chore(rds): `IO2` instance storage (#29395)

### Issue # (if applicable)

Closes #29396.

### Reason for this change

[AWS RDS now supports IO2 instance storage](https://aws.amazon.com/jp/blogs/aws/amazon-rds-now-supports-io2-block-express-volumes-for-mission-critical-database-workloads/) but the CDK cannot configure it yet.

### Description of changes

- Added the IO2 storage type to the `StorageType` enum.
- Set the default IOPS to 1000, which is the minimum value in the allowed IOPS range.

### Description of how you validated changes

Added both unit and integ tests.

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
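The default-IOPS behaviour described above can be sketched as follows. This is an illustrative reconstruction under the stated assumption that provisioned-IOPS storage types default to 1000 IOPS when none is given; it is not the aws-cdk-lib source, and the function name is hypothetical.

```typescript
// Illustrative sketch (not aws-cdk-lib code): io2 joins the set of
// provisioned-IOPS storage types, and when no IOPS value is supplied for
// such storage, the default of 1000 (the minimum of the allowed range per
// the PR description) is applied.
type StorageType = 'gp2' | 'gp3' | 'io1' | 'io2';

function resolveIops(storageType: StorageType, iops?: number): number | undefined {
  if (storageType === 'io1' || storageType === 'io2') {
    // Provisioned-IOPS storage requires an IOPS value; fall back to 1000.
    return iops ?? 1000;
  }
  return iops; // gp2/gp3 do not require one
}
```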
Commit: 8b0dbb7
fix(custom-resources): log statement exposes information prohibited by security guideline (#29406)

### Issue # (if applicable)

### Reason for this change

The current log statement logs too much content, including the response URL, which should not be logged according to the AWS Security Guideline. This change removes the input event from the log statement.

### Description of changes

Remove the log statement.

### Description of how you validated changes

N/A

### Checklist

- [x] My code adheres to the [CONTRIBUTING GUIDE](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md) and [DESIGN GUIDELINES](https://github.com/aws/aws-cdk/blob/main/docs/DESIGN_GUIDELINES.md)

----

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
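The concern behind this change can be illustrated with a small sketch. A CloudFormation custom-resource event carries a pre-signed `ResponseURL`, so logging the whole event leaks it; the field names follow the CloudFormation custom-resource request contract, while the helper itself is hypothetical (the actual fix simply deletes the offending log statement).

```typescript
// Illustrative sketch (the real fix removes the log statement entirely):
// logging the raw custom-resource event would expose the pre-signed
// ResponseURL, so if anything is logged, the URL must be dropped first.
interface CustomResourceEvent {
  RequestType: string;
  ResponseURL: string; // pre-signed S3 URL; must never appear in logs
  LogicalResourceId: string;
  [key: string]: unknown;
}

function safeLogFields(event: CustomResourceEvent): Record<string, unknown> {
  // Copy the event, then drop the sensitive URL before it reaches any logger.
  const { ResponseURL: _omitted, ...safe } = event;
  return safe;
}
```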
Commit: 11621e7
AWS CDK Team committed Mar 8, 2024 (commit 817bb32)
Commit: 47f53c2