
[Bug]: Cloudwatch Metric Alarm 'metric_query' block does not Support Period in Conjunction With Expression #28617

Closed
jhwbarlow opened this issue Dec 30, 2022 · 4 comments · Fixed by #29896
Labels
bug Addresses a defect in current functionality. service/cloudwatch Issues and PRs that pertain to the cloudwatch service.

Comments

@jhwbarlow

Terraform Core Version

1.3.6

AWS Provider Version

4.48.0

Affected Resource(s)

  • aws_cloudwatch_metric_alarm

Expected Behavior

When creating a Metric Alarm that uses an expression, the provider does not allow the period to be set. However, this is a valid configuration in CloudWatch and can be performed successfully via the web console and the API.

Actual Behavior

Because the period cannot be set in Terraform, the request sent to AWS is missing the 'Period' field and is rejected with a validation error.

Relevant Error/Panic Output Snippet

Error: Updating metric alarm failed: ValidationError: Period must not be null
│       status code: 400, request id: 95fc8690-15f5-433d-85f1-3a34e6516db8
│ 
│   with module.redshift_cluster.aws_cloudwatch_metric_alarm.wlm_total_average_queue_length[0],
│   on ../../../modules/redshift_cluster/main.tf line 378, in resource "aws_cloudwatch_metric_alarm" "wlm_total_average_queue_length":
│  378: resource "aws_cloudwatch_metric_alarm" "wlm_total_average_queue_length" {

Terraform Configuration Files

resource "aws_cloudwatch_metric_alarm" "wlm_total_average_queue_length" {
  alarm_name          = "redshift-archive-wlm-total-avg-queue-length"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 1
  datapoints_to_alarm = 1
  threshold           = 5
  alarm_description   = "The WLM queue length across all queues has exceeded 5 on average in the last 15 minutes"
  
  metric_query {   
    id          = "avg_queue_length_all_queues"
    expression  = "SELECT AVG(WLMQueueLength) FROM \"AWS/Redshift\" WHERE ClusterIdentifier = 'archive'" # No QueueName dimension as we want to get the avg of all queue lengths
    label       = "Average queue length across all WLM queues"
    return_data = true
    // period      = 15 * 60 # Unable to set in TF
  }

  alarm_actions = ["arn:aws:sns:eu-west-1:582261275102:NotifyMe"]
}

Steps to Reproduce

  • Attempt to create a CloudWatch metric alarm whose metric_query block uses an expression and requires a period, as in the configuration above

Debug Output

No response

Panic Output

No response

Important Factoids

No response

References

An example in Python (boto3) showing the successful creation of such an alarm, with the Period set alongside the Expression:

import boto3

def create_alarm(
    sns_topic_arn: str,
    redshift_cluster_identifier: str,
    evaluation_period_minutes: int=15,
    wlm_queue_total_average_max_acceptable_length: int=5
) -> None:
    cloudwatch = boto3.client('cloudwatch')

    alarm_data = {
        "AlarmName": f"redshift-{redshift_cluster_identifier}-wlm-total-avg-queue-length",
        "AlarmDescription": f"The WLM queue length across all queues has exceeded {wlm_queue_total_average_max_acceptable_length} on average in the last {evaluation_period_minutes} minutes",
        "ActionsEnabled": True,
        "AlarmActions": [sns_topic_arn],
        "EvaluationPeriods": 1,
        "DatapointsToAlarm": 1,
        "Threshold": wlm_queue_total_average_max_acceptable_length,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "TreatMissingData": "missing",
        "Metrics": [
            {
                "Id": "avg_queue_length_all_queues",
                "Expression": f"SELECT AVG(WLMQueueLength) FROM \"AWS/Redshift\" WHERE ClusterIdentifier = '{redshift_cluster_identifier}'",
                "Label": "Average queue length across all WLM queues",
                "ReturnData": True,
                "Period": evaluation_period_minutes * 60
            }
        ]
    }

    cloudwatch.put_metric_alarm(**alarm_data)

REDSHIFT_CLUSTER_IDENTIFIER = "archive"
SNS_TOPIC_ARN = "arn:aws:sns:eu-west-1:582261275102:NotifyMe"

if __name__ == "__main__":
    create_alarm(sns_topic_arn=SNS_TOPIC_ARN, redshift_cluster_identifier=REDSHIFT_CLUSTER_IDENTIFIER)

Would you like to implement a fix?

None

@jhwbarlow jhwbarlow added bug Addresses a defect in current functionality. needs-triage Waiting for first response or review from a maintainer. labels Dec 30, 2022
@github-actions

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please see our prioritization guide for information on how we prioritize.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

@github-actions github-actions bot added the service/cloudwatch Issues and PRs that pertain to the cloudwatch service. label Dec 30, 2022
@jhwbarlow jhwbarlow changed the title [Bug]: Cloudwatch Metric Alarm 'metric_query' block does not Support Period in Conjunction With 'expression' [Bug]: Cloudwatch Metric Alarm 'metric_query' block does not Support Period in Conjunction With Expression Dec 30, 2022
@jhwbarlow
Author

Just seen this PR which looks like it may do the job here.

@github-actions

This functionality has been released in v4.59.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
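
For reference, a minimal sketch of the configuration from this issue with the period set directly in the metric_query block. This assumes provider v4.59.0 or later and that the released change exposes a period argument alongside expression, as described in the linked PR:

resource "aws_cloudwatch_metric_alarm" "wlm_total_average_queue_length" {
  alarm_name          = "redshift-archive-wlm-total-avg-queue-length"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 1
  datapoints_to_alarm = 1
  threshold           = 5
  alarm_description   = "The WLM queue length across all queues has exceeded 5 on average in the last 15 minutes"

  metric_query {
    id          = "avg_queue_length_all_queues"
    expression  = "SELECT AVG(WLMQueueLength) FROM \"AWS/Redshift\" WHERE ClusterIdentifier = 'archive'"
    label       = "Average queue length across all WLM queues"
    return_data = true
    period      = 15 * 60 # assumption: the released change allows period together with expression
  }

  alarm_actions = ["arn:aws:sns:eu-west-1:582261275102:NotifyMe"]
}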

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 16, 2023