(logs): delete associated log group when stack is deleted #11549

Closed
1 of 2 tasks
duarten opened this issue Nov 18, 2020 · 19 comments · Fixed by #21113
Labels
@aws-cdk/aws-logs Related to Amazon CloudWatch Logs effort/medium Medium work item – several days of effort feature-request A feature should be added or improved. p1

Comments

@duarten
Contributor

duarten commented Nov 18, 2020

When the log group of a Lambda function is created by the LogRetention custom resource, that log group lingers after the stack has been deleted. It should be possible to configure the removal policy.

Use Case

Not having hundreds of dead log groups left around by PR and test deployments.

Proposed Solution

This requires setting the deletionPolicy on the CfnResource, and implementing support for the "Delete" event type in the custom resource itself.
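For illustration, a minimal sketch of the first half of this (CDK v2 imports assumed; `logRetentionResource` is a hypothetical placeholder, not an actual construct name). Applying a removal policy to the low-level CfnResource is what sets the CloudFormation deletionPolicy; the "Delete" event handling is sketched further below near the closing PR.

```ts
import { CfnResource, RemovalPolicy } from 'aws-cdk-lib';

// Placeholder for the CfnResource backing the Custom::LogRetention resource.
declare const logRetentionResource: CfnResource;

// Setting DESTROY translates to DeletionPolicy: Delete on the resource,
// so CloudFormation sends a Delete event to the custom resource handler.
logRetentionResource.applyRemovalPolicy(RemovalPolicy.DESTROY);
```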

Other

N/A

  • 👋 I may be able to implement this feature request
  • ⚠️ This feature might incur a breaking change

This is a 🚀 Feature Request

@duarten duarten added feature-request A feature should be added or improved. needs-triage This issue or PR still needs to be triaged. labels Nov 18, 2020
@SomayaB SomayaB changed the title lambda: delete associated log group when stack is deleted [lambda] delete associated log group when stack is deleted Nov 20, 2020
@github-actions github-actions bot added the @aws-cdk/aws-lambda Related to AWS Lambda label Nov 20, 2020
@nija-at
Contributor

nija-at commented Nov 25, 2020

We could add this feature on the LogRetention construct and update the handler to support the Delete event.

Note that the default must be to retain the logs.

@nija-at nija-at changed the title [lambda] delete associated log group when stack is deleted [logs] delete associated log group when stack is deleted Nov 25, 2020
@nija-at nija-at removed their assignment Nov 25, 2020
@nija-at nija-at added @aws-cdk/aws-logs Related to Amazon CloudWatch Logs and removed @aws-cdk/aws-lambda Related to AWS Lambda labels Nov 25, 2020
@SomayaB SomayaB changed the title [logs] delete associated log group when stack is deleted (logs): delete associated log group when stack is deleted Nov 25, 2020
@shivlaks shivlaks added p1 effort/medium Medium work item – several days of effort labels Dec 17, 2020
@SomayaB SomayaB added p1 and removed p1 needs-triage This issue or PR still needs to be triaged. labels Dec 17, 2020
@NGL321 NGL321 assigned rix0rrr and unassigned shivlaks Jan 25, 2021
@jagregory

This would be a really useful feature. I'm creating a lot of temporary stacks for automated testing in CI, and it's leaving behind lots of log groups.

@wswoboda

Agreed, for the same reason as jagregory.

@kornicameister
Contributor

I am tempted to give it a shot. I've recently been playing with CDK internals to learn a bit more, and this particular use case is close to my heart as well. Not promising anything, but I will certainly play with it.

@FilipPyrek

+1

Sorry, I'm new to CDK universe (but not to CloudFormation).

Does anybody know why the log group needs to be created by a custom resource? I've successfully accomplished this without the need for any custom resources in CloudFormation/Serverless Framework in the past.

@mlafeldt

Does anybody know why the log group needs to be created by a custom resource?

It creates the log group to be able to set the retention time. Otherwise, the log group will only be created once a Lambda logs something - with no expiry date set.
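For illustration, here is roughly what that convenience looks like in CDK code: the `logRetention` prop on a Lambda function is what deploys the LogRetention custom resource under the hood. A minimal sketch assuming CDK v2 inside a Stack; names and the asset path are placeholders.

```ts
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as logs from 'aws-cdk-lib/aws-logs';

// logRetention makes CDK create the log group up front (via the LogRetention
// custom resource) so an expiry is set before the function ever logs anything.
const fn = new lambda.Function(this, 'MyFunction', {
  runtime: lambda.Runtime.NODEJS_16_X,
  handler: 'index.handler',
  code: lambda.Code.fromAsset('lambda'),
  logRetention: logs.RetentionDays.ONE_WEEK,
});
```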

@FilipPyrek

@mlafeldt you can create the log group inside CloudFormation and set its retention as a parameter.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-loggroup.html#cfn-logs-loggroup-retentionindays

Then, when the stack gets removed, the log group is removed as well.

That said, I do remember a case where a Lambda executed during stack removal after the log group had already been deleted; since there was a small delay before CloudWatch delivered the logs, it created a new log group with "Never expire" retention.

So maybe it's trying to address this issue. 🤷‍♂️

@mlafeldt

@mlafeldt you can create the log group inside CloudFormation and set its retention as a parameter.

Yes, that's what I do if I truly care about cleanup.

Most of the time I do not and LogRetention is just more convenient.

@mlafeldt

PS: A friend of mine recently released https://github.com/itmettkeDE/lambda-delete-unnused-aws-loggroups, which cleans up Lambda and CodeBuild logs.

@shishkin

We could add this feature on the LogRetention construct and update the handler to support the Delete event.

Yes, LogRetention needs a RemovalPolicy that, when set to DESTROY, would delete the log group.
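A rough sketch of what such an option could look like on the LogRetention construct. The `removalPolicy` prop name here is an assumption for illustration, not necessarily the API that was eventually merged.

```ts
import { RemovalPolicy } from 'aws-cdk-lib';
import * as logs from 'aws-cdk-lib/aws-logs';

// Hypothetical shape of the requested option; `removalPolicy` is assumed.
new logs.LogRetention(this, 'LogRetention', {
  logGroupName: '/aws/lambda/my-function',
  retention: logs.RetentionDays.ONE_WEEK,
  removalPolicy: RemovalPolicy.DESTROY, // assumed prop: destroy the group with the stack
});
```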

@RomainMuller
Contributor

@shishkin but then what'll delete the LogRetention's backing lambda function's log group after it gets deleted? 🤓

@shishkin

@shishkin but then what'll delete the LogRetention's backing lambda function's log group after it gets deleted? 🤓

If the LogRetention lambda is a singleton resource, its single log group is a lesser evil than the log groups created for each user lambda.

And if it is a singleton resource with a known log group name, maybe another custom resource could delete it as part of the stack deletion? And if that resource lacks CloudWatch permissions, it would not leave behind any logs of its own 🤔

@greensmith

And if it is a singleton resource with a known log group name, maybe another custom resource could delete it as part of the stack deletion? And if that resource lacks CloudWatch permissions, it would not leave behind any logs of its own

Maybe I'm completely missing something here, but I'm not sure why we should be so reliant on Cfn custom resources for everything. Why can we not handle this post-destroy?

Can we keep track of what Cfn has deployed, along with any "possible" children (e.g. Lambda log groups), and then present information on orphaned resources that are not removed during Cfn delete?

This would be useful in CI to just say during post-build teardowns "here's a list of possibly orphaned resources - delete them"

@TheFlow0360

Creating a log group with the same name is a workaround:

```ts
// CDK v2 (aws-cdk-lib) imports
import { RemovalPolicy } from 'aws-cdk-lib';
import * as logs from 'aws-cdk-lib/aws-logs';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';

const nodejsFunction = new NodejsFunction(this, id, {
  // ... function props
});
// Create the log group explicitly, using the name Lambda would have created,
// so the stack owns it and can delete it on removal.
new logs.LogGroup(this, `LogGroupLambdaFunction ${id}`, {
  retention: logs.RetentionDays.ONE_WEEK,
  logGroupName: `/aws/lambda/${nodejsFunction.functionName}`,
  removalPolicy: RemovalPolicy.DESTROY,
});
```

This, however, breaks other integrations with the custom resource, like subscriptions, so a way to directly configure the RemovalPolicy would be better.

@greensmith

Another alternative is changing policies on the role to deny log access, so the log groups are never created in the first place.
E.g. for BucketDeployment, when deploying/destroying multiple dev branches I'd rather not have to go in and tidy up after a destroy, and I don't need 99 out of 100 logs saying the zip contents were extracted successfully.

```ts
import { SingletonFunction } from 'aws-cdk-lib/aws-lambda';
import { Effect, PolicyStatement } from 'aws-cdk-lib/aws-iam';

// Deny the custom resource handler all CloudWatch Logs actions so it never
// creates a log group in the first place.
const bucketDeploymentHandler = this.bucketDeployment.node.findChild('CustomResourceHandler') as SingletonFunction;
bucketDeploymentHandler.addToRolePolicy(new PolicyStatement({
  actions: ['logs:*'],
  resources: ['*'],
  effect: Effect.DENY,
}));
```

@mergify mergify bot closed this as completed in #21113 Aug 8, 2022
mergify bot pushed a commit that referenced this issue Aug 8, 2022
----
Motivation: The log group created by LogRetention lingers after the stack is deleted. This commit allows users to choose whether to retain or destroy associated logs of the deleted stack.

Use case: To avoid having hundreds of dead log groups after the stack is deleted.

Most salient design aspects:
Added an enum LogDeletionPolicy and set it in the LogRetention custom resource. Added a deleteLogGroup API and added a 'Delete' event to the LogRetention handler to delete the log group.

closes [#11549](#11549)

### All Submissions:

* [x] Have you followed the guidelines in our [Contributing guide?](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md)

### Adding new Unconventional Dependencies:

* [ ] This PR adds new unconventional dependencies following the process described [here](https://github.com/aws/aws-cdk/blob/main/CONTRIBUTING.md/#adding-new-unconventional-dependencies)

### New Features

* [x] Have you added the new feature to an [integration test](https://github.com/aws/aws-cdk/blob/main/INTEGRATION_TESTS.md)?
	* [x] Did you use `yarn integ` to deploy the infrastructure and generate the snapshot (i.e. `yarn integ` without `--dry-run`)?

*By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
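The commit message above mentions handling a 'Delete' event in the LogRetention handler. A minimal sketch of what that path might look like, assuming the AWS SDK v2 CloudWatch Logs API; the event shape and property names are assumptions, not the merged code.

```ts
import { CloudWatchLogs } from 'aws-sdk';

const logsClient = new CloudWatchLogs();

// Sketch only: event shape and property names are assumptions.
export async function handler(event: any) {
  const logGroupName = event.ResourceProperties?.LogGroupName;
  if (event.RequestType === 'Delete' && logGroupName) {
    // Remove the log group when the custom resource (and its stack) is deleted.
    await logsClient.deleteLogGroup({ logGroupName }).promise();
  }
  // Create/Update paths (e.g. putRetentionPolicy) are omitted from this sketch.
}
```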
@github-actions

github-actions bot commented Aug 8, 2022

⚠️COMMENT VISIBILITY WARNING⚠️

Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
If you wish to keep having a conversation with other community members under this issue feel free to do so.

jmortlock pushed a commit to jmortlock/aws-cdk that referenced this issue Aug 8, 2022
josephedward pushed a commit to josephedward/aws-cdk that referenced this issue Aug 30, 2022
@albertodiazdorado

Bump

This is a very important feature IMO

@gshpychka
Contributor

Bump

This is a very important feature IMO

It has been implemented.

@albertodiazdorado

Oh, that's great. Thanks for letting me know, @gshpychka!
