[aws-eks] Enable Control Plane logs in EKS cluster #4159
Comments
Hi @stefanolczak, thanks for submitting a feature request! We will update this issue when there is any progress.
Any updates on this? Waiting for this feature as well.
This is not highly prioritized at the moment, but more than happy to take contributions.
Note that there is an abandoned PR for this: #8497. Consider resurrecting it once we pick this up again.
Any update on this feature?
@rameshmimit We are discussing this issue internally, we'll update here soon.
In the meantime, what workarounds are available? It seems to me that click-ops or the AWS CLI are the alternatives, but neither is amenable to automation - is that correct? The CLI will error if there's no change required.
Since cluster logging can be updated post cluster creation, you can create a custom resource that updates that config.
Any update on this?
If anybody wants the workaround, here's code we wrote in our CDK to enable the logging for the cluster using an AwsCustomResource:

import { FargateCluster } from "@aws-cdk/aws-eks";
import { Stack } from "@aws-cdk/core";
import {
  AwsCustomResource,
  AwsCustomResourcePolicy,
  PhysicalResourceId,
} from "@aws-cdk/custom-resources";

// Enables control plane logs for the cluster.
//
// Taken from
// https://github.com/aws/aws-cdk/issues/4159#issuecomment-855625700
export function setupClusterLogging(
  stack: Stack,
  cluster: FargateCluster
): void {
  new AwsCustomResource(stack, "ClusterLogsEnabler", {
    policy: AwsCustomResourcePolicy.fromSdkCalls({
      resources: [`${cluster.clusterArn}/update-config`],
    }),
    // Enable all five control plane log types when the stack is created.
    onCreate: {
      physicalResourceId: PhysicalResourceId.of(`${cluster.clusterArn}/LogsEnabler`),
      service: "EKS",
      action: "updateClusterConfig",
      region: stack.region,
      parameters: {
        name: cluster.clusterName,
        logging: {
          clusterLogging: [
            {
              enabled: true,
              types: [
                "api",
                "audit",
                "authenticator",
                "controllerManager",
                "scheduler",
              ],
            },
          ],
        },
      },
    },
    // Disable them again when the stack (or this resource) is deleted.
    onDelete: {
      physicalResourceId: PhysicalResourceId.of(`${cluster.clusterArn}/LogsEnabler`),
      service: "EKS",
      action: "updateClusterConfig",
      region: stack.region,
      parameters: {
        name: cluster.clusterName,
        logging: {
          clusterLogging: [
            {
              enabled: false,
              types: [
                "api",
                "audit",
                "authenticator",
                "controllerManager",
                "scheduler",
              ],
            },
          ],
        },
      },
    },
  });
}
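For completeness, here is a minimal sketch of how that helper might be wired into a stack. The app, stack, and cluster names, the import path, and the Kubernetes version below are illustrative, not from the original comment:

import { FargateCluster, KubernetesVersion } from "@aws-cdk/aws-eks";
import { App, Stack } from "@aws-cdk/core";
import { setupClusterLogging } from "./setup-cluster-logging"; // path is illustrative

const app = new App();
const stack = new Stack(app, "EksLoggingStack");

const cluster = new FargateCluster(stack, "Cluster", {
  version: KubernetesVersion.V1_21,
});

// Enable all five control plane log types via the custom resource above.
setupClusterLogging(stack, cluster);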
So I did the above with an AwsCustomResource and it worked great until we wanted to create a FargateProfile. Even adding an explicit dependency from the FargateProfile on the logging custom resource didn't help - it fails with a "you can't update the logs while we are creating a Fargate Profile" error. Having fought with it all day, I am now having to look at going lower level to a Lambda-backed custom resource to get access to an is_complete_handler or something.
@jasonumiker in our case we're using Fargate, but the logging config is set after Fargate has been set up. We're also using
Thanks - yeah I eventually tried flipping the dependency and that seems to work 🤞
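A minimal sketch of what that flipped dependency might look like, assuming the AwsCustomResource from the snippet above is kept in a variable; the variable names, profile id, and selector below are illustrative, not taken from these comments:

import { FargateCluster } from "@aws-cdk/aws-eks";
import { AwsCustomResource } from "@aws-cdk/custom-resources";

// Assumed to be in scope: the cluster and the logging custom resource created
// earlier (setupClusterLogging would need to return it for this to work).
declare const cluster: FargateCluster;
declare const logsEnabler: AwsCustomResource;

// An additional Fargate profile; the id and selector are illustrative.
const profile = cluster.addFargateProfile("ExtraProfile", {
  selectors: [{ namespace: "default" }],
});

// "Flipping the dependency": the logging update now waits for the Fargate
// profile instead of the profile waiting for the logging update, so EKS is
// not asked to change logging while the profile is still being created.
logsEnabler.node.addDependency(profile);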
CloudFormation (NOT CDK) supports EKS control plane logging settings. |
I thought this would be easy... just use an escape hatch, like this:

const cfnCluster = cluster.node.defaultChild as eks.CfnCluster;
cfnCluster.logging = {
  clusterLogging: {
    enabledTypes: [
      { type: 'api' },
      { type: 'audit' },
      { type: 'authenticator' },
      { type: 'controllerManager' },
      { type: 'scheduler' },
    ],
  },
};

But alas, it seems like the CDK does not use the low-level CfnCluster here:
aws-cdk/packages/@aws-cdk/aws-eks/lib/cluster-resource.ts, lines 73 to 102 in b8a4a9a
Long story short, the quick "escape hatch" hack does not work 😭
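A small illustrative guard, not code from this thread, that makes the failure explicit: when the cluster is backed by the custom resource provider, node.defaultChild is not a CfnCluster, so the cast compiles but setting properties on it changes nothing in the synthesized template. The function name below is invented:

import * as eks from "@aws-cdk/aws-eks";

// Only apply the escape hatch when the cluster is really backed by the
// low-level CfnCluster; otherwise fall back to the AwsCustomResource
// approach shown earlier in this thread.
export function tryEnableLoggingViaEscapeHatch(cluster: eks.Cluster): boolean {
  const child = cluster.node.defaultChild;
  if (!(child instanceof eks.CfnCluster)) {
    // Custom-resource-backed cluster: the escape hatch cannot work here.
    return false;
  }
  child.logging = {
    clusterLogging: {
      enabledTypes: [
        { type: "api" },
        { type: "audit" },
        { type: "authenticator" },
        { type: "controllerManager" },
        { type: "scheduler" },
      ],
    },
  };
  return true;
}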
Fixes #4159 ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Use Case
Enabling control plane logging in an EKS cluster is only possible by calling the EKS API after the cluster is created. Doing this in the CDK requires creating a Custom Resource with code that calls the API. It would be nice to have it as an argument when creating an EKS cluster from the CDK.
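To make the ask concrete, here is a hypothetical sketch of what such an argument might look like; the prop name and shape are invented for illustration and are not part of the library described in this issue:

// Invented prop shape, for illustration only.
export interface ClusterLoggingOptions {
  // Control plane log types to enable when the cluster is created.
  readonly clusterLogging?: Array<
    "api" | "audit" | "authenticator" | "controllerManager" | "scheduler"
  >;
}

// Intended usage (hypothetical):
//
//   new eks.Cluster(stack, "Cluster", {
//     version: eks.KubernetesVersion.V1_21,
//     clusterLogging: ["api", "audit", "authenticator"],
//   });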
Proposed Solution
Since the EKS cluster is created from a Python Lambda when the kubectlEnabled flag is enabled, there is a simple way to create the EKS cluster with logging enabled. Currently the Lambda code uses the boto3 method eks.create_cluster(), which accepts arguments to enable logging on the created cluster (https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/eks.html#EKS.Client.create_cluster).
The Lambda passes config as the argument to this method:
aws-cdk/packages/@aws-cdk/aws-eks/lib/cluster-resource/index.py, line 69 in c3b3c93
The config is passed as the properties of the custom resource and is created here:
aws-cdk/packages/@aws-cdk/aws-eks/lib/cluster.ts, lines 364 to 379 in c3b3c93
So I suggest exposing a way to include logging properties in the config, so that they are passed to the eks.create_cluster() method without any further changes. That should result in logging being enabled on the newly created EKS cluster.
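A rough sketch of the idea, assuming the config object built in cluster.ts simply gains an optional logging key that is forwarded unchanged; the logging shape mirrors what eks.create_cluster() documents, while the surrounding field values here are purely illustrative:

// Hypothetical extension of the config passed to the cluster custom resource.
// Everything under `logging` matches the structure boto3's eks.create_cluster()
// already accepts, so the Python handler would not need to change.
const clusterConfig = {
  name: "my-cluster",                                          // illustrative
  roleArn: "arn:aws:iam::111122223333:role/eks-service-role",  // illustrative
  resourcesVpcConfig: {
    subnetIds: ["subnet-aaaa", "subnet-bbbb"],                 // illustrative
  },
  logging: {
    clusterLogging: [
      {
        enabled: true,
        types: ["api", "audit", "authenticator", "controllerManager", "scheduler"],
      },
    ],
  },
};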
This is a 🚀 Feature Request