Consider applying kube-bench/CIS k8s benchmark to node #99
I took a look at kube-bench and completely agree that these tests should be passing. It looks like there are some issues with the tool that invalidate a lot of the results. I haven't looked into all the checks yet, but this is what I have found so far:
I'll create a custom config and see if I can get more valid results (and pass them along).
So I am not using the k8s CIS benchmarks here (just the Ubuntu/AL2 benchmarks): https://github.com/hobbsh/hardened-eks-ami. It would be great to get the k8s benchmarks in anywhere!
We've got this on our internal backlog here at work; I'd be glad to put some time into a PR if no one else has started work on this yet.
I believe this is now solved here: https://github.com/aquasecurity/kube-bench/blob/master/job-eks.yaml. I have been able to get 20 checks passing and 2 failing by populating the kubelet-config.json file with the appropriate values. On initial setup my kubelet-config file looks as follows:

```json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "address": "0.0.0.0",
  "authentication": {
    "anonymous": {
      "enabled": false
    },
    "webhook": {
      "cacheTTL": "2m0s",
      "enabled": true
    },
    "x509": {
      "clientCAFile": "/etc/kubernetes/pki/ca.crt"
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "clusterDomain": "cluster.local",
  "hairpinMode": "hairpin-veth",
  "cgroupDriver": "cgroupfs",
  "cgroupRoot": "/",
  "featureGates": {
    "RotateKubeletServerCertificate": true
  },
  "serializeImagePulls": false,
  "serverTLSBootstrap": true,
  "configMapAndSecretChangeDetectionStrategy": "Cache",
  "clusterDNS": [
    "172.20.0.10"
  ],
  "maxPods": 11,
  "tlsCipherSuites": [
    "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
    "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
    "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
    "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
    "TLS_RSA_WITH_AES_256_GCM_SHA384",
    "TLS_RSA_WITH_AES_128_GCM_SHA256"
  ],
  "readOnlyPort": 0,
  "streamingConnectionIdleTimeout": "5m",
  "protectKernelDefaults": true,
  "eventRecordQPS": 0,
  "rotateCertificates": true
}
```

Of the two failures, one is the TLS certificate location, which I resolved by adding:

```json
"tlsCertFile": "/var/lib/kubelet/pki/kubelet-server-current.pem",
"tlsPrivateKeyFile": "/var/lib/kubelet/pki/kubelet-server-current.pem"
```

That is how I am currently getting all the CIS checks to pass. Sorry for the lengthy post, but I guess my question is: has anybody managed to get the TLS issue resolved before the node joins the cluster, or would you ignore that check, knowing that EKS handles TLS itself?
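Some context on that question: with `serverTLSBootstrap: true`, the kubelet only obtains its serving certificate by filing a CSR once it can reach the API server, so the TLS check can't pass before the node joins. A sketch of handling this from the cluster side, assuming a cluster recent enough to support the `kubernetes.io/kubelet-serving` signer (this is an editorial illustration, not a command from this thread):

```bash
# List kubelet serving-certificate CSRs
# (filtering on spec.signerName requires Kubernetes v1.18+):
kubectl get csr --field-selector spec.signerName=kubernetes.io/kubelet-serving

# Serving CSRs are never auto-approved by the cluster;
# approve each pending one by name:
kubectl certificate approve <csr-name>
```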
Dumb question: how can I apply this configuration to my worker nodes without manually copying it? Fairly new to EKS.
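One way to avoid hand-copying on stock EKS AMIs is to pass the hardened settings as kubelet flags in the node's EC2 user data. A minimal sketch, assuming the AMI's standard `/etc/eks/bootstrap.sh` entry point; the cluster name and flag values here are illustrative, not taken from this thread:

```bash
#!/bin/bash
# Sketch: EC2 user data for an EKS worker node. bootstrap.sh renders the
# kubelet configuration, and --kubelet-extra-args appends extra kubelet
# flags on top of it. "my-cluster" and the flag values are placeholders.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--protect-kernel-defaults=true --event-qps=0 --read-only-port=0'
```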
I'm having trouble getting one of the checks sorted out. I believe it may be related to #207 and to client certificates not being supported for authentication in EKS. @johndavies91, did you manage to get that one passing?
Just to give a status update: I ran this today (2020-02-14), using kube-bench from master (commit 17cd10478809f0b36a5a72d55b5a520bb3c6a85b).
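For anyone wanting to reproduce such a run, a minimal sketch of invoking the node checks via the upstream kube-bench container (image name and mount paths per the kube-bench README, not the exact command from the comment above):

```bash
# Run only the worker-node checks from the upstream container,
# with the host paths kube-bench inspects mounted read-only.
docker run --rm --pid=host \
  -v /etc:/etc:ro \
  -v /var:/var:ro \
  aquasec/kube-bench:latest run --targets node
```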
Hello, sorry to bring this back, but is this on the roadmap? Do you have any news? :) Thanks!
Is this still relevant? We now have a benchmark optimized for EKS.

```
sh-4.2$ kube-bench run --targets node --benchmark eks-1.0.1
[INFO] 3 Worker Node Security Configuration
[INFO] 3.1 Worker Node Configuration Files
[PASS] 3.1.1 Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Manual)
[PASS] 3.1.2 Ensure that the kubelet kubeconfig file ownership is set to root:root (Manual)
[PASS] 3.1.3 Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Manual)
[PASS] 3.1.4 Ensure that the kubelet configuration file ownership is set to root:root (Manual)
[INFO] 3.2 Kubelet
[PASS] 3.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
[PASS] 3.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 3.2.3 Ensure that the --client-ca-file argument is set as appropriate (Manual)
[PASS] 3.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
[PASS] 3.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
[PASS] 3.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
[PASS] 3.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
[PASS] 3.2.8 Ensure that the --hostname-override argument is not set (Manual)
[WARN] 3.2.9 Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)
[PASS] 3.2.10 Ensure that the --rotate-certificates argument is not set to false (Manual)
[PASS] 3.2.11 Ensure that the RotateKubeletServerCertificate argument is set to true (Manual)
== Remediations node ==
3.2.9 If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service
== Summary node ==
14 checks PASS
0 checks FAIL
1 checks WARN
0 checks INFO
== Summary total ==
14 checks PASS
0 checks FAIL
1 checks WARN
0 checks INFO
```
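For the one remaining WARN (3.2.9), a sketch of applying the remediation on a node. The kubelet config path is the one used by the stock EKS AMI, and the chosen value is an assumption for illustration, not from this thread; the check accepts 0 (unlimited event capture) or another level appropriate for your environment:

```bash
# Set eventRecordQPS in the kubelet config on an EKS node.
# The path matches the stock EKS AMI; the value 5 is only illustrative.
sudo jq '.eventRecordQPS = 5' /etc/kubernetes/kubelet/kubelet-config.json \
  | sudo tee /etc/kubernetes/kubelet/kubelet-config.json.new > /dev/null
sudo mv /etc/kubernetes/kubelet/kubelet-config.json.new \
        /etc/kubernetes/kubelet/kubelet-config.json
sudo systemctl restart kubelet
```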
I've attached a run of kube-bench that applies the k8s CIS benchmark against the nodes. The remediations are included in the output of the tool.
Should these be the defaults for the created AMI (or should they be applied in another fashion)? Obviously, I think they should be the default.
kube-bench_node_output.txt