Consider applying kube-bench/CIS k8s benchmark to node #99

Open
tcotav opened this issue Nov 15, 2018 · 11 comments

Comments

@tcotav

tcotav commented Nov 15, 2018

I've attached a run of kube-bench that applies the k8s CIS benchmark against the nodes. The remediations are included in the output of the tool.

Should this be the defaults for the created AMI (or should it be applied in another fashion)? Obviously I think it should be the default.

kube-bench_node_output.txt
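For anyone who wants to reproduce this, kube-bench can be run directly on a node. A minimal sketch on recent kube-bench versions (the image tag, mounts, and subcommand are illustrative, not the exact invocation used for the attached output):

docker run --rm --pid=host \
  -v /etc:/etc:ro -v /var:/var:ro \
  -t aquasec/kube-bench:latest run --targets node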

@brycecarman

I took a look at kube-bench and completely agree that these tests should be passing. It looks like there are some issues with the tool that invalidate a lot of the results.

I didn't look into all the checks yet, but this is what I have found so far:

  • The tool looks in the wrong locations for config files. Rather than telling you it can't find a file, it just marks the check as failed.
  • The tool expects all kubelet arguments to be passed on the command line. We just switched over to using a kubelet config file, so the tool doesn't see a lot of the actual configuration (a sketch of inspecting the effective config follows below).
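One way to see what the kubelet is actually running with, regardless of whether a setting came from a flag or the config file, is the node's configz endpoint. A sketch, where <node-name> is a placeholder:

kubectl get --raw "/api/v1/nodes/<node-name>/proxy/configz" | jq .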

@tcotav
Author

tcotav commented Nov 19, 2018

I'll create a custom config and see if I can get more valid results (and pass them along).

@hobbsh
Contributor

hobbsh commented Dec 13, 2018

I'm not using the k8s CIS benchmarks (just the Ubuntu/AL2 benchmarks) here: https://github.com/hobbsh/hardened-eks-ami. It would be great to get the k8s benchmarks in anywhere!

@brandond

We've got this on our internal backlog here at work; I'd be glad to put some time into a PR if no one else has started work on this yet.

@johndavies91

johndavies91 commented May 22, 2019

> I took a look at kube-bench and completely agree that these tests should be passing. It looks like there are some issues with the tool that invalidate a lot of the results.
>
> I didn't look into all the checks yet, but this is what I have found so far:
>
>   • The tool looks in the wrong locations for config files. Rather than telling you it can't find a file, it just marks the check as failed.
>   • The tool expects all kubelet arguments to be passed on the command line. We just switched over to using a kubelet config file, so the tool doesn't see a lot of the actual configuration.

I believe this is now solved here - https://github.com/aquasecurity/kube-bench/blob/master/job-eks.yaml

I have been able to get 20 checks passing and 2 failing by populating the kubelet-config.json file with the appropriate values. On initial setup, my kubelet-config.json file looks as follows:

{
    "kind": "KubeletConfiguration",
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "address": "0.0.0.0",
    "authentication": {
        "anonymous": {
            "enabled": false
        },
        "webhook": {
            "cacheTTL": "2m0s",
            "enabled": true
        },
        "x509": {
            "clientCAFile": "/etc/kubernetes/pki/ca.crt"
        }
    },
    "authorization": {
        "mode": "Webhook",
        "webhook": {
            "cacheAuthorizedTTL": "5m0s",
            "cacheUnauthorizedTTL": "30s"
        }
    },
    "clusterDomain": "cluster.local",
    "hairpinMode": "hairpin-veth",
    "cgroupDriver": "cgroupfs",
    "cgroupRoot": "/",
    "featureGates": {
        "RotateKubeletServerCertificate": true
    },
    "serializeImagePulls": false,
    "serverTLSBootstrap": true,
    "configMapAndSecretChangeDetectionStrategy": "Cache",
    "clusterDNS": [
        "172.20.0.10"
    ],
    "maxPods": 11,
    "TLSCipherSuites": "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256",
    "ReadOnlyPort": 0,
    "StreamingConnectionIdleTimeout": "5m",
    "protectKernelDefaults": true,
    "eventRecordQPS": 0,
    "RotateCertificates": true
}

The 2 failures are:
[FAIL] 2.1.1 Ensure that the --allow-privileged argument is set to false (Scored)
This is now a deprecated security control for the kubelet, so it can be ignored.

The other failure is the TLS location:
[FAIL] 2.1.11 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)
I believe the TLS cert file is only generated after the node is added to the cluster, so currently I'm manually adding the node to the cluster as per the docs and then adding the following two lines to the kubelet-config.json file to specify the location:

\"tlsCertFile\": \"/var/lib/kubelet/pki/kubelet-server-current.pem\",
    \"tlsPrivateKeyFile\": \"/var/lib/kubelet/pki/kubelet-server-current.pem\",

That is how I am currently getting all of the CIS checks to pass...

Sorry for the lengthy post, but I guess my question is: has anybody managed to resolve the TLS issue before the node joins the cluster, or would you ignore that check, knowing that EKS handles TLS itself?
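For context, one possible workflow with serverTLSBootstrap: true is that the kubelet only requests its serving certificate after it joins the cluster, and that CSR may need approving before the file at /var/lib/kubelet/pki/kubelet-server-current.pem exists. A rough sketch, where <csr-name> is a placeholder:

kubectl get csr
kubectl certificate approve <csr-name>
# once approved, the kubelet stores the serving cert as /var/lib/kubelet/pki/kubelet-server-current.pem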

@munntjlx

Dumb question: how can I apply this configuration to my worker nodes without manually copying it? I'm fairly new to EKS.
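Not a complete answer, but the EKS AMI's bootstrap script accepts extra kubelet flags, so settings can be pushed to every worker node from the launch template / user data instead of copying files by hand. A sketch (cluster name and flags are illustrative):

#!/bin/bash
# Example EC2 user data for EKS worker nodes; my-cluster is a placeholder.
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--protect-kernel-defaults=true --read-only-port=0 --event-qps=0'
# note: --protect-kernel-defaults=true requires the matching kernel sysctls on the host,
# otherwise the kubelet will refuse to start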

@bmcustodio
Contributor

bmcustodio commented Oct 7, 2019

I'm having trouble getting the following one sorted out:

 [FAIL] 2.1.13 Ensure that the --rotate-certificates argument is not set to false (Scored)

I believe it may be related to #207 and to client certificates not being supported for authentication in EKS. @johndavies91 did you manage to get kube-bench to pass with --rotate-certificates=true? Could you please share what changes you made to allow for this?
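In case it helps narrow it down, a quick way to check what the node is actually configured with (paths as on the EKS AMI; adjust if yours differ):

grep -i rotate /etc/kubernetes/kubelet/kubelet-config.json
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep -i rotate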

@aissarmurad

Just to give a status update...

I ran this today (2020-02-14)
Using amazon-eks-node-1.14-v20191213 (ami-087a82f6b78a07557) in us-east-1

I used kube-bench from master (commit 17cd10478809f0b36a5a72d55b5a520bb3c6a85b)

parameter: --benchmark cis-1.4
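(For completeness, the full invocation would have been along the lines of the following; the exact subcommand depends on the kube-bench build:)

kube-bench node --benchmark cis-1.4
# or, on newer builds:
kube-bench run --targets node --benchmark cis-1.4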

[INFO] 2 Worker Node Security Configuration
[INFO] 2.1 Kubelet
[PASS] 2.1.1 Ensure that the --anonymous-auth argument is set to false (Scored)
[PASS] 2.1.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Scored)
[PASS] 2.1.3 Ensure that the --client-ca-file argument is set as appropriate (Scored)
[FAIL] 2.1.4 Ensure that the --read-only-port argument is set to 0 (Scored)
[PASS] 2.1.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Scored)
[FAIL] 2.1.6 Ensure that the --protect-kernel-defaults argument is set to true (Scored)
[PASS] 2.1.7 Ensure that the --make-iptables-util-chains argument is set to true (Scored)
[PASS] 2.1.8 Ensure that the --hostname-override argument is not set (Scored)
[FAIL] 2.1.9 Ensure that the --event-qps argument is set to 0 (Scored)
[FAIL] 2.1.10 Ensure that the --tls-cert-file and --tls-private-key-file arguments are set as appropriate (Scored)
[INFO] 2.1.11 [DEPRECATED] Ensure that the --cadvisor-port argument is set to 0
[PASS] 2.1.12 Ensure that the --rotate-certificates argument is not set to false (Scored)
[PASS] 2.1.13 Ensure that the RotateKubeletServerCertificate argument is set to true (Scored)
[WARN] 2.1.14 Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers (Not Scored)
[INFO] 2.2 Configuration Files
[PASS] 2.2.1 Ensure that the kubelet.conf file permissions are set to 644 or more restrictive (Scored)
[PASS] 2.2.2 Ensure that the kubelet.conf file ownership is set to root:root (Scored)
[PASS] 2.2.3 Ensure that the kubelet service file permissions are set to 644 or more restrictive (Scored)
[PASS] 2.2.4 Ensure that the kubelet service file ownership is set to root:root (Scored)
[PASS] 2.2.5 Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive (Scored)
[PASS] 2.2.6 Ensure that the proxy kubeconfig file ownership is set to root:root (Scored)
[FAIL] 2.2.7 Ensure that the certificate authorities file permissions are set to 644 or more restrictive (Scored)
[FAIL] 2.2.8 Ensure that the client certificate authorities file ownership is set to root:root (Scored)
[PASS] 2.2.9 Ensure that the kubelet configuration file ownership is set to root:root (Scored)
[PASS] 2.2.10 Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Scored)

== Remediations ==
2.1.4 If using a Kubelet config file, edit the file to set readOnlyPort to 0 .
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--read-only-port=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

2.1.6 If using a Kubelet config file, edit the file to set protectKernelDefaults: true .
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--protect-kernel-defaults=true
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

2.1.9 If using a Kubelet config file, edit the file to set eventRecordQPS: 0 .
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
--event-qps=0
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

2.1.10 If using a Kubelet config file, edit the file to set tlsCertFile to the location of the certificate
file to use to identify this Kubelet, and tlsPrivateKeyFile to the location of the
corresponding private key file.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service on each worker node and
set the below parameters in KUBELET_CERTIFICATE_ARGS variable.
--tls-cert-file=<path/to/tls-certificate-file>
--tls-private-key-file=<path/to/tls-key-file>
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service

2.1.14 If using a Kubelet config file, edit the file to set TLSCipherSuites: to TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256
If using executable arguments, edit the kubelet service file /etc/systemd/system/kubelet.service on each worker node and set the below parameter.
--tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_GCM_SHA256

2.2.7 Run the following command to modify the file permissions of the --client-ca-file
chmod 644 <filename>

2.2.8 Run the following command to modify the ownership of the --client-ca-file .
chown root:root <filename>


== Summary ==
16 checks PASS
6 checks FAIL
1 checks WARN
1 checks INFO
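For anyone wanting to apply the failing remediations on a node, a rough sketch, assuming the kubelet config lives at /etc/kubernetes/kubelet/kubelet-config.json as on current EKS AMIs (the TLS items are covered earlier in the thread):

CONF=/etc/kubernetes/kubelet/kubelet-config.json
# readOnlyPort, protectKernelDefaults and eventRecordQPS map to checks 2.1.4, 2.1.6 and 2.1.9
jq '.readOnlyPort = 0 | .protectKernelDefaults = true | .eventRecordQPS = 0' "$CONF" > /tmp/kubelet-config.json \
  && mv /tmp/kubelet-config.json "$CONF"
# note: protectKernelDefaults=true requires the matching kernel sysctls, otherwise the kubelet will not start
systemctl daemon-reload && systemctl restart kubelet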

@oceaneLonneux

Hello, sorry to bring this back, but is this on the roadmap? Do you have any news? :)
It would be really nice to have a default EKS AMI with all of these checks passing (at least the ones that are looking in the wrong place and are actually already correct). Even a one-page document would be appreciated, so we don't have to look in different places to get the same answer.

Thanks!

@trallnag

trallnag commented Jun 8, 2022

Is this still relevant? We now have a benchmark optimized for EKS.

sh-4.2$ kube-bench run --targets node --benchmark eks-1.0.1
[INFO] 3 Worker Node Security Configuration
[INFO] 3.1 Worker Node Configuration Files
[PASS] 3.1.1 Ensure that the kubeconfig file permissions are set to 644 or more restrictive (Manual)
[PASS] 3.1.2 Ensure that the kubelet kubeconfig file ownership is set to root:root (Manual)
[PASS] 3.1.3 Ensure that the kubelet configuration file has permissions set to 644 or more restrictive (Manual)
[PASS] 3.1.4 Ensure that the kubelet configuration file ownership is set to root:root (Manual)
[INFO] 3.2 Kubelet
[PASS] 3.2.1 Ensure that the --anonymous-auth argument is set to false (Automated)
[PASS] 3.2.2 Ensure that the --authorization-mode argument is not set to AlwaysAllow (Automated)
[PASS] 3.2.3 Ensure that the --client-ca-file argument is set as appropriate (Manual)
[PASS] 3.2.4 Ensure that the --read-only-port argument is set to 0 (Manual)
[PASS] 3.2.5 Ensure that the --streaming-connection-idle-timeout argument is not set to 0 (Manual)
[PASS] 3.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated)
[PASS] 3.2.7 Ensure that the --make-iptables-util-chains argument is set to true (Automated)
[PASS] 3.2.8 Ensure that the --hostname-override argument is not set (Manual)
[WARN] 3.2.9 Ensure that the --eventRecordQPS argument is set to 0 or a level which ensures appropriate event capture (Automated)
[PASS] 3.2.10 Ensure that the --rotate-certificates argument is not set to false (Manual)
[PASS] 3.2.11 Ensure that the RotateKubeletServerCertificate argument is set to true (Manual)

== Remediations node ==
3.2.9 If using a Kubelet config file, edit the file to set eventRecordQPS: to an appropriate level.
If using command line arguments, edit the kubelet service file
/etc/systemd/system/kubelet.service on each worker node and
set the below parameter in KUBELET_SYSTEM_PODS_ARGS variable.
Based on your system, restart the kubelet service. For example:
systemctl daemon-reload
systemctl restart kubelet.service


== Summary node ==
14 checks PASS
0 checks FAIL
1 checks WARN
0 checks INFO

== Summary total ==
14 checks PASS
0 checks FAIL
1 checks WARN
0 checks INFO

@joebowbeer

@trallnag I think this should be closed as fixed.

The eventRecordQPS warning should be updated and clarified, but that is an issue for kube-bench.
