using non-privileged containers for kubearmor daemonset #781

Closed
4 of 5 tasks
nyrahul opened this issue Jul 26, 2022 · 6 comments · Fixed by #900
Labels
enhancement New feature or request mentorship

Comments

@nyrahul
Contributor

nyrahul commented Jul 26, 2022

Feature Request

Short Description

Privileged containers are usually frowned upon. Almost every static scanning engine will flag them as an issue, and in many cases organizations also deploy admission controllers that will not allow containers to run in privileged mode.

KubeArmor currently uses privileged mode for its daemonset containers.

It is best not to use privileged mode, but to grant KubeArmor only the specific capabilities it needs.

Describe the solution you'd like

Use specific capabilities in place of privileged mode. Cilium recently added this mode as well.

Tasks

  • remove privileged: true from the daemonset, drop all capabilities, and enable only the specific required capabilities (see the sketch after this list)
  • analyse the individual capabilities that are used and why each capability is needed
  • install kubearmor in a non kube-system namespace
  • install kubearmor with a non cluster-admin role (identify the least-privileged role needed to apply AppArmor annotations and watch pods/nodes, and use that role in the manifest)
  • drop capabilities from the code once a capability has been used and is no longer required, e.g. once the eBPF bytecode is loaded we might want to drop the BPF capability
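
A minimal sketch (not the final manifest) of what the first task could look like on the kubearmor container: remove privileged mode, drop every capability, and add back only an explicit allow-list. The capability names below are placeholders until the per-capability analysis is done.

```yaml
# Sketch only: replaces privileged: true on the kubearmor container.
# The caps listed here are placeholders; the real list comes out of the
# per-capability analysis task above.
securityContext:
  privileged: false
  capabilities:
    drop:
      - ALL
    add:
      - SYS_ADMIN     # placeholder
      - SYS_PTRACE    # placeholder
      - MAC_ADMIN     # placeholder
```
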
@kranurag7
Member

kranurag7 commented Aug 12, 2022

Results of scans from kubescape & trivy

kubescape scanning

kubescape scan --enable-host-scan

+----------+-------------------------------------------------+------------------+--------------------+---------------+--------------+
| SEVERITY |                  CONTROL NAME                   | FAILED RESOURCES | EXCLUDED RESOURCES | ALL RESOURCES | % RISK-SCORE |
+----------+-------------------------------------------------+------------------+--------------------+---------------+--------------+
| Critical | Data Destruction                                |        19        |         0          |      70       |     27%      |
| Critical | Malicious admission controller (mutating)       |        1         |         0          |       1       |     100%     |
| High     | Applications credentials in configuration files |        1         |         0          |      42       |      2%      |
| High     | Cluster-admin binding                           |        2         |         0          |      70       |      3%      |
| High     | List Kubernetes secrets                         |        11        |         0          |      70       |     16%      |
| High     | Privileged container                            |        2         |         0          |      26       |      7%      |
| High     | Resources CPU limit and request                 |        22        |         0          |      26       |     85%      |
| High     | Resources memory limit and request              |        21        |         0          |      26       |     77%      |
| High     | Writable hostPath mount                         |        5         |         0          |      26       |     18%      |
| Medium   | Access container service account                |        40        |         0          |      40       |     100%     |
| Medium   | Allow privilege escalation                      |        25        |         0          |      26       |     92%      |
| Medium   | Allowed hostPath                                |        5         |         0          |      26       |     18%      |
| Medium   | Automatic mapping of service account            |        69        |         0          |      69       |     100%     |
| Medium   | CVE-2022-0185-linux-kernel-container-escape     |        1         |         0          |       1       |     100%     |
| Medium   | CVE-2022-0492-cgroups-container-escape          |        19        |         0          |      26       |     74%      |
| Medium   | Cluster internal networking                     |        6         |         0          |       6       |     100%     |
| Medium   | Configured liveness probe                       |        20        |         0          |      26       |     74%      |
| Medium   | Container hostPort                              |        1         |         0          |      26       |      4%      |
| Medium   | CoreDNS poisoning                               |        4         |         0          |      70       |      6%      |
| Medium   | Delete Kubernetes events                        |        4         |         0          |      70       |      6%      |
| Medium   | Exec into container                             |        2         |         0          |      70       |      3%      |
| Medium   | Forbidden Container Registries                  |        3         |         0          |      26       |     11%      |
| Medium   | Host PID/IPC privileges                         |        1         |         0          |      26       |      4%      |
| Medium   | HostNetwork access                              |        7         |         0          |      26       |     26%      |
| Medium   | HostPath mount                                  |        6         |         0          |      26       |     22%      |
| Medium   | Images from allowed registry                    |        20        |         0          |      26       |     74%      |
| Medium   | Ingress and Egress blocked                      |        26        |         0          |      26       |     100%     |
| Medium   | Insecure capabilities                           |        1         |         0          |      26       |      4%      |
| Medium   | Linux hardening                                 |        8         |         0          |      26       |     29%      |
| Medium   | Mount service principal                         |        6         |         0          |      26       |     22%      |
| Medium   | Namespace without service accounts              |        4         |         0          |      49       |      8%      |
| Medium   | Network mapping                                 |        6         |         0          |       6       |     100%     |
| Medium   | No impersonation                                |        2         |         0          |      70       |      3%      |
| Medium   | Non-root containers                             |        26        |         0          |      26       |     100%     |
| Medium   | Portforwarding privileges                       |        2         |         0          |      70       |      3%      |
| Low      | Audit logs enabled                              |        1         |         0          |       1       |     100%     |
| Low      | Configured readiness probe                      |        24        |         0          |      26       |     88%      |
| Low      | Immutable container filesystem                  |        14        |         0          |      26       |     51%      |
| Low      | K8s common labels usage                         |        26        |         0          |      26       |     100%     |
| Low      | Label usage for resources                       |        21        |         0          |      26       |     82%      |
| Low      | PSP enabled                                     |        1         |         0          |       1       |     100%     |
| Low      | Resource policies                               |        22        |         0          |      26       |     85%      |
| Low      | Secret/ETCD encryption enabled                  |        1         |         0          |       1       |     100%     |
+----------+-------------------------------------------------+------------------+--------------------+---------------+--------------+
|          |                RESOURCE SUMMARY                 |       120        |         0          |      207      |    28.91%    |
+----------+-------------------------------------------------+------------------+--------------------+---------------+--------------+
FRAMEWORKS: MITRE (risk: 15.84), AllControls (risk: 28.91), ArmoBest (risk: 27.89), DevOpsBest (risk: 48.18), NSA (risk: 33.53)
trivy scanning

trivy k8s -n kube-system --report=summary all

Summary Report for kind-karmor-cluster
┌─────────────┬──────────────────────────────────────────────────────────┬───────────────────────┬────────────────────┬───────────────────┐
│  Namespace  │                         Resource                         │    Vulnerabilities    │ Misconfigurations  │      Secrets      │
│             │                                                          ├───┬────┬────┬────┬────┼───┬───┬───┬────┬───┼───┬───┬───┬───┬───┤
│             │                                                          │ C │ H  │ M  │ L  │ U  │ C │ H │ M │ L  │ U │ C │ H │ M │ L │ U │
├─────────────┼──────────────────────────────────────────────────────────┼───┼────┼────┼────┼────┼───┼───┼───┼────┼───┼───┼───┼───┼───┼───┤
│ kube-system │ Deployment/kubearmor-annotation-manager                  │   │ 14 │ 5  │    │ 12 │   │   │ 8 │ 12 │   │   │   │   │   │   │
│ kube-system │ Pod/kube-controller-manager-karmor-cluster-control-plane │   │ 1  │ 3  │ 8  │    │   │   │   │    │   │   │   │   │   │   │
│ kube-system │ Service/kubearmor-annotation-manager-metrics-service     │   │    │ 1  │    │    │   │   │   │    │   │   │   │   │   │   │
│ kube-system │ Deployment/kubearmor-policy-manager                      │ 4 │ 42 │ 16 │ 2  │ 8  │   │   │ 6 │ 12 │   │   │   │   │   │   │
│ kube-system │ DaemonSet/kube-proxy                                     │ 7 │ 12 │ 2  │ 56 │    │   │ 2 │ 4 │ 10 │   │   │   │   │   │   │
│ kube-system │ DaemonSet/kindnet                                        │ 8 │ 12 │ 3  │ 56 │    │   │ 1 │ 5 │ 6  │   │   │   │   │   │   │
│ kube-system │ DaemonSet/kubearmor                                      │ 4 │ 27 │ 19 │    │ 1  │   │ 6 │ 8 │ 20 │   │   │   │   │   │   │
│ kube-system │ Service/kube-dns                                         │   │    │ 1  │    │    │   │   │   │    │   │   │   │   │   │   │
│ kube-system │ Service/kubearmor-policy-manager-metrics-service         │   │    │ 1  │    │    │   │   │   │    │   │   │   │   │   │   │
│ kube-system │ Deployment/kubearmor-host-policy-manager                 │ 4 │ 42 │ 16 │ 2  │ 8  │   │   │ 6 │ 12 │   │   │   │   │   │   │
│ kube-system │ Deployment/kubearmor-relay                               │ 1 │ 3  │ 2  │    │    │   │   │ 4 │ 10 │   │   │   │   │   │   │
│ kube-system │ Pod/etcd-karmor-cluster-control-plane                    │   │ 12 │ 4  │    │ 4  │   │ 1 │ 3 │ 7  │   │   │   │   │   │   │
│ kube-system │ Pod/kube-scheduler-karmor-cluster-control-plane          │   │ 1  │ 3  │ 8  │    │   │   │   │    │   │   │   │   │   │   │
│ kube-system │ Deployment/coredns                                       │   │ 4  │ 1  │    │ 2  │   │   │ 3 │ 5  │   │   │   │   │   │   │
│ kube-system │ Pod/kube-apiserver-karmor-cluster-control-plane          │   │ 1  │ 3  │ 8  │    │   │   │   │    │   │   │   │   │   │   │
│ kube-system │ Service/kubearmor                                        │   │    │ 1  │    │    │   │   │   │    │   │   │   │   │   │   │
│ kube-system │ Service/kubearmor-host-policy-manager-metrics-service    │   │    │ 1  │    │    │   │   │   │    │   │   │   │   │   │   │
└─────────────┴──────────────────────────────────────────────────────────┴───┴────┴────┴────┴────┴───┴───┴───┴────┴───┴───┴───┴───┴───┴───┘
Severities: C=CRITICAL H=HIGH M=MEDIUM L=LOW U=UNKNOWN

Summary Report for kind-karmor-cluster
┌─────────────┬─────────────────────────────────────────────────────┬───────────────────┐
│  Namespace  │                      Resource                       │  RBAC Assessment  │
│             │                                                     ├───┬───┬───┬───┬───┤
│             │                                                     │ C │ H │ M │ L │ U │
├─────────────┼─────────────────────────────────────────────────────┼───┼───┼───┼───┼───┤
│ kube-system │ Role/system::leader-locking-kube-scheduler          │   │   │ 1 │   │   │
│ kube-system │ Role/system:controller:bootstrap-signer             │ 1 │   │   │   │   │
│ kube-system │ Role/system::leader-locking-kube-controller-manager │   │   │ 1 │   │   │
│ kube-system │ Role/system:controller:cloud-provider               │   │   │ 1 │   │   │
│ kube-system │ Role/system:controller:token-cleaner                │ 1 │   │   │   │   │
└─────────────┴─────────────────────────────────────────────────────┴───┴───┴───┴───┴───┘
Severities: C=CRITICAL H=HIGH M=MEDIUM L=LOW U=UNKNOWN

Note: Results can differ in your case. I'm using a kind cluster in GitHub Codespaces.
cc @nyrahul

@ahsenkamal

Hey folks, came here from the LFX mentorship projects and I'd like to contribute to this. Any help on where to get started?

@nyrahul
Contributor Author

nyrahul commented Sep 13, 2022

[copying the update provided by @kranurag7 on kubearmor slack here...]

Here is an update on what I have done so far.

  • I'm working on this issue as part of the LFX mentorship.
  • I have mainly used tracee and bpftrace to find out the capabilities required by the daemonset.
  • Look at the results from tracee here
  • I have also used bpftrace to filter out the capabilities of the daemonset and the result was the same. I used this program to check the required capabilities. The output was very verbose and bpftrace printed output with container IDs, so I have not included a gist here.
  • I have come to the conclusion that these are the capabilities required by the daemonset:
- SETGID
- SETUID
- SETPCAP
- SYS_PTRACE
- SYS_ADMIN
- MAC_ADMIN
  • I have replaced privileged: true with these capabilities in the security context, and the kubearmor daemonset was running. I used the same capabilities for both containers (init & kubearmor); see the sketch after this list.
  • Next, I was working on setting hostPID & hostNetwork to false. For hostNetwork I set it to false and added the NET_ADMIN capability to the security context instead, and it was working well.
  • To cover the test cases I interacted with Rahul, and he told me that the best way to check whether it's working properly is to run the ginkgo tests. I ran them in my repository and the tests passed. I have run them only once. The test results can be seen here
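
For reference, a sketch of the pod spec as described above (capabilities are a container-level field in Kubernetes, so in practice they sit on each container's securityContext; the same set is applied to the init and kubearmor containers):

```yaml
# Sketch of the daemonset pod template spec described above, not the final manifest.
spec:
  hostNetwork: false            # compensated for by the NET_ADMIN capability
  hostPID: true                 # remediation still being investigated, see below
  containers:
    - name: kubearmor
      securityContext:
        privileged: false
        capabilities:
          drop: ["ALL"]
          add:
            - SETGID
            - SETUID
            - SETPCAP
            - SYS_PTRACE
            - SYS_ADMIN
            - MAC_ADMIN
            - NET_ADMIN         # replaces hostNetwork: true
```
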

What am I doing now?

  • I'm actively looking for a remediation for hostPID, but I'm not able to find one anywhere. I have tried setting it to false and granting the privileged capabilities SYS_ADMIN and NET_ADMIN, but the pod does not start up.
    I'm trying to cover this by setting up network policies, and I'm also exploring Pod Security Admission for this. With network policies I know it's not going to work, as they only restrict connectivity within the cluster. With PSA I have some hope, as it restricts the Linux capabilities allowed in a namespace. Even if it fails, it will be good to check whether kubearmor runs under all three standard levels: privileged, baseline, and restricted (see the namespace sketch below).
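
A hedged example of the Pod Security Admission check mentioned above: label the namespace with one of the three standard levels and see whether the kubearmor pods are still admitted (the namespace name here is hypothetical):

```yaml
# Hypothetical namespace labelled for Pod Security Admission; switch the
# enforce level between privileged, baseline and restricted to see at which
# level the kubearmor pods stop being admitted.
apiVersion: v1
kind: Namespace
metadata:
  name: kubearmor                                  # hypothetical name
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/warn: restricted
```
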

What have I not done till now?

  • As of now, I have set the same capabilities on both containers, which may not actually be necessary, so I still have to figure out whether both containers require the same capabilities or different ones.
  • Approaching the problem from the codebase side: I have not looked much into the code yet, so I haven't figured out what is going on there and which functions each of the two containers covers.
  • I haven't tested all of this extensively across platforms.

@nyrahul
Contributor Author

nyrahul commented Sep 13, 2022

  • removing privileged: true
  • removing hostNetwork ... @kranurag7 update: removed hostNetwork and using NET_ADMIN cap
  • removing hostPID ... Conclusion: this cannot be removed
  • removing NET_ADMIN ... This cannot be removed
  • removing individual caps and validating using the CI

AWS FTR (Foundational Technical Review) requires that privileged mode not be used by the deployment (issue #891).

@nyrahul
Contributor Author

nyrahul commented Mar 24, 2023

@kranurag7, can we close this issue? I guess the cluster-role-binding could be a separate issue?

@kranurag7
Member

Yes, we can close this. There are two other issues tracking the progress of the cluster-admin work; I will update it there.

#1143
#1186
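
For the cluster-admin follow-ups, a hypothetical starting point for a narrower ClusterRole based on the task list above (watch pods/nodes, patch pods to apply AppArmor annotations). The name, resources, and verbs are assumptions and would need to be validated against what KubeArmor actually calls:

```yaml
# Hypothetical ClusterRole sketch for the follow-up issues; not KubeArmor's
# actual RBAC. Verbs and resources must be validated against the code.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubearmor-least-privilege                  # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["patch", "update"]                     # to apply AppArmor annotations
```
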
