
add support for crown jewel policies #739

Open · wants to merge 2 commits into crown-jewel from crown-jewel-policy
Conversation

@Ankurk99 (Contributor) commented Jun 5, 2023

Description

Add support for generating lenient policies that protect sensitive assets (mount points in this case).

Problem
Currently, Discovery Engine discovers least-permissive security policies. These policies are designed to achieve a zero-trust security posture by allowing only the binaries that are essential for running a particular application. The issue with this type of policy is that it is very difficult to create an exhaustive policy that lists every necessary binary; missing even a single important binary can, in the worst case, crash the whole application.

Solution
The aim of this PR is to identify the crown jewels, i.e. the list of important assets which, when protected, give a fairly good security posture for the whole application.

PR changes
This PR identifies the paths mounted by an application and checks whether they are actually being used. We then create a lenient "crown jewel" policy that allows access to a particular mount path only by the binaries that actually use it and denies access from everything else.

Example of a Crown jewel policy:

- apiVersion: v1
  kind: KubeArmorPolicy
  metadata:
    name: autopol-assets-vault
    namespace: default
  spec:
    action: Allow
    file:
      matchDirectories:
      - action: Block
        dir: /vault/data/
        recursive: true
      - dir: /vault/data/
        fromSource:
        - path: /bin/vault
        recursive: true
      - dir: /
        recursive: true
      - dir: /vault/config/
        recursive: true
      - action: Block
        dir: /home/vault/
        recursive: true
      - dir: /home/vault/
        fromSource:
        - path: /bin/sh
        recursive: true
    message: Sensitive assets and process control policy
    network: {}
    process:
      matchPaths:
      - path: /bin/sh
      - path: /bin/vault
      - path: /bin/busybox
    selector:
      matchLabels:
        app.kubernetes.io/instance: vault
        app.kubernetes.io/name: vault
        component: server
        helm.sh/chart: vault-0.24.1
    severity: 7

In the above policy, access to dir: /home/vault/ is allowed only from /bin/sh, and /vault/data/ only from /bin/vault. The other mount path, /vault/config/, is not being used, so it is set to Block. Since Vault uses an Alpine image, we also see the process /bin/busybox being used (most system binaries in Alpine are symlinks to /bin/busybox).
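To make that rule concrete, here is a minimal Go sketch (not the PR's actual code; the MatchDirectory type and buildMatchDirectories helper are hypothetical) that turns mount paths plus their observed accessor binaries into Block-by-default entries with per-binary fromSource allowances, mirroring the /vault/data/ and /home/vault/ entries in the example:

```go
package main

import "fmt"

// MatchDirectory mirrors one entry under file.matchDirectories in the
// generated policy (a hypothetical type, for illustration only).
type MatchDirectory struct {
	Dir        string
	Recursive  bool
	Action     string   // empty means "inherit the spec-level Allow"
	FromSource []string // binaries allowed to access Dir
}

// buildMatchDirectories emits, for every mount path, a Block entry plus an
// entry scoped to the binaries observed using it; unused mount paths get
// only the Block entry.
func buildMatchDirectories(usedBy map[string][]string) []MatchDirectory {
	var out []MatchDirectory
	for dir, bins := range usedBy {
		out = append(out, MatchDirectory{Dir: dir, Recursive: true, Action: "Block"})
		if len(bins) > 0 {
			out = append(out, MatchDirectory{Dir: dir, Recursive: true, FromSource: bins})
		}
	}
	return out
}

func main() {
	// Observed usage as in the example policy above: /vault/config/ is
	// mounted but never accessed, so it ends up with a Block entry only.
	usedBy := map[string][]string{
		"/vault/data/":   {"/bin/vault"},
		"/home/vault/":   {"/bin/sh"},
		"/vault/config/": nil,
	}
	for _, md := range buildMatchDirectories(usedBy) {
		fmt.Printf("%+v\n", md)
	}
}
```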

ref: #715

@Ankurk99 Ankurk99 force-pushed the crown-jewel-policy branch 7 times, most recently from b722107 to a042869 Compare June 7, 2023 18:25
@Ankurk99 Ankurk99 marked this pull request as ready for review June 7, 2023 18:27
src/crownjewel/crownjewel.go (outdated review thread, resolved)
@Ankurk99 Ankurk99 force-pushed the crown-jewel-policy branch 2 times, most recently from fb30102 to 45d3606 Compare June 14, 2023 06:57
@Ankurk99 Ankurk99 changed the base branch from dev to crown-jewel July 4, 2023 05:00
if CrownjewelCronJob != nil {
	log.Info().Msg("Got a signal to terminate the auto system policy discovery")

	CrownjewelStopChan = make(chan struct{})
Contributor:
I can see this channel also getting initialized through init, can you explain the flow here?
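For readers following this thread: a common shape for such a start/stop pair in Go, sketched below with a time.Ticker standing in for the actual cron scheduler (this is not the PR's code, and every name apart from the log message is hypothetical), is to create the stop channel when the job starts and close it when terminating:

```go
package main

import (
	"log"
	"time"
)

// Package-level handles loosely mirroring the variables in the diff; a
// time.Ticker stands in for the real cron scheduler.
var (
	crownjewelTicker   *time.Ticker
	crownjewelStopChan chan struct{}
)

// StartCrownjewelDiscovery launches the periodic discovery loop and creates
// the stop channel that will later be used to terminate it.
func StartCrownjewelDiscovery(every time.Duration, discover func()) {
	crownjewelTicker = time.NewTicker(every)
	crownjewelStopChan = make(chan struct{})
	go func() {
		for {
			select {
			case <-crownjewelTicker.C:
				discover()
			case <-crownjewelStopChan:
				return
			}
		}
	}()
}

// StopCrownjewelDiscovery signals the loop to terminate.
func StopCrownjewelDiscovery() {
	if crownjewelTicker != nil {
		log.Println("Got a signal to terminate the auto system policy discovery")
		crownjewelTicker.Stop()
		close(crownjewelStopChan)
	}
}

func main() {
	StartCrownjewelDiscovery(200*time.Millisecond, func() { log.Println("discovery tick") })
	time.Sleep(time.Second)
	StopCrownjewelDiscovery()
}
```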

for _, pod := range podList.Items {
	for _, container := range pod.Spec.Containers {
		sumResp, err := obs.GetSummaryData(&opb.Request{
			PodName: pod.Name,
Contributor:

Why are we using pod name?
And how are mount paths from observability data utilized?

Contributor Author:

Why are we using pod name?

We check the resource at the pod level (instead of deployment/statefulset, etc.) because pod labels contain all the labels for that resource, some of which may be missing on other workloads (a statefulset, for example).
Below is an example from a Vault statefulset; a minimal client-go sketch for collecting these labels follows the listings.

Pod labels:

  Labels:  app.kubernetes.io/instance=vault
           app.kubernetes.io/name=vault
           component=server
           controller-revision-hash=vault-5f4c59685d
           helm.sh/chart=vault-0.24.1
           statefulset.kubernetes.io/pod-name=vault-0

Statefulset labels:

  Labels:  app.kubernetes.io/instance=vault
           app.kubernetes.io/name=vault
           component=server
           helm.sh/chart=vault-0.24.1
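To make the label comparison above concrete, here is a minimal client-go sketch (the podLabels helper is hypothetical, not the PR's code; it assumes the discovery engine runs in-cluster) that collects the full label set of every pod in a namespace:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// podLabels lists every pod in a namespace and returns each pod's labels.
// Pods carry controller-added labels (controller-revision-hash,
// statefulset.kubernetes.io/pod-name) that the owning workload does not.
func podLabels(ns string) (map[string]map[string]string, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	out := make(map[string]map[string]string, len(pods.Items))
	for _, pod := range pods.Items {
		out[pod.Name] = pod.Labels
	}
	return out, nil
}

func main() {
	labels, err := podLabels("default")
	if err != nil {
		panic(err)
	}
	fmt.Println(labels)
}
```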

Contributor Author:

And how are mount paths from observability data utilized?

PTAL at #739 (comment); if anything is unclear, please comment.

@Ankurk99 (Contributor Author)

The code implementation can be summarized as follows:

  • func getProcessList(): Get the list of running processes from observability data and store them in an array processList[].
  • func getVolumeMountPaths(): Get all the volume mount paths from the k8s cluster (looking for pods matching the labels) and store them in an array mountPaths[].
  • func usedMountPath(): Get the list of used mount paths from observability data and store the info in a sumResponses[] string slice together with a fromSource map (make(map[string]string)).
  • func accessedMountPaths(): Match mounted paths with actually accessed mount paths. Here we compare the mount points found by getVolumeMountPaths() and usedMountPath(); if they match, we keep them, otherwise we ignore those mount paths. (A simplified sketch of these steps follows this list.)
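Here is a simplified, self-contained sketch of those first steps (the FileSummary type and the function bodies are illustrative stand-ins, not the PR's actual implementation, which reads the observability summary via obs.GetSummaryData as shown in the diff excerpt above):

```go
package main

import (
	"fmt"
	"strings"
)

// FileSummary is a stand-in for one file-access entry in the observability
// summary; the real response comes from obs.GetSummaryData.
type FileSummary struct {
	Destination string // file or directory that was accessed
	Source      string // binary that accessed it (fromSource)
}

// getVolumeMountPaths would normally read pod.Spec.Containers[i].VolumeMounts
// via the k8s API; here the mount paths are passed in directly.
func getVolumeMountPaths(mounts []string) []string { return mounts }

// usedMountPath records which mount points were actually touched and by
// which binary, mirroring the sumResponses slice and fromSource map above.
func usedMountPath(summary []FileSummary, mountPaths []string) ([]string, map[string]string) {
	var used []string
	fromSource := make(map[string]string)
	for _, f := range summary {
		for _, mp := range mountPaths {
			if strings.HasPrefix(f.Destination, mp) {
				used = append(used, mp)
				fromSource[mp] = f.Source
			}
		}
	}
	return used, fromSource
}

// accessedMountPaths keeps only the mount points present in both lists.
func accessedMountPaths(mounted, used []string) []string {
	seen := make(map[string]bool)
	for _, u := range used {
		seen[u] = true
	}
	var out []string
	for _, m := range mounted {
		if seen[m] {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	mounts := getVolumeMountPaths([]string{"/vault/data/", "/vault/config/", "/home/vault/"})
	summary := []FileSummary{
		{Destination: "/vault/data/core/_seal-config", Source: "/bin/vault"},
		{Destination: "/home/vault/.ash_history", Source: "/bin/sh"},
	}
	used, fromSource := usedMountPath(summary, mounts)
	fmt.Println(accessedMountPaths(mounts, used), fromSource)
}
```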

All the required info about the mount points that are mounted and actually being used is then passed to the getCrownjewelPolicy() func to create a crown jewel policy.

  • func getCrownjewelPolicy(): Calls createCrownjewelPolicy().

  • func createCrownjewelPolicy(): Assigns actions (e.g. Allow for matching mount paths and Block by default) and ignores duplicate fromSource values. It then calls the buildSystemPolicy() func to generate the crown jewel policy.

  • func buildSystemPolicy(): Holds the template used to create the policy. Currently, severity and message are set to defaults, but they can be assigned from the func arguments. (A simplified sketch of this template-filling step follows this list.)

  • func getFilteredPolicy(): Filters out the namespaces to be ignored and calls systempolicy.UpdateSysPolicies(policies).

  • func systempolicy.UpdateSysPolicies(): Inserts the system policies into the DB (SQLite or MySQL).

  • func WriteSystemPoliciesToFile(): Saves all the crown jewel policies to a file named "kubearmor_policies_sensitive".
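As a rough illustration of the template-filling and fromSource de-duplication described above, here is a trimmed-down sketch (the Rule and Policy types, the dedup helper, and the defaults are hypothetical, not the PR's actual structs):

```go
package main

import "fmt"

// Rule is one matchDirectories entry; Policy is a trimmed-down stand-in for
// the KubeArmorPolicy object that the real buildSystemPolicy fills in.
type Rule struct {
	Dir        string
	FromSource []string
	Action     string
}

type Policy struct {
	Name        string
	Labels      map[string]string
	Severity    int
	Message     string
	Directories []Rule
	Processes   []string
}

// dedup drops duplicate fromSource entries while keeping their order.
func dedup(paths []string) []string {
	seen := make(map[string]bool)
	var out []string
	for _, p := range paths {
		if !seen[p] {
			seen[p] = true
			out = append(out, p)
		}
	}
	return out
}

// buildSystemPolicy fills the policy template; severity and message are
// defaulted here but could equally be taken from function arguments.
func buildSystemPolicy(name string, labels map[string]string, rules []Rule, procs []string) Policy {
	for i := range rules {
		rules[i].FromSource = dedup(rules[i].FromSource)
	}
	return Policy{
		Name:        name,
		Labels:      labels,
		Severity:    7,
		Message:     "Sensitive assets and process control policy",
		Directories: rules,
		Processes:   dedup(procs),
	}
}

func main() {
	pol := buildSystemPolicy(
		"autopol-assets-vault",
		map[string]string{"app.kubernetes.io/name": "vault"},
		[]Rule{
			{Dir: "/vault/data/", FromSource: []string{"/bin/vault", "/bin/vault"}},
			{Dir: "/vault/config/", Action: "Block"},
		},
		[]string{"/bin/sh", "/bin/vault", "/bin/busybox"},
	)
	fmt.Printf("%+v\n", pol)
}
```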

The logic for updating the crown jewel policies in the DB is the same as for the other system policies.

Add support for generating lenient policies protecting sensitive assets (mount points here)

Signed-off-by: Ankur Kothiwal <ankur.kothiwal99@gmail.com>
@seswarrajan seswarrajan removed their request for review October 8, 2023 14:59