Cloudbeat


Cloudbeat evaluates cloud assets for security compliance and ships findings to Elasticsearch

Table of contents

  • Prerequisites
  • Running Cloudbeat
  • Remote Debugging
  • Skaffold Workflows
  • Running Agent & Cloudbeat
  • Code guidelines
  • Testing

Prerequisites

  1. Just command runner
  2. Elasticsearch with the default username & password (elastic & changeme) running on the default port (http://localhost:9200)
  3. Kibana running on the default port (http://localhost:5601) (a quick connectivity check for both services is shown after this list)
  4. Install and configure Elastic-Package
  5. Set up the local env:
just setup-env
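To confirm that Elasticsearch and Kibana are reachable before continuing, a quick check such as the following can help (this assumes the default credentials and ports listed above; adjust if your local stack differs):

curl -u elastic:changeme http://localhost:9200
curl -u elastic:changeme http://localhost:5601/api/status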

Running Cloudbeat

Load the elastic stack environment variables.

eval "$(elastic-package stack shellinit)"

Kubernetes Vanilla

Build & deploy cloudbeat:

just build-deploy-cloudbeat

Amazon Elastic Kubernetes Service (EKS)

Export your AWS credentials as environment variables; kustomize will use these to populate your cloudbeat deployment.

$ export AWS_ACCESS_KEY="<YOUR_AWS_KEY>" AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET>"

Set your default cluster to your EKS cluster

 kubectl config use-context your-eks-cluster

Deploy cloudbeat on your EKS cluster

just deploy-eks-cloudbeat

Advanced

If you need to change the default values in the configuration (ES_HOST, ES_PORT, ES_USERNAME, ES_PASSWORD), you can also create the deployment file yourself.
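For example, to point the generated deployment at a non-default Elasticsearch instance, you might export the variables listed above before running one of the recipes below (the values here are purely illustrative; check the justfile to see exactly how they are consumed):

export ES_HOST="elasticsearch.example.com"
export ES_PORT="9200"
export ES_USERNAME="elastic"
export ES_PASSWORD="changeme"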

Vanilla

just create-vanilla-deployment-file

EKS

just create-eks-deployment-file

To validate the deployment, check the cloudbeat logs:

just logs-cloudbeat

Now go and check out the data in Kibana!

Clean up

To stop this example and clean up the pod, run:

just delete-cloudbeat

Remote Debugging

Build & Deploy remote debug docker:

just build-deploy-cloudbeat-debug

After running the pod, expose the relevant ports:

just expose-ports

The app will wait for the debugger to connect before starting; check the logs to confirm:

just logs-cloudbeat

Use your favorite IDE to connect to the debugger on localhost:40000 (for example GoLand).
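If you prefer the command line over an IDE, the Delve client can attach to the same port (this assumes the debug image runs a headless Delve server, which is the usual setup for remote Go debugging):

dlv connect localhost:40000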

Note: Check the justfile for all available build and deploy commands: just --summary

Skaffold Workflows

Skaffold is a CLI tool that enables continuous development for K8s applications. Skaffold initiates a file-system watcher and continuously deploys cloudbeat to a local or remote K8s cluster. The skaffold workflows are defined in the skaffold.yml file. Kustomize is used to overlay different config options (currently cloudbeat vanilla & EKS).

Cloudbeat Vanilla:

Skaffold will initiate a watcher that builds and re-deploys Cloudbeat every time a Go file is saved, and outputs logs to stdout:

skaffold dev

Cloudbeat EKS:

Export your AWS credentials as environment variables; Skaffold & kustomize will use these to populate your k8s deployment.

$ export AWS_ACCESS_KEY="<YOUR_AWS_KEY>" AWS_SECRET_ACCESS_KEY="<YOUR_AWS_SECRET>"

A skaffold profile is configured for EKS; it can be activated via one of the following options.

Specify the profile name using the -p flag

skaffold -p eks dev

Export the activation variable prior to the skaffold invocation, then proceed as usual.

export SKF_MODE="CB_EKS"
skaffold dev

Additional commands:

Skaffold supports one-off commands (no continuous watcher) if you wish to build or deploy just once.

skaffold build
skaffold deploy

The full Skaffold CLI reference can be found in the Skaffold documentation.

Running Agent & Cloudbeat

Cloudbeat is only supported on managed elastic-agents. This means that in order to run the setup, you need a running Kibana. Create an agent policy and install the CSP integration. Then, when adding a new agent, you will get the K8s deployment instructions for elastic-agent.

Update settings

Updating cloudbeat settings on a running elastic-agent can be done by running the script. The script still requires a second step of triggering the agent to re-run cloudbeat. This can be done in the Fleet UI by changing the agent log level, or through the CLI on the agent by running:

kill -9 `pidof cloudbeat`

Code guidelines

Pre-commit hooks

See the pre-commit package.

  • Install the package: brew install pre-commit
  • Then run: pre-commit install
  • Finally, run: pre-commit run --all-files --verbose

Editorconfig

See the editorconfig package.

Testing

Cloudbeat has various sets of tests. This guide should help you understand how the different test suites work, how they are used, and how new tests are added.

In general there are two major test suites:

  • Unit tests written in Go
  • Integration tests written in Python

The tests written in Go use the Go testing package. The tests written in Python depend on pytest and require a compiled, executable binary built from the Go code. The Python tests run a beat with a specific config and params, and either check that the output is as expected or that the correct things show up in the logs.
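As a rough sketch of what a unit test in the Go suite looks like, using only the standard testing package (the function under test here is hypothetical and exists purely for illustration):

package example

import "testing"

// normalizeName is a hypothetical helper, shown only to illustrate the test shape.
func normalizeName(name string) string {
	if name == "" {
		return "unknown"
	}
	return name
}

func TestNormalizeName(t *testing.T) {
	if got := normalizeName(""); got != "unknown" {
		t.Errorf("normalizeName(\"\") = %q, want %q", got, "unknown")
	}
}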

Integration tests in Beats are tests which require an external system, like Elasticsearch, to verify that the integration with that service works as expected. Beats provides docker containers and docker-compose files in its test suite to start these environments, but a developer can also run the required services locally.
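One way to bring these services up locally is via the Elastic-Package tool already listed in the prerequisites (the -d flag runs the stack in the background; see elastic-package help for other options):

elastic-package stack up -d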

Mocking

Cloudbeat uses mockery as its mocking test framework. Mockery provides an easy way to generate mocks for Go interfaces.

Some tests use the new expecter interface the library provides. For example, given an interface such as

type Requester interface {
	Get(path string) (string, error)
}

You can use the type-safe expecter interface as follows:

requesterMock := Requester{}
requesterMock.EXPECT().Get("some path").Return("result", nil)
requesterMock.EXPECT().
	Get(mock.Anything).
	Run(func(path string) { fmt.Println(path, "was called") }).
	// Can still use return functions by getting the embedded mock.Call
	Call.Return(func(path string) string { return "result for " + path }, nil)
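Continuing the hypothetical Requester example above, a test can exercise the mock and then verify that every expectation was met. AssertExpectations comes from the embedded mock.Mock; the exact name of the generated mock type and its constructor may differ depending on your mockery version and flags:

func TestRequesterGet(t *testing.T) {
	requesterMock := Requester{}
	requesterMock.EXPECT().Get("some path").Return("result", nil)

	got, err := requesterMock.Get("some path")
	if err != nil || got != "result" {
		t.Fatalf("Get returned %q, %v", got, err)
	}

	// Fails the test if any expectation set above was not satisfied.
	requesterMock.AssertExpectations(t)
}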

Notes

  • Place the test in the same package as the code it is meant to test.
  • The file name should follow the convention original_file_mock. For example: ecr_provider -> ecr_provider_mock.

Command example:

mockery --name=<interface_name> --with-expecter  --case underscore  --inpackage --recursive