updating docs to v1.5.0 with new features (#261)
* updating docs to v1.5.0

* Update docs/dev-guide/getting-started.md

Co-authored-by: Simon Maréchal <66471981+Minosity-VR@users.noreply.github.com>

* PR comment

---------

Co-authored-by: Simon Maréchal <66471981+Minosity-VR@users.noreply.github.com>
jt-dd and Minosity-VR authored Sep 12, 2024
1 parent fc18b2b commit 01f384a
Showing 14 changed files with 225 additions and 179 deletions.
1 change: 0 additions & 1 deletion .gitignore
@@ -32,7 +32,6 @@ dist/

cmd/kubehound/kubehound
cmd/kubehound/__debug_bin
cmd/kubehound-ingestor/kubehound-ingestor
deployments/kubehound/data
deployments/kubehound/data/*

26 changes: 15 additions & 11 deletions configs/etc/kubehound-reference.yaml
@@ -32,17 +32,14 @@ collector:
# file:
# # Directory holding the K8s json data files
# directory: /path/to/directory
#
# # Target cluster name
# cluster: <cluster name>

#
# General storage configuration
#
storage:
# Whether or not to wipe all data on startup
wipe: true

# Number of connection retries before declaring an error
retry: 5

@@ -74,8 +74,8 @@ telemetry:

# Default tags to add to all telemetry (free form key-value map)
# tags:
# team: ase

# Statsd configuration for metrics support
statsd:
# URL to send statsd data to the Datadog agent
@@ -90,7 +87,7 @@ telemetry:
# Graph builder configuration
#
# NOTE: increasing batch sizes can have some performance improvements by reducing network latency in transferring data
# between KubeGraph and the application. However, increasing it past a certain level can overload the backend leading
# to instability and eventually exceed the size limits of the websocket buffer used to transfer the data. Changing this
# is not recommended.
#
@@ -99,7 +96,7 @@ builder:
# vertex:
# # Batch size for vertex inserts
# batch_size: 500
#
# # Small batch size for vertex inserts
# batch_size_small: 100

@@ -124,18 +121,25 @@ builder:

# # Cluster impact batch size for edge inserts
# batch_size_cluster_impact: 1

# Ingestor configuration (for KHaaS)
# ingestor:
# blob:
# # (i.e.: s3://<your-bucket>)
# bucket: ""
# # (i.e.: us-east-1)
# region: ""
# temp_dir: "/tmp/kubehound"
# archive_name: "archive.tar.gz"
# max_archive_size: 2147483648 # 2GB
# # GRPC endpoint for the ingestor
# api:
# endpoint: "127.0.0.1:9000"
# insecure: true

#
# Dynamic info (optional - auto-injected by KubeHound)
#
# dynamic:
#
# # Target cluster name
# cluster: <cluster name>
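
For KHaaS users, a filled-in version of the ingestor block above might look like the following sketch; the bucket name and endpoint values are illustrative placeholders, not shipped defaults:

```yaml
# Hypothetical KHaaS ingestor configuration (bucket, region and endpoint are placeholders)
ingestor:
  blob:
    bucket: "s3://my-kubehound-dumps"    # (i.e.: s3://<your-bucket>)
    region: "us-east-1"                  # bucket region
  temp_dir: "/tmp/kubehound"             # scratch space for downloaded archives
  archive_name: "archive.tar.gz"
  max_archive_size: 2147483648           # 2GB
  api:
    endpoint: "127.0.0.1:9000"           # GRPC endpoint for the ingestor
    insecure: true                       # plaintext GRPC, for local testing only
```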
2 changes: 1 addition & 1 deletion deployments/k8s/khaas/values.yaml
@@ -1,7 +1,7 @@
team: <your_team>
services:
ingestor:
image: ghcr.io/datadog/kubehound-ingestor
image: ghcr.io/datadog/kubehound-binary
version: latest
bucket: s3://<your_bucket>
region: "us-east-1"
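
A minimal sketch of applying these values with Helm; the release name and chart path are assumptions (the chart is presumed to live alongside this values file), not documented here:

```bash
# Hypothetical KHaaS deployment using the values above (release name and chart path assumed)
helm upgrade --install khaas ./deployments/k8s/khaas \
  --values ./deployments/k8s/khaas/values.yaml
```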
2 changes: 1 addition & 1 deletion docs/architecture.md
@@ -6,7 +6,7 @@ KubeHound works in 3 steps:
2. Compute attack paths
3. Write the results to a local graph database (JanusGraph)

After the initial ingestion is done, you use a compatible client or the provided [Jupyter Notebook](../../deployments/kubehound/notebook/KubeHound.ipynb) to visualize and query attack paths in your cluster.
After the initial ingestion is done, you use a compatible client or the provided [Jupyter Notebook](https://github.com/DataDog/KubeHound/blob/main/deployments/kubehound/ui/KubeHound.ipynb) to visualize and query attack paths in your cluster.

[![KubeHound architecture (click to enlarge)](./images/kubehound-high-level-v2.png)](./images/kubehound-high-level-v2.png)

46 changes: 26 additions & 20 deletions docs/dev-guide/getting-started.md
@@ -8,9 +8,9 @@ make help

## Requirements build

+ go (v1.22): https://go.dev/doc/install
+ [Docker](https://docs.docker.com/engine/install/) >= 19.03 (`docker version`)
+ [Docker Compose](https://docs.docker.com/compose/compose-file/compose-versioning/) >= v2.0 (`docker compose version`)
- go (v1.22): https://go.dev/doc/install
- [Docker](https://docs.docker.com/engine/install/) >= 19.03 (`docker version`)
- [Docker Compose](https://docs.docker.com/compose/compose-file/compose-versioning/) >= v2.0 (`docker compose version`)
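
One way to confirm the requirements are met is to run the version commands referenced above:

```bash
# Verify the build toolchain (thresholds from the list above)
go version                # expect go1.22
docker version            # expect Engine >= 19.03
docker compose version    # expect >= v2.0
```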

## Backend

@@ -20,31 +20,32 @@ The backend images are built with the Dockerfiles `docker-compose.dev.[graph|ing

The minimum stack (`mongo` & `graph`) can be spawned with

* `kubehound dev` which is an equivalent of
* `docker compose -f docker-compose.yaml -f docker-compose.dev.graph.yaml -f docker-compose.dev.mongo.yaml`. By default it will always rebuild everything (no cache is being used).
- `kubehound dev`, which is equivalent to
- `docker compose -f docker-compose.yaml -f docker-compose.dev.graph.yaml -f docker-compose.dev.mongo.yaml`. By default it always rebuilds everything (no cache is used); see the sketch below.
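
For reference, a sketch of the expanded invocation; the `up --build` subcommand is an assumption inferred from the always-rebuild behavior described above, not spelled out in the docs:

```bash
# Hypothetical expansion of `kubehound dev` (the `up --build` subcommand is assumed)
docker compose \
  -f docker-compose.yaml \
  -f docker-compose.dev.graph.yaml \
  -f docker-compose.dev.mongo.yaml \
  up --build
```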

### Building dev options

You can add components to the minimum stack (`ui` and `grpc endpoint`) with the following flags.

* `--ui` to add the Jupyter UI to the build.
* `--grpc` to add the ingestor endpoint (exposing the grpc server for KHaaS).
- `--ui` to add the Jupyter UI to the build.
- `--grpc` to add the ingestor endpoint (exposing the grpc server for KHaaS).

For instance, to build the minimum stack locally with the `ui` component:

```bash
kubehound dev --ui
```
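
Presumably both components can be added in a single build, though this combination is an untested sketch:

```bash
# Build the dev stack with both the Jupyter UI and the grpc ingestor endpoint (flags assumed to compose)
kubehound dev --ui --grpc
```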

### Tearing down the dev stack

To tear down the KubeHound dev stack, use the `--down` flag:

```bash
kubehound dev --down
```

!!! Note
!!! note

It stops all the components of the dev stack (including the `ui` and `grpc endpoint` if started).

## Build the binary
@@ -59,28 +60,28 @@ make build

The KubeHound binary will be output to `./bin/build/kubehound`.


### Releases

We use `buildx` to release new versions of KubeHound, both for cross-platform compatibility and because we embed the Docker Compose library (so KubeHound can spin up its own stack directly from the binary), which saves the user from having to manage it. The build relies on two files, [docker-bake.hcl](https://github.com/DataDog/KubeHound/blob/main/docker-bake.hcl) and [Dockerfile](https://github.com/DataDog/KubeHound/blob/main/Dockerfile). The following bake targets are available:

* `validate` or `lint`: run the release CI linter
* `binary` (default option): build kubehound just for the local architecture
* `binary-cross` or `release`: run the cross platform compilation
- `validate` or `lint`: run the release CI linter
- `binary` (default option): build kubehound just for the local architecture
- `binary-cross` or `release`: run the cross-platform compilation

!!! Note
Those targets are made only for the CI and are not intented to be run run locally (except to test the CI locally).
!!! note

These targets are made only for the CI and are not intended to be run locally (except to test the CI locally).

##### Cross-platform compilation

To test the cross-platform compilation locally, use the buildx bake target `release`. This target is run by the CI ([buildx workflow](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/buildx.yml#L77-L84)).

```bash
docker buildx bake release
```

!!! Warning
!!! warning

The cross-platform compilation with `buildx` does not work on macOS: `ERROR: Multi-platform build is not supported for the docker driver.`

## Push a new release
@@ -94,10 +94,15 @@ git push origin vX.X.X
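
For context, a sketch of the full tagging flow; only the push appears in this hunk, the tag-creation step is assumed:

```bash
# Create the release tag locally, then push it to trigger the release jobs (vX.X.X is a placeholder)
git tag vX.X.X
git push origin vX.X.X
```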

New tags will trigger the two following jobs:

* [docker](): pushing new images for `kubehound-graph`, `kubehound-ingestor` and `kubehound-ui` on ghcr.io. The images can be listed [here](https://github.com/orgs/DataDog/packages?repo_name=KubeHound).
* [buildx](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/buildx.yml): compiling the binary for all platforms. The supported platforms can be listed using `docker buildx bake binary-cross --print | jq -cr '.target."binary-cross".platforms'`.
- [docker](): pushing new images for `kubehound-graph`, `kubehound-binary` and `kubehound-ui` on ghcr.io. The images can be listed [here](https://github.com/orgs/DataDog/packages?repo_name=KubeHound).
- [buildx](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/buildx.yml): compiling the binary for all platforms. The supported platforms can be listed using `docker buildx bake binary-cross --print | jq -cr '.target."binary-cross".platforms'`.
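
For reference, the platform listing command from the bullet above, runnable as-is (requires `jq`):

```bash
# Print the platforms targeted by the cross-platform build
docker buildx bake binary-cross --print | jq -cr '.target."binary-cross".platforms'
```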

!!! warning "deprecated"

The `kubehound-ingestor` image has been deprecated since **v1.5.0** and renamed to `kubehound-binary`.

The CI will draft a new release that **will need manual validation**. In order to get published, an admin has to validate the new draft from the UI.

!!! Tip
!!! tip

To resync all the tags from the main repo you can use `git tag -l | xargs git tag -d;git fetch --tags`.
31 changes: 17 additions & 14 deletions docs/dev-guide/testing.md
@@ -2,14 +2,14 @@

To ensure no regressions in KubeHound, two kinds of tests are in place:

* classic unit test: can be identify with the `xxx_test.go` files in the source code
* system tests: end to end test where we run full ingestion from different scenario to simulate all use cases against a real cluster.
- classic unit tests: identified by the `xxx_test.go` files in the source code
- system tests: end-to-end tests where we run a full ingestion from different scenarios to simulate all use cases against a real cluster.

## Requirements test

+ [Golang](https://go.dev/doc/install) `>= 1.22`
+ [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager)
+ [Kubectl](https://kubernetes.io/docs/tasks/tools/)
- [Golang](https://go.dev/doc/install) `>= 1.22`
- [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager)
- [Kubectl](https://kubernetes.io/docs/tasks/tools/)

## Unit Testing

@@ -22,10 +22,11 @@ make test
## System Testing

The repository includes a suite of system tests that will do the following:
+ create a local kubernetes cluster
+ collect kubernetes API data from the cluster
+ run KubeHound using the file collector to create a working graph database
+ query the graph database to ensure all expected vertices and edges have been created correctly

- create a local kubernetes cluster
- collect kubernetes API data from the cluster
- run KubeHound using the file collector to create a working graph database
- query the graph database to ensure all expected vertices and edges have been created correctly

The cluster setup and running instances can be found under [test/setup](./test/setup/).

@@ -36,6 +37,7 @@ cd test/setup/ && export KUBECONFIG=$(pwd)/.kube-config
```

### Environment variable:

- `DD_API_KEY` (optional): set to the Datadog API key used to submit metrics and other observability data (see the [datadog](https://kubehound.io/dev-guide/datadog/) section)
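
For example, with a placeholder key:

```bash
# Optional: send test telemetry to Datadog (placeholder value)
export DD_API_KEY=<your-datadog-api-key>
```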

### Setup
@@ -62,12 +64,13 @@ To cleanup the environment you can destroy the cluster via:
make local-cluster-destroy
```

!!! Note
!!! note

If you are running on Linux but don't want to use `sudo` for the `kind` and `docker` commands, you can override this behavior by editing the following variables in `test/setup/.config` (see the sketch after this list):
* `DOCKER_CMD="docker"` for docker command
* `KIND_CMD="kind"` for kind command

* `DOCKER_CMD="docker"` for docker command
* `KIND_CMD="kind"` for kind command
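
A sketch of the corresponding `test/setup/.config` overrides, using the two variables from the note above:

```bash
# test/setup/.config: run docker and kind without sudo
DOCKER_CMD="docker"
KIND_CMD="kind"
```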

### CI Testing

System tests will be run in CI via the [system-test](./.github/workflows/system-test.yml) github action
System tests will be run in CI via the [system-test](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/system-test.yml) GitHub action.
7 changes: 4 additions & 3 deletions docs/dev-guide/wiki.md
@@ -6,13 +6,14 @@ The website [kubehound.io](https://kubehound.io) is being statically generated f
make local-wiki
```

!!! Tip
All the configuration of the website (url, menu, css, ...) is being made from [mkdocs.yml](https://github.com/DataDog/KubeHound/blob/main/mkdocs.yml) file:
!!! tip

All the configuration of the website (URL, menu, CSS, ...) is done in the [mkdocs.yml](https://github.com/DataDog/KubeHound/blob/main/mkdocs.yml) file.

## Push new version

The website is automatically updated every time there is a change in the [docs](https://github.com/DataDog/KubeHound/tree/main/docs) directory or the [mkdocs.yml](https://github.com/DataDog/KubeHound/blob/main/mkdocs.yml) file. This is handled by the [docs](https://github.com/DataDog/KubeHound/blob/main/.github/workflows/docs.yml) workflow.

!!! Note
!!! note

The domain for the wiki is set up in the [CNAME](https://github.com/DataDog/KubeHound/tree/main/docs/CNAME) file.
