This repository contains kustomized Kubernetes manifests supported by Observe. These are intended as a way to quickly start ingesting data into Observe.
You can install our kustomize stack directly using `kubectl` (Kustomize is built into kubectl from version 1.14 onwards):
kubectl apply -k github.com/observeinc/manifests//stack
This will create an `observe` namespace for you and start collecting Kubernetes state, logs and metrics.
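To confirm collection has started, you can check that the pods in the new namespace are running (pod names will vary by version and sizing):

kubectl -n observe get pods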
To send data to Observe you must create a secret containing an Observe Datastream token. A token can be generated from an Observe app or datastream.
OBSERVE_CUSTOMER='observe_customer_id'
OBSERVE_TOKEN='some_token'
kubectl -n observe create secret generic credentials \
--from-literal=OBSERVE_CUSTOMER=${OBSERVE_CUSTOMER?} \
--from-literal=OBSERVE_TOKEN=${OBSERVE_TOKEN?}
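As a quick sanity check, you can confirm the secret exists and contains both keys (this displays key names and sizes, not the values):

kubectl -n observe describe secret credentials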
The URL format used by kustomize is a git clone URL. Attempting to open it directly in a browser will result in a "page not found" error. If you would like to inspect the contents of our installation, you can either view the source on GitHub, or generate the manifest locally:
kubectl kustomize github.com/observeinc/manifests//stack
Observe uses versioned manifests. To install or uninstall a specific version, add the `ref` parameter to the URL, e.g.:
kubectl apply -k 'github.com/observeinc/manifests//stack?ref=v0.25.0'
You can find the list of published versions on the releases page.
Releases are automatically cut on a weekly basis on Tuesdays. Release tags follow Semantic versioning. Bugfixes increment the patch version number, whereas new features increment the minor version. In the absence of bugfixes or features, no release is cut.
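If you consume the stack from your own kustomization.yaml instead (as in the override example later in this document), the same `ref` parameter pins the version. A minimal sketch, reusing the v0.25.0 tag from the example above:

cat <<EOF >kustomization.yaml
resources:
  - github.com/observeinc/manifests//stack?ref=v0.25.0
EOF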
We provide four "base" components:
- `bases/events` - responsible for collecting Kubernetes state using `kube-state-events`
- `bases/logs` - responsible for collecting container logs using `fluent-bit`
- `bases/metrics` - responsible for collecting metrics using `grafana-agent`
- `bases/traces` - responsible for collecting traces using `otel-collector`
We compose these base layers into overlays. Our default `stack` overlay collects events, logs and metrics.
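If you only need a subset of signals, you can compose your own overlay from these bases. A minimal sketch, assuming the bases can be consumed directly as remote kustomize targets (they may depend on resources, such as the namespace and secret, that the stack overlay normally provides):

cat <<EOF >kustomization.yaml
resources:
  - github.com/observeinc/manifests//bases/logs?ref=main
  - github.com/observeinc/manifests//bases/metrics?ref=main
EOF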
You can add `-collector-listerwatcher-limit <batch_size>` to the `kube-state-events` container args to adjust the batch size (default is 500). This can reduce initial memory usage, which may allow you to run a smaller container.
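For example, a kustomize overlay can patch the flag into the container args. The Deployment name (`events`) and the `flag=value` syntax here are assumptions for illustration; verify the actual names with `kubectl kustomize` before applying:

cat <<EOF >kustomization.yaml
resources:
  - github.com/observeinc/manifests//stack?ref=main
patches:
  - target:
      kind: Deployment
      name: events   # assumed name; check the generated manifest
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: -collector-listerwatcher-limit=250
EOF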
We attempt to choose defaults which have a wide operating range. However, some clusters will inevitably fall outside this range. We provide additional configurations that are more appropriate for these extremes:
- `github.com/observeinc/manifests//stack/xs` - intended to run on small clusters such as development environments, where resources are scarce and reliability is less of a concern
- `github.com/observeinc/manifests//stack/m` - the default sizing, intended to run on clusters with hundreds of pods. Start here and adjust up or down accordingly.
- `github.com/observeinc/manifests//stack/l` - used for similarly sized clusters as `m`, but with higher throughput in logs, metrics or events. This may be due to verbose logging, high cardinality metrics or frequent cluster reconfiguration.
- `github.com/observeinc/manifests//stack/xl` - intended to run on large clusters with 100+ nodes. Collection is preferentially performed using daemonsets rather than deployments.
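To install a specific sizing, point `kubectl apply -k` at the corresponding overlay, e.g.:

kubectl apply -k github.com/observeinc/manifests//stack/l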
Resource limits for each sizing are as follows:

|  | xs | m | l | xl |
|---|---|---|---|---|
| events | 20m 64Mi | 50m 256Mi | 200m 1Gi | 400m 2Gi |
| logs* | 10m 64Mi | 100m 128Mi | 200m 192Mi | 500m 256Mi |
| metrics | 50m 256Mi | 250m 2Gi | 500m 4Gi | 200m* 1Gi |
* run as daemonset
Support for trace collection is available with the OpenTelemetry App installed in your Observe account. After installing the App, you will need to configure an otel-collector 'gateway' to send data to Observe.
Observe provides an `otel` kustomize stack to streamline creating the resources needed to report your spans to Observe. The DaemonSet and Deployments for these are based on the standard OpenTelemetry deployment, adapted from opentelemetry-helm-charts.
Observe's OpenTelemetry Collector requires the OBSERVE_TOKEN and OBSERVE_CUSTOMER environment variables provided by the credentials secret that is created as part of the OpenTelemetry Observe App. For more information on how this token works, please see Observe's Datastream documentation.
It's strongly recommended to use a different 'Bearer' token than the one created for the credentials in the Observe stack. After you generate a new OBSERVE_TOKEN for the OpenTelemetry App, you need to add it as a Kubernetes secret called `otel-credentials`:
OBSERVE_CUSTOMER='observe_customer_id'
OBSERVE_TOKEN='connection_token_for_otel_app'
kubectl -n observe create secret generic otel-credentials \
--from-literal=OBSERVE_CUSTOMER=${OBSERVE_CUSTOMER?} \
--from-literal=OBSERVE_TOKEN=${OBSERVE_TOKEN?}
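Since the collector pods will not start without this secret (see the note below), it's worth confirming it exists before applying the otel stack:

kubectl -n observe get secret otel-credentials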
Note: OpenTelemetry Collector pods will not start without the expected secret.
Updating from this manifest after November 11, 2022 will change the secret mounted by the collector from `credentials` to `otel-credentials`. This requires an Observe token from the OpenTelemetry Observe App to prevent a large cost spike and undefined behavior.
For general applications you can use the default Observe kustomize manifests, which provide a daemonset instance of opentelemetry-collector designed for small (<100 node) clusters:
kubectl apply -k github.com/observeinc/manifests//stack/otel
You can also use specific sizings:
kubectl apply -k github.com/observeinc/manifests//stack/otel/xs
kubectl apply -k github.com/observeinc/manifests//stack/otel/m
kubectl apply -k github.com/observeinc/manifests//stack/otel/l
kubectl apply -k github.com/observeinc/manifests//stack/otel/xl
The respective opentelemetry-collector container resources for each size:

|  | xs | m | l | xl |
|---|---|---|---|---|
| traces | 50m 128Mi (daemonset) | 250m 256Mi (daemonset) | 250m 256Mi (deployment, replicas: 10) | 250m 256Mi |
Once installed, traces can be sent to the local collector running in k8s over gRPC at `observe-traces.observe.svc.cluster.local:4317`.
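Any OpenTelemetry SDK or agent that honours the standard OTLP environment variables can be pointed at this service. A minimal sketch for an instrumented pod spec (these variable names are the OpenTelemetry SDK conventions, not something this stack sets for you):

env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: http://observe-traces.observe.svc.cluster.local:4317
  - name: OTEL_EXPORTER_OTLP_PROTOCOL
    value: grpc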
NOTE: When migrating from `xs` or `m` to `l`, ensure that you remove the previous opentelemetry-collector daemonset:

kubectl -n observe delete daemonset.apps traces
You can override any individual configuration element by creating a new kustomized manifest with our kustomized directory as a base. The following example creates a new directory with a `kustomization.yaml` set to override the `FB_DEBUG` variable in the fluent-bit environment variable configmap:
EXAMPLE_DIR=$(mktemp -d)
cat <<EOF >$EXAMPLE_DIR/kustomization.yaml
resources:
  - github.com/observeinc/manifests//stack?ref=main
configMapGenerator:
  - name: fluent-bit-env
    behavior: merge
    literals:
      - FB_DEBUG=true
EOF
You can then apply this configuration directly from kubectl:
kubectl apply -k $EXAMPLE_DIR
or, alternatively, you can build using your specific `kustomize` version:
kustomize build $EXAMPLE_DIR
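The build output can then be piped back into kubectl, mirroring the prune and delete invocations shown at the end of this document:

kustomize build $EXAMPLE_DIR | kubectl apply -f -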
It's recommended you version control your kustomized directory while tracking upstream changes through the use of branch tags.
Alternatively, you can create a configmap in the `observe` namespace containing overrides for each pod. Similar to the previous example, you can override the environment for all Observe pods by creating an `env-overrides` configmap:
echo "FB_DEBUG=true" >> example.env
kubectl create configmap -n observe env-overrides --from-env-file example.env
Unlike the kustomize method, configuration changes are not picked up automatically. You must restart the relevant pods to pick up the new environment variables. You can do this by restarting the daemonset(s) and deployment(s) in the `observe` namespace:
kubectl rollout restart -n observe daemonset
kubectl rollout restart -n observe deployment
If you are using the 1.0.0 release of the OpenTelemetry Observe app or newer, you should use the v2 collection endpoint, which provides a more efficient representation of trace observations in the datastream. For that you'll need to override the environment variable `OBSERVE_COLLECTOR_OTEL_VERSION=v2` as described above (using an override configmap).
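A minimal sketch using the configmap method (this assumes the `env-overrides` configmap does not already exist; if you created it earlier, add the line to the existing configmap instead):

echo "OBSERVE_COLLECTOR_OTEL_VERSION=v2" >> example.env
kubectl create configmap -n observe env-overrides --from-env-file example.env
kubectl rollout restart -n observe daemonset
kubectl rollout restart -n observe deployment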
All Kubernetes resources installed through this repo will have an `observeinc.com/component` label. You can therefore use this label for pruning an existing install:
kubectl kustomize github.com/observeinc/manifests//stack | kubectl apply --prune -l observeinc.com/component -f -
To delete an existing install, just use `kubectl delete`:
kubectl kustomize github.com/observeinc/manifests//stack | kubectl delete -f -