The Splunk OpenTelemetry Connector for Kubernetes is a Helm chart for the Splunk Distribution of OpenTelemetry Collector. The chart creates a DaemonSet along with other Kubernetes objects in your cluster and provides a unified way to receive, process, and export metric, trace, and log data for Splunk Observability Cloud.
Installations that use this distribution can receive direct help from Splunk's support teams. Customers are free to use the core OpenTelemetry OSS components (several do!), and we will provide best-effort guidance for any issues that crop up; however, only the Splunk distributions are in scope for official Splunk support and support-related SLAs.
This distribution currently supports:
- Splunk APM via the `sapm` exporter. The `otlphttp` exporter can be used with a custom configuration. More information available here.
- Splunk Infrastructure Monitoring via the `signalfx` exporter. More information available here.
- Splunk Log Observer via the `splunk_hec` exporter.
- Splunk Cloud or Splunk Enterprise via the `splunk_hec` exporter.
🚧 This project is currently in BETA. It is officially supported by Splunk. However, breaking changes MAY be introduced.
This helm chart is tested and works with default configurations on the following Kubernetes distributions:
- Vanilla (unmodified version) Kubernetes
- Amazon Elastic Kubernetes Service
- Azure Kubernetes Service
- Google Kubernetes Engine
- Minikube
While this helm chart should work for other Kubernetes distributions, it may require additional configuration in values.yaml.
The following components are required to use the helm chart:
- Helm 3 (Helm 2 is not supported)
- Kubernetes cluster
- Splunk Access Token
- Splunk Realm
To install splunk-otel-collector in a Kubernetes cluster, at least three parameters must be provided:
- `splunkRealm` (default: `us0`): Splunk realm to send telemetry data to.
- `splunkAccessToken`: Your Splunk org access token.
- `clusterName`: Arbitrary value that will identify your Kubernetes cluster in Splunk.
To deploy the chart, run the following commands, replacing the parameters above with their appropriate values.
$ helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
$ helm install my-splunk-otel-collector --set="splunkRealm=us0,splunkAccessToken=xxxxxx,clusterName=my-cluster" splunk-otel-collector-chart/splunk-otel-collector
Instead of setting helm values as arguments, a YAML file can be provided:
$ helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector
To uninstall/delete a deployment with the name `my-splunk-otel-collector`:
$ helm delete my-splunk-otel-collector
The command removes all the Kubernetes components associated with the chart and deletes the release.
The values.yaml file lists all supported configurable parameters for this chart, along with detailed explanations. Read through it to understand how to configure this chart.
Also check examples of chart configuration.
At a minimum, you need to configure the following values:
clusterName: my-k8s-cluster
splunkAccessToken: xxxxxx
splunkRealm: us0
Use the `provider` parameter to provide information about the cloud provider, if any. The supported options are:
- `aws` - Amazon Web Services
- `gcp` - Google Cloud
- `azure` - Microsoft Azure
This value can be omitted if none of the values apply.
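For instance, a cluster running on AWS infrastructure would add the following line to values.yaml (the choice of aws here is only an illustration):
provider: aws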
Use the `distro` parameter to provide information about the underlying Kubernetes deployment. This parameter allows the connector to automatically scrape additional metadata. The supported options are:
- `eks` - Amazon EKS
- `gke` - Google GKE
- `aks` - Azure AKS
This value can be omitted if none of the values apply.
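For example, on Amazon EKS the two parameters would typically be set together; a minimal sketch of the relevant values.yaml lines (assuming EKS is the managed service in use):
provider: aws
distro: eks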
The optional `environment` parameter can be used to specify an additional `deployment.environment` attribute that will be added to all telemetry data. It helps Splunk Observability users investigate data coming from different sources separately.
Value examples: development, staging, production, etc.
environment: production
By default, all telemetry data (metrics, traces, and logs) is collected from the Kubernetes cluster. Any kind of telemetry can be disabled with the following parameters:
- `metricsEnabled: false`
- `tracesEnabled: false`
- `logsEnabled: false`
For example, to install the connector only for logs:
$ helm install my-splunk-otel-collector \
--set="splunkRealm=us0,splunkAccessToken=xxxxxx,clusterName=my-cluster,metricsEnabled=false,tracesEnabled=false" \
splunk-otel-collector-chart/splunk-otel-collector
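The same logs-only selection can be expressed in a values file instead of --set flags. A minimal sketch using the parameters documented above (the token and cluster name are placeholders):
clusterName: my-cluster
splunkAccessToken: xxxxxx
splunkRealm: us0
metricsEnabled: false
tracesEnabled: false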
Use the `autodetect` config option to enable additional telemetry sources.
Set `autodetect.prometheus=true` if you want the otel-collector agent to scrape prometheus metrics from pods that have generic prometheus-style annotations:
- `prometheus.io/scrape: true`: Prometheus metrics will be scraped only from pods having this annotation;
- `prometheus.io/path`: path to scrape the metrics from, default `/metrics`;
- `prometheus.io/port`: port to scrape the metrics from, default `9090`.
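For reference, a pod template carrying these annotations might look like the following sketch (the pod name, image, and port 8080 are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: my-app                      # hypothetical pod name
  annotations:
    prometheus.io/scrape: "true"    # opt this pod in to scraping
    prometheus.io/path: /metrics    # default path, shown for clarity
    prometheus.io/port: "8080"      # overrides the default 9090
spec:
  containers:
    - name: my-app
      image: my-app:latest          # hypothetical image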
Set `autodetect.istio=true` if the otel-collector agent is running in an Istio environment, to make sure that all traces, metrics, and logs reported by Istio are collected in a unified manner.
For example, to enable both Prometheus and Istio telemetry, add the following lines to your values.yaml file:
autodetect:
istio: true
prometheus: true
The rendered directory contains pre-rendered Kubernetes resource manifests.
#163 Auto-detection of Prometheus metrics is disabled by default: if you rely on automatic detection of Prometheus endpoints to scrape Prometheus metrics from pods in your Kubernetes cluster, make sure to add this configuration to your values.yaml:
autodetect:
prometheus: true