timescale/promscale-benchmark

Promscale-Benchmark

This repository contains resources for benchmarking Promscale using Avalanche. For now it focuses on using the tobs Helm chart to install the stack into a Kubernetes (K8s) cluster, and on testing ingestion of Prometheus data into Promscale through Promscale's remote-write endpoint.

Prerequisites

To run this benchmark you will need at least the following tools installed:

- kubectl
- helm
- make
- kind (only for local clusters)

Stack Setup

This repo includes a local Helm chart that installs and manages both the tobs and Avalanche configurations. It can be used on any Kubernetes cluster.

Cluster provisioning

Local

Start a local kind cluster and install cert-manager

make start-kind

Verify that you have access to the local cluster

kubectl get nodes
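If the benchmark needs a multi-node local cluster (for example, to exercise the pod-placement options described later), kind accepts a configuration file. A minimal sketch; note that the make start-kind target may already use its own configuration:

```yaml
# kind-config.yaml -- a hypothetical multi-node layout; the repo's
# `make start-kind` target may use its own configuration.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```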

Amazon EKS

Go to docs/eks.md for instructions on how to provision and manage an EKS cluster.

Stack installation

We are using Helm to install the latest tobs stack with values.yaml pre-configured for the benchmark environment.

The default installation installs tobs in the bench namespace and can be executed with:

make stack

To check if the stack was installed correctly you can run:

kubectl get po -n bench

NAME                                                         READY   STATUS      RESTARTS        AGE
opentelemetry-operator-controller-manager-7b69d9856f-8nhcm   2/2     Running     0               3m40s
prometheus-tobs-kube-prometheus-stack-prometheus-0           2/2     Running     0               3m26s
tobs-connection-secret-9m45w                                 0/1     Completed   0               3m40s
tobs-grafana-6c545d5fc8-7hr9k                                3/3     Running     0               3m40s
tobs-kube-prometheus-stack-operator-75985bb949-x2bxv         1/1     Running     0               3m40s
tobs-kube-state-metrics-5cfc875576-9pp5b                     1/1     Running     0               3m40s
tobs-opentelemetry-collector-6869598c59-tbncl                1/1     Running     0               107s
tobs-prometheus-node-exporter-hc6s6                          1/1     Running     0               3m40s
tobs-promscale-57855f5c46-zn562                              1/1     Running     4 (2m54s ago)   3m40s
tobs-timescaledb-0                                           2/2     Running     0               3m40s

By default, Promscale is set up to send traces to Jaeger. If you wish to view those traces, run:

make jaeger

To open the Jaeger UI, run make jaeger-ui and point a browser at http://localhost:16686.

Updating

If you want to modify the default stack installation (e.g. to change the Promscale image version), edit the stack/values.yaml file and run:

make stack
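For example, pinning the Promscale image to a specific version might look like the following. The key names here are illustrative, so check stack/values.yaml for the chart's actual structure before editing:

```yaml
# stack/values.yaml (excerpt) -- key names are illustrative; consult
# the chart's actual values for the real structure.
promscale:
  image: timescale/promscale:0.11.0   # hypothetical version pin
```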

Pod placement

To control which nodes pods are placed on, use the methods described in docs/pod-placement.md.
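Pod placement in Kubernetes is typically driven by standard scheduling fields; a hedged sketch of what such a snippet can look like (the `role` label and `dedicated` taint are made-up examples, and docs/pod-placement.md describes the repo's actual approach):

```yaml
# Standard Kubernetes scheduling fields; the `role` label and the
# `dedicated` taint below are hypothetical examples.
nodeSelector:
  role: benchmark
tolerations:
  - key: dedicated
    operator: Equal
    value: benchmark
    effect: NoSchedule
```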

Benchmark scenarios

Available scenarios

To check the available scenarios explore the scenarios directory.

Run a scenario

To quickly execute a benchmark scenario you can run:

make scenarios/<TYPE>/<NAME>

Where <TYPE> is the type of scenario you want to run and <NAME> is the name of the scenario.

For example:

make scenarios/metrics/ingest-direct

This will run the load-generator from the ingest-direct scenario in the metrics type.
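The target name follows directly from the directory layout. A trivial illustration of the convention (this helper function is not part of the repo; it only shows the naming):

```shell
#!/bin/sh
# Illustrates the `scenarios/<TYPE>/<NAME>` make-target convention.
# This helper is hypothetical; it only demonstrates the naming.
scenario_target() {
  type="$1"
  name="$2"
  echo "scenarios/${type}/${name}"
}

# Prints the target used in the example above:
scenario_target metrics ingest-direct
```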

Note: Keep in mind that by default all scenarios are deployed in separate namespaces.

Alternatively, you can go to the scenario directory and run the make command there.

Configure scenario

To configure a scenario, follow the instructions in its dedicated README.md file, located in the scenario's directory.

Grafana

You can easily log into Grafana and view dashboards with a simple command:

make grafana

Once it is running, you can log into Grafana locally at http://localhost:8080.

Note: By default, Grafana is configured with anonymous access, which doesn't require a password.