From 34c064022873cdd33de2ca2794c307c39ee06dbd Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Fri, 21 Jun 2024 15:47:13 +0100 Subject: [PATCH 01/35] draft: killercoda enabled alloy doc --- docs/sources/send-data/alloy/_index.md | 6 + .../alloy/examples/alloy-otel-logs.md | 194 ++++++++++++++++++ 2 files changed, 200 insertions(+) create mode 100644 docs/sources/send-data/alloy/_index.md create mode 100644 docs/sources/send-data/alloy/examples/alloy-otel-logs.md diff --git a/docs/sources/send-data/alloy/_index.md b/docs/sources/send-data/alloy/_index.md new file mode 100644 index 000000000000..f6d269c2ff4a --- /dev/null +++ b/docs/sources/send-data/alloy/_index.md @@ -0,0 +1,6 @@ +--- +title: Ingesting logs to Loki using Alloy +menuTitle: Grafana Alloy +description: Configuring Grafana Alloy to send logs to Loki. +weight: 250 +--- diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md new file mode 100644 index 000000000000..f295af92c9eb --- /dev/null +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -0,0 +1,194 @@ +--- +title: Ingesting OpenTelemetry logs to Loki using Alloy +menuTitle: Ingesting OpenTelemetry logs using Alloy +description: Configuring Grafana Alloy to ingest OpenTelemetry logs to Loki. +weight: 250 +killercoda: + title: Ingesting OpenTelemetry logs to Loki using Alloy + description: Configuring Grafana Alloy to ingest OpenTelemetry logs to Loki. + details: + finish: + text: finish.md + backend: + imageid: ubuntu +--- + + + +# Ingesting OpenTelemetry logs to Loki using Alloy + +Alloy natively supports ingesting OpenTelemetry logs. In this example, we will configure Alloy to ingest OpenTelemetry logs to Loki. + +## Dependencies + +Before you begin, ensure you have the following to run the demo: + +- Docker +- Docker Compose + + + +{{< admonition type="note" >}} +Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). + +{{< /admonition >}} + + + +## Scenario + +In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: + +- foo + + + + + +## Step 1: Environment setup + +In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose. + +1. To get started, clone the repository that contains our demo application: + + ```bash + git clone -b microservice-otel https://github.com/grafana/loki-fundamentals.git + ``` + +1. Next we will spin up our observability stack using Docker Compose: + + + ```bash + docker compose -f loki-fundamentals/docker-compose.yml up -d + ``` + + + + + + + + + + + + This will spin up the following services: + ```bash + ✔ Container loki-fundamentals-grafana-1 Started + ✔ Container loki-fundamentals-loki-1 Started + ✔ Container loki-fundamentals-alloy-1 Started + ``` + +We will be access two UI interfaces: + +- Grafana at [http://localhost:3000](http://localhost:3000) +- Alloy at [http://localhost:12345](http://localhost:12345) + + + + + +## Step 2: Configure Alloy to ingest OpenTelemetry logs + +To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. 
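Before diving into the individual blocks, it can help to see how they fit together. The three components described below form a single pipeline: the OTLP receiver hands logs to the batch processor, which hands them to the OTLP HTTP exporter pointing at Loki. As a rough sketch, the finished `config.alloy` will look like this (each block is explained in the sections that follow):

```alloy
// Receive OpenTelemetry logs over HTTP and gRPC.
otelcol.receiver.otlp "default" {
  http {}
  grpc {}

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}

// Batch logs before they are exported.
otelcol.processor.batch "default" {
  output {
    logs = [otelcol.exporter.otlphttp.default.input]
  }
}

// Send the batched logs to Loki's OTLP endpoint.
otelcol.exporter.otlphttp "default" {
  client {
    endpoint = "http://loki:3100/otlp"
  }
}
```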
+ +### OpenTelelmetry Logs Receiver + +First, we will configure the OpenTelemetry logs receiver. This receiver will accept logs via HTTP and gRPC. + +1. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + + + ```alloy + otelcol.receiver.otlp "default" { + http {} + grpc {} + + output { + logs = [otelcol.processor.batch.default.input] + } + } + ``` + + + + +### OpenTelemetry Logs Processor + +Next, we will configure the OpenTelemetry logs processor. This processor will batch the logs before sending them to the logs exporter. + +1. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + + ```alloy + otelcol.processor.batch "default" { + output { + logs = [otelcol.exporter.otlphttp.default.input] + } + } + ``` + + +### OpenTelemetry Logs Exporter + +Lastly, we will configure the OpenTelemetry logs exporter. This exporter will send the logs to Loki. + +1. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + + ```alloy + otelcol.exporter.otlphttp "default" { + client { + endpoint = "http://loki:3100/otlp" + } + } + ``` + + +Once added, save the file. Then run the following command to request Alloy to reload the configuration: + + +```bash +curl -X POST http://localhost:12345/-/reload +``` + + +## Stuck? Need help? + +If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: + + +```bash +cp loki-fundamentals/completed/config.alloy loki-fundamentals/config.alloy +``` + + + + + + +## Step 3: Start the Carnivorous Greenhouse + +In this step, we will start the Carnivorous Greenhouse application. To start the application, run the following command: + +{{< admonition type="note" >}} +This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first. +{{< /admonition >}} + + + + + + + +```bash + docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +``` + + +This will start the following services: +``` + + +``` + + + From 2d8ad39bec597375b09ae7aee6fce3d32acd0d0f Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Fri, 21 Jun 2024 16:24:03 +0100 Subject: [PATCH 02/35] Updated steps --- .../alloy/examples/alloy-otel-logs.md | 94 +++++++++++-------- 1 file changed, 56 insertions(+), 38 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index f295af92c9eb..7d375104c57c 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -27,19 +27,24 @@ Before you begin, ensure you have the following to run the demo: - Docker Compose - {{< admonition type="note" >}} Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). - {{< /admonition >}} - ## Scenario In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: -- foo +- **User Service:** Mangages user data and authentication for the application. Such as creating users and logging in. 
+- **plant Service:** Manges the creation of new plants and updates other services when a new plant is created. +- **Simulation Service:** Generates sensor data for each plant. +- **Websocket Service:** Manages the websocket connections for the application. +- **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. +- **Main App:** The main application that ties all the services together. +- **Database:** A database that stores user and plant data. + +Each service generates logs using the OpenTelemetry SDK and exports to Alloy in the OpenTelemetry format. Alloy then ingests the logs and sends them to Loki. We will configure Alloy to ingest OpenTelemetry logs, send them to Loki, and view the logs in Grafana. @@ -49,7 +54,7 @@ In this scenario, we have a microservices application called the Carnivourse Gre In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose. -1. To get started, clone the repository that contains our demo application: +1. To get started, clone the repository that contains our demo application: ```bash git clone -b microservice-otel https://github.com/grafana/loki-fundamentals.git @@ -96,52 +101,52 @@ To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy con First, we will configure the OpenTelemetry logs receiver. This receiver will accept logs via HTTP and gRPC. -1. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + - ```alloy - otelcol.receiver.otlp "default" { - http {} - grpc {} +```alloy + otelcol.receiver.otlp "default" { + http {} + grpc {} - output { - logs = [otelcol.processor.batch.default.input] - } - } - ``` + output { + logs = [otelcol.processor.batch.default.input] + } + } +``` - + ### OpenTelemetry Logs Processor Next, we will configure the OpenTelemetry logs processor. This processor will batch the logs before sending them to the logs exporter. -1. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - - ```alloy - otelcol.processor.batch "default" { - output { - logs = [otelcol.exporter.otlphttp.default.input] - } - } - ``` - +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + +```alloy + otelcol.processor.batch "default" { + output { + logs = [otelcol.exporter.otlphttp.default.input] + } + } +``` + ### OpenTelemetry Logs Exporter Lastly, we will configure the OpenTelemetry logs exporter. This exporter will send the logs to Loki. -1. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - - ```alloy - otelcol.exporter.otlphttp "default" { - client { - endpoint = "http://loki:3100/otlp" - } - } - ``` - +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + +```alloy +otelcol.exporter.otlphttp "default" { + client { + endpoint = "http://loki:3100/otlp" + } +} +``` + Once added, save the file. Then run the following command to request Alloy to reload the configuration: @@ -185,10 +190,23 @@ This docker-compose file relies on the `loki-fundamentals_loki` docker network. 
This will start the following services: +```bash + ✔ Container greenhouse-db-1 Started + ✔ Container greenhouse-websocket_service-1 Started + ✔ Container greenhouse-bug_service-1 Started + ✔ Container greenhouse-user_service-1 Started + ✔ Container greenhouse-plant_service-1 Started + ✔ Container greenhouse-simulation_service-1 Started + ✔ Container greenhouse-main_app-1 Started ``` +Once started, you can access the Carnivorous Greenhouse application at [http://localhost:5005](http://localhost:5005). Generate some logs by interacting with the application in the following ways: -``` +- Create a user +- Log in +- Create a few plants to monitor +- Enable bug mode to activate the bug service. This will cause services to fail and generate additional logs. +Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). From ed0f540f18a67cff92f57835249d69bf5f9185b3 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Fri, 21 Jun 2024 16:38:09 +0100 Subject: [PATCH 03/35] Updated steps --- .../alloy/examples/alloy-otel-logs.md | 19 ++++++++++++++----- 1 file changed, 14 insertions(+), 5 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 7d375104c57c..fd76c2b29fcf 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -97,6 +97,10 @@ We will be access two UI interfaces: To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. + + + + ### OpenTelelmetry Logs Receiver First, we will configure the OpenTelemetry logs receiver. This receiver will accept logs via HTTP and gRPC. @@ -125,11 +129,11 @@ Next, we will configure the OpenTelemetry logs processor. This processor will ba Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: ```alloy - otelcol.processor.batch "default" { +otelcol.processor.batch "default" { output { - logs = [otelcol.exporter.otlphttp.default.input] + logs = [otelcol.exporter.otlphttp.default.input] } - } +} ``` @@ -148,14 +152,17 @@ otelcol.exporter.otlphttp "default" { ``` -Once added, save the file. Then run the following command to request Alloy to reload the configuration: +### Reload the Alloy configuration +Once added, save the file. Then run the following command to request Alloy to reload the configuration: ```bash curl -X POST http://localhost:12345/-/reload ``` +The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). + ## Stuck? Need help? If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: @@ -163,6 +170,7 @@ If you get stuck or need help creating the configuration, you can copy and repla ```bash cp loki-fundamentals/completed/config.alloy loki-fundamentals/config.alloy +curl -X POST http://localhost:12345/-/reload ``` @@ -180,7 +188,7 @@ This docker-compose file relies on the `loki-fundamentals_loki` docker network. 
- + @@ -210,3 +218,4 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). + \ No newline at end of file From 678176db24912d72b9b170a9feadbd42d65a9983 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Fri, 21 Jun 2024 16:41:02 +0100 Subject: [PATCH 04/35] updated url order --- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 6 +----- 1 file changed, 1 insertion(+), 5 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index fd76c2b29fcf..4d2f6f9e6fc4 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -75,8 +75,6 @@ In this step, we will set up our environment by cloning the repository that cont - - This will spin up the following services: ```bash ✔ Container loki-fundamentals-grafana-1 Started @@ -85,10 +83,8 @@ In this step, we will set up our environment by cloning the repository that cont ``` We will be access two UI interfaces: - -- Grafana at [http://localhost:3000](http://localhost:3000) - Alloy at [http://localhost:12345](http://localhost:12345) - +- Grafana at [http://localhost:3000](http://localhost:3000) From 4f009de452a6972363d9aadaeffe045c4b79e38e Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Fri, 21 Jun 2024 16:44:10 +0100 Subject: [PATCH 05/35] fixed dock compose notation --- .../send-data/alloy/examples/alloy-otel-logs.md | 13 ++++++++++--- 1 file changed, 10 insertions(+), 3 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 4d2f6f9e6fc4..c335e6077d18 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -187,11 +187,18 @@ This docker-compose file relies on the `loki-fundamentals_loki` docker network. - + ```bash - docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +docker compose -f lloki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build ``` - + + + + + + + + This will start the following services: ```bash From 7ee6176e5d9dd0bb14d3462549268638d55ceead Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Thu, 27 Jun 2024 21:20:35 +0100 Subject: [PATCH 06/35] examples --- docs/sources/send-data/alloy/_index.md | 56 +++++ .../alloy/examples/alloy-otel-kafka.md | 221 ++++++++++++++++++ .../alloy/examples/alloy-otel-logs.md | 89 +++---- 3 files changed, 323 insertions(+), 43 deletions(-) create mode 100644 docs/sources/send-data/alloy/examples/alloy-otel-kafka.md diff --git a/docs/sources/send-data/alloy/_index.md b/docs/sources/send-data/alloy/_index.md index f6d269c2ff4a..47e2e88d0a87 100644 --- a/docs/sources/send-data/alloy/_index.md +++ b/docs/sources/send-data/alloy/_index.md @@ -4,3 +4,59 @@ menuTitle: Grafana Alloy description: Configuring Grafana Alloy to send logs to Loki. weight: 250 --- + + +# Ingesting logs to Loki using Alloy + +Grafana Alloy is a versatile observability collector that can ingest logs in various formats and send them to Loki. 
We recommend Alloy as the primary method for sending logs to Loki, as it provides a more robust and feature-rich solution for building a highly scalable and reliable observability pipeline. + +## Installing Alloy + +To get started with Grafana Alloy and send logs to Loki, you need to install and configure Alloy. You can follow the [official documentation](https://grafana.com/docs/alloy/latest/get-started/install/) to install Alloy on your preferred platform. + +## Components of Alloy for logs + +Alloy pipelines are built using components that perform specific functions. For logs these can be broken down into three categories: + +- **Collector:** These components collect/receive logs from various sources. This can be scraping logs from a file, receiving logs over HTTP, gRPC or ingesting logs from a message queue. +- **Transformer:** These components can be used to manipulate logs before they are sent to a writer. This can be used to add additional metadata, filter logs or batch logs before sending them to a writer. +- **Writer:** These components send logs to the desired destination. Our documentation will focus on sending logs to Loki, but Alloy supports sending logs to various destinations. + +### Log components in Alloy + +Here is a non-exhaustive list of components that can be used to build a log pipeline in Alloy. For a complete list of components, refer to the [official documentation](https://grafana.com/docs/alloy/latest/reference/components/). + +| Type | Component | +|------------|-----------------------------------------------------------------------------------------------------| +| Collector | [loki.source.api](https://grafana.com/docs/alloy/latest/reference/components/loki.source.api/) | +| Collector | [loki.source.awsfirehose](https://grafana.com/docs/alloy/latest/reference/components/loki.source.awsfirehose/) | +| Collector | [loki.source.azure_event_hubs](https://grafana.com/docs/alloy/latest/reference/components/loki.source.azure_event_hubs/) | +| Collector | [loki.source.cloudflare](https://grafana.com/docs/alloy/latest/reference/components/loki.source.cloudflare/) | +| Collector | [loki.source.docker](https://grafana.com/docs/alloy/latest/reference/components/loki.source.docker/) | +| Collector | [loki.source.file](https://grafana.com/docs/alloy/latest/reference/components/loki.source.file/) | +| Collector | [loki.source.gcplog](https://grafana.com/docs/alloy/latest/reference/components/loki.source.gcplog/) | +| Collector | [loki.source.gelf](https://grafana.com/docs/alloy/latest/reference/components/loki.source.gelf/) | +| Collector | [loki.source.heroku](https://grafana.com/docs/alloy/latest/reference/components/loki.source.heroku/) | +| Collector | [loki.source.journal](https://grafana.com/docs/alloy/latest/reference/components/loki.source.journal/) | +| Collector | [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka/) | +| Collector | [loki.source.kubernetes](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kubernetes/) | +| Collector | [loki.source.kubernetes_events](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kubernetes_events/) | +| Collector | [loki.source.podlogs](https://grafana.com/docs/alloy/latest/reference/components/loki.source.podlogs/) | +| Collector | [loki.source.syslog](https://grafana.com/docs/alloy/latest/reference/components/loki.source.syslog/) | +| Collector | 
[loki.source.windowsevent](https://grafana.com/docs/alloy/latest/reference/components/loki.source.windowsevent/) | +| Collector | [otelcol.receiver.loki](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.loki/) | +| Transformer| [loki.relabel](https://grafana.com/docs/alloy/latest/reference/components/loki.relabel/) | +| Transformer| [loki.process](https://grafana.com/docs/alloy/latest/reference/components/loki.process/) | +| Writer | [loki.write](https://grafana.com/docs/alloy/latest/reference/components/loki.write/) | +| Writer | [otelcol.exporter.loki](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.loki/) | +| Writer | [otelcol.exporter.logging](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.logging/) | + + +## Interactive Tutorials + +To learn more about how to configure Alloy to send logs to Loki within different scenarios, follow these interactive tutorials: + +- [Sending OpenTelemetry logs to Loki using Alloy](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs) +- [Sending logs over Kafka to Loki using Alloy](https://killercoda.com/grafana-labs/course/loki/alloy-http-logs) + + diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-kafka.md b/docs/sources/send-data/alloy/examples/alloy-otel-kafka.md new file mode 100644 index 000000000000..c757e9fc3a7e --- /dev/null +++ b/docs/sources/send-data/alloy/examples/alloy-otel-kafka.md @@ -0,0 +1,221 @@ +--- +title: Recive OpenTelemetry logs via Kafka using Alloy and Loki +menuTitle: Recive OpenTelemetry logs via Kafka using Alloy and Loki +description: Configuring Grafana Alloy to recive OpenTelemetry logs via Kafka and send them to Loki. +weight: 250 +killercoda: + title: Recive OpenTelemetry logs via Kafka using Alloy and Loki + description: Configuring Grafana Alloy to recive OpenTelemetry logs via Kafka and send them to Loki. + backend: + imageid: ubuntu +--- + + + +# Recive OpenTelemetry logs via Kafka using Alloy and Loki + +Alloy natively supports ingesting OpenTelemetry logs via Kafka. There maybe several scenarios where you may want to ingest logs via Kafka. For instance you may already use Kafka to aggregate logs from several otel collectors. Or your application may already be writing logs to Kafka and you want to ingest them into Loki. In this example, we will make use of 3 Alloy components to achieve this: + +## Dependencies + +Before you begin, ensure you have the following to run the demo: + +- Docker +- Docker Compose + + +{{< admonition type="note" >}} +Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). +{{< /admonition >}} + + +## Scenario + +In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: + +- **User Service:** Mangages user data and authentication for the application. Such as creating users and logging in. +- **plant Service:** Manges the creation of new plants and updates other services when a new plant is created. +- **Simulation Service:** Generates sensor data for each plant. +- **Websocket Service:** Manages the websocket connections for the application. +- **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. 
+- **Main App:** The main application that ties all the services together. +- **Database:** A database that stores user and plant data. + +Each service generates logs using the OpenTelemetry SDK and exports to Alloy in the OpenTelemetry format. Alloy then ingests the logs and sends them to Loki. We will configure Alloy to ingest OpenTelemetry logs, send them to Loki, and view the logs in Grafana. + + + + + +## Step 1: Environment setup + +In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose. + +1. To get started, clone the repository that contains our demo application: + + ```bash + git clone -b microservice-otel https://github.com/grafana/loki-fundamentals.git + ``` + +1. Next we will spin up our observability stack using Docker Compose: + + + ```bash + docker compose -f loki-fundamentals/docker-compose.yml up -d + ``` + + + + + + + + + + This will spin up the following services: + ```bash + ✔ Container loki-fundamentals-grafana-1 Started + ✔ Container loki-fundamentals-loki-1 Started + ✔ Container loki-fundamentals-alloy-1 Started + ``` + +We will be access two UI interfaces: +- Alloy at [http://localhost:12345](http://localhost:12345) +- Grafana at [http://localhost:3000](http://localhost:3000) + + + + +## Step 2: Configure Alloy to ingest OpenTelemetry logs + +To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. + + + + + +### OpenTelelmetry Logs Receiver + +First, we will configure the OpenTelemetry logs receiver. This receiver will accept logs via HTTP and gRPC. + +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + + +```alloy + otelcol.receiver.otlp "default" { + http {} + grpc {} + + output { + logs = [otelcol.processor.batch.default.input] + } + } +``` + + + + +### OpenTelemetry Logs Processor + +Next, we will configure the OpenTelemetry logs processor. This processor will batch the logs before sending them to the logs exporter. + +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + +```alloy +otelcol.processor.batch "default" { + output { + logs = [otelcol.exporter.otlphttp.default.input] + } +} +``` + + +### OpenTelemetry Logs Exporter + +Lastly, we will configure the OpenTelemetry logs exporter. This exporter will send the logs to Loki. + +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + +```alloy +otelcol.exporter.otlphttp "default" { + client { + endpoint = "http://loki:3100/otlp" + } +} +``` + + +### Reload the Alloy configuration + +Once added, save the file. Then run the following command to request Alloy to reload the configuration: + +```bash +curl -X POST http://localhost:12345/-/reload +``` + + +The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). + +## Stuck? Need help? 
+ +If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: + + +```bash +cp loki-fundamentals/completed/config.alloy loki-fundamentals/config.alloy +curl -X POST http://localhost:12345/-/reload +``` + + + + + + +## Step 3: Start the Carnivorous Greenhouse + +In this step, we will start the Carnivorous Greenhouse application. To start the application, run the following command: + +{{< admonition type="note" >}} +This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first. +{{< /admonition >}} + + + + + + + +```bash +docker compose -f lloki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +``` + + + + + + + + + +This will start the following services: +```bash + ✔ Container greenhouse-db-1 Started + ✔ Container greenhouse-websocket_service-1 Started + ✔ Container greenhouse-bug_service-1 Started + ✔ Container greenhouse-user_service-1 Started + ✔ Container greenhouse-plant_service-1 Started + ✔ Container greenhouse-simulation_service-1 Started + ✔ Container greenhouse-main_app-1 Started +``` + +Once started, you can access the Carnivorous Greenhouse application at [http://localhost:5005](http://localhost:5005). Generate some logs by interacting with the application in the following ways: + +- Create a user +- Log in +- Create a few plants to monitor +- Enable bug mode to activate the bug service. This will cause services to fail and generate additional logs. + +Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). + + + \ No newline at end of file diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index c335e6077d18..c611234d1426 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -1,11 +1,11 @@ --- -title: Ingesting OpenTelemetry logs to Loki using Alloy -menuTitle: Ingesting OpenTelemetry logs using Alloy -description: Configuring Grafana Alloy to ingest OpenTelemetry logs to Loki. +title: Sending OpenTelemetry logs to Loki using Alloy +menuTitle: Sending OpenTelemetry logs to Loki using Alloy +description: Configuring Grafana Alloy to send OpenTelemetry logs to Loki. weight: 250 killercoda: - title: Ingesting OpenTelemetry logs to Loki using Alloy - description: Configuring Grafana Alloy to ingest OpenTelemetry logs to Loki. + title: Sending OpenTelemetry logs to Loki using Alloy + description: Configuring Grafana Alloy to send OpenTelemetry logs to Loki. details: finish: text: finish.md @@ -13,11 +13,14 @@ killercoda: imageid: ubuntu --- - + -# Ingesting OpenTelemetry logs to Loki using Alloy +# Sending OpenTelemetry logs to Loki using Alloy -Alloy natively supports ingesting OpenTelemetry logs. In this example, we will configure Alloy to ingest OpenTelemetry logs to Loki. +Alloy natively supports receiving logs in the OpenTelemetry format. This allows you to send logs from applications instrumented with OpenTelemetry to Alloy, which can then be sent to Loki for storage and visualization in Grafana. In this example, we will make use of 3 Alloy components to achieve this: +- **OpenTelemetry Logs Receiver:** This receiver will accept logs via HTTP and gRPC. 
+- **OpenTelemetry Logs Processor:** This processor will batch the logs before sending them to the logs exporter. +- **OpenTelemetry Logs Exporter:** This exporter will send the logs to Loki. ## Dependencies @@ -26,11 +29,11 @@ Before you begin, ensure you have the following to run the demo: - Docker - Docker Compose - + {{< admonition type="note" >}} Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). {{< /admonition >}} - + ## Scenario @@ -46,34 +49,34 @@ In this scenario, we have a microservices application called the Carnivourse Gre Each service generates logs using the OpenTelemetry SDK and exports to Alloy in the OpenTelemetry format. Alloy then ingests the logs and sends them to Loki. We will configure Alloy to ingest OpenTelemetry logs, send them to Loki, and view the logs in Grafana. - + - + ## Step 1: Environment setup In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose. 1. To get started, clone the repository that contains our demo application: - + ```bash git clone -b microservice-otel https://github.com/grafana/loki-fundamentals.git ``` - + 1. Next we will spin up our observability stack using Docker Compose: - + ```bash docker compose -f loki-fundamentals/docker-compose.yml up -d ``` - + - + - + This will spin up the following services: ```bash @@ -85,24 +88,24 @@ In this step, we will set up our environment by cloning the repository that cont We will be access two UI interfaces: - Alloy at [http://localhost:12345](http://localhost:12345) - Grafana at [http://localhost:3000](http://localhost:3000) - + - + ## Step 2: Configure Alloy to ingest OpenTelemetry logs To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. - + - + ### OpenTelelmetry Logs Receiver First, we will configure the OpenTelemetry logs receiver. This receiver will accept logs via HTTP and gRPC. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - + ```alloy otelcol.receiver.otlp "default" { @@ -115,7 +118,7 @@ Open the `config.alloy` file in the `loki-fundamentals` directory and copy the f } ``` - + ### OpenTelemetry Logs Processor @@ -123,7 +126,7 @@ Open the `config.alloy` file in the `loki-fundamentals` directory and copy the f Next, we will configure the OpenTelemetry logs processor. This processor will batch the logs before sending them to the logs exporter. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - + ```alloy otelcol.processor.batch "default" { output { @@ -131,14 +134,14 @@ otelcol.processor.batch "default" { } } ``` - + ### OpenTelemetry Logs Exporter Lastly, we will configure the OpenTelemetry logs exporter. This exporter will send the logs to Loki. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - + ```alloy otelcol.exporter.otlphttp "default" { client { @@ -146,16 +149,16 @@ otelcol.exporter.otlphttp "default" { } } ``` - + ### Reload the Alloy configuration Once added, save the file. 
Then run the following command to request Alloy to reload the configuration: - + ```bash curl -X POST http://localhost:12345/-/reload ``` - + The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). @@ -163,42 +166,42 @@ The new configuration will be loaded this can be verified by checking the Alloy If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: - + ```bash cp loki-fundamentals/completed/config.alloy loki-fundamentals/config.alloy curl -X POST http://localhost:12345/-/reload ``` - + - + - + ## Step 3: Start the Carnivorous Greenhouse In this step, we will start the Carnivorous Greenhouse application. To start the application, run the following command: - + {{< admonition type="note" >}} This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first. {{< /admonition >}} - + - + - + - + ```bash docker compose -f lloki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build ``` - + - + - + This will start the following services: ```bash @@ -221,4 +224,4 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). - \ No newline at end of file + \ No newline at end of file From 8a1015c1cf56363e3079de1bfea1f1ac848b6faf Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Mon, 1 Jul 2024 13:35:05 +0100 Subject: [PATCH 07/35] Added Kafka example --- .../alloy/examples/alloy-kafka-logs.md | 363 ++++++++++++++++++ .../alloy/examples/alloy-otel-kafka.md | 221 ----------- .../alloy/examples/alloy-otel-logs.md | 63 ++- 3 files changed, 407 insertions(+), 240 deletions(-) create mode 100644 docs/sources/send-data/alloy/examples/alloy-kafka-logs.md delete mode 100644 docs/sources/send-data/alloy/examples/alloy-otel-kafka.md diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md new file mode 100644 index 000000000000..e7f835eb02f7 --- /dev/null +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -0,0 +1,363 @@ +--- +title: Sending Logs to Loki via Kafka using Alloy +menuTitle: Sending Logs to Loki via Kafka using Alloy +description: Configuring Grafana Alloy to recive logs via Kafka and send them to Loki. +weight: 250 +killercoda: + title: Sending Logs to Loki via Kafka using Alloy + description: Configuring Grafana Alloy to recive logs via Kafka and send them to Loki. + backend: + imageid: ubuntu +--- + + + +# Sending Logs to Loki via Kafka using Alloy + +Alloy nativley supports receiving logs via Kafka. In this example, we will configure Alloy to recive logs via kafka using two different methods: +- [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other loki.* components. +- [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other otelcol.* components. 
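Put side by side, the two pipelines built in this example look roughly like the sketch below. Treat it only as an overview; each component is configured and explained step by step in the sections that follow:

```alloy
// Method 1: JSON log messages from the "loki" topic, forwarded to a Loki write component.
loki.source.kafka "raw" {
  brokers    = ["kafka:9092"]
  topics     = ["loki"]
  forward_to = [loki.write.http.receiver]
}

// Method 2: OTLP-encoded messages from the "otlp" topic, forwarded through
// otelcol.processor.batch and otelcol.exporter.otlphttp to Loki's OTLP endpoint.
otelcol.receiver.kafka "default" {
  brokers          = ["kafka:9092"]
  protocol_version = "2.0.0"
  topic            = "otlp"
  encoding         = "otlp_proto"

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}
```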
+ +## Dependencies + +Before you begin, ensure you have the following to run the demo: + +- Docker +- Docker Compose + + +{{< admonition type="note" >}} +Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-kafka-logs). +{{< /admonition >}} + + +## Scenario + +In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: + +- **User Service:** Mangages user data and authentication for the application. Such as creating users and logging in. +- **plant Service:** Manges the creation of new plants and updates other services when a new plant is created. +- **Simulation Service:** Generates sensor data for each plant. +- **Websocket Service:** Manages the websocket connections for the application. +- **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. +- **Main App:** The main application that ties all the services together. +- **Database:** A database that stores user and plant data. + +Each service generates logs that are sent to Alloy via Kafka. In this example, they are sent on two different topics: +- `loki`: This sends a structured log formatted message (json). +- `otlp`: This sends a serialized OpenTelemetry log message. + +You would not typically do this within your own application, but for the purposes of this example we wanted to show how Alloy can handle different types of log messages over Kafka. + + + + + +## Step 1: Environment setup + +In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose. + +1. To get started, clone the repository that contains our demo application: + + ```bash + git clone -b microservice-kafka https://github.com/grafana/loki-fundamentals.git + ``` + +1. Next we will spin up our observability stack using Docker Compose: + + + ```bash + docker compose -f loki-fundamentals/docker-compose.yml up -d + ``` + + + + + + + + + + This will spin up the following services: + ```bash + ✔ Container loki-fundamentals-grafana-1 Started + ✔ Container loki-fundamentals-loki-1 Started + ✔ Container loki-fundamentals-alloy-1 Started + ✔ Container loki-fundamentals-zookeeper-1 Started + ✔ Container loki-fundamentals-kafka-1 Started + ``` + +We will be access two UI interfaces: +- Alloy at [http://localhost:12345](http://localhost:12345) +- Grafana at [http://localhost:3000](http://localhost:3000) + + + + +## Step 2: Configure Alloy to ingest raw Kafka logs + +In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy` file to include the Kafka logs configuration. + + + + + +### Loki Kafka Source component + +First, we will configure the Loki Kafka source. `loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. + +The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in forward_to. 
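If you want to see the messages this component will consume, you can optionally peek at the `loki` topic with the Kafka console consumer once the demo application is producing logs (Step 3). This assumes the console tools ship with the Kafka image used by the demo; adjust the tool name, path, or bootstrap address to match your image:

```bash
docker exec -it loki-fundamentals-kafka-1 \
  kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic loki --from-beginning
```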
+ +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +```alloy +loki.source.kafka "raw" { + brokers = ["kafka:9092"] + topics = ["loki"] + forward_to = [loki.write.http.receiver] + relabel_rules = loki.relabel.kafka.rules + version = "2.0.0" +} +``` + +In this configuration: +- `brokers`: The Kafka brokers to connect to. +- `topics`: The Kafka topics to consume. In this case, we are consuming the `loki` topic. +- `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. +- `relabel_rules`: The relabel rules to apply to the incoming logs. This can be used to generate labels from the temporary internal labels that are added by the Kafka source. +- `version`: The Kafka protocol version to use. + +For more information on the `loki.source.kafka` configuration, see the [Loki Kafka Source documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka/). + +### Loki Relabel Rules component + +Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka` component. + +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +```alloy +loki.relabel "kafka" { + forward_to = [loki.write.http.receiver] + rule { + source_labels = ["__meta_kafka_topic"] + target_label = "topic" + } +} +``` + +In this configuration: +- `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. Though in this case, we are directly calling the rule from the `loki.source.kafka` component. So `forward_to` is being used as a placeholder as it is required by the `loki.relabel` component. +- `rule`: The relabeling rule to apply to the incoming logs. In this case, we are renaming the `__meta_kafka_topic` label to `topic`. + +For more information on the `loki.relabel` configuration, see the [Loki Relabel documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.relabel/). + +### Loki Write component + +Lastly, we will configure the Loki write component. `loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format. + +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +```alloy +loki.write "http" { + endpoint { + url = "http://loki:3100/loki/api/v1/push" + } +} +``` + +In this configuration: +- `endpoint`: The endpoint to send the logs to. In this case, we are sending the logs to the Loki HTTP endpoint. + +For more information on the `loki.write` configuration, see the [Loki Write documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.write/). + +### Reload the Alloy configuration + +Once added, save the file. Then run the following command to request Alloy to reload the configuration: + +```bash +curl -X POST http://localhost:12345/-/reload +``` + + +The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). + +## Stuck? Need help? 
+ +If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: + + +```bash +cp loki-fundamentals/completed/config-raw.alloy loki-fundamentals/config.alloy +curl -X POST http://localhost:12345/-/reload +``` + + + + + + + +## Step 3: Configure Alloy to ingest OpenTelemetry logs via Kafka + +Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components. + + + + + +### OpenTelelmetry Kafka Receiver + +First, we will configure the OpenTelemetry Kafaka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. + +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: + +```alloy +otelcol.receiver.kafka "default" { + brokers = ["kafka:9092"] + protocol_version = "2.0.0" + topic = "otlp" + encoding = "otlp_proto" + + output { + logs = [otelcol.processor.batch.default.input] + } +} +``` + +In this configuration: +- `brokers`: The Kafka brokers to connect to. +- `protocol_version`: The Kafka protocol version to use. +- `topic`: The Kafka topic to consume. In this case, we are consuming the `otlp` topic. +- `encoding`: The encoding of the incoming logs. Which decodes messages as OTLP protobuf. +- `output`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `otelcol.processor.batch.default.input`. + +For more information on the `otelcol.receiver.kafka` configuration, see the [OpenTelemetry Receiver Kafka documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/). + +### OpenTelemetry Processor Batch + +Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. + +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +```alloy +otelcol.processor.batch "default" { + output { + logs = [otelcol.exporter.otlphttp.default.input] + } +} +``` + +In this configuration: +- `output`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `otelcol.exporter.otlphttp.default.input`. + +For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/). + +### OpenTelemetry Exporter OTLP HTTP + +Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. + +Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +```alloy +otelcol.exporter.otlphttp "default" { + client { + endpoint = "http://loki:3100/otlp" + } +} +``` + +In this configuration: +- `client`: The client configuration for the exporter. In this case, we are sending the logs to the Loki OTLP endpoint. 
+ +For more information on the `otelcol.exporter.otlphttp` configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlphttp/). + +### Reload the Alloy configuration + +Once added, save the file. Then run the following command to request Alloy to reload the configuration: + +```bash +curl -X POST http://localhost:12345/-/reload +``` + + +The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). + +## Stuck? Need help? + +If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: + + +```bash +cp loki-fundamentals/completed/config.alloy loki-fundamentals/config.alloy +curl -X POST http://localhost:12345/-/reload +``` + + + + + + +## Step 3: Start the Carnivorous Greenhouse + +In this step, we will start the Carnivorous Greenhouse application. To start the application, run the following command: + +{{< admonition type="note" >}} +This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first. +{{< /admonition >}} + + + + + + + +```bash +docker compose -f lloki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +``` + + + + + + + + + +This will start the following services: +```bash + ✔ Container greenhouse-db-1 Started + ✔ Container greenhouse-websocket_service-1 Started + ✔ Container greenhouse-bug_service-1 Started + ✔ Container greenhouse-user_service-1 Started + ✔ Container greenhouse-plant_service-1 Started + ✔ Container greenhouse-simulation_service-1 Started + ✔ Container greenhouse-main_app-1 Started +``` + +Once started, you can access the Carnivorous Greenhouse application at [http://localhost:5005](http://localhost:5005). Generate some logs by interacting with the application in the following ways: + +- Create a user +- Log in +- Create a few plants to monitor +- Enable bug mode to activate the bug service. This will cause services to fail and generate additional logs. + +Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). + + + + + + +## Summary + +In this example, we configured Alloy to ingest OpenTelemetry logs and send them to Loki. This was a simple example to demonstrate how to send logs from an application instrumented with OpenTelemetry to Loki using Alloy. Where to go next? + + +## Further reading +- [ "Grafana Alloy getting started examples"](https://grafana.com/docs/alloy/latest/tutorials/) +- ["Grafana Alloy common task examples"](https://grafana.com/docs/alloy/latest/tasks/) +- ["Grafana Alloy component reference"](https://grafana.com/docs/alloy/latest/reference/components/) + +## Complete metrics, logs, traces, and profiling example + +If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, you can use [Introduction to Metrics, Logs, Traces, and Profiling in Grafana](https://github.com/grafana/intro-to-mlt). `Intro-to-mltp` provides a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana. + +The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud. 
+ + + \ No newline at end of file diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-kafka.md b/docs/sources/send-data/alloy/examples/alloy-otel-kafka.md deleted file mode 100644 index c757e9fc3a7e..000000000000 --- a/docs/sources/send-data/alloy/examples/alloy-otel-kafka.md +++ /dev/null @@ -1,221 +0,0 @@ ---- -title: Recive OpenTelemetry logs via Kafka using Alloy and Loki -menuTitle: Recive OpenTelemetry logs via Kafka using Alloy and Loki -description: Configuring Grafana Alloy to recive OpenTelemetry logs via Kafka and send them to Loki. -weight: 250 -killercoda: - title: Recive OpenTelemetry logs via Kafka using Alloy and Loki - description: Configuring Grafana Alloy to recive OpenTelemetry logs via Kafka and send them to Loki. - backend: - imageid: ubuntu ---- - - - -# Recive OpenTelemetry logs via Kafka using Alloy and Loki - -Alloy natively supports ingesting OpenTelemetry logs via Kafka. There maybe several scenarios where you may want to ingest logs via Kafka. For instance you may already use Kafka to aggregate logs from several otel collectors. Or your application may already be writing logs to Kafka and you want to ingest them into Loki. In this example, we will make use of 3 Alloy components to achieve this: - -## Dependencies - -Before you begin, ensure you have the following to run the demo: - -- Docker -- Docker Compose - - -{{< admonition type="note" >}} -Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). -{{< /admonition >}} - - -## Scenario - -In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: - -- **User Service:** Mangages user data and authentication for the application. Such as creating users and logging in. -- **plant Service:** Manges the creation of new plants and updates other services when a new plant is created. -- **Simulation Service:** Generates sensor data for each plant. -- **Websocket Service:** Manages the websocket connections for the application. -- **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. -- **Main App:** The main application that ties all the services together. -- **Database:** A database that stores user and plant data. - -Each service generates logs using the OpenTelemetry SDK and exports to Alloy in the OpenTelemetry format. Alloy then ingests the logs and sends them to Loki. We will configure Alloy to ingest OpenTelemetry logs, send them to Loki, and view the logs in Grafana. - - - - - -## Step 1: Environment setup - -In this step, we will set up our environment by cloning the repository that contains our demo application and spinning up our observability stack using Docker Compose. - -1. To get started, clone the repository that contains our demo application: - - ```bash - git clone -b microservice-otel https://github.com/grafana/loki-fundamentals.git - ``` - -1. 
Next we will spin up our observability stack using Docker Compose: - - - ```bash - docker compose -f loki-fundamentals/docker-compose.yml up -d - ``` - - - - - - - - - - This will spin up the following services: - ```bash - ✔ Container loki-fundamentals-grafana-1 Started - ✔ Container loki-fundamentals-loki-1 Started - ✔ Container loki-fundamentals-alloy-1 Started - ``` - -We will be access two UI interfaces: -- Alloy at [http://localhost:12345](http://localhost:12345) -- Grafana at [http://localhost:3000](http://localhost:3000) - - - - -## Step 2: Configure Alloy to ingest OpenTelemetry logs - -To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. - - - - - -### OpenTelelmetry Logs Receiver - -First, we will configure the OpenTelemetry logs receiver. This receiver will accept logs via HTTP and gRPC. - -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - - -```alloy - otelcol.receiver.otlp "default" { - http {} - grpc {} - - output { - logs = [otelcol.processor.batch.default.input] - } - } -``` - - - - -### OpenTelemetry Logs Processor - -Next, we will configure the OpenTelemetry logs processor. This processor will batch the logs before sending them to the logs exporter. - -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - -```alloy -otelcol.processor.batch "default" { - output { - logs = [otelcol.exporter.otlphttp.default.input] - } -} -``` - - -### OpenTelemetry Logs Exporter - -Lastly, we will configure the OpenTelemetry logs exporter. This exporter will send the logs to Loki. - -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - -```alloy -otelcol.exporter.otlphttp "default" { - client { - endpoint = "http://loki:3100/otlp" - } -} -``` - - -### Reload the Alloy configuration - -Once added, save the file. Then run the following command to request Alloy to reload the configuration: - -```bash -curl -X POST http://localhost:12345/-/reload -``` - - -The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). - -## Stuck? Need help? - -If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: - - -```bash -cp loki-fundamentals/completed/config.alloy loki-fundamentals/config.alloy -curl -X POST http://localhost:12345/-/reload -``` - - - - - - -## Step 3: Start the Carnivorous Greenhouse - -In this step, we will start the Carnivorous Greenhouse application. To start the application, run the following command: - -{{< admonition type="note" >}} -This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first. 
-{{< /admonition >}} - - - - - - - -```bash -docker compose -f lloki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build -``` - - - - - - - - - -This will start the following services: -```bash - ✔ Container greenhouse-db-1 Started - ✔ Container greenhouse-websocket_service-1 Started - ✔ Container greenhouse-bug_service-1 Started - ✔ Container greenhouse-user_service-1 Started - ✔ Container greenhouse-plant_service-1 Started - ✔ Container greenhouse-simulation_service-1 Started - ✔ Container greenhouse-main_app-1 Started -``` - -Once started, you can access the Carnivorous Greenhouse application at [http://localhost:5005](http://localhost:5005). Generate some logs by interacting with the application in the following ways: - -- Create a user -- Log in -- Create a few plants to monitor -- Enable bug mode to activate the bug service. This will cause services to fail and generate additional logs. - -Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). - - - \ No newline at end of file diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index c611234d1426..11d6e7c7588d 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -6,9 +6,6 @@ weight: 250 killercoda: title: Sending OpenTelemetry logs to Loki using Alloy description: Configuring Grafana Alloy to send OpenTelemetry logs to Loki. - details: - finish: - text: finish.md backend: imageid: ubuntu --- @@ -18,9 +15,9 @@ killercoda: # Sending OpenTelemetry logs to Loki using Alloy Alloy natively supports receiving logs in the OpenTelemetry format. This allows you to send logs from applications instrumented with OpenTelemetry to Alloy, which can then be sent to Loki for storage and visualization in Grafana. In this example, we will make use of 3 Alloy components to achieve this: -- **OpenTelemetry Logs Receiver:** This receiver will accept logs via HTTP and gRPC. -- **OpenTelemetry Logs Processor:** This processor will batch the logs before sending them to the logs exporter. -- **OpenTelemetry Logs Exporter:** This exporter will send the logs to Loki. +- **OpenTelemetry Receiver:** This component will receive logs in the OpenTelemetry format via HTTP and gRPC. +- **OpenTelemetry Processor:** This component will accept telemetry data from other `otelcol.*` components and place them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. +- **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. ## Dependencies @@ -100,12 +97,11 @@ To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy con -### OpenTelelmetry Logs Receiver +### OpenTelelmetry Receiver OTLP -First, we will configure the OpenTelemetry logs receiver. This receiver will accept logs via HTTP and gRPC. +First, we will configure the OpenTelemetry receiver. `otelcol.receiver.otlp` accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application. 
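For context on the application side of this hand-off: an application instrumented with the OpenTelemetry SDK usually finds an OTLP receiver through the standard exporter environment variables. The snippet below is only an illustration and assumes Alloy is reachable under the hostname `alloy` on the default OTLP ports; the demo application in this example is already wired up for you, so no change is required here.

```bash
# Illustrative only: point an OpenTelemetry SDK at the Alloy OTLP receiver.
# Assumes the default OTLP ports (4318 for HTTP, 4317 for gRPC) and an
# "alloy" hostname on the shared Docker network.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://alloy:4318"
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
```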
Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - ```alloy otelcol.receiver.otlp "default" { @@ -118,15 +114,19 @@ Open the `config.alloy` file in the `loki-fundamentals` directory and copy the f } ``` - +In this configuration: +- `http`: The HTTP configuration for the receiver. This configuration is used to receive logs in the OpenTelemetry format via HTTP. +- `grpc`: The gRPC configuration for the receiver. This configuration is used to receive logs in the OpenTelemetry format via gRPC. +- `output`: The list of processors to forward the logs to. In this case, we are forwarding the logs to the `otelcol.processor.batch.default.input`. +For more information on the `otelcol.receiver.otlp` configuration, see the [OpenTelemetry Receiver OTLP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.otlp/). -### OpenTelemetry Logs Processor -Next, we will configure the OpenTelemetry logs processor. This processor will batch the logs before sending them to the logs exporter. +### OpenTelemetry Processor Batch + +Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - ```alloy otelcol.processor.batch "default" { output { @@ -134,14 +134,17 @@ otelcol.processor.batch "default" { } } ``` - -### OpenTelemetry Logs Exporter +In this configuration: +- `output`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `otelcol.exporter.otlphttp.default.input`. + +For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/). -Lastly, we will configure the OpenTelemetry logs exporter. This exporter will send the logs to Loki. +### OpenTelemetry Exporter OTLP HTTP + +Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: - ```alloy otelcol.exporter.otlphttp "default" { client { @@ -149,7 +152,8 @@ otelcol.exporter.otlphttp "default" { } } ``` - + +For more information on the `otelcol.exporter.otlphttp` configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlphttp/). ### Reload the Alloy configuration @@ -224,4 +228,25 @@ Once started, you can access the Carnivorous Greenhouse application at [http://l Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Grafana at [http://localhost:3000/a/grafana-lokiexplore-app/explore](http://localhost:3000/a/grafana-lokiexplore-app/explore). - \ No newline at end of file + + + + +## Summary + +In this example, we configured Alloy to ingest OpenTelemetry logs and send them to Loki. 
This was a simple example to demonstrate how to send logs from an application instrumented with OpenTelemetry to Loki using Alloy. Where to go next? + + +## Further reading +- [ "Grafana Alloy getting started examples"](https://grafana.com/docs/alloy/latest/tutorials/) +- ["Grafana Alloy common task examples"](https://grafana.com/docs/alloy/latest/tasks/) +- ["Grafana Alloy component reference"](https://grafana.com/docs/alloy/latest/reference/components/) + +## Complete metrics, logs, traces, and profiling example + +If you would like to use a demo that includes Mimir, Loki, Tempo, and Grafana, you can use [Introduction to Metrics, Logs, Traces, and Profiling in Grafana](https://github.com/grafana/intro-to-mlt). `Intro-to-mltp` provides a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana. + +The project includes detailed explanations of each component and annotated configurations for a single-instance deployment. Data from `intro-to-mltp` can also be pushed to Grafana Cloud. + + + \ No newline at end of file From 8659ea7e67be5de652f86f3bbf3a5cf7841233ee Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Mon, 1 Jul 2024 14:24:32 +0100 Subject: [PATCH 08/35] Added media --- docs/sources/send-data/alloy/_index.md | 8 ++++--- .../alloy/examples/alloy-kafka-logs.md | 22 +++++++++++++------ .../alloy/examples/alloy-otel-logs.md | 18 ++++++++++----- 3 files changed, 33 insertions(+), 15 deletions(-) diff --git a/docs/sources/send-data/alloy/_index.md b/docs/sources/send-data/alloy/_index.md index 47e2e88d0a87..c60605d9f2ac 100644 --- a/docs/sources/send-data/alloy/_index.md +++ b/docs/sources/send-data/alloy/_index.md @@ -10,6 +10,8 @@ weight: 250 Grafana Alloy is a versatile observability collector that can ingest logs in various formats and send them to Loki. We recommend Alloy as the primary method for sending logs to Loki, as it provides a more robust and feature-rich solution for building a highly scalable and reliable observability pipeline. +{{< figure src="/media/docs/alloy/flow-diagram-small-alloy.png" alt="Alloy flow diagram" >}} + ## Installing Alloy To get started with Grafana Alloy and send logs to Loki, you need to install and configure Alloy. You can follow the [official documentation](https://grafana.com/docs/alloy/latest/get-started/install/) to install Alloy on your preferred platform. @@ -24,7 +26,7 @@ Alloy pipelines are built using components that perform specific functions. For ### Log components in Alloy -Here is a non-exhaustive list of components that can be used to build a log pipeline in Alloy. For a complete list of components, refer to the [official documentation](https://grafana.com/docs/alloy/latest/reference/components/). +Here is a non-exhaustive list of components that can be used to build a log pipeline in Alloy. For a complete list of components, refer to the [components list](https://grafana.com/docs/alloy/latest/reference/components/). 
| Type | Component | |------------|-----------------------------------------------------------------------------------------------------| @@ -56,7 +58,7 @@ Here is a non-exhaustive list of components that can be used to build a log pipe To learn more about how to configure Alloy to send logs to Loki within different scenarios, follow these interactive tutorials: -- [Sending OpenTelemetry logs to Loki using Alloy](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs) -- [Sending logs over Kafka to Loki using Alloy](https://killercoda.com/grafana-labs/course/loki/alloy-http-logs) +- [Sending OpenTelemetry logs to Loki using Alloy]({{< relref "./examples/alloy-otel-logs" >}}) +- [Sending logs over Kafka to Loki using Alloy]({{< relref "./examples/alloy-kafka-logs" >}}) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index e7f835eb02f7..38c911f0e7aa 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -15,8 +15,8 @@ killercoda: # Sending Logs to Loki via Kafka using Alloy Alloy nativley supports receiving logs via Kafka. In this example, we will configure Alloy to recive logs via kafka using two different methods: -- [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other loki.* components. -- [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other otelcol.* components. +- [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. +- [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. ## Dependencies @@ -26,7 +26,7 @@ Before you begin, ensure you have the following to run the demo: - Docker Compose -{{< admonition type="note" >}} +{{< admonition type="tip" >}} Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-kafka-logs). {{< /admonition >}} @@ -116,6 +116,7 @@ loki.source.kafka "raw" { forward_to = [loki.write.http.receiver] relabel_rules = loki.relabel.kafka.rules version = "2.0.0" + labels = {service_name = "raw_kafka"} } ``` @@ -125,6 +126,7 @@ In this configuration: - `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. - `relabel_rules`: The relabel rules to apply to the incoming logs. This can be used to generate labels from the temporary internal labels that are added by the Kafka source. - `version`: The Kafka protocol version to use. +- `labels`: The labels to add to the incoming logs. In this case, we are adding a `service_name` label with the value `raw_kafka`. This will be used to identify the logs from the raw Kafka source in the Log Explorer App in Grafana. 
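If you want to attach further static labels at this point, the `labels` map accepts additional key-value pairs. The sketch below is a hypothetical variation rather than part of this example; the extra `env` label, and the broker and topic values shown, are illustrative assumptions only.

```alloy
// Hypothetical variation: attach an extra static label next to service_name.
loki.source.kafka "raw" {
  brokers    = ["kafka:9092"]
  topics     = ["loki"]
  forward_to = [loki.write.http.receiver]
  labels     = {service_name = "raw_kafka", env = "dev"}
}
```

Keep the label set small and low-cardinality, since each unique combination of label values becomes a separate Loki stream.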
For more information on the `loki.source.kafka` configuration, see the [Loki Kafka Source documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka/). @@ -345,13 +347,19 @@ Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Graf ## Summary -In this example, we configured Alloy to ingest OpenTelemetry logs and send them to Loki. This was a simple example to demonstrate how to send logs from an application instrumented with OpenTelemetry to Loki using Alloy. Where to go next? +In this example, we configured Alloy to ingest logs via Kafka. We configured Alloy to ingest logs in two different formats: raw logs and OpenTelemetry logs. Where to go next? + + + + ## Further reading -- [ "Grafana Alloy getting started examples"](https://grafana.com/docs/alloy/latest/tutorials/) -- ["Grafana Alloy common task examples"](https://grafana.com/docs/alloy/latest/tasks/) -- ["Grafana Alloy component reference"](https://grafana.com/docs/alloy/latest/reference/components/) + +For more information on Grafana Alloy, refer to the following resources: +- [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/) +- [Grafana Alloy common task examples](https://grafana.com/docs/alloy/latest/tasks/) +- [Grafana Alloy component reference](https://grafana.com/docs/alloy/latest/reference/components/) ## Complete metrics, logs, traces, and profiling example diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 11d6e7c7588d..a9d0e62904b2 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -27,8 +27,9 @@ Before you begin, ensure you have the following to run the demo: - Docker Compose -{{< admonition type="note" >}} +{{< admonition type="tip" >}} Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). +![Interactive](https://raw.githubusercontent.com/grafana/killercoda/staging/assets/loki-ile.svg) {{< /admonition >}} @@ -196,7 +197,7 @@ This docker-compose file relies on the `loki-fundamentals_loki` docker network. ```bash -docker compose -f lloki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build ``` @@ -236,11 +237,18 @@ Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Graf In this example, we configured Alloy to ingest OpenTelemetry logs and send them to Loki. This was a simple example to demonstrate how to send logs from an application instrumented with OpenTelemetry to Loki using Alloy. Where to go next? 
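For reference, the three snippets configured in this example fit together as shown below. This is only a recap of the pipeline built above, not an alternative configuration:

```alloy
// Recap: OTLP receiver -> batch processor -> OTLP HTTP exporter (Loki).
otelcol.receiver.otlp "default" {
  http {}
  grpc {}

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}

otelcol.processor.batch "default" {
  output {
    logs = [otelcol.exporter.otlphttp.default.input]
  }
}

otelcol.exporter.otlphttp "default" {
  client {
    endpoint = "http://loki:3100/otlp"
  }
}
```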
+ + + + + ## Further reading -- [ "Grafana Alloy getting started examples"](https://grafana.com/docs/alloy/latest/tutorials/) -- ["Grafana Alloy common task examples"](https://grafana.com/docs/alloy/latest/tasks/) -- ["Grafana Alloy component reference"](https://grafana.com/docs/alloy/latest/reference/components/) + +For more information on Grafana Alloy, refer to the following resources: +- [Grafana Alloy getting started examples](https://grafana.com/docs/alloy/latest/tutorials/) +- [Grafana Alloy common task examples](https://grafana.com/docs/alloy/latest/tasks/) +- [Grafana Alloy component reference](https://grafana.com/docs/alloy/latest/reference/components/) ## Complete metrics, logs, traces, and profiling example From 2d3045076c7eb2efb2e812d10916d07143d7ecba Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Mon, 1 Jul 2024 15:06:55 +0100 Subject: [PATCH 09/35] Updated links --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 38c911f0e7aa..4112280a2da6 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -351,7 +351,7 @@ In this example, we configured Alloy to ingest logs via Kafka. We configured All - + ## Further reading diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index a9d0e62904b2..89c7fff1152e 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -239,7 +239,7 @@ In this example, we configured Alloy to ingest OpenTelemetry logs and send them - + From 34b059ab0e604048e22a3705c4ba865a4ae6e95d Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Mon, 1 Jul 2024 15:52:06 +0100 Subject: [PATCH 10/35] Updated withj new {{< docs/ignore >}} tag --- .../alloy/examples/alloy-kafka-logs.md | 49 +++++++++---------- .../alloy/examples/alloy-otel-logs.md | 41 ++++++++-------- 2 files changed, 44 insertions(+), 46 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 4112280a2da6..21484ee65f10 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -71,12 +71,11 @@ In this step, we will set up our environment by cloning the repository that cont ``` - - - - - - + {{< docs/ignore >}} + ```bash + docker-compose -f loki-fundamentals/docker-compose.yml up -d + ```{{exec}} + {{< /docs/ignore >}} This will spin up the following services: ```bash @@ -98,9 +97,9 @@ We will be access two UI interfaces: In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy` file to include the Kafka logs configuration. - - - +{{< docs/ignore >}} +**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** +{{< /docs/ignore >}} ### Loki Kafka Source component @@ -200,9 +199,9 @@ curl -X POST http://localhost:12345/-/reload Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components. 
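As a rough preview, the receiver we are about to add looks something like the sketch below. Treat the broker, topic, and protocol version values here as placeholders only; the exact configuration used in this example is given in the sections that follow.

```alloy
// Sketch only: consume OTLP-encoded records from a Kafka topic and hand
// them to the batch processor. Values shown are illustrative placeholders.
otelcol.receiver.kafka "default" {
  brokers          = ["kafka:9092"]
  protocol_version = "2.0.0"
  topic            = "otlp"

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}
```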
- - - +{{< docs/ignore >}} +**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** +{{< /docs/ignore >}} ### OpenTelelmetry Kafka Receiver @@ -303,22 +302,22 @@ This docker-compose file relies on the `loki-fundamentals_loki` docker network. {{< /admonition >}} - - - +{{< docs/ignore >}} +**Note: This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first.** +{{< /docs/ignore >}} ```bash -docker compose -f lloki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build ``` - - - - - +{{docs/ignore}} +```bash +docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +```{{exec}} +{{< /docs/ignore >}} This will start the following services: ```bash @@ -349,10 +348,10 @@ Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Graf In this example, we configured Alloy to ingest logs via Kafka. We configured Alloy to ingest logs in two different formats: raw logs and OpenTelemetry logs. Where to go next? - - - - +{{< docs/ignore >}} +### Back to Docs +Head back to wear you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) +{{< /docs/ignore >}} ## Further reading diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 89c7fff1152e..6bdf8e4d2e26 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -69,12 +69,11 @@ In this step, we will set up our environment by cloning the repository that cont ``` - - - - - - + {{< docs/ignore >}} + ```bash + docker-compose -f loki-fundamentals/docker-compose.yml up -d + ```{{exec}} + {{< /docs/ignore >}} This will spin up the following services: ```bash @@ -94,9 +93,9 @@ We will be access two UI interfaces: To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. - - - +{{< docs/ignore >}} +**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** +{{< /docs/ignore >}} ### OpenTelelmetry Receiver OTLP @@ -191,11 +190,10 @@ This docker-compose file relies on the `loki-fundamentals_loki` docker network. {{< /admonition >}} - - - +{{< docs/ignore >}} +**Note: This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first.** +{{< /docs/ignore >}} - ```bash docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build ``` @@ -203,10 +201,11 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- - - - - +{{docs/ignore}} +```bash +docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build +```{{exec}} +{{< /docs/ignore >}} This will start the following services: ```bash @@ -237,10 +236,10 @@ Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Graf In this example, we configured Alloy to ingest OpenTelemetry logs and send them to Loki. This was a simple example to demonstrate how to send logs from an application instrumented with OpenTelemetry to Loki using Alloy. Where to go next? 
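As an optional sanity check from the command line, you can also ask Loki directly which label names it has indexed. The example below assumes the Docker Compose stack publishes Loki on `localhost:3100`; adjust the address if your setup differs.

```bash
# List the label names Loki knows about (assumes Loki on localhost:3100).
curl -s "http://localhost:3100/loki/api/v1/labels"
```

An empty result usually means no logs have been ingested yet, so generate some activity in the application first.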
- - - - +{{< docs/ignore >}} +### Back to Docs +Head back to wear you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) +{{< /docs/ignore >}} ## Further reading From ddf65c8111215b3e818f93c078a71a9fcf0c5ccc Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Mon, 1 Jul 2024 16:17:10 +0100 Subject: [PATCH 11/35] fixed docs tag and added missing image --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 5 ++++- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 2 +- 2 files changed, 5 insertions(+), 2 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 21484ee65f10..a564d1db028b 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -28,6 +28,7 @@ Before you begin, ensure you have the following to run the demo: {{< admonition type="tip" >}} Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-kafka-logs). +![Interactive](https://raw.githubusercontent.com/grafana/killercoda/staging/assets/loki-ile.svg) {{< /admonition >}} @@ -313,10 +314,12 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- -{{docs/ignore}} +{{< docs/ignore >}} + ```bash docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build ```{{exec}} + {{< /docs/ignore >}} This will start the following services: diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 6bdf8e4d2e26..9f5bb0fe4d1b 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -201,7 +201,7 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- -{{docs/ignore}} +{{< docs/ignore >}} ```bash docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build ```{{exec}} From c292b5a1b82c2d032403b9a1dfcca6eec7552915 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Wed, 3 Jul 2024 14:35:27 +0100 Subject: [PATCH 12/35] Updated include tags and fixed headings --- .../alloy/examples/alloy-kafka-logs.md | 34 +++++++++++++------ .../alloy/examples/alloy-otel-logs.md | 28 +++++++++++---- 2 files changed, 45 insertions(+), 17 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index a564d1db028b..27068f765a4d 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -73,9 +73,13 @@ In this step, we will set up our environment by cloning the repository that cont {{< docs/ignore >}} + + ```bash docker-compose -f loki-fundamentals/docker-compose.yml up -d - ```{{exec}} + ``` + + {{< /docs/ignore >}} This will spin up the following services: @@ -99,10 +103,12 @@ We will be access two UI interfaces: In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy` file to include the Kafka logs configuration. 
{{< docs/ignore >}} + **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** + {{< /docs/ignore >}} -### Loki Kafka Source component +### Source logs from kafka First, we will configure the Loki Kafka source. `loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. @@ -130,7 +136,7 @@ In this configuration: For more information on the `loki.source.kafka` configuration, see the [Loki Kafka Source documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka/). -### Loki Relabel Rules component +### Create a dynamic relabel based on Kafka topic Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka` component. @@ -151,7 +157,7 @@ In this configuration: For more information on the `loki.relabel` configuration, see the [Loki Relabel documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.relabel/). -### Loki Write component +### Write logs to Loki Lastly, we will configure the Loki write component. `loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format. @@ -169,7 +175,7 @@ In this configuration: For more information on the `loki.write` configuration, see the [Loki Write documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.write/). -### Reload the Alloy configuration +### Reload the Alloy configuration to check the changes Once added, save the file. Then run the following command to request Alloy to reload the configuration: @@ -201,10 +207,12 @@ curl -X POST http://localhost:12345/-/reload Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components. {{< docs/ignore >}} + **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** + {{< /docs/ignore >}} -### OpenTelelmetry Kafka Receiver +### Source OpenTelemetry logs from Kafka First, we will configure the OpenTelemetry Kafaka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. @@ -232,7 +240,7 @@ In this configuration: For more information on the `otelcol.receiver.kafka` configuration, see the [OpenTelemetry Receiver Kafka documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/). -### OpenTelemetry Processor Batch +### Batch OpenTelemetry logs before sending Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. @@ -250,7 +258,7 @@ In this configuration: For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/). 
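The defaults are usually fine for this example, but because the processor supports both size- and time-based batching, you could also tune it explicitly. The values below are arbitrary illustrations, not recommendations:

```alloy
// Hypothetical tuning: flush a batch after 5s or once 1024 records are queued.
otelcol.processor.batch "default" {
  timeout         = "5s"
  send_batch_size = 1024

  output {
    logs = [otelcol.exporter.otlphttp.default.input]
  }
}
```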
-### OpenTelemetry Exporter OTLP HTTP +### Write OpenTelemetry logs to Loki Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. @@ -268,7 +276,7 @@ In this configuration: For more information on the `otelcol.exporter.otlphttp` configuration, see the [OpenTelemetry Exporter OTLP HTTP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.exporter.otlphttp/). -### Reload the Alloy configuration +### Reload the Alloy configuration to check the changes Once added, save the file. Then run the following command to request Alloy to reload the configuration: @@ -304,7 +312,9 @@ This docker-compose file relies on the `loki-fundamentals_loki` docker network. {{< docs/ignore >}} + **Note: This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first.** + {{< /docs/ignore >}} @@ -316,9 +326,11 @@ docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- {{< docs/ignore >}} + ```bash docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build -```{{exec}} +``` + {{< /docs/ignore >}} @@ -352,8 +364,10 @@ Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Graf In this example, we configured Alloy to ingest logs via Kafka. We configured Alloy to ingest logs in two different formats: raw logs and OpenTelemetry logs. Where to go next? {{< docs/ignore >}} + ### Back to Docs Head back to wear you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) + {{< /docs/ignore >}} ## Further reading diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 9f5bb0fe4d1b..9dad3c32cc59 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -70,9 +70,13 @@ In this step, we will set up our environment by cloning the repository that cont {{< docs/ignore >}} + + ```bash docker-compose -f loki-fundamentals/docker-compose.yml up -d - ```{{exec}} + ``` + + {{< /docs/ignore >}} This will spin up the following services: @@ -94,10 +98,12 @@ We will be access two UI interfaces: To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. {{< docs/ignore >}} -**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** + + **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** + {{< /docs/ignore >}} -### OpenTelelmetry Receiver OTLP +### Recive OpenTelemetry logs via gRPC and HTTP First, we will configure the OpenTelemetry receiver. `otelcol.receiver.otlp` accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application. @@ -122,7 +128,7 @@ In this configuration: For more information on the `otelcol.receiver.otlp` configuration, see the [OpenTelemetry Receiver OTLP documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.otlp/). 
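Leaving the `http {}` and `grpc {}` blocks empty uses the receiver's default listen addresses. If you ever need to pin them explicitly, for example to bind to a specific interface, each block accepts an `endpoint` argument. The addresses below simply spell out the usual defaults and are shown for illustration:

```alloy
// Equivalent to the empty blocks, with the listen addresses spelled out.
otelcol.receiver.otlp "default" {
  http {
    endpoint = "0.0.0.0:4318"
  }

  grpc {
    endpoint = "0.0.0.0:4317"
  }

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}
```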
-### OpenTelemetry Processor Batch +### Create batches of logs using a OpenTelemetry Processor Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. @@ -140,7 +146,7 @@ In this configuration: For more information on the `otelcol.processor.batch` configuration, see the [OpenTelemetry Processor Batch documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.processor.batch/). -### OpenTelemetry Exporter OTLP HTTP +### Export logs to Loki using a OpenTelemetry Exporter Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. @@ -191,20 +197,26 @@ This docker-compose file relies on the `loki-fundamentals_loki` docker network. {{< docs/ignore >}} + **Note: This docker-compose file relies on the `loki-fundamentals_loki` docker network. If you have not started the observability stack, you will need to start it first.** + {{< /docs/ignore >}} + ```bash docker compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build ``` - {{< docs/ignore >}} + + ```bash docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --build -```{{exec}} +``` + + {{< /docs/ignore >}} This will start the following services: @@ -237,8 +249,10 @@ Finally to view the logs in Loki, navigate to the Loki Logs Explore view in Graf In this example, we configured Alloy to ingest OpenTelemetry logs and send them to Loki. This was a simple example to demonstrate how to send logs from an application instrumented with OpenTelemetry to Loki using Alloy. Where to go next? {{< docs/ignore >}} + ### Back to Docs Head back to wear you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) + {{< /docs/ignore >}} From 6978cea6a09b5ef0aa7d5ace2f51b17d54754e87 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Wed, 3 Jul 2024 14:57:33 +0100 Subject: [PATCH 13/35] Added reason for Kafka label --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 27068f765a4d..52390943e32f 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -155,6 +155,8 @@ In this configuration: - `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. Though in this case, we are directly calling the rule from the `loki.source.kafka` component. So `forward_to` is being used as a placeholder as it is required by the `loki.relabel` component. - `rule`: The relabeling rule to apply to the incoming logs. In this case, we are renaming the `__meta_kafka_topic` label to `topic`. +In this case we are using the `__meta_kafka_topic` label to dynamically set the `topic` label on the incoming logs. This will allow us to identify and restrive logs steams based on the Kafka topic in the Log Explorer App in Grafana. 
This can be useful when you have multiple applications sending logs to Alloy using different Kafka topics. + For more information on the `loki.relabel` configuration, see the [Loki Relabel documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.relabel/). ### Write logs to Loki From f8f5eaa9355e51472908e95341329ffc757ab3a9 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Thu, 4 Jul 2024 12:22:48 +0100 Subject: [PATCH 14/35] Updated sandbox tip --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 8 ++++++-- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 7 +++++-- 2 files changed, 11 insertions(+), 4 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 52390943e32f..bf04b35247bf 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -27,11 +27,15 @@ Before you begin, ensure you have the following to run the demo: {{< admonition type="tip" >}} -Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-kafka-logs). -![Interactive](https://raw.githubusercontent.com/grafana/killercoda/staging/assets/loki-ile.svg) +Alternatively, you can try out this example in our interactive learning environment: [Sending Logs to Loki via Kafka using Alloy](https://killercoda.com/grafana-labs/course/loki/alloy-kafka-logs). + +It's a fully configured environment with all the dependencies already installed. + +![Interactive](https://raw.githubusercontent.com/grafana/killercoda/prod/assets/loki-ile.svg) {{< /admonition >}} + ## Scenario In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 9dad3c32cc59..50da8ccf016b 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -28,8 +28,11 @@ Before you begin, ensure you have the following to run the demo: {{< admonition type="tip" >}} -Alternatively, you can try out this example in our online sandbox. Which is a fully configured environment with all the dependencies pre-installed. You can access the sandbox [here](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). -![Interactive](https://raw.githubusercontent.com/grafana/killercoda/staging/assets/loki-ile.svg) +Alternatively, you can try out this example in our interactive learning environment: [Sending Logs to Loki via Kafka using Alloy](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). + +It's a fully configured environment with all the dependencies already installed. 
+ +![Interactive](https://raw.githubusercontent.com/grafana/killercoda/prod/assets/loki-ile.svg) {{< /admonition >}} From a36bbc065fa005f84ab8c7f76d60e74b072208d8 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Thu, 4 Jul 2024 15:20:43 +0100 Subject: [PATCH 15/35] Updated sandbox link --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 ++ docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 2 ++ 2 files changed, 4 insertions(+) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index bf04b35247bf..3107d142b207 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -32,6 +32,8 @@ Alternatively, you can try out this example in our interactive learning environm It's a fully configured environment with all the dependencies already installed. ![Interactive](https://raw.githubusercontent.com/grafana/killercoda/prod/assets/loki-ile.svg) + +Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). {{< /admonition >}} diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 50da8ccf016b..ff507790a298 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -33,6 +33,8 @@ Alternatively, you can try out this example in our interactive learning environm It's a fully configured environment with all the dependencies already installed. ![Interactive](https://raw.githubusercontent.com/grafana/killercoda/prod/assets/loki-ile.svg) + +Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). {{< /admonition >}} From 0b396159d7c1a23dcd312a192edb330f5a860dfb Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Mon, 22 Jul 2024 16:47:09 +0100 Subject: [PATCH 16/35] Update docs/sources/send-data/alloy/_index.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/_index.md b/docs/sources/send-data/alloy/_index.md index c60605d9f2ac..5afdf1ab69b0 100644 --- a/docs/sources/send-data/alloy/_index.md +++ b/docs/sources/send-data/alloy/_index.md @@ -14,7 +14,7 @@ Grafana Alloy is a versatile observability collector that can ingest logs in var ## Installing Alloy -To get started with Grafana Alloy and send logs to Loki, you need to install and configure Alloy. You can follow the [official documentation](https://grafana.com/docs/alloy/latest/get-started/install/) to install Alloy on your preferred platform. +To get started with Grafana Alloy and send logs to Loki, you need to install and configure Alloy. You can follow the [Alloy documentation](https://grafana.com/docs/alloy/latest/get-started/install/) to install Alloy on your preferred platform. 
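Once installed, Alloy is started with a configuration file. On most package-based installs it runs as a system service that picks up its configuration automatically, but for a quick local test you can also run it in the foreground. The path below is only an example:

```bash
# Run Alloy in the foreground with a local configuration file (example path).
alloy run config.alloy
```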
## Components of Alloy for logs From cdfef22be1d5efd89b156055118e9adef152c9be Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Mon, 22 Jul 2024 16:47:20 +0100 Subject: [PATCH 17/35] Update docs/sources/send-data/alloy/_index.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/_index.md b/docs/sources/send-data/alloy/_index.md index 5afdf1ab69b0..daad0d6906db 100644 --- a/docs/sources/send-data/alloy/_index.md +++ b/docs/sources/send-data/alloy/_index.md @@ -21,7 +21,7 @@ To get started with Grafana Alloy and send logs to Loki, you need to install and Alloy pipelines are built using components that perform specific functions. For logs these can be broken down into three categories: - **Collector:** These components collect/receive logs from various sources. This can be scraping logs from a file, receiving logs over HTTP, gRPC or ingesting logs from a message queue. -- **Transformer:** These components can be used to manipulate logs before they are sent to a writer. This can be used to add additional metadata, filter logs or batch logs before sending them to a writer. +- **Transformer:** These components can be used to manipulate logs before they are sent to a writer. This can be used to add additional metadata, filter logs, or batch logs before sending them to a writer. - **Writer:** These components send logs to the desired destination. Our documentation will focus on sending logs to Loki, but Alloy supports sending logs to various destinations. ### Log components in Alloy From 8a8883834344aecc7ddd64df0266c9fb971f3a3f Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Mon, 22 Jul 2024 16:54:41 +0100 Subject: [PATCH 18/35] Added suggested fixes --- .../alloy/examples/alloy-kafka-logs.md | 66 +++++++++++-------- .../alloy/examples/alloy-otel-logs.md | 35 +++++++--- 2 files changed, 63 insertions(+), 38 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 3107d142b207..7061305c87d5 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -14,10 +14,12 @@ killercoda: # Sending Logs to Loki via Kafka using Alloy -Alloy nativley supports receiving logs via Kafka. In this example, we will configure Alloy to recive logs via kafka using two different methods: +Alloy natively supports receiving logs via Kafka. In this example, we will configure Alloy to receive logs via Kafka using two different methods: - [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. - [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. + + ## Dependencies Before you begin, ensure you have the following to run the demo: @@ -25,7 +27,6 @@ Before you begin, ensure you have the following to run the demo: - Docker - Docker Compose - {{< admonition type="tip" >}} Alternatively, you can try out this example in our interactive learning environment: [Sending Logs to Loki via Kafka using Alloy](https://killercoda.com/grafana-labs/course/loki/alloy-kafka-logs). 
@@ -39,11 +40,8 @@ Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repos ## Scenario - -In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: - -- **User Service:** Mangages user data and authentication for the application. Such as creating users and logging in. -- **plant Service:** Manges the creation of new plants and updates other services when a new plant is created. +In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services: +- **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. - **Simulation Service:** Generates sensor data for each plant. - **Websocket Service:** Manages the websocket connections for the application. - **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. @@ -89,7 +87,7 @@ In this step, we will set up our environment by cloning the repository that cont {{< /docs/ignore >}} This will spin up the following services: - ```bash + ```console ✔ Container loki-fundamentals-grafana-1 Started ✔ Container loki-fundamentals-loki-1 Started ✔ Container loki-fundamentals-alloy-1 Started @@ -108,19 +106,32 @@ We will be access two UI interfaces: In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy` file to include the Kafka logs configuration. -{{< docs/ignore >}} +### Open your Code Editor and Locate the `config.alloy` file -**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** +Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy` file in the code editor: +{{< docs/ignore >}} +**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** +1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab. +1. Locate the `config.alloy` file in the `loki-fundamentals` directory (Top level directory). +1. Click on the `config.alloy` file to open it in the code editor. {{< /docs/ignore >}} + +1. Open the `loki-fundamentals` directory in a code editor of your choice. +1. Locate the `config.alloy` file in the `loki-fundamentals` directory (Top level directory). +1. Click on the `config.alloy` file to open it in the code editor. + + +The below configuration snippets will be added to the `config.alloy` file. + ### Source logs from kafka First, we will configure the Loki Kafka source. `loki.source.kafka` reads messages from Kafka using a consumer group and forwards them to other `loki.*` components. -The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in forward_to. +The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in `forward_to`. 
-Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy loki.source.kafka "raw" { brokers = ["kafka:9092"] @@ -146,7 +157,7 @@ For more information on the `loki.source.kafka` configuration, see the [Loki Kaf Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka` component. -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy loki.relabel "kafka" { forward_to = [loki.write.http.receiver] @@ -161,7 +172,7 @@ In this configuration: - `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. Though in this case, we are directly calling the rule from the `loki.source.kafka` component. So `forward_to` is being used as a placeholder as it is required by the `loki.relabel` component. - `rule`: The relabeling rule to apply to the incoming logs. In this case, we are renaming the `__meta_kafka_topic` label to `topic`. -In this case we are using the `__meta_kafka_topic` label to dynamically set the `topic` label on the incoming logs. This will allow us to identify and restrive logs steams based on the Kafka topic in the Log Explorer App in Grafana. This can be useful when you have multiple applications sending logs to Alloy using different Kafka topics. +Lastly, we will configure the Loki write component. `loki.write` receives log entries from other Loki components and sends them over the network using the Loki logproto format. For more information on the `loki.relabel` configuration, see the [Loki Relabel documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.relabel/). @@ -169,7 +180,7 @@ For more information on the `loki.relabel` configuration, see the [Loki Relabel Lastly, we will configure the Loki write component. `loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format. -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy loki.write "http" { endpoint { @@ -186,6 +197,7 @@ For more information on the `loki.write` configuration, see the [Loki Write docu ### Reload the Alloy configuration to check the changes Once added, save the file. Then run the following command to request Alloy to reload the configuration: + ```bash curl -X POST http://localhost:12345/-/reload @@ -207,24 +219,18 @@ curl -X POST http://localhost:12345/-/reload - ## Step 3: Configure Alloy to ingest OpenTelemetry logs via Kafka Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components. -{{< docs/ignore >}} - -**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** - -{{< /docs/ignore >}} ### Source OpenTelemetry logs from Kafka First, we will configure the OpenTelemetry Kafaka receiver. 
`otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy otelcol.receiver.kafka "default" { @@ -248,11 +254,15 @@ In this configuration: For more information on the `otelcol.receiver.kafka` configuration, see the [OpenTelemetry Receiver Kafka documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/). +### Open your Code Editor and Locate the `config.alloy` file + +Like before, we generate our next pipeline configuration within the same `config.alloy` file. The below configuration snippets will be added **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file. + ### Batch OpenTelemetry logs before sending Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy otelcol.processor.batch "default" { output { @@ -270,7 +280,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy otelcol.exporter.otlphttp "default" { client { @@ -295,9 +305,9 @@ curl -X POST http://localhost:12345/-/reload The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). -## Stuck? Need help? +## Stuck? Need help (Full Configuration)? -If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: +If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy`. This differs from the previous `Stuck? Need help` section as we are replacing the entire configuration file with the completed configuration file. Rather than just adding the first Loki Raw Pipeline configuration. ```bash @@ -343,7 +353,7 @@ docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d -- {{< /docs/ignore >}} This will start the following services: -```bash +```console ✔ Container greenhouse-db-1 Started ✔ Container greenhouse-websocket_service-1 Started ✔ Container greenhouse-bug_service-1 Started @@ -374,7 +384,7 @@ In this example, we configured Alloy to ingest logs via Kafka. 
We configured All {{< docs/ignore >}} ### Back to Docs -Head back to wear you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) +Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) {{< /docs/ignore >}} diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index ff507790a298..d711d19b1b74 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -19,6 +19,8 @@ Alloy natively supports receiving logs in the OpenTelemetry format. This allows - **OpenTelemetry Processor:** This component will accept telemetry data from other `otelcol.*` components and place them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. - **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*` components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. + + ## Dependencies Before you begin, ensure you have the following to run the demo: @@ -26,7 +28,6 @@ Before you begin, ensure you have the following to run the demo: - Docker - Docker Compose - {{< admonition type="tip" >}} Alternatively, you can try out this example in our interactive learning environment: [Sending Logs to Loki via Kafka using Alloy](https://killercoda.com/grafana-labs/course/loki/alloy-otel-logs). @@ -36,14 +37,15 @@ It's a fully configured environment with all the dependencies already installed. Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). {{< /admonition >}} + ## Scenario In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: -- **User Service:** Mangages user data and authentication for the application. Such as creating users and logging in. -- **plant Service:** Manges the creation of new plants and updates other services when a new plant is created. +- **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. +- **plant Service:** Manages the creation of new plants and updates other services when a new plant is created. - **Simulation Service:** Generates sensor data for each plant. - **Websocket Service:** Manages the websocket connections for the application. - **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. @@ -85,7 +87,7 @@ In this step, we will set up our environment by cloning the repository that cont {{< /docs/ignore >}} This will spin up the following services: - ```bash + ```console ✔ Container loki-fundamentals-grafana-1 Started ✔ Container loki-fundamentals-loki-1 Started ✔ Container loki-fundamentals-alloy-1 Started @@ -102,17 +104,30 @@ We will be access two UI interfaces: To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy` file to include the OpenTelemetry logs configuration. 
-{{< docs/ignore >}} +### Open your Code Editor and Locate the `config.alloy` file - **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** +Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy` file in the code editor: +{{< docs/ignore >}} +**Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** +1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab. +1. Locate the `config.alloy` file in the `loki-fundamentals` directory (Top level directory). +1. Click on the `config.alloy` file to open it in the code editor. {{< /docs/ignore >}} + +1. Open the `loki-fundamentals` directory in a code editor of your choice. +1. Locate the `config.alloy` file in the `loki-fundamentals` directory (Top level directory). +1. Click on the `config.alloy` file to open it in the code editor. + + +The below configuration snippets will be added to the `config.alloy` file. + ### Recive OpenTelemetry logs via gRPC and HTTP First, we will configure the OpenTelemetry receiver. `otelcol.receiver.otlp` accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application. -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy otelcol.receiver.otlp "default" { @@ -137,7 +152,7 @@ For more information on the `otelcol.receiver.otlp` configuration, see the [Open Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. -Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy otelcol.processor.batch "default" { output { @@ -155,7 +170,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. 
-Open the `config.alloy` file in the `loki-fundamentals` directory and copy the following configuration: +Add the following configuration to the `config.alloy` file: ```alloy otelcol.exporter.otlphttp "default" { client { @@ -256,7 +271,7 @@ In this example, we configured Alloy to ingest OpenTelemetry logs and send them {{< docs/ignore >}} ### Back to Docs -Head back to wear you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) +Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy) {{< /docs/ignore >}} From 50c043577ba23bd76e862f8f3e6bf8c9592e56a6 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Mon, 22 Jul 2024 17:24:55 +0100 Subject: [PATCH 19/35] fixed editor location --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 7061305c87d5..c23f791a1f35 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -210,6 +210,7 @@ The new configuration will be loaded this can be verified by checking the Alloy If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy` using the completed configuration file: + ```bash cp loki-fundamentals/completed/config-raw.alloy loki-fundamentals/config.alloy @@ -225,6 +226,10 @@ curl -X POST http://localhost:12345/-/reload Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy` file along with the existing components. +### Open your Code Editor and Locate the `config.alloy` file + +Like before, we generate our next pipeline configuration within the same `config.alloy` file. The below configuration snippets will be added **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file. + ### Source OpenTelemetry logs from Kafka @@ -254,9 +259,6 @@ In this configuration: For more information on the `otelcol.receiver.kafka` configuration, see the [OpenTelemetry Receiver Kafka documentation](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/). -### Open your Code Editor and Locate the `config.alloy` file - -Like before, we generate our next pipeline configuration within the same `config.alloy` file. The below configuration snippets will be added **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file. 
### Batch OpenTelemetry logs before sending From 2be81dc2e869848e1a36d6314e7b93ba93f6ffb3 Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Wed, 24 Jul 2024 10:05:44 +0100 Subject: [PATCH 20/35] Update docs/sources/send-data/alloy/examples/alloy-kafka-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index c23f791a1f35..f287e4de362e 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -119,7 +119,7 @@ Grafana Alloy requires a configuration file to define the components and their r 1. Open the `loki-fundamentals` directory in a code editor of your choice. -1. Locate the `config.alloy` file in the `loki-fundamentals` directory (Top level directory). +1. Locate the `config.alloy` file in the top level directory, `loki-fundamentals'. 1. Click on the `config.alloy` file to open it in the code editor. From 8f574a8c3066ddcc4dc9031d1b891bccc77110d9 Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Wed, 24 Jul 2024 10:05:57 +0100 Subject: [PATCH 21/35] Update docs/sources/send-data/alloy/examples/alloy-kafka-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index f287e4de362e..26f9c01b6a5b 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -123,7 +123,7 @@ Grafana Alloy requires a configuration file to define the components and their r 1. Click on the `config.alloy` file to open it in the code editor. -The below configuration snippets will be added to the `config.alloy` file. +You will copy all three of the following configuration snippets into the `config.alloy` file. ### Source logs from kafka From 7b62e810b3cf6e8d1ad00f7b2ee2156281b68fa0 Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Wed, 24 Jul 2024 10:06:08 +0100 Subject: [PATCH 22/35] Update docs/sources/send-data/alloy/examples/alloy-kafka-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 26f9c01b6a5b..bc94d09805ca 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -157,7 +157,7 @@ For more information on the `loki.source.kafka` configuration, see the [Loki Kaf Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka` component. 
-Add the following configuration to the `config.alloy` file: +Now add the following configuration to the `config.alloy` file: ```alloy loki.relabel "kafka" { forward_to = [loki.write.http.receiver] From dcc392c494acbca8b6311ea7666bd48930ffaf2a Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Wed, 24 Jul 2024 10:06:15 +0100 Subject: [PATCH 23/35] Update docs/sources/send-data/alloy/examples/alloy-kafka-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index bc94d09805ca..5bdaa059d93c 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -180,7 +180,7 @@ For more information on the `loki.relabel` configuration, see the [Loki Relabel Lastly, we will configure the Loki write component. `loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format. -Add the following configuration to the `config.alloy` file: +And finally, add the following configuration to the `config.alloy` file: ```alloy loki.write "http" { endpoint { From 7019cc88969472801d7508f9071c853f0dc6e4bf Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Wed, 24 Jul 2024 10:06:28 +0100 Subject: [PATCH 24/35] Update docs/sources/send-data/alloy/examples/alloy-kafka-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 5bdaa059d93c..8a537ec46d38 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -228,7 +228,7 @@ Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka, we nee ### Open your Code Editor and Locate the `config.alloy` file -Like before, we generate our next pipeline configuration within the same `config.alloy` file. The below configuration snippets will be added **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file. +Like before, we generate our next pipeline configuration within the same `config.alloy` file. You will add the following configuration snippets to the file **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file. 
### Source OpenTelemetry logs from Kafka From c4d2afab75fffd347511852d517867bb31a13c54 Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Wed, 24 Jul 2024 10:06:38 +0100 Subject: [PATCH 25/35] Update docs/sources/send-data/alloy/examples/alloy-otel-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index d711d19b1b74..a9751e40a8d5 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -121,7 +121,7 @@ Grafana Alloy requires a configuration file to define the components and their r 1. Click on the `config.alloy` file to open it in the code editor. -The below configuration snippets will be added to the `config.alloy` file. +You will copy all three of the following configuration snippets into the `config.alloy` file. ### Recive OpenTelemetry logs via gRPC and HTTP From ee78506f71d2236e0f57cafaa33cb8370e9da562 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Wed, 24 Jul 2024 10:47:43 +0100 Subject: [PATCH 26/35] fixed images and further corrections --- .../send-data/alloy/examples/alloy-kafka-logs.md | 15 +++++++-------- .../send-data/alloy/examples/alloy-otel-logs.md | 13 ++++++------- 2 files changed, 13 insertions(+), 15 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index c23f791a1f35..387caefd8077 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -32,7 +32,7 @@ Alternatively, you can try out this example in our interactive learning environm It's a fully configured environment with all the dependencies already installed. -![Interactive](https://raw.githubusercontent.com/grafana/killercoda/prod/assets/loki-ile.svg) +![Interactive](/media/docs/loki/loki-ile.svg) Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). {{< /admonition >}} @@ -123,7 +123,7 @@ Grafana Alloy requires a configuration file to define the components and their r 1. Click on the `config.alloy` file to open it in the code editor. -The below configuration snippets will be added to the `config.alloy` file. +You will copy all three of the following configuration snippets into the `config.alloy` file. ### Source logs from kafka @@ -157,7 +157,7 @@ For more information on the `loki.source.kafka` configuration, see the [Loki Kaf Next, we will configure the Loki relabel rules. The `loki.relabel` component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka` component. -Add the following configuration to the `config.alloy` file: +Now add the following configuration to the `config.alloy` file: ```alloy loki.relabel "kafka" { forward_to = [loki.write.http.receiver] @@ -180,7 +180,7 @@ For more information on the `loki.relabel` configuration, see the [Loki Relabel Lastly, we will configure the Loki write component. 
`loki.write` receives log entries from other loki components and sends them over the network using the Loki logproto format. -Add the following configuration to the `config.alloy` file: +Now add the following configuration to the `config.alloy` file: ```alloy loki.write "http" { endpoint { @@ -228,15 +228,14 @@ Next we will configure Alloy to also ingest OpenTelemetry logs via Kafka, we nee ### Open your Code Editor and Locate the `config.alloy` file -Like before, we generate our next pipeline configuration within the same `config.alloy` file. The below configuration snippets will be added **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file. +Like before, we generate our next pipeline configuration within the same `config.alloy` file. You will add the following configuration snippets to the file **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file. ### Source OpenTelemetry logs from Kafka First, we will configure the OpenTelemetry Kafaka receiver. `otelcol.receiver.kafka` accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*` components. -Add the following configuration to the `config.alloy` file: - +Now add the following configuration to the `config.alloy` file: ```alloy otelcol.receiver.kafka "default" { brokers = ["kafka:9092"] @@ -264,7 +263,7 @@ For more information on the `otelcol.receiver.kafka` configuration, see the [Ope Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. -Add the following configuration to the `config.alloy` file: +Now add the following configuration to the `config.alloy` file: ```alloy otelcol.processor.batch "default" { output { diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index d711d19b1b74..1228dcda5fd1 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -33,7 +33,7 @@ Alternatively, you can try out this example in our interactive learning environm It's a fully configured environment with all the dependencies already installed. -![Interactive](https://raw.githubusercontent.com/grafana/killercoda/prod/assets/loki-ile.svg) +![Interactive](/media/docs/loki/loki-ile.svg) Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repository](https://github.com/grafana/killercoda). {{< /admonition >}} @@ -111,7 +111,7 @@ Grafana Alloy requires a configuration file to define the components and their r {{< docs/ignore >}} **Note: Killercoda has an inbuilt Code editor which can be accessed via the `Editor` tab.** 1. Expand the `loki-fundamentals` directory in the file explorer of the `Editor` tab. -1. Locate the `config.alloy` file in the `loki-fundamentals` directory (Top level directory). +1. Locate the `config.alloy` file in the top level directory, `loki-fundamentals'. 1. Click on the `config.alloy` file to open it in the code editor. {{< /docs/ignore >}} @@ -121,14 +121,13 @@ Grafana Alloy requires a configuration file to define the components and their r 1. 
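The demo only needs the endpoint URL, but `loki.write` can also carry a tenant ID, authentication, and extra labels when you write to a shared or secured Loki. The block below is a sketch with placeholder values, not something this tutorial uses:

```alloy
// Illustrative only: writing to a multi-tenant, authenticated Loki.
// The tenant name and credentials are placeholders.
loki.write "secured_example" {
  endpoint {
    url       = "http://loki:3100/loki/api/v1/push"
    tenant_id = "greenhouse"

    basic_auth {
      username = "example-user"
      password = "example-password"
    }
  }

  // Added to every log line sent through this component.
  external_labels = {"env" = "demo"}
}
```

For this tutorial, the minimal form shown next is all that is required.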
Click on the `config.alloy` file to open it in the code editor. -The below configuration snippets will be added to the `config.alloy` file. +You will copy all three of the following configuration snippets into the `config.alloy` file. ### Recive OpenTelemetry logs via gRPC and HTTP First, we will configure the OpenTelemetry receiver. `otelcol.receiver.otlp` accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application. -Add the following configuration to the `config.alloy` file: - +Now add the following configuration to the `config.alloy` file: ```alloy otelcol.receiver.otlp "default" { http {} @@ -152,7 +151,7 @@ For more information on the `otelcol.receiver.otlp` configuration, see the [Open Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. -Add the following configuration to the `config.alloy` file: +Now add the following configuration to the `config.alloy` file: ```alloy otelcol.processor.batch "default" { output { @@ -170,7 +169,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. -Add the following configuration to the `config.alloy` file: +Now add the following configuration to the `config.alloy` file: ```alloy otelcol.exporter.otlphttp "default" { client { From 3aa3d1a5513522fa4cdbe580df863d72463bd4b4 Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Thu, 25 Jul 2024 09:09:31 +0100 Subject: [PATCH 27/35] Update docs/sources/send-data/alloy/examples/alloy-kafka-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 3c976f81645f..0b665dd3c34d 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -204,7 +204,7 @@ curl -X POST http://localhost:12345/-/reload ``` -The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). +The new configuration will be loaded. You can verify this by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). ## Stuck? Need help? 
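If you have never looked at an Alloy file before, it may help to know that a valid file can be as small as a single `logging` block controlling Alloy's own log output. The sketch below is only for orientation; the repository already ships a starter `config.alloy`, so there is nothing you need to add from it:

```alloy
// Minimal Alloy configuration: only sets how Alloy logs about itself.
logging {
  level  = "info"
  format = "logfmt"
}
```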
From 3252a647df896d8104def8d7cf30c7079cb56dbb Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Thu, 25 Jul 2024 09:09:41 +0100 Subject: [PATCH 28/35] Update docs/sources/send-data/alloy/examples/alloy-kafka-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 0b665dd3c34d..9e5859316d59 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -281,7 +281,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. -Add the following configuration to the `config.alloy` file: +Finally, add the following configuration to the `config.alloy` file: ```alloy otelcol.exporter.otlphttp "default" { client { From 31be3a4a7ba78cc13251386e0569b2edf3c281da Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Thu, 25 Jul 2024 09:09:52 +0100 Subject: [PATCH 29/35] Update docs/sources/send-data/alloy/examples/alloy-kafka-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 9e5859316d59..3a70fd2c5106 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -304,7 +304,7 @@ curl -X POST http://localhost:12345/-/reload ``` -The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). +The new configuration will be loaded. You can verify this by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). ## Stuck? Need help (Full Configuration)? From e830c863cbd4b3f038601428c27bd81f0b4c2184 Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Thu, 25 Jul 2024 09:10:02 +0100 Subject: [PATCH 30/35] Update docs/sources/send-data/alloy/examples/alloy-otel-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 1228dcda5fd1..7c4d81db7710 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -45,7 +45,7 @@ Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repos In this scenario, we have a microservices application called the Carnivourse Greenhouse. This application consists of the following services: - **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. -- **plant Service:** Manages the creation of new plants and updates other services when a new plant is created. 
+- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created. - **Simulation Service:** Generates sensor data for each plant. - **Websocket Service:** Manages the websocket connections for the application. - **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. From af3b832ea76e54add01cbae0d56ff2b8965bda78 Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Thu, 25 Jul 2024 09:10:16 +0100 Subject: [PATCH 31/35] Update docs/sources/send-data/alloy/examples/alloy-otel-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 7c4d81db7710..2fa5dd822ad9 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -149,7 +149,7 @@ For more information on the `otelcol.receiver.otlp` configuration, see the [Open ### Create batches of logs using a OpenTelemetry Processor -Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other otelcol components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. +Next, we will configure a OpenTelemetry processor. `otelcol.processor.batch` accepts telemetry data from other `otelcol` components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size and time based batching. Now add the following configuration to the `config.alloy` file: ```alloy From 2b90d10080ff5b227fdec7d0ec167c803db0d63d Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Thu, 25 Jul 2024 09:10:26 +0100 Subject: [PATCH 32/35] Update docs/sources/send-data/alloy/examples/alloy-otel-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 2fa5dd822ad9..1f330eccfd8f 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -167,7 +167,7 @@ For more information on the `otelcol.processor.batch` configuration, see the [Op ### Export logs to Loki using a OpenTelemetry Exporter -Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other otelcol components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. +Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp` accepts telemetry data from other `otelcol` components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki's native OTLP endpoint. 
Now add the following configuration to the `config.alloy` file: ```alloy From 3ea3d84503605e606ff295d2d73e7c80d5b408c4 Mon Sep 17 00:00:00 2001 From: Jay Clifford <45856600+Jayclifford345@users.noreply.github.com> Date: Thu, 25 Jul 2024 09:10:38 +0100 Subject: [PATCH 33/35] Update docs/sources/send-data/alloy/examples/alloy-otel-logs.md Co-authored-by: J Stickler --- docs/sources/send-data/alloy/examples/alloy-otel-logs.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md index 1f330eccfd8f..caf787fc07ec 100644 --- a/docs/sources/send-data/alloy/examples/alloy-otel-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-otel-logs.md @@ -189,7 +189,7 @@ curl -X POST http://localhost:12345/-/reload ``` -The new configuration will be loaded this can be verified by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). +The new configuration will be loaded. You can verify this by checking the Alloy UI: [http://localhost:12345](http://localhost:12345). ## Stuck? Need help? From 9fa414fd1e51e28b09d569fc60d6799c9a5ffbf8 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Thu, 25 Jul 2024 09:17:03 +0100 Subject: [PATCH 34/35] added bullet back in --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index 3a70fd2c5106..f2bd4293f86e 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -42,6 +42,7 @@ Provide feedback, report bugs, and raise issues in the [Grafana Killercoda repos ## Scenario In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services: - **User Service:** Manages user data and authentication for the application. Such as creating users and logging in. +- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created. - **Simulation Service:** Generates sensor data for each plant. - **Websocket Service:** Manages the websocket connections for the application. - **Bug Service:** A service that when enabled, randomly causes services to fail and generate additional logs. From 2e71b0d8501cdc0c15d92f2fc853810ea6e4da49 Mon Sep 17 00:00:00 2001 From: Jayclifford345 Date: Thu, 25 Jul 2024 09:20:13 +0100 Subject: [PATCH 35/35] Removed line --- docs/sources/send-data/alloy/examples/alloy-kafka-logs.md | 2 -- 1 file changed, 2 deletions(-) diff --git a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md index f2bd4293f86e..f75cfcc72ac7 100644 --- a/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md +++ b/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md @@ -173,8 +173,6 @@ In this configuration: - `forward_to`: The list of receivers to forward the logs to. In this case, we are forwarding the logs to the `loki.write.http.receiver`. Though in this case, we are directly calling the rule from the `loki.source.kafka` component. So `forward_to` is being used as a placeholder as it is required by the `loki.relabel` component. - `rule`: The relabeling rule to apply to the incoming logs. In this case, we are renaming the `__meta_kafka_topic` label to `topic`. 
-Lastly, we will configure the Loki write component. `loki.write` receives log entries from other Loki components and sends them over the network using the Loki logproto format. - For more information on the `loki.relabel` configuration, see the [Loki Relabel documentation](https://grafana.com/docs/alloy/latest/reference/components/loki.relabel/). ### Write logs to Loki