Commit
docs: add info about compression (#15699)
Signed-off-by: J Stickler <julie.stickler@grafana.com>
Co-authored-by: Christian Haudum <christian.haudum@gmail.com>
JStickler and chaudum authored Jan 14, 2025
1 parent 9e617ec commit 709a3a2
Showing 1 changed file with 11 additions and 3 deletions.

docs/sources/configure/bp-configure.md
@@ -23,22 +23,22 @@ One issue many people have with Loki is their client receiving errors for out of

There are a few things to dissect from that statement. The first is that this restriction is per stream. Let’s look at an example:

```bash
{job="syslog"} 00:00:00 i'm a syslog!
{job="syslog"} 00:00:01 i'm a syslog!
```

If Loki received these two lines, which are for the same stream, everything would be fine. But what about this case:

```bash
{job="syslog"} 00:00:00 i'm a syslog!
{job="syslog"} 00:00:02 i'm a syslog!
{job="syslog"} 00:00:01 i'm a syslog! <- Rejected out of order!
```

What can you do about this? What if this happened because these logs came from different systems? You can solve this with an additional label that is unique per system:

```bash
{job="syslog", instance="host1"} 00:00:00 i'm a syslog!
{job="syslog", instance="host1"} 00:00:02 i'm a syslog!
{job="syslog", instance="host2"} 00:00:01 i'm a syslog! <- Accepted, this is a new stream!
```

@@ -50,6 +50,14 @@ But what if the application itself generated logs that were out of order? Well,
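
One way to guarantee such a per-system label is to attach it in the shipping agent rather than in each application. A minimal sketch, assuming Promtail is the client pushing to a local Loki endpoint (the URL and label value here are illustrative):

```yaml
# promtail-config.yaml (fragment) -- hypothetical example values
clients:
  - url: http://localhost:3100/loki/api/v1/push
    # external_labels are added to every stream this client pushes,
    # so logs from each host land in their own per-host stream.
    external_labels:
      instance: host1
```

Other agents (for example Grafana Alloy) have equivalent settings for adding static labels at ship time.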

It's also worth noting that the batching nature of the Loki push API can lead to some out-of-order errors that are really false positives. (Perhaps a batch partially succeeded and those entries were persisted; on retry, anything that previously succeeded returns an out-of-order error, while anything new is accepted.)

## Use `snappy` compression algorithm

`Snappy` is currently the Loki compression algorithm of choice. It is much faster than `gzip`, though it does not compress as efficiently. This was an acceptable tradeoff for us.

Grafana Labs found that `gzip` compressed very well but was very slow, and this was causing slow query responses.

`LZ4` is a good compromise of speed and compression performance. While compression is slightly slower than `snappy`, the compression ratio is higher, resulting in smaller chunks in object storage.
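
Chunk compression is configured on the ingester. A minimal sketch, assuming the setting lives in your main Loki config file (`chunk_encoding` accepts values such as `none`, `gzip`, `lz4`, and `snappy`):

```yaml
# loki-config.yaml (fragment)
ingester:
  # Compression used for chunks flushed to object storage.
  # snappy favors speed; lz4 trades a little speed for a better ratio.
  chunk_encoding: snappy
```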

## Use `chunk_target_size`

Using `chunk_target_size` instructs Loki to try to fill all chunks to a target _compressed_ size of 1.5MB. These larger chunks are more efficient for Loki to process.
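
A minimal sketch of that setting, assuming it sits in the ingester block of your Loki config (1572864 bytes = 1.5MB):

```yaml
# loki-config.yaml (fragment)
ingester:
  # Target *compressed* chunk size; Loki cuts a chunk when it reaches
  # this size (or hits another limit, such as maximum chunk age).
  chunk_target_size: 1572864
```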
