
Improve ingester reads #1550

Closed
cyriltovena opened this issue Jan 20, 2020 · 4 comments
Labels: component/loki, keepalive (an issue or PR that will be kept alive and never marked as stale), type/enhancement (something existing could be improved)

Comments

@cyriltovena (Contributor) commented Jan 20, 2020

When running queries across high-throughput streams, we can see that deduplicating ingester data in the querier is very costly and time consuming.

For example, one query ran for 38s and reported the following:


| Metric | Value |
| -- | -- |
| Fetched chunks | 264 |
| Time fetching chunks (ms) | 954 |
| Total duplicates | 16322155 |
| Total bytes compressed (MB) | 150 |
| Total bytes uncompressed (MB) | 980 |
| Total exec time (ms) | 38000 |
| level | "debug" |

As you can see, ingesters seem to send tons of duplicates, causing slowdowns.

Currently we query ingesters for the whole time range of the request.

I propose that we find the latest chunk time for each stream in the storage and use that as part of the query to the ingesters, minimising the results sent from ingesters to only what we don't already have from the storage.

With this map of metric name to start time, we should be able to build a different stream iterator here
https://github.com/grafana/loki/blob/master/pkg/ingester/instance.go#L203 that would have a better start time.
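
A minimal sketch of the idea, assuming a hypothetical map from stream labels to the latest chunk end time found in the store (the names `latestStoreChunk` and `clampStart` are illustrative, not Loki's actual API):

```go
package main

import (
	"fmt"
	"time"
)

// latestStoreChunk maps a stream's label set to the end time of its newest
// chunk found in the store (built by the querier before hitting ingesters).
type latestStoreChunk map[string]time.Time

// clampStart returns the effective start time for querying an ingester
// stream: the later of the request start and the point just after the data
// already covered by store chunks, so ingesters only return what's missing.
func clampStart(reqStart time.Time, streamLabels string, latest latestStoreChunk) time.Time {
	if end, ok := latest[streamLabels]; ok && end.After(reqStart) {
		return end.Add(time.Nanosecond)
	}
	return reqStart
}

func main() {
	latest := latestStoreChunk{
		`{app="nginx"}`: time.Now().Add(-10 * time.Minute),
	}
	reqStart := time.Now().Add(-6 * time.Hour)
	fmt.Println("effective ingester query start:", clampStart(reqStart, `{app="nginx"}`, latest))
}
```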

/cc @gouthamve @slim-bean @owen-d

Although it should be noted that this might not improve performance by a lot since we run with:

  chunk_idle_period: 15m
  chunk_retain_period: 1m
@cyriltovena added the component/loki, type/enhancement and keepalive labels on Jan 20, 2020
@owen-d (Member) commented Jan 20, 2020

Cortex handles this via a querier.query-ingesters-within configuration (which can be set to ingester.max-chunk-age). That sounds like a good place to start, as it's a low lift that may help us validate whether the strategy you're describing is necessary.
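
For reference, an illustrative (not authoritative) config pairing the two options mentioned above might look like this; option names and placement should be checked against the Loki docs for your version:

```yaml
querier:
  # Only ask ingesters for data newer than this; older data comes from the store.
  query_ingesters_within: 2h
ingester:
  # Flush chunks before they exceed this age, so it stays consistent with the setting above.
  max_chunk_age: 2h
```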

@cyriltovena (Contributor, Author) commented

Just looked at our p99 for chunk age and it's around 5 to 15 hours, so we do have a lot of chunks in memory.

Apparently Cortex had this issue and fixed it by batching ingester reads.

@slim-bean (Collaborator) commented

@cyriltovena I believe we could close this now? We have fixed chunk lengths now, as well as query-ingesters-within. WDYT?

@slim-bean (Collaborator) commented

Closing
