
Base64 encoding of chunk ExternalKey in filesystem object store prevents per-tenant subdirectories #4291

Closed
jfolz opened this issue Sep 8, 2021 · 8 comments

Comments


jfolz commented Sep 8, 2021

Describe the bug
Docs for filesystem object storage state that:

A folder is created for every tenant; all the chunks for one tenant are stored in that directory.

What happens instead is that chunks of different tenants are stored at the top level of the specified directory.

The Chunk.ExternalKey method suggests that a hierarchical structure is intended (the UserID, i.e. the tenant, is separated by a slash from the remainder of the key), but the base64 encoding used by the storage client prevents this from happening.
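
For illustration, here is a minimal sketch of the problem (not Loki's actual code; the root directory, tenant name, and chunk identifier below are made up):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"path/filepath"
)

func main() {
	// Hypothetical external key in the "<tenant>/<chunk>" form that
	// Chunk.ExternalKey produces (values here are invented).
	externalKey := "tenant-a/3f8e:17c2a1b0:17c2a5c8:d41d8cd9"

	// Encoding the whole key (roughly what the filesystem client does)
	// erases the tenant/chunk separator, so no per-tenant subdirectory
	// is created.
	flat := base64.StdEncoding.EncodeToString([]byte(externalKey))
	fmt.Println(filepath.Join("/loki/chunks", flat))

	// What the docs describe: a folder per tenant, with the (encoded)
	// chunk name inside it.
	tenant, chunk := "tenant-a", "3f8e:17c2a1b0:17c2a5c8:d41d8cd9"
	fmt.Println(filepath.Join("/loki/chunks", tenant,
		base64.StdEncoding.EncodeToString([]byte(chunk))))
}
```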

To Reproduce
Steps to reproduce the behavior:

  1. Start Loki multi-tenant with filesystem storage.
  2. Send logs from two or more tenants.

Expected behavior
Chunks for tenants are stored in subdirectories as stated.

Environment:

  • Infrastructure: bare-metal
  • Deployment tool: shell

jfolz commented Sep 8, 2021

Notably, base64.StdEncoding uses the forward-slash character in its alphabet, so it may produce paths with semi-arbitrary subdirectories. base64.URLEncoding uses _ instead, which would be safe.
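
As a quick illustration of the alphabet difference (the two input bytes are arbitrary, chosen only so the encoded output exercises the last two alphabet characters):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The bytes 0xfb 0xff map to the sextets 62 and 63, i.e. the last two
	// characters of the alphabet: "+/" in StdEncoding, "-_" in URLEncoding.
	b := []byte{0xfb, 0xff}
	fmt.Println(base64.StdEncoding.EncodeToString(b)) // "+/8=" (contains a slash)
	fmt.Println(base64.URLEncoding.EncodeToString(b)) // "-_8=" (filesystem-safe)
}
```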


LinTechSo commented Feb 1, 2022

Hi all, any updates?
I hit the same issue in version 2.4.


jfolz commented Feb 1, 2022

@LinTechSo check #4333

@LinTechSo

Hi @jfolz, thanks. But where should I put this [ tenant_folders: true ] or similar in the Loki YAML configuration?


jfolz commented Feb 1, 2022

It's only a WIP PR by me, not something that's currently released. You can watch it to follow progress. It's currently blocked on some other changes, though (see the discussion there).


stale bot commented Apr 18, 2022

Hi! This issue has been automatically marked as stale because it has not had any
activity in the past 30 days.

We use a stalebot among other tools to help manage the state of issues in this project.
A stalebot can be very useful in closing issues in a number of cases; the most common
is closing issues or PRs where the original reporter has not responded.

Stalebots are also emotionless and cruel and can close issues which are still very relevant.

If this issue is important to you, please add a comment to keep it open. More importantly, please add a thumbs-up to the original issue entry.

We regularly sort closed issues that have a stale label by thumbs-up count.

We may also:

  • Mark issues as revivable if we think it's a valid issue but isn't something we are likely
    to prioritize in the future (the issue will still remain closed).
  • Add a keepalive label to silence the stalebot if the issue is very common/popular/important.

We are doing our best to respond, organize, and prioritize all issues, but it can be a challenging task;
our sincere apologies if you find yourself at the mercy of the stalebot.

stale bot added the stale label on Apr 18, 2022

jfolz commented Apr 18, 2022

.

stale bot removed the stale label on Apr 18, 2022

jfolz commented Jul 31, 2022

Support for schema v12 for filesystem storage, added in #5291, fixes this.

jfolz closed this as completed on Jul 31, 2022