What do you want from Loki in 2021? #3119
Comments
I'll go first: I've put off making packages (e.g. deb, rpm) because I have some experience here and it's an amazing hassle; I also can't count on two hands anymore how many different package managers there are now... #52, amongst the oldest of Loki issues, describes this request. It has a lot of thumbs up too, and recent comments. I still don't want to do it, but tell me it should be done: thumbs up this comment and thumbs up that issue if you want this, and go add a comment to that issue saying which package managers you would like. I still question whether spending time on this, plus the continued time on maintaining, releasing, fixing, etc., is worth it vs. the other things Loki needs, but I also know this is a big hassle for people who just want it to be easier to use Loki, and we want that too.
Up to now I have not used Loki, but I am looking for a way to get exemplars (trace discovery article). I would like to be able to jump from a Grafana metric to some traces/logs.
A guide (documentation) for an on-prem HA configuration.
I would really love a web app/GUI, like in Prometheus, with the possibility to look at individual streams and delete unused/unwanted/discontinued streams. This would be very nice during setup and config changes of both Loki and the other agents feeding data into Loki. I also think this would make troubleshooting and testing much more effective. Parts of this are mentioned in #577 😃
I know 2021 would probably not be enough for this, but... any integration with machine learning tools for log analysis? :)
Maybe better documentation for Loki installation as microservices in general. I am doing it now with Tanka in EKS and on-prem, and it is very hard to know what needs to be overridden. More examples, maybe some video demos. Loki is excellent! Thanks very much!
I'd be really interested in enhancements to the ergonomics of Promtail pipelines, specifically around generating metrics (or some equivalent feature). Narrowing down the label/field set, combining fields/labels, and generating multiple metrics from the same stream would all go a long way toward making this a super powerful tool for adding meaningful observability to third-party tools and software that only log.
I second that! The metrics extraction feature is really awesome. I'd even propose allowing a Promtail instance to be configured (read: not configured) so that it does not try to ship any of the logs to Loki, but only extracts and provides metrics. Just a few more mutation features, such as allowing multiplications/divisions, would make it possible to properly produce the Prometheus-recommended base units (https://prometheus.io/docs/practices/naming/#base-units) out of the milliseconds or bits that might be coming out of the log files. This would only have to be rather simple, not a full-blown math library. Or would this already be covered by the -> I raised an issue: #3129
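For context, here is a minimal, unofficial sketch of what the existing Promtail metrics stage looks like today; the log format (`duration=123ms`), file path, and metric name are made up for illustration, and the milliseconds-to-seconds conversion asked for above is exactly what this stage cannot express:

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.log   # hypothetical path, for illustration only
    pipeline_stages:
      # Extract the duration (in milliseconds) from a hypothetical "duration=123ms" field.
      - regex:
          expression: 'duration=(?P<duration_ms>[0-9.]+)ms'
      # Expose it as a counter; the extracted value is added as-is (still milliseconds),
      # there is no built-in way to divide by 1000 to get the base unit (seconds).
      - metrics:
          app_request_duration_ms_total:
            type: Counter
            description: "sum of request durations, in milliseconds"
            source: duration_ms
            config:
              action: add
```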
To allow for better debugging/development of pipelines, Promtail should be able to print the parsed logs to STDOUT or some other "debug log file" as JSON, just like it would send them out to Loki - https://grafana.com/docs/loki/latest/clients/promtail/troubleshooting/#troubleshooting-promtail . This would remove the need for a round trip via a running Loki instance when developing and fine-tuning the log parsing pipeline.
Great stuff so far everyone! Thanks so much for the feedback, keep it coming!
I'd love a way to drop labels in a LogQL query, e.g. after I
I want pattern detection: being able to see all errors that are of the same type and only vary by things like dates, numbers, or IPs.
Is this not already possible with --dry-run and --stdin?
@cyriltovena According to https://grafana.com/docs/loki/latest/clients/promtail/troubleshooting/#dry-running, this is only about reading from stdin or not updating the file position pointers. I was rather talking about not requiring a Loki instance to send the data to, but simply dumping it to stdout. This would also allow, e.g., unit tests where a given input must produce the expected output (JSON), to develop and test Promtail pipelines.
--dry-run and --stdin can work independently; you can use them for testing pipelines.
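For anyone following along, the workflow being discussed is roughly the one below; this is only a sketch based on the troubleshooting page linked above, and the file names are placeholders:

```bash
# Feed a sample log file through the pipeline defined in promtail.yaml and
# print what would be sent to Loki, without actually shipping anything.
cat sample.log | promtail --config.file=promtail.yaml --stdin --dry-run \
  --client.url=http://127.0.0.1:3100/loki/api/v1/push
```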
@guettli You can try the Grafana TNS Observability Demo. The demo can be set up very easily, and you can try out metrics -> logs -> traces and metrics -> traces -> logs correlation.
Download logs for manual processing (through Grafana). Loki is often focused on specific log formats. That's great if you own, know, and control the whole stack. But from an operations perspective, in practice, you often don't know the exact log format of an application you deploy, or you have mixed formats in the same stream (e.g. syslog). I'd like to dump everything into Loki and then get it back out in the same format. Through Grafana, because that's how the users are authorized to access the logs.
Ability to stream logs into Loki. Currently there's only an HTTP-based API for batch pushes. It would be cool to have something closer to real time, like continuously pushing raw Protobuf messages directly to Loki via TCP or UDP without any gRPC/HTTP/JSON overhead.
We are also looking for this and made some prototypes, but IMO it is unrelated to Loki. It is a higher-level abstraction and more relevant to Grafana and Alertmanager than to the log collection mechanism. Our prototype works perfectly with Graylog (Elastic), Loki, Fluentd, and plain files.
Or an RSocket front end, because of backpressure control.
It would be awesome to remove the constraint of sending data in order for a given set of labels. That way, no more "out of order" errors.
And complete the backport of the Fluent Bit native Loki plugin :) as it is still missing some features compared to the Go FB plugin.
A discussion was started on the Loki forums here. The idea revolves around being able to add additional data to a log line: taking something from the environment or service discovery and adding it to the log line, vs. adding it as a label. The solution could take other forms too, like some kind of non-indexed fields or metadata sent with log lines; however, I'm concerned this would make Loki more complicated, and I'm not sure that complexity adds benefit (I could be convinced otherwise, though). I think it would be great to have a way to add data to a log line in Promtail, supporting structured formats like logfmt and JSON.
The ability to enrich logs using another datasource. For example, logs may contain information relating to a user-id. It would be useful to be able to translate this user-id into a customer/tenant-id, so that the event could be associated with a specific tenant. It could also be used to fetch a user-friendly name for a specific user-id, so that it can be displayed more clearly in the logs as well. Logstash provides this functionality via the jdbc integration plugin (I believe Fluentd also offers something similar). A query (or HTTP call) could be run on every log entry, or it could cache all key-value mappings and only update the cache every minute or so, instead looking them up in a hash map on each log entry. I originally thought this is what @slim-bean was suggesting; however, the discussion he linked to seems to be about a different issue (if not, I'll delete this comment and 👍 the one above).
More technical documentation, articles, how-tos, and examples for Promtail options, templating, metrics extraction, and the processing pipeline in general. The current corpus of configuration examples is very lean. So: more varied documented use cases and more varied configuration examples.
Loki data retention policies based on criteria like size, time, and labels.
A generic data enrichment model using gRPC and/or REST APIs. Hopefully this is not too noisy given the two prior posts on enrichment by @slim-bean and @McPo. Why not add gRPC and/or REST API stages and pipelines, and let the user decide whether to use them on ingest in Promtail or when querying via LogQL, similar to how the template stage and line_format pipeline work now? If the gRPC and REST API endpoints are required to accept and return a list of strings, or a map of strings, this creates a generic enrichment model that can be used by anyone who can stand up a gRPC or REST API endpoint. As @slim-bean suggested, it would be quite helpful if the enrichment results could be added to the log line in Promtail so that high-cardinality values could be used, and optionally turned into labels in LogQL. This would support @McPo's user-info enrichment use case, geolocation lookup use cases, and many others. It would also allow Loki users to pre-enrich or post-enrich logs as needed.
An OSS (object storage) store plus a local store (as a cache). The scenario: queries for logs within the last 7 days use the local store; anything further back uses OSS.
What I hope for most is a histogram like ES/Kibana's. Discussion ref: grafana/grafana#19949
First of all, Loki is excellent, thank you for your work! We would be interested in querying Loki across multiple tenants. This would allow us to retrieve information/KPIs on a set of tenants. Here are some links where the subject has already been discussed:
Promtail (or Loki - unsure where it's getting lost) should preserve the whole raw log message. For example, in its default configuration with the following syslog message:
Are you referring to the RFC5424 support? https://grafana.com/docs/loki/latest/clients/promtail/configuration/#syslog
set
see
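For reference, a rough sketch of the RFC5424 syslog receiver being referred to; the port, label names, and which header fields get re-attached via relabeling are illustrative choices, not a recommendation:

```yaml
scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      label_structured_data: true   # expose RFC5424 structured data as internal labels
      labels:
        job: syslog
    relabel_configs:
      # Re-attach parsed header fields that are otherwise not part of the stored line.
      - source_labels: ['__syslog_message_hostname']
        target_label: host
      - source_labels: ['__syslog_message_app_name']
        target_label: app
      - source_labels: ['__syslog_message_severity']
        target_label: severity
```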
Recording rules that write to multiple Prometheus servers.
Encryption of data, so logs can be safely stored on object storage.
Add a geolocation option to LogQL for matching IP addresses.
Give us the ability to read only the contents of the most recently written file present in a given directory.
USE CASE 1: my log directory contains many log files (hundreds or thousands of files produced daily), and Promtail will keep reading all of these files until they are rotated. Alternatively, add a date into the Promtail path configuration, something like a date placeholder in the path (see the sketch after this comment). The reason is that in that path I have all the logs named like app_xyz_2021-11-16.
USE CASE 2: an IIS log directory contains many files like this, and so on.
Many thanks!
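A hedged sketch of the two shapes this could take: the glob that works today versus a purely hypothetical date placeholder (not valid Promtail configuration) along the lines of what the comment above seems to be asking for:

```yaml
scrape_configs:
  - job_name: app_xyz
    static_configs:
      - targets: [localhost]
        labels:
          job: app_xyz
          # What works today: a glob matching every daily file, which keeps
          # Promtail tailing old files until they are rotated away.
          __path__: /var/log/app/app_xyz_*.log
          # Hypothetical syntax for the request above -- NOT real Promtail config:
          # __path__: /var/log/app/app_xyz_{today:2006-01-02}.log
```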
I know it's 2022, but: reduced and more consistent memory usage (perhaps via @cyriltovena's #2900, improved index handling?).
@splitice funny you should mention that; we're writing the design doc as we speak to replace the index for exactly this reason.
@cyriltovena glad to hear it; I'm looking forward to my endless "increase the resources given to Loki" cycle coming to an end.
What's your period for the index? It should never grow past a day normally.
@cyriltovena better guidance on the "correct" settings to use, via defaults or documentation, would make configuring Loki and its memory consumption less of a problem. I see that v1.4.2 goes some way towards this, but it'd be great to have some better examples of real-world best practices to extrapolate from.
@cyriltovena 24hrs
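For anyone else reading: the period in question is the index period set in schema_config. A minimal sketch, where the start date, store, and object store are placeholders for your own values:

```yaml
schema_config:
  configs:
    - from: 2021-01-01          # placeholder start date
      store: boltdb-shipper     # boltdb-shipper requires a 24h index period
      object_store: s3
      schema: v11
      index:
        prefix: loki_index_
        period: 24h             # the "period for the index" being asked about
```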
What I think would help with the index (not backed by benchmarking):
In my case, we currently suspect the problem is caused by a specific label that has nearly 1M values per day (out of roughly 5-10M total documents per day). To rein in the cardinality, we have introduced a prefix field to index on instead (each value should then match on the order of 100-1000 documents). I'll know if it works within a few days.
I would love the basic ability to pull logs from Promtail to Loki
@phanngl you mean Loki should scrape Promtail? What is the motivation?
My company took a spartan approach to our infrastructure. Nearly all outbound traffic is blocked; only inbound connections from devs' PCs and the monitoring server are allowed. Opening the firewall to allow pushes from Promtail may be possible, but we'd have to jump through a bunch of bureaucratic hoops, which is a PITA. I know it's not a good idea to make Loki scrape Promtail; I just want to hear your thoughts about this.
Ah, so Loki would be able to reach Promtail but not the other way around?
Yeah, that's the case.
I'd love to have a way to query all logs/streams together. AFAIK, right now a stream selector is needed, which makes it impossible to query everything, right?
Support paging in the HTTP query API.
If this issue is still current, I would love to be able to make custom buttons in Loki using label-populated regexes, just like the buttons that take the user to Tempo. My first use case would be to use logged Snowflake QueryIds to link to the query detail, e.g.:
@Andrew-Wichmann This is a great feature request! 🙂 I have created a feature request for this: grafana/grafana#62660
thank you! =)
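In the meantime, the closest existing mechanism I'm aware of is Grafana's derived fields on the Loki datasource, which turn a regex match on the log line into a clickable link (the Tempo button mentioned above works the same way). A rough provisioning sketch, where the regex and target URL are made up for illustration:

```yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    url: http://loki:3100
    jsonData:
      derivedFields:
        - name: QueryId
          # Made-up log format; adjust to however the QueryId appears in your logs.
          matcherRegex: 'queryId=([A-Za-z0-9-]+)'
          # Placeholder URL; ${__value.raw} is replaced with the captured group.
          url: 'https://example.com/query-detail/${__value.raw}'
```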
Have a feature you would really like to see implemented in Loki? Or a bug you would really like to see fixed?
We use Loki every day and love it, but maybe it isn't working so well for you.
Or maybe you use Loki differently than we do and can think of improvements.
What to do:
It can be difficult to prioritize work for a project like Loki, which can be used in so many different ways; it's hard to tell how common some use cases might be or how popular a feature request is, so your feedback would be greatly appreciated!
One request: we can keep the thread easier to read if we don't have too many back-and-forth discussions here, so please prefer creating a separate issue, or using an existing issue, for specific conversations.