LogQL Supported Features
LogQL is Loki's PromQL-inspired query language. cLoki supports many LogQL query types and continues to add more. To see the full range of Loki's capabilities (which cLoki aims to match), see Grafana's LogQL Page.
There are two types of LogQL queries:
- Log queries return the contents of log lines.
- Metric queries extend log queries to calculate values based on query results.
All LogQL queries contain a log stream selector.
The stream selector determines which log streams to include in a query’s results. The stream selector is specified by one or more comma-separated key-value pairs. Each key is a log label and each value is that label’s value.
Consider this stream selector:
{app="mysql",name="mysql-backup"}
All log streams that have both a label of app whose value is mysql and a label of name whose value is mysql-backup will be included in the query results. A stream may contain other pairs of labels and values, but only the specified pairs within the stream selector are used to determine which streams will be included within the query results.
The = operator after the label name is a label matching operator. The following label matching operators are supported:
- =: exactly equal
- !=: not equal
- =~: regex matches
- !~: regex does not match
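As a brief sketch (reusing the app label from the example above; the values and patterns are illustrative), each operator in a stream selector could look like this:
- {app="mysql"}: app is exactly mysql
- {app!="mysql"}: app is not mysql
- {app=~"mysql.*"}: app matches the regular expression mysql.*
- {app!~"mysql.*"}: app does not match the regular expression mysql.*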
A log pipeline can be appended to a log stream selector to further process and filter log streams. It is usually composed of one or more expressions, and each expression is executed in sequence for each log line; a combined example follows the list below.
A log pipeline can be composed of:
- Line Filter Expression
- Parser Expression
- Label Filter Expression
- Line Format Expression
- Labels Format Expression
- Unwrap Expression (metrics)
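As an illustrative sketch (the error string, json stage, status label, and message field are assumptions, not taken from the original text), several of these stages can be chained after a single stream selector:
{job="mysql"} |= "error" | json | status="500" | line_format "{{.message}}"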
The line filter expressions are used to filter the contents of returned logs, discarding those lines that do not match the case-sensitive expression.
The following filter operators are supported:
- |=: Log line contains string
- !=: Log line does not contain string
- |~: Log line contains a match to the regular expression
- !~: Log line does not contain a match to the regular expression
A complete query combining a stream selector and a line filter expression:
{job="mysql"} |= "error"
Parser expressions can parse and extract labels from the log content. Those extracted labels can then be used for filtering with label filter expressions or for metric aggregations.
The json parser operates in two modes:
- without parameters: Adding | json to your pipeline will extract all json properties as labels if the log line is a valid json document. Nested properties are flattened into label keys using the _ separator.
  {job="0.6611336793589486_json"} | json
- with parameters: Using | json label="expression" in your pipeline will extract only the specified json fields to labels.
  {job="0.6611336793589486_json"} | json my_field="somevalue"
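To make the flattening rule concrete, consider this hypothetical log line (the field names are illustrative, not from the original text):
{"level": "error", "request": {"method": "GET", "duration_ms": 125}}
With | json and no parameters, the extracted labels would be level="error", request_method="GET", and request_duration_ms="125".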
The following log range aggregation functions are supported for building metric queries (combined examples appear at the end of this section):
- rate(log-range): calculates the number of entries per second
- count_over_time(log-range): counts the entries for each log stream within the given range.
- bytes_rate(log-range): calculates the number of bytes per second for each stream.
- bytes_over_time(log-range): counts the amount of bytes used by each log stream for a given range.
- absent_over_time(log-range): returns an empty vector if the range vector passed to it has any elements and a 1-element vector with the value 1 if the range vector passed to it has no elements. (absent_over_time is useful for alerting when no time series or log streams exist for a label combination for a certain amount of time.)
- Aggregation over range functions, e.g. sum(count_over_time({label=value}[range])) by (label)
- | json: support for the json parser and unwrap in metric queries is working, for example:
sum_over_time({test_id="0.6611336793589486_json"} | json lbl_int1="str_id"| unwrap lbl_int1 [5s]) by (test_id)
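More generally, here is a hedged sketch of metric queries that combine the functions above with the filters described earlier (the job and level labels and the 5m range are illustrative, not from the original text):
rate({job="mysql"} |= "error" [5m])
sum(count_over_time({job="mysql"}[5m])) by (level)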