# Search Commands

Commands are what make log-store far more powerful than a tool like grep. Commands let you visualize, group, and even parse logs further.
## Basic Command Syntax

Commands consist of the command's name, followed by one or more positional or named parameters, or functions. Commands are specified and separated using the pipe (`|`) character, much like the Unix/Linux command line. Every search query implicitly ends in a command that specifies how the results are displayed. If no display command is specified, the implicit `table` command is used.
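For example, the following query (taken from the `where` examples at the end of this page) chains three commands together; because no display command is given, the implicit `table` command displays the results:

```text
1h | bucket t 5m | agg mean(response_time) by=t | where "mean:response_time" > 100
```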
### Positional Parameters

Positional parameters are separated by whitespace (typically a single space), and are usually required. They can be either single parameters or an array: `['string literal', 1234, True]`.
### Named Parameters

Named parameters are often optional, and are used to change the functionality of a command. The `split` command, for example, has two optional named parameters, `sep` and `regex`. These parameters change how the `split` command splits the value of a log. Named parameters can also be single parameters or an array.
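For example, this query (from the `split` examples below) uses the `regex` named parameter to change how the `user_agent` value is split:

```text
1h | split user_agent regex='[/ ]'
```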
### Function Parameters

Function parameters are a name followed by parentheses containing the parameters: `mean(field1, field2)`. Functions are usually used to apply a function to a field, or list of fields.
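For example, this query (from the `agg` examples below) passes the `mean()` function as a parameter, applying it to the `response_time` field:

```text
1h | agg mean(response_time)
```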
### Optional and Required Parameters

In the descriptions below, optional parameters are denoted by enclosing them in angle brackets:

- `<optional_positional_parameter>`
- `<optional_named_parameter=>`
- `<optional_function()>`

Array parameters end in square brackets: `array_param[]`. However, most array parameters do not require square brackets if only a single value is used. For example: `[field]` and `field` usually both work.
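For example, the following two queries (using the `include` parameter of the `table` command described below) are equivalent, since `include` accepts either form:

```text
1h | table include=[req_id]
1h | table include=req_id
```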
## Display Commands

Display commands specify how results are displayed. Display commands can only be used at the end of a search query. If a display command is not found, the `table` command is used implicitly.
### table

Displays log entries as a table. Missing fields will show up blank, and headers will be included for all fields that are returned by the query, not necessarily all fields found in all records.

```text
table <include=[]> <exclude=[]> <order=[]>
```

#### Parameters

- `include` - specifies the field or fields to include. All fields are included by default.
- `exclude` - specifies the field or fields to exclude. `include` and `exclude` cannot both be specified.
- `order` - the order in which the fields should appear. You can use a `*` to skip fields, specifying only those that appear at the beginning or end of the table.
#### Examples

- Excluding the `body` field: `1h | table exclude=body`
- Including both `query` and `req_id`: `1h | table include=[query, req_id]`
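The `order` parameter is not shown above; as a sketch of the `*` syntax described under Parameters (the exact form is an assumption based on the array syntax above), the following would place `req_id` first and leave the remaining fields in their default order:

```text
1h | table order=[req_id, *]
```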
### records

Displays log entries like records in a file. Missing fields are not displayed, but are shown when the entry is expanded.

```text
records <include=[]> <exclude=[]> <order=[]>
```

#### Parameters

- `include` - specifies the field or fields to include. All fields are included by default.
- `exclude` - specifies the field or fields to exclude. `include` and `exclude` cannot both be specified.
- `order` - the order in which the fields should appear. You can use a `*` to skip fields, specifying only those that appear at the beginning or end.
#### Examples

- Excluding the `body` field: `1h | records exclude=body`
- Including both `query` and `req_id`: `1h | records include=[query, req_id]`
### json

Lists each log entry on a line as JSON. You can click the date to expand the JSON so it is "pretty printed".

```text
json <include=[]> <exclude=[]> <order=[]>
```

#### Parameters

- `include` - specifies the field or fields to include. All fields are included by default.
- `exclude` - specifies the field or fields to exclude. `include` and `exclude` cannot both be specified.
- `order` - the order in which the fields should appear. You can use a `*` to skip fields, specifying only those that appear at the beginning or end.
#### Examples

- Excluding both the `query` and `response_time` fields: `1h | json exclude=[query, response_time]`
- Including only the `response_time` field: `1h | json include=[response_time]`
### chart

Graphically charts the results of a query. You must specify the type of chart: `line`, `bar`, `stack`, or `pie`.

The `line`, `bar`, and `stack` charts operate on a time series of logs. The specified fields are charted on the y-axis, with time on the x-axis.

The `pie` chart plots a single entry of the form `label: value`, where `value` is some number. This type of chart is almost always used with the `histo` command, to generate a histogram of the logs.

```text
chart type field <field ...> <N> <func(field)> <by=>
```
#### Parameters

- `type` - One of the chart types: `line`, `bar`, `stack`, or `pie`.
- `field` - The field(s) to chart; only required for `line`, `bar`, or `stack`. You can specify any number of fields to chart.
- `N` - A constant to chart as well.
- `func(field)` - A simple stats function (`min`, `max`, `avg` or `mean`, or `pNN` where `NN` is some percentile) to chart for the field.
- `by` - An optional field (or fields) used to generate separate charts by splitting the data on the field(s); not used with `pie`.
#### Examples

- Line chart of response time: `1h | chart line "response_time"`
- Line chart of response time by method: `1h | method != null | chart line "response_time" by=method`
- Line chart of response time, as well as the min, max, and avg response time: `1h | chart line response_time min(response_time) max(response_time) avg(response_time)`
- Bar chart of size: `1h | chart bar "size"`
- Bar chart of size and the constant 1000: `1h size != null | chart bar size 1000`
- Stack chart of response time, summed together over a minute, and charted by tier: `1h | bucket t 1m | agg sum(response_time) by=[t,tier] | chart stack "response_time:sum" by=tier`
- Pie chart using the `histo` command: `1h | histo "tier" | chart pie`
### cluster

Clusters log entries using the value of a field.

```text
cluster field <method=>
```

#### Parameters

- `field` - The field to use when clustering logs.
- `method` - Optional method for clustering. `auto` will automatically determine the number of clusters, and which logs belong to each cluster. `equal` will use equality to determine the cluster; it is the same as group-by in SQL. Defaults to `auto`.
#### Examples

- Cluster the logs by the `path` field: `1h | cluster path`
## Transformation Commands

The following commands can be used to modify the log entries returned from a search. Logs are transformed and passed from one command to the next, mimicking the piping found on the Unix/Linux command line. Ultimately, all searches end in a display command.
### agg

Aggregates fields by a given method, and optionally by another field. New fields are generated based upon the field names, aggregation methods used, and optional "by" fields. While the `agg` command is often used with the `chart` command, it can be helpful to view the output from the command via the `table` command first, so you know the names of the generated fields. Note: the generated fields often contain a colon (`:`), which is not a legal character for a field, so all generated fields must be specified using quotes.

```text
agg method1(field1, <field2, ...>) <method2(field3, field4, ...)> <by=[]>
```
#### Parameters

- `method1` - The method used to aggregate the field or fields. The method can be any one of: `min`, `max`, `count`, `sum`, `mean`, `median`, or `pNN`, where `NN` is a value between `1` and `99`.
- `field1` - The field to aggregate. A list of fields can be supplied as well.
- `by` - An optional field or fields to group the aggregations by.

There are two basic ways to use the `agg` command:

- Specify a field and the method you want to aggregate it by. The field will be aggregated, and the method determines how.
- Specify a field and a method, plus another field to group the aggregation by.
#### Methods

Below is the list of available methods for aggregation. Note that most methods only work when the value is a number (float or int), and will automatically insert `0` if the value is not a number:

- `count` - Counts the number of logs. This method does not take a field, and will ignore any arguments passed to it.
- `sum` - Sums the values associated with the field.
- `mean` - Computes the arithmetic mean (average) of the values associated with the field.
- `avg` - An alias for `mean`.
- `average` - An alias for `mean`.
- `median` - Computes the approximate median of the values associated with the field. The exact median is not returned, as that would be too memory-intensive.
- `min` - Returns the smallest value associated with the field.
- `max` - Returns the largest value associated with the field.
- `p*` - Returns the percentile (with `*` replaced with a 1- or 2-digit number) for the values associated with the field.
#### Examples

- Average the `response_time` field: `1h | agg mean(response_time)`
- Get the 95th percentile of the `response_time` field, grouping by `resp_code`: `1h | agg p95(response_time) by=resp_code`
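As noted above, it can be helpful to view the output via the `table` command first to learn the generated field names. For the aggregation below (borrowed from the `where` example at the end of this page), the generated field is named `mean:response_time`, and must be quoted because of the colon:

```text
1h | bucket t 5m | agg mean(response_time) by=t | table
```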
### bucket

Rounds the timestamp down to a given interval, or groups the timestamp by another field.

There are two ways to use the `bucket` command, but both modify the timestamp of the logs. The first rounds the timestamp down to a given interval. The second changes the timestamp to match that of other logs that have the same value for another field.

```text
bucket duration OR field
```

#### Parameters

- `duration` - A relative time duration to round the timestamp down to.
- `field` - The field to group timestamps by.
#### Examples

- Round the timestamps down to the nearest minute: `1h | bucket 1m`
- Set the timestamps for all request IDs to the same value: `1h | bucket req_id`
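Bucketed timestamps are typically fed into an aggregation, as in the `chart` and `where` examples elsewhere on this page:

```text
1h | bucket t 1m | agg sum(response_time) by=[t,tier]
```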
### concat

Concatenates two or more fields into a new field.

```text
concat fields[] new_field <sep=> <skip_missing=>
```

#### Parameters

- `fields` - An array of fields to concatenate together.
- `new_field` - The name of the new field.
- `sep` - The separator (if any) to use between fields.
- `skip_missing` - Boolean that indicates if an entry should be skipped if one of the fields is missing; defaults to `false`.
#### Examples

- Concatenating the `req_id` and `response_time` fields together with a space: `1h | concat [req_id, response_time] req_time sep=' '`
- Concatenating the `method` and `resp_code` fields with a colon, skipping missing entries: `1h | concat [method, resp_code] method_code skip_missing=true sep=':'`
### extract

Extracts part(s) of a value and turns them into new field(s). This command uses regular expressions to match parts of a value and create new fields.

```text
extract field regex new_fields[]
```

#### Parameters

- `field` - The field to match the regular expression against. If you want to extract values from multiple fields, use this command multiple times.
- `regex` - A regular expression, with optional capture groups, used to extract the value(s) from the field. If capture groups are provided, they correspond to the `new_fields` provided. If no capture groups are provided, whatever is matched becomes the new field.
- `new_fields` - An array of new field names, which correspond to the values captured by the regular expression. You must supply as many names as there are capture groups.
#### Examples

- Extract the 4 parts of the `ip` address: `1h | extract ip "(\d+)\.(\d+)\.(\d+)\.(\d+)" [ip1, ip2, ip3, ip4]`
- Extract the first part of the `path` field: `1h | extract path "^/.+/" root`
### histo

Counts the unique values of a given field in the entries. This command is a shortcut for `| agg count(field)`.

```text
histo field
```

#### Parameters

- `field` - The field to count the unique values of.
#### Examples

- Count the unique `method` values: `1h | histo method`
### lift

Parses a field's value as JSON, and lifts those fields up one level, as if they were originally in the top level of the log. Nested objects will simply have their fields and values lifted one level. Nested arrays will be given the field name `field[i]`, where `i` is the index in the array.

Info

You should consider parsing your logs with a tool like log-ship before sending them to log-store, to avoid having to use this command. Parsing your logs before sending them to log-store is much more efficient than parsing them with each search.

```text
lift field <filter_on_error=true> <prefix=false> <keep_existing=false>
```

#### Parameters

- `field` - The field to parse as JSON.
- `filter_on_error` - Boolean to indicate if the log should be filtered out if the field is not found, or any other error occurs. Default is `true`, but note it is faster to filter with a search condition.
- `prefix` - Boolean to indicate if new fields should be prefixed with the current field name. If set to `true`, nested objects will have the form `field.nested_field` and arrays will have the form `field.field[i]`. Default is `false`.
- `keep_existing` - Boolean to indicate if the existing field and value should be kept in the log. Default is `false`.
#### Examples

Using the following logs in each example, the output of each version of the command is shown below it:

```json
{"t": 1672531200, "msg": {"hello": "world"}}
{"t": 1672531201, "msg": ["hello", "world"]}
```

`1h | lift msg`

```json
{"t": 1672531200, "hello": "world"}
{"t": 1672531201, "msg[0]": "hello", "msg[1]": "world"}
```

`1h | lift msg filter_on_error=false`

```json
{"t": 1672531200, "hello": "world"}
{"t": 1672531201, "msg[0]": "hello", "msg[1]": "world"}
```

`1h | lift msg prefix=true`

```json
{"t": 1672531200, "msg.hello": "world"}
{"t": 1672531201, "msg.msg[0]": "hello", "msg.msg[1]": "world"}
```

`1h | lift msg keep_existing=true`

```json
{"t": 1672531200, "msg": {"hello": "world"}, "hello": "world"}
```
### overlay
The `overlay` command is used _only_ with the [`chart`](#chart) command. The command shows logs that match search criteria on a line, bar, or stacked chart.

```text
overlay field (=|~|!=|!~|<|≤|>|≥) value <field (=|~|!=|!~|<|≤|>|≥) value>...
```
#### Parameters

- `field` - The field in the log to use for filtering.
- `(=|~|!=|!~|<|≤|>|≥)` - A comparison operator.
- `value` - The value to use for filtering.
#### Examples

- Chart response time and overlay whenever the method is a PUT: `1h tier="web" | chart line response_time | overlay method="PUT"`
### pivot

Swaps fields for values in the results. Note: this can create "impossible" JSON entries if there are duplicate values.

```text
pivot
```
#### Examples

- Count the unique `method` values, then pivot: `1h | histo method | pivot`
### python

Runs a function written in Python which can modify or filter logs. See this page for more information.

```text
python name
```

#### Parameters

- `name` - The name of the Python command to run.
Info
You cannot save Python commands on the demo site, and any example would be meaningless without seeing the associated code. See the Python page for more information.
### rename

Renames a field to a new name.

Warning

Note: If the new name already exists in the log, it is overwritten.

```text
rename old_field new_field
```

#### Parameters

- `old_field` - The current field to rename.
- `new_field` - The new field name to use.
#### Examples

- Rename `method` to `http_method`: `1h | rename method http_method`
### split

Splits a field by a specified separator character, or a regular expression, generating new fields.

```text
split field <sep=> <regex=>
```

#### Parameters

- `field` - The field to split.
- `sep` - The character used to split the value.
- `regex` - A regular expression used to split the value. Only one of `sep` or `regex` may be specified; splitting defaults to whitespace.
#### Examples

- Split `query` by whitespace: `1h | split query`
- Split `user_agent` by `/` or space: `1h | split user_agent regex='[/ ]'`
### where

Filters out logs, keeping those that match the provided comparison. Note: this command should be used after other commands, not to perform a search upon the logs; the `where` command is slower than providing search criteria.

```text
where field (=|~|!=|!~|<|≤|>|≥) value
```

#### Parameters

- `field` - The field in the log to use for filtering.
- `(=|~|!=|!~|<|≤|>|≥)` - A comparison operator.
- `value` - The value to use for filtering.
#### Examples

- Filter when the average `response_time` is greater than 100: `1h | bucket t 5m | agg mean(response_time) by=t | where "mean:response_time" > 100`