Search Commands#

Commands are what make log-store much more powerful than a tool like grep. Commands let you visualize, group, and even further parse your logs.

Basic Command Syntax#

Commands consist of the command’s name, followed by one or more positional parameters, named parameters, or functions. Commands are specified and separated using the pipe (|) character, much like the Unix/Linux command line. Every search query implicitly ends in a command that specifies how the results are displayed. If no command is specified, the implicit table command is used.
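
For example, the following hypothetical query (msg is an assumed field name) chains a transformation command into a display command:

`1h | split msg | table`

Here 1h selects the last hour of logs, split transforms each entry, and table displays the results; the trailing | table could be omitted, since it is the implicit default.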

Positional Parameters#

Positional parameters are separated by whitespace (typically a single space) and are usually required. They can be either a single value or an array: ['string literal', 1234, True].

Named Parameters#

Named parameters are often optional, and are used to change the functionality of a command. The split command, for example, has two optional named parameters: sep and regex. These parameters change how the split command splits a value. Named parameters can also be a single value or an array.

Function Parameters#

Function parameters are a name followed by parentheses containing the parameters: mean(field1, field2). Functions are usually used to apply a function to a field, or list of fields.

Optional and Required Parameters#

In the descriptions below, optional parameters are denoted by enclosing them in angle brackets:

  • <optional_positional_parameter>
  • <optional_named_parameter=>
  • <optional_function()>

Array parameters end in square brackets: array_param[]. However, most array parameters do not require square brackets if only a single value is used. For example: [field] and field usually both work.

Display Commands#

Display commands specify how results are displayed. Display commands can only be used at the end of a search query. If a display command is not found, the table command is used implicitly.

table#

Displays log entries as a table. Missing fields show up blank, and headers are included for every field returned by the query, even fields not present in every record.

table <include=[]> <exclude=[]> <order=[]>

Parameters#

  • include - specifies the field or fields to include. All fields are included by default.
  • exclude - specifies the field or fields to exclude. include and exclude cannot both be specified.
  • order - the order in which the fields should appear. You can use a * to skip over fields, specifying only those that should appear at the beginning or end of the table.

Examples#
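
For example, the following hypothetical query (the t and msg field names are assumptions) displays only two fields, with t as the first column:

`1h | table include=[t, msg] order=[t, *]`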

records#

Displays log entries like records in a file. Missing fields are not displayed, but are shown when the entry is expanded.

records <include=[]> <exclude=[]> <order=[]>

Parameters#

  • include - specifies the field or fields to include. All fields are included by default.
  • exclude - specifies the field or fields to exclude. include and exclude cannot both be specified.
  • order - the order in which the fields should appear. You can use a * to skip over fields, specifying only those that should appear at the beginning or end.

Examples#
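
For example, the following hypothetical query (host is an assumed field name) displays each entry as a record, hiding the host field:

`1h | records exclude=host`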

json#

Lists each log entry on a line as JSON. You can click the date to expand the JSON to be “pretty printed”.

json <include=[]> <exclude=[]> <order=[]>

Parameters#

  • include - specifies the field or fields to include. All fields are included by default.
  • exclude - specifies the field or fields to exclude. include and exclude cannot both be specified.
  • order - the order in which the fields should appear. You can use a * to skip over fields, specifying only those that should appear at the beginning or end.

Examples#
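
For example, the following hypothetical query (t and msg are assumed field names) prints each entry on a single line as JSON, with only those two fields:

`1h | json include=[t, msg]`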

chart#

Graphically charts the results of a query. You must specify the type of chart: line, bar, stack, or pie. The line, bar, and stack charts operate on a time series of logs. The specified fields are charted on the y-axis, with time on the x-axis.

The pie type charts entries of the form label: value, where value is some number. This type of chart is almost always used with the histo command to generate a histogram of the logs.

chart type field <field ...> <N> <func(field)> <by=>

Parameters#

  • type - One of the chart types: line, bar, stack, or pie
  • field - The field(s) to chart; only required for line, bar, or stack. You can specify any number of fields to chart
  • N - A constant to chart as well.
  • func(field) - A simple stats function (min, max, avg or mean, or pNN where NN is some percentile) to chart for the field.
  • by - An optional field(s) to further generate charts by separating data by the field(s); not used with pie.

Examples#
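
For example, the following hypothetical query (duration_ms and host are assumed field names) would draw one line per host, charting the mean of duration_ms over time:

`1h | chart line mean(duration_ms) by=host`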

cluster#

Beta: Groups logs by a specified field, using a machine learning algorithm to determine the number of clusters and which logs belong to each cluster.

cluster field <method=>

Parameters#

  • field - The field to use when clustering logs.
  • method - Optional method for clustering. auto will automatically determine the number of clusters and which logs belong to each cluster. equal will use equality to determine the cluster; it is the same as GROUP BY in SQL. Defaults to auto.

Examples#
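
For example, the following hypothetical query clusters the last hour of logs by an assumed msg field, letting the algorithm pick the number of clusters:

`1h | cluster msg method=auto`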

Transformation Commands#

The following commands can be used to modify the log entries returned from a search. Logs are transformed and passed from one command to the next, mimicking the piping found on the Unix/Linux command line. Ultimately, every search query ends in a display command.

agg#

Aggregate fields by a given method, and optionally by another field. New fields are generated based upon the field names, aggregation methods used, and optional “by” fields. While the agg command is often used with the chart command, it can be helpful to view the output via the table command first, so you know the names of the generated fields. Note: The generated field names often contain a colon (:), which is not a legal character in a field name, so all generated fields must be specified using quotes.

agg method1(field1, <field2, ...>) <method2(field3, field4, ...)> <by=[]>

Parameters#

  • method1 - The method used to aggregate the field, or fields; any of the methods listed below, such as min, max, count, mean, median, or pnn, where nn is a percentile between 1 and 99.
  • field1 - The field to aggregate. A list of fields can be supplied as well.
  • by - An optional field or fields to group the aggregations by.

There are two basic ways to use the agg command:

  1. Specify a field and the method you want to aggregate it by. The field is aggregated, and the method determines how.
  2. Specify a field and a method, plus another field to group the aggregation by.

Methods#

Below is the list of available methods for aggregation. Note that most methods only work when the value is a number (float or int), and will automatically substitute 0 if the value is not a number:

  • count - Counts the number of logs. This method does not take a field, and will ignore any arguments passed to it.
  • sum - Sums up the values associated with the field.
  • mean - Compute the arithmetic mean (average) of the values associated with the field.
  • avg - This is an alias for mean.
  • average - This is an alias for mean.
  • median - Compute the approximate median value for the values associated with the field. The exact median is not returned as that would be too memory intensive.
  • min - Returns the smallest value associated with the field.
  • max - Returns the largest value associated with the field.
  • p* - Returns the percentile (with * replaced by a 1- or 2-digit number) for the values associated with the field.

Examples#
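
For example, the following hypothetical query (duration_ms and host are assumed field names) computes the mean and 99th percentile of duration_ms for each host:

`1h | agg mean(duration_ms) p99(duration_ms) by=host | table`

Viewing the output with table first reveals the generated field names, which must then be quoted in any later command because they contain a colon.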

bucket#

Round the timestamp down to a given interval, or group the timestamp by another field.

There are two ways to use the bucket command, but both modify the timestamp of the logs. The first rounds down the timestamp to a given interval. The second changes the timestamp to match that of other logs that have the same value for another field.

bucket duration OR field

Parameters#

  • duration - A relative time duration to round the timestamp down to.
  • field - The field to group timestamps by.

Examples#
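
For example, the first hypothetical query below rounds each timestamp down to a five-minute boundary, while the second (request_id is an assumed field name) aligns the timestamps of all logs sharing the same request_id value:

`1h | bucket 5m`

`1h | bucket request_id`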

concat#

Concatenates two or more fields into a new field.

concat fields[] new_field <sep=> <skip_missing=>

Parameters#

  • fields - An array of fields to concatenate together.
  • new_field - The name of the new field.
  • sep - The separator (if any) to use between fields.
  • skip_missing - Boolean that indicates if an entry should be skipped if one of the fields is missing; defaults to false.

Examples#
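
For example, the following hypothetical query (first and last are assumed field names) joins two fields into a new name field, separated by a space:

`1h | concat [first, last] name sep=' '`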

extract#

Extracts one or more parts of a value and makes them new fields, using regular expressions to match the part(s) of the value.

extract field regex new_fields[]

Parameters#

  • field - The field to match the regular expression against. If you want to extract values from multiple fields, use this command multiple times.
  • regex - A regular expression, with optional capture groups, used to extract the value(s) from the field. If capture groups are provided, they correspond to the new_fields provided. If no capture groups are provided, the entire match becomes the value of the new field.
  • new_fields - An array of new field names, which correspond to the values captured by the regular expression. You must supply as many names as capture groups.

Examples#
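
For example, the following hypothetical query pulls an HTTP status code out of an assumed raw msg field into a new status field (the exact regex quoting is an assumption):

`1h | extract msg 'status=(\d+)' [status]`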

histo#

Counts the unique values of a given field in the entries. This command is a shortcut for | agg count() by=field.

histo field

Parameters#

  • field - The field to count the unique values of

Examples#
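
For example, the following hypothetical query (host is an assumed field name) counts how many logs carry each unique host value, then charts the counts as a pie chart:

`1h | histo host | chart pie`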

lift#

Parses a field’s value as JSON, and lifts those fields up one level as if they were originally in the top-level of the log. Nested objects will simply have their fields and values lifted one level. Nested arrays will be given the field name of field[i], where i is the index in the array.

Info

You should consider parsing your logs with a tool like log-ship before sending them to log-store, to avoid having to use this command. Parsing your logs before sending them to log-store is much more efficient than parsing them with each search.

lift field <filter_on_error=true> <prefix=false> <keep_existing=false>

Parameters#

  • field - The field to parse as JSON.
  • filter_on_error - Boolean to indicate if the log should be filtered out when the field is not found, or any other error occurs. Default is true, but note it is faster to filter with a search condition.
  • prefix - Boolean to indicate if new fields should be prefixed with the current field name. If set to true, nested objects will have the form field.nested_field and arrays will have the form field.field[i]. Default is false.
  • keep_existing - Boolean to indicate if the existing field and value should be kept in the log. Default is false.

Examples#

Using the following logs in each example, the output of each version of the command is shown below it:

```json lines
{"t": 1672531200, "msg": {"hello": "world"}}
{"t": 1672531201, "msg": ["hello", "world"]}
```

`1h | lift msg`

```json lines
{"t": 1672531200, "hello": "world"}
{"t": 1672531201, "msg[0]": "hello", "msg[1]": "world"}
```

`1h | lift msg filter_on_error=false`

```json lines
{"t": 1672531200, "hello": "world"}
{"t": 1672531201, "msg[0]": "hello", "msg[1]": "world"}
```

`1h | lift msg prefix=true`

```json lines
{"t": 1672531200, "msg.hello": "world"}
{"t": 1672531201, "msg.msg[0]": "hello", "msg.msg[1]": "world"}
```

`1h | lift msg keep_existing=true`

```json lines
{"t": 1672531200, "msg": {"hello": "world"}, "hello": "world"}
```


overlay#

The overlay command is used only with the chart command. The command shows logs that match the given search criteria on a line, bar, or stacked chart.

overlay field (=|~|!=|!~|<|≤|>|≥) value <field (=|~|!=|!~|<|≤|>|≥) value>...

Parameters#

  • field - The field in the log to use for filtering.
  • (=|~|!=|!~|<|≤|>|≥) - A comparison operator.
  • value - The value to use for filtering.

Examples#
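
For example, the following hypothetical query (level and duration_ms are assumed field names, and the placement of overlay before chart is an assumption) marks error logs on a line chart of duration_ms:

`1h | overlay level = 'error' | chart line duration_ms`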

pivot#

Swaps fields for values in the results. Note: This can create “impossible” JSON entries if there are duplicate values.

pivot

Examples#
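
As a minimal hypothetical illustration, after `1h | pivot` an entry such as {"host": "web-1"} would become {"web-1": "host"}.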

python#

Runs a function written in Python, which can modify or filter logs. See the Python page for more information.

python name

Parameters#

  • name - The name of the Python command to run

Info

You cannot save Python commands on the demo site, and any example would be meaningless without seeing the associated code. See the Python page for more information.

rename#

Renames a field to a new name.

Warning

If the new field name already exists in the log, its value is overwritten.

rename old_field new_field

Parameters#

  • old_field - The current field to rename.
  • new_field - The new field name to use.

Examples#
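
For example, the following hypothetical query renames an assumed hostname field to host:

`1h | rename hostname host`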

split#

Splits a field by a specified separator character, or a regular expression, generating new fields.

split field <sep=> <regex=>

Parameters#

  • field - The field to split.
  • sep - The character used to split the value.
  • regex - A regular expression used to split the value. Only one of sep or regex may be specified; if neither is given, the value is split on whitespace.

Examples#
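
For example, the following hypothetical query splits an assumed msg field on commas, generating new fields from the pieces:

`1h | split msg sep=','`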

where#

Filters out logs, keeping those that match the provided comparison. Note: This command should be used after other commands, not to perform a search upon the logs; the where command is slower than providing search criteria.

where field (=|~|!=|!~|<|≤|>|≥) value

Parameters#

  • field - The field in the log to use for filtering.
  • (=|~|!=|!~|<|≤|>|≥) - A comparison operator.
  • value - The value to use for filtering.

Examples#
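
For example, the following hypothetical query (duration_ms and host are assumed field names, as is the exact generated field name) keeps only hosts whose 99th-percentile duration exceeds 500:

`1h | agg p99(duration_ms) by=host | where 'p99:duration_ms' > 500`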