Adding Logs#

Logs are sent to log-store as line-delimited JSON; that is, one JSON object per line. This format lets you send different fields, and different configurations of those fields, on a message-by-message basis, providing maximum flexibility in storing logs.
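As a minimal sketch, records can be serialized with any JSON library, one object per line. The field names and values below are illustrative, not required by log-store (apart from `t`, covered later):

```python
import json

# Two example log records; each becomes one line of line-delimited JSON.
records = [
    {"t": 1700000000, "log_level": "info", "message": "service started"},
    {"t": 1700000005, "log_level": "error", "message": "connection refused"},
]

# json.dumps emits a single line per object as long as indent is not set.
lines = "\n".join(json.dumps(r) for r in records) + "\n"
print(lines)
```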

Sending Logs to log-store#

The recommended way to send logs to log-store is via log-ship, the open source log shipper specifically designed for log-store. log-ship lets you easily extract logs from various sources (including gathering system metrics), parse and transform them via Python, and then send them to log-store.

log-ship has various input, transform, and output plugins. The output plugin used with log-store is the tcp_socket plugin.

Logs can also be sent to log-store via Unix domain socket, if the logs are generated on the same server log-store is running on. log-ship’s unix_socket plugin can be used to send logs to a Unix domain socket.
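If you are not using log-ship, sending over a Unix domain socket is just a newline-terminated JSON object written to the socket. This is a sketch; the socket path below is an assumption for illustration, so substitute whatever path your log-store instance is configured to listen on:

```python
import json
import socket

def encode_log(record: dict) -> bytes:
    # One JSON object per line, newline-terminated.
    return json.dumps(record).encode("utf-8") + b"\n"

def send_log(record: dict, path: str = "/var/run/log-store.sock") -> None:
    # The path is hypothetical; use your log-store's configured socket.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        sock.sendall(encode_log(record))
```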

Log Format#

Logs are sent to log-store as JSON objects, with a few additional restrictions on field names, and values.

Value Restrictions#

To keep indexing efficient and querying simple, arrays and nested JSON objects are “flattened” into strings.

JSON arrays are converted to a string of the basic form [ item1, item2, ... ]. You can still search for values in these strings using regular expressions. See the section on searching Fields and Values.

JSON objects are converted to “raw” JSON strings. These too can be searched via regular expressions.
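The flattening described above could be approximated like this. The exact string form log-store produces may differ; this sketch simply follows the bracketed array form and raw-JSON object form described above:

```python
import json

def flatten_value(value):
    # Arrays become "[ item1, item2, ... ]"; nested objects become raw
    # JSON strings. Scalar values pass through unchanged.
    if isinstance(value, list):
        return "[ " + ", ".join(str(v) for v in value) + " ]"
    if isinstance(value, dict):
        return json.dumps(value)
    return value

record = {"t": 1700000000, "tags": ["web", "prod"], "ctx": {"user": "alice"}}
flat = {k: flatten_value(v) for k, v in record.items()}
```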

It is highly recommended that you use the - prefix (see Field Prefixes) on any field that might be an array or object. This will prevent log-store from indexing these values, as they most likely will never be searched for exact equality. This is an optimization, but one that can have a large impact on ingest speed.

Field Names#

Field names MUST start with a letter or number (optionally preceded by a prefix), followed by any combination of letters, numbers, underscores, or hyphens; that is, they must match the pattern [A-Za-z0-9][A-Za-z0-9_-]*. The dot/period (.) cannot be used in field names, as it is reserved for future hierarchical searching.
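The rule above can be expressed as a simple validation check. The regex is derived from the description; allowing an optional - or + before the first character is an assumption based on the Field Prefixes section below:

```python
import re

# First character alphanumeric (optionally preceded by a - or + prefix),
# then any mix of alphanumerics, underscores, and hyphens. Dots never match.
FIELD_NAME = re.compile(r"[-+]?[A-Za-z0-9][A-Za-z0-9_-]*")

def valid_field_name(name: str) -> bool:
    return FIELD_NAME.fullmatch(name) is not None
```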

Required Fields#

There is only one required field: t. This field denotes the time of the log as Unix time. A handy converter can be found at epochconverter.com.
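For example, a minimal valid record needs only t set to the current Unix time (assumed here to be in seconds; check your log-store configuration if millisecond resolution is expected):

```python
import json
import time

record = {
    "t": int(time.time()),  # required: Unix time
    "log_level": "info",    # everything else is optional
    "message": "heartbeat",
}
line = json.dumps(record)
```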

Field Prefixes#

There are a few reserved field prefixes. These prefixes have special meaning, but are stripped when used in the GUI. For example, log_level and +log_level are both accessed using the field name log_level in the GUI.

  • - indicates that a field should not be indexed at all. This is probably the most important prefix to use, as it can greatly speed up the ingest process if long fields do not need to be indexed. You can still search by these fields, but it will be very slow as every log will need to be examined to see if it matches the search criteria.
  • + fields are tokenized by whitespace, and then each token is treated as a separate value. This makes ingesting logs slightly slower (more work needs to be done to tokenize and index each token), but makes searching for tokens faster because the = operator can be used instead of regular expressions.
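Putting the prefixes together, a single log line might look like the sketch below. The field names are illustrative: the large raw payload gets the - prefix so it is not indexed, while the message gets the + prefix so individual tokens can be matched with =:

```python
import json

record = {
    "t": 1700000000,
    # Tokenized on whitespace: "user", "alice", "logged", "in" each
    # become searchable tokens.
    "+message": "user alice logged in",
    # Not indexed: large, unlikely to be searched for exact equality.
    "-raw_body": '{"request": {"path": "/login"}}',
}
print(json.dumps(record))
```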

Adding Prefixes Later#

If you have already ingested logs, and then add prefixes to fields for logs of the same type, those prefixes are not retroactively applied. For example, the fields message and +message are stored as two different fields, even though they are accessed by the same field name message in the GUI. For this reason, it is best to think about fields and how they will be used before adding (or not adding) a prefix. As logs expire, any old fields without a prefix are purged. So it is recommended to start without prefixes, and only begin adding prefixes if you are not getting the performance you need (for either ingesting or searching).