Want a dashboard that works out of the box?
Use one of our ready-to-use templates to create a dashboard.
Take your observability to the next level by visualizing exactly what’s vital for you. Create tailored charts by writing your own custom SQL queries.
Create your first tailored chart using a custom SQL query. Start by creating a dashboard in Dashboards → Create dashboard.
Create a new chart on your dashboard and use the query below:
```sql
SELECT {{time}} AS time,
  countMerge(rows) AS value,
  level AS series
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time, series
```
Dashboard queries use ClickHouse SQL, which is largely similar to the ANSI SQL you're likely familiar with. In the SQL query above, we use the `countMerge` ClickHouse function.
On top of SQL, dashboard queries feature variables like `{{source}}` or `{{time}}`. Variables conveniently insert the selected source, the current time range, and other selected values into your queries. Read more about [query variables][docs-logs-using-logtail-dashboards-variables].
Chart queries don't query logs directly. Logs are aggregated into efficient metrics. Logs to metrics aggregations control which columns you can use in your queries. In the SQL query above, we use the `level` column.
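For example, if you've added a hypothetical `host` column under Logs to metrics → Group by, you could chart log counts per host. A minimal sketch; the column name is illustrative:

```sql
SELECT {{time}} AS time,
  countMerge(rows) AS value,
  host AS series
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time, series
```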
Follow our examples below or start with the SQL tutorial from W3Schools.
Read more about ClickHouse SQL syntax and compatibility with ANSI SQL.
Ingested logs and pure metrics are processed into aggregated metrics and columns you can group by.
You can query metrics and group by columns extracted from your logs and pure metrics:
For Logs, we automatically extract a few common metrics and columns based on source type. You can add more columns to use in dashboard queries.
Pure metrics can always be filtered and grouped by `name` and `tags`. There are also aggregated `value_*` and `rate_*` columns. Read more about writing SQL queries for metrics.
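As a sketch, assuming the `avg` aggregation is set up for your pure metrics, you could chart the average value of each metric by name:

```sql
SELECT {{time}} AS time,
  avgMerge(value_avg) AS value,
  name AS series
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time, series
```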
Find the complete list of columns extracted from your logs and pure metrics in Sources → your source → Configure → Advanced settings → Logs to metrics section.
Under Logs to metrics you will find aggregated metrics and columns you can group by:

- Calculate aggregated metrics with merge functions, e.g. `avgMerge(flight_delay_avg)`. Check which aggregations are available for specific metrics in the Metrics table.
- Use `countMerge(rows)` to get the number of underlying log lines or metric data points. Note that you don't have to add the `_count` suffix to the `rows` metric.
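For instance, taking the `flight_delay` metric from the example above (assuming it's set up with the `avg` aggregation), you could chart its average alongside the number of underlying data points:

```sql
SELECT {{time}} AS time,
  avgMerge(flight_delay_avg) AS "Average delay",
  countMerge(rows) AS "Data points"
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
```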
Check out the examples below to see how to use metrics and group by columns.
Chart the number of logs with the error log level:
```sql
SELECT {{time}} AS time, countMerge(rows) AS value
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
  AND level = 'error'
GROUP BY time
```
Use `countMerge(rows)` to count the number of log lines. Using `count(*)` would give you wrong results.

Filter by the `level` column. The column has to be present in Logs to metrics → Group by.
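To count several levels at once, filter with `IN` and keep `level` as the series. A sketch; the exact level names depend on your logs:

```sql
SELECT {{time}} AS time,
  countMerge(rows) AS value,
  level AS series
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
  AND level IN ('error', 'warn')
GROUP BY time, series
```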
Chart the average request duration:
```sql
SELECT {{time}} AS time,
  avgMerge(request_duration_avg) AS "Request duration"
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
```
Use `avgMerge(request_duration_avg)` to calculate an average for the `request_duration` metric. The metric has to be added in Logs to metrics → Metrics with the `avg` aggregation.

The column name `Request duration` will be used in chart tooltips and the legend. Select which column should be charted on the Y-axis on the right side in Chart settings → Axes.
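You can also chart several aggregations of the same metric in one query. A sketch, assuming you've additionally added the `max` aggregation for `request_duration` in Logs to metrics:

```sql
SELECT {{time}} AS time,
  avgMerge(request_duration_avg) AS "Average duration",
  maxMerge(request_duration_max) AS "Max duration"
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
```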
Chart the 95th percentile request duration:
```sql
SELECT {{time}} AS time,
  quantilesMerge(0.5, 0.9, 0.95, 0.99)(request_duration_quantiles)[3]
    AS "95th percentile duration"
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
```
Here we have the `request_duration` metric with `p50`, `p90`, `p95`, and `p99` percentiles set up in Logs to metrics. Calculate the 95th percentile request duration using `quantilesMerge`. The `quantilesMerge` function gives us all four percentiles; we select the third one, the 95th percentile, using `[3]`.
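Since `quantilesMerge` returns all four percentiles, you can chart several of them in a single query, e.g. the median and the 95th percentile:

```sql
SELECT {{time}} AS time,
  quantilesMerge(0.5, 0.9, 0.95, 0.99)(request_duration_quantiles)[1]
    AS "Median",
  quantilesMerge(0.5, 0.9, 0.95, 0.99)(request_duration_quantiles)[3]
    AS "95th percentile"
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
```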
Check out writing SQL queries for metrics.
You can also filter by aggregated metric values using a HAVING clause, which is applied after the aggregation is computed.
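A minimal sketch, reusing the `request_duration` metric from the examples above; the 500 ms threshold is illustrative:

```sql
SELECT {{time}} AS time,
  avgMerge(request_duration_avg) AS value
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time
HAVING value > 500
```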
Add metrics and columns to group by in Sources → your source → Configure → Advanced settings → Logs to metrics section.
Add metrics for numeric fields in your logs that you want to chart. We'll process the fields into aggregated metrics.
Fill out the inputs:

- Use `JSONExtract()` or any valid ClickHouse SQL expression. For example, `JSONExtract(json, 'request', 'duration_ms', 'Nullable(Float64)')` extracts the `request.duration_ms` field from your logs. `Nullable(Float64)` parses the value as a float and prevents errors when the field is missing. See all ClickHouse data types.
- Pick aggregations such as `avg` and `p95` to get the average and 95th percentile values.

Add fields to the table that you want to filter and group by. We'll extract the fields for use in dashboard queries.
Fill out the inputs:

- Use `JSONExtract()` or any valid ClickHouse SQL expression. For example, `JSONExtract(json, 'context', 'system', 'hostname', 'Nullable(String)')` extracts the `context.system.hostname` field from your logs. `Nullable(String)` parses the value as a string and prevents errors when the field is missing. See all ClickHouse data types.
- Use `String (LowCardinality)` as the type for strings.
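Once the column is extracted, you can use it in dashboard queries. A sketch, assuming you named the new column `hostname`:

```sql
SELECT {{time}} AS time,
  countMerge(rows) AS value,
  hostname AS series
FROM {{source}}
WHERE time BETWEEN {{start_time}} AND {{end_time}}
GROUP BY time, series
```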