Pino is a powerful logging framework for Node.js that boasts exceptional speed and a comprehensive set of features. In fact, its impressive performance has earned it a default spot in the open-source Fastify web server for logging output. Pino's versatility also extends to its ease of integration with other Node.js web frameworks, making it a top choice for developers looking for a reliable and flexible logging solution.
Pino includes all the standard features expected in any logging framework, such as customizable log levels, formatting options, and multiple log transport options. Its flexibility is one of its standout features, as it can be easily extended to meet specific requirements, making it a top choice for a wide range of applications.
This tutorial will guide you through creating a logging service for your Node.js application using Pino. You will learn how to leverage the framework's many features and customize them to achieve an optimal configuration for your specific use case.
By the end of this tutorial, you will be well-equipped to implement a production-ready logging setup in your Node.js application with Pino, helping you to streamline your logging process and improve the overall performance and reliability of your application.
Prerequisites
Before proceeding with the rest of this article, ensure that you have a recent version of Node.js and npm installed locally on your machine. This article also assumes that you are familiar with the basic concepts of logging in Node.js.
Getting started with Pino
To get the most out of this tutorial, create a new Node.js project to experiment with the concepts we will be discussing. Start by initializing a new Node.js project using the commands below:
mkdir pino-logging && cd pino-logging
npm init -y
Afterward, go ahead and install the latest version of pino through the command below. The examples in this article are compatible with version 8.x, which is the latest at the time of writing.
npm install pino
Create a new logger.js file in the root of your project directory, and populate it with the following contents:
const pino = require('pino');
module.exports = pino({});
This snippet requires the pino package and exports a logger instance that is created by executing the top-level pino() function. We'll explore all the different ways you can customize the Pino logger, but for now let's go ahead and use the exported logger in a new index.js file as shown below:
const logger = require('./logger');
logger.info('Hello, world!');
Once you save the file, execute the program using the following command:
node index.js
You should observe the following output:
{"level":30,"time":1677506333497,"pid":39977,"hostname":"fedora","msg":"Hello, world!"}
The first thing you'll notice about the output above is that it's structured and formatted in JSON, the most prevalent industry standard for structured logging. You'll also notice that besides the log message, some other details are present in the log entry:
- The log level indicating the severity of the event being logged.
- The time of the event (the number of milliseconds elapsed since January 1, 1970 00:00:00 UTC).
- The hostname of the machine where the program is running.
- The process ID of the program.
We'll discuss how you can customize or remove each of these fields, and how to enrich your logs with other contextual fields later on in this tutorial.
Prettifying JSON logs in development
While JSON is great for production logging due to its simplicity, flexibility, and widespread support among logging tools, it's not the easiest for humans to read, especially when it's printed on one line. To make the JSON output from Pino easier to read in development environments (where logs are typically printed to the standard output), you can adopt one of the following approaches.
Using jq
jq is a nifty command-line tool for processing JSON data. You can pipe your JSON logs to it to colorize and pretty-print them:
node index.js | jq
{
"level": 30,
"time": 1677669391146,
"pid": 557812,
"hostname": "fedora",
"msg": "Hello, world!"
}
If the JSON output is too large, you can remove irrelevant fields by using jq's del() function:
node index.js | jq 'del(.time,.hostname,.pid)'
{
"level": 30,
"msg": "Hello, world!"
}
You can use a whitelist instead, which is also handy for rearranging the order of the fields:
node index.js | jq '{msg,level}'
{
"msg": "Hello, world!",
"level": 30
}
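Since Pino encodes severity as a number, you can also use jq's select() function to filter entries, for example keeping only those at the warn level (40) or above:
node index.js | jq 'select(.level >= 40)'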
You can transform your JSON logs in many other ways through jq, so be sure to check out its documentation to learn more.
Using pino-pretty
The Pino team also provides the pino-pretty package for converting newline-delimited JSON entries into a more human-readable plaintext output.
You need to install the pino-pretty package first:
npm install pino-pretty --save-dev
Once the installation completes, you'll be able to pipe your application logs to pino-pretty as shown below:
node index.js | npx pino-pretty
You will observe that the logs are now reformatted and colorized to make them easier to read:
[12:33:00.352] INFO (579951): Hello, world!
If you want to customize the output of the pino-pretty transport, check out its documentation.
Log levels in Pino
The default log levels in Pino are (ordered by ascending severity) trace, debug, info, warn, error, and fatal, and each of these has a corresponding method on the logger:
const logger = require('./logger');
logger.fatal('fatal');
logger.error('error');
logger.warn('warn');
logger.info('info');
logger.debug('debug');
logger.trace('trace');
When you execute the code above, you will get the following output:
{"level":60,"time":1643664517737,"pid":20047,"hostname":"fedora","msg":"fatal"}
{"level":50,"time":1643664517738,"pid":20047,"hostname":"fedora","msg":"error"}
{"level":40,"time":1643664517738,"pid":20047,"hostname":"fedora","msg":"warn"}
{"level":30,"time":1643664517738,"pid":20047,"hostname":"fedora","msg":"info"}
Notice how the severity level is represented by a number that increments in 10s according to the severity of the event. You'll also observe that no entry is emitted for the debug() and trace() methods. This is because the default minimum level on a Pino logger is info, which causes any event with a severity lower than info to be suppressed.
Setting the minimum log level is typically done when creating the logger and controlled through an environment variable so that it's possible to change it in different environments without making code modifications:
const pino = require('pino');

module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
});
If the PINO_LOG_LEVEL variable is set in the environment, that value will be used. Otherwise, it falls back to the info level. The example below sets the minimum level to error so that the events below the error level are all suppressed:
PINO_LOG_LEVEL=error node index.js
{"level":60,"time":1643665426792,"pid":22663,"hostname":"fedora","msg":"fatal"}
{"level":50,"time":1643665426793,"pid":22663,"hostname":"fedora","msg":"error"}
You can also change the minimum level on a logger instance at any time through its level property:
const logger = require('./logger');
logger.level = 'debug'; // only trace messages will be suppressed now
. . .
This is useful if you want to change the minimum log level at runtime, perhaps by exposing a protected endpoint for this purpose:
app.get('/changeLevel', (req, res) => {
const { level } = req.body;
// check that the level is valid then change it:
logger.level = level;
});
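If you want to fill in the validation step, Pino exposes the configured levels on the logger's levels property, so you can check the requested value before applying it. Here's a minimal sketch, assuming an Express app with JSON body parsing (the response messages are illustrative):
app.get('/changeLevel', (req, res) => {
  const { level } = req.body;

  // logger.levels.values maps each level label to its numeric value
  if (level in logger.levels.values) {
    logger.level = level;
    res.send(`Log level changed to ${level}`);
  } else {
    res.status(400).send(`Unknown log level: ${level}`);
  }
});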
Customizing log levels in Pino
Pino does not restrict you to just the default levels that it provides. You can easily add your own custom levels through the customLevels property on the options object. For example, you can use the standard Syslog levels by creating an object that defines the integer priority of each level, then assign the object to the customLevels property. You should also enable the useOnlyCustomLevels option so that Pino's original log levels are omitted in favour of the defined custom levels.
const pino = require('pino');
const levels = {
emerg: 80,
alert: 70,
crit: 60,
error: 50,
warn: 40,
notice: 30,
info: 20,
debug: 10,
};
module.exports = pino({
level: process.env.PINO_LOG_LEVEL || 'info',
customLevels: levels,
useOnlyCustomLevels: true,
});
At this point, you can log events at each defined custom level through their respective methods:
logger.emerg('Emergency');
logger.alert('Alert');
logger.crit('Critical');
{"level":80,"time":1643669711817,"pid":29472,"hostname":"fedora","msg":"Emergency"}
{"level":70,"time":1643669711818,"pid":29472,"hostname":"fedora","msg":"Alert"}
{"level":60,"time":1643669711818,"pid":29472,"hostname":"fedora","msg":"Critical"}
Customizing the default fields
In this section, we'll take a quick glance at the process of modifying the standard fields that come with every Pino log entry. However, be sure to explore the comprehensive range of Pino options at your disposal.
Using string labels for severity levels
Instead of outputting the integer value of each severity level, you can output the level name by specifying the formatters configuration below:
. . .
module.exports = pino({
level: process.env.PINO_LOG_LEVEL || 'info',
formatters: {
level: (label) => {
return { level: label.toUpperCase() };
},
},
});
This change causes the severity level on each entry to be an uppercase label:
{"level":"ERROR","time":1677673626066,"pid":636012,"hostname":"fedora","msg":"error"}
{"level":"WARN","time":1677673626066,"pid":636012,"hostname":"fedora","msg":"warn"}
{"level":"INFO","time":1677673626066,"pid":636012,"hostname":"fedora","msg":"info"}
You can also rename the level property by returning something like this from the function:
module.exports = pino({
  level: process.env.PINO_LOG_LEVEL || 'info',
  formatters: {
    level: (label) => {
      return { severity: label.toUpperCase() };
    },
  },
});
{"severity":"ERROR","time":1677676496547,"pid":693683,"hostname":"fedora","msg":"error"}
{"severity":"WARN","time":1677676496547,"pid":693683,"hostname":"fedora","msg":"warn"}
{"severity":"INFO","time":1677676496547,"pid":693683,"hostname":"fedora","msg":"info"}
Customizing the timestamp format
Pino's default timestamp is the number of milliseconds elapsed since January 1, 1970 00:00:00 UTC (as produced by the Date.now() function). You can customize this output through the timestamp property on the options object when creating a logger. We recommend outputting your timestamps in the ISO-8601 format:
const pino = require('pino');
module.exports = pino({
level: process.env.PINO_LOG_LEVEL || 'info',
formatters: {
level: (label) => {
return { level: label.toUpperCase() };
},
},
timestamp: pino.stdTimeFunctions.isoTime,
});
{"level":"INFO","time":"2023-03-01T12:36:14.170Z","pid":650073,"hostname":"fedora","msg":"info"}
You can also change the property name from time to timestamp by specifying a function that returns a partial JSON representation of the time (prefixed with a comma) like this:
pino({
timestamp: () => `,"timestamp":"${new Date(Date.now()).toISOString()}"`,
})
{"label":"INFO","timestamp":"2023-03-01T13:19:10.018Z","pid":698279,"hostname":"fedora","msg":"info"}
Customizing the default bindings
Pino binds two extra properties to each log entry by default: the program's process ID (pid) and the name of the host where the log entry was generated. You can customize them through the bindings function on the formatters object:
const pino = require('pino');
module.exports = pino({
level: process.env.PINO_LOG_LEVEL || 'info',
formatters: {
bindings: (bindings) => {
return { pid: bindings.pid, host: bindings.hostname };
},
level: (label) => {
return { level: label.toUpperCase() };
},
},
timestamp: pino.stdTimeFunctions.isoTime,
});
{"level":"INFO","time":"2023-03-01T13:24:28.276Z","process_id":707519,"host":"fedora","msg":"info"}
You can omit any of these fields by removing them from the returned object, and you can also add custom properties here if you intend for them to appear in every log entry:
bindings: (bindings) => {
return {
pid: bindings.pid,
host: bindings.hostname,
node_version: process.version,
};
},
{"level":"INFO","time":"2023-03-01T13:31:28.940Z","pid":719462,"host":"fedora","node_version":"v18.14.0","msg":"info"}
Other useful examples of global data that can be added to every log entry include the application version, operating system, configuration settings, git commit hash, and more.
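For instance, here's a sketch of a bindings function that adds the operating system alongside the defaults, plus an application version read from a hypothetical APP_VERSION environment variable:
const os = require('os');

// inside the formatters object of your pino() options:
bindings: (bindings) => {
  return {
    pid: bindings.pid,
    host: bindings.hostname,
    node_version: process.version,
    os: `${os.type()} ${os.release()}`,
    app_version: process.env.APP_VERSION, // hypothetical; set by your build or deploy process
  };
},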
Adding context to your logs
Adding contextual data to logs refers to the practice of including additional information that provides more context or details about the events being logged. This information can help with troubleshooting, debugging, and monitoring applications.
For example, if an error occurs in a web application, including contextual data such as the request's ID, the endpoint being accessed, and the user ID that triggered the request can help with identifying the root cause of the issue more quickly.
In Pino, the primary way to add contextual data to your log entries is through the mergingObject parameter on a level method:
logger.error(
{ transaction_id: '12343_ff', user_id: 'johndoe' },
'Transaction failed'
);
The above snippet produces the following output:
{"level":"ERROR","time":"2023-03-01T13:47:00.302Z","pid":737430,"hostname":"fedora","transaction_id":"12343_ff","user_id":"johndoe","msg":"Transaction failed"}
It's also useful to set some contextual data on all logs produced in a scope so that you don't have to repeat them at each log point. This is done in Pino through child loggers:
const logger = require('./logger');
logger.info('starting the program');
function getUser(userID) {
const childLogger = logger.child({ userID });
childLogger.trace('getUser called');
// retrieve user data and return it
childLogger.trace('getUser completed');
}
getUser('johndoe');
logger.info('ending the program');
Execute the code with trace as the minimum level:
PINO_LOG_LEVEL=trace node index.js
{"level":"INFO","time":"2023-03-01T14:15:47.168Z","pid":764167,"hostname":"fedora","msg":"starting the program"}
{"level":"TRACE","time":"2023-03-01T14:15:47.169Z","pid":764167,"hostname":"fedora","userID":"johndoe","msg":"getUser called"}
{"level":"TRACE","time":"2023-03-01T14:15:47.169Z","pid":764167,"hostname":"fedora","userID":"johndoe","msg":"getUser completed"}
{"level":"INFO","time":"2023-03-01T14:15:47.169Z","pid":764167,"hostname":"fedora","msg":"ending the program"}
Notice how the userID property is present only within the context of the getUser() function. Using child loggers in this manner allows you to add context to log entries without repeating the data at each log point. It also makes it easier to filter and analyze logs based on specific criteria, such as user ID, function name, or other relevant contextual details.
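Child loggers can also create children of their own, and the child() method accepts a second options argument that can override settings such as the minimum level within that scope. A minimal sketch (the module binding is illustrative):
const logger = require('./logger');

// this child emits debug events even if the parent stays at info
const authLogger = logger.child({ module: 'auth' }, { level: 'debug' });

authLogger.debug('verifying credentials'); // emitted
logger.debug('not emitted at the parent level');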
Logging errors with Pino
Logging errors is an important practice that will help you track and diagnose issues that occur in production. When an error or exception occurs, you should log all the relevant details including its severity, a description of the problem, and any relevant contextual information.
You can log errors with Pino by passing the error object as the first argument to the error() method, followed by the log message:
const logger = require('./logger');
function alwaysThrowError() {
throw new Error('processing error');
}
try {
alwaysThrowError();
} catch (err) {
logger.error(err, 'An unexpected error occurred while processing the request');
}
This produces a log entry that includes an err property containing the type of the error, its message, and a complete stack trace, which is handy for troubleshooting.
{
"level": "ERROR",
"time": "2023-03-01T14:28:17.821Z",
"pid": 781077,
"hostname": "fedora",
"err": {
"type": "Error",
"message": "processing error",
"stack": "Error: processing error\n at alwaysThrowError (/home/ayo/dev/betterstack/community/demo/pino-logging/main.js:4:9)\n at Object.<anonymous> (/home/ayo/dev/betterstack/community/demo/pino-logging/main.js:8:3)\n at Module._compile (node:internal/modules/cjs/loader:1226:14)\n at Module._extensions..js (node:internal/modules/cjs/loader:1280:10)\n at Module.load (node:internal/modules/cjs/loader:1089:32)\n at Module._load (node:internal/modules/cjs/loader:930:12)\n at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)\n at node:internal/main/run_main_module:23:47"
},
"msg": "An unexpected error occurred while processing the request"
}
Handling uncaught exceptions and unhandled promise rejections
Pino does not include a special mechanism for logging uncaught exceptions or promise rejections, so you must listen for the uncaughtException and unhandledRejection events and log the exception using the FATAL level before exiting the program (after attempting a graceful shutdown):
process.on('uncaughtException', (err) => {
  // log the exception
  logger.fatal(err, 'uncaught exception detected');

  // shut down the server gracefully
  server.close(() => {
    process.exit(1); // then exit
  });

  // If a graceful shutdown is not achieved after 1 second,
  // shut down the process completely
  setTimeout(() => {
    process.abort(); // exit immediately and generate a core dump file
  }, 1000).unref();
});
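A handler for the unhandledRejection event follows the same pattern. Here's a sketch assuming the same server instance (note that a rejection reason is not always an Error object):
process.on('unhandledRejection', (reason) => {
  // log the rejection reason at the highest severity
  logger.fatal(reason, 'unhandled promise rejection detected');

  // attempt a graceful shutdown, with the same 1-second deadline
  server.close(() => {
    process.exit(1);
  });

  setTimeout(() => {
    process.abort();
  }, 1000).unref();
});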
You can use a process manager like PM2, or a service like Docker, to automatically restart your application if it goes down due to an uncaught exception. Don't forget to set up health checks so you can continually monitor the state of your application.
Transporting your Node.js logs
Pino defaults to logging to the standard output as you've seen throughout this tutorial, but you can also configure it to log to a file or other destinations (such as a log management service).
You'll need to use the transports feature introduced in v7 of the library. Transports operate inside worker threads so that the main thread of the application is kept free from transforming log data or sending it to remote services (which could significantly increase the latency of your HTTP responses).
Here's how to use the built-in pino/file transport to route your logs to a file (or a file descriptor):
const pino = require('pino');
const fileTransport = pino.transport({
target: 'pino/file',
options: { destination: `${__dirname}/app.log` },
});
module.exports = pino(
{
level: process.env.PINO_LOG_LEVEL || 'info',
formatters: {
level: (label) => {
return { level: label.toUpperCase() };
},
},
timestamp: pino.stdTimeFunctions.isoTime,
},
fileTransport
);
Henceforth, all logs will be sent to an app.log file in the current working directory instead of the standard output. Unlike Winston, its main competitor in the Node.js logging space, Pino does not provide a built-in mechanism to rotate your log files so they don't become too large. You'll need to rely on external tools such as Logrotate for this purpose.
Another way to log to files (or file descriptors) is by using the pino.destination() API like this:
const pino = require('pino');
module.exports = pino(
{
level: process.env.PINO_LOG_LEVEL || 'info',
formatters: {
level: (label) => {
return { level: label.toUpperCase() };
},
},
timestamp: pino.stdTimeFunctions.isoTime,
},
pino.destination(`${__dirname}/app.log`)
);
Note that the pino/file transport actually uses pino.destination() under the hood. The main difference between the two is that the former runs in a worker thread while the latter runs in the main thread. When logging only to the standard output or local files, using pino/file may introduce some overhead because the data has to be moved off the main thread first, so you should stick with pino.destination() in such cases. Using pino/file is recommended only when you're logging to multiple destinations at once, such as to a local file and a third-party log management service.
Pino also supports legacy transports which run in a completely separate process from the Node.js program. See the relevant documentation for more details.
Logging to multiple destinations in Pino
Logging to multiple destinations is a common use case that is also supported by Pino v7+ transports. You'll need to create a targets array and place all the transport objects within it like this:
const pino = require('pino');
const transport = pino.transport({
targets: [
{
target: 'pino/file',
options: { destination: `${__dirname}/app.log` },
},
{
target: 'pino/file', // logs to the standard output by default
},
],
});
module.exports = pino(
{
level: process.env.PINO_LOG_LEVEL || 'info',
timestamp: pino.stdTimeFunctions.isoTime,
},
transport
);
This snippet configures Pino to log to the standard output and the app.log file simultaneously. Note that the formatters.level function cannot be used when logging to multiple transports, which is why it was omitted in the example above. If you leave it in, you will get the following error:
Error: option.transport.targets do not allow custom level formatters
You can change the second object to use the pino-pretty transport if you'd like a prettified output to be delivered to stdout instead of the JSON-formatted output (note that pino-pretty must be installed first):
const transport = pino.transport({
targets: [
{
target: 'pino/file',
options: { destination: `${__dirname}/app.log` },
},
{
target: 'pino-pretty',
},
],
});
node index.js && echo $'\n' && cat app.log
[14:33:41.932] INFO (259060): info
[14:33:41.933] ERROR (259060): error
[14:33:41.933] FATAL (259060): fatal
{"level":30,"time":"2023-03-03T13:33:41.932Z","pid":259060,"hostname":"fedora","msg":"info"}
{"level":50,"time":"2023-03-03T13:33:41.933Z","pid":259060,"hostname":"fedora","msg":"error"}
{"level":60,"time":"2023-03-03T13:33:41.933Z","pid":259060,"hostname":"fedora","msg":"fatal"}
Keeping sensitive data out of your logs
One of the most critical best practices for application logging involves keeping sensitive data out of your logs. Such data includes (but is not limited to) the following:
- Financial data such as card numbers, pins, bank accounts, etc.
- Passwords or application secrets.
- Any data that can be used to identify a person such as email addresses, names, phone numbers, addresses, identification numbers and more.
- Medical records
- Biometric data, and more.
Including sensitive data in logs can lead to data breaches, identity theft, unauthorized access, or other malicious activities that could damage trust in your business, sometimes irreparably. It could also expose your business to fines and other penalties under regulations such as the GDPR, PCI DSS, and HIPAA. To prevent such incidents, it's crucial to always sanitize your logs to ensure such data does not accidentally sneak in.
There are several practices you can adopt to keep sensitive data out of your logs, but we cannot discuss them all here. We'll focus only on log redaction, a technique for identifying and removing sensitive data from the logs while preserving the relevant information needed for troubleshooting or analysis. Pino uses the fast-redact package to provide log redaction capabilities for Node.js applications.
For example, you might have a user object with the following structure:
const user = {
id: 'johndoe',
name: 'John Doe',
address: '123 Imaginary Street',
passport: {
number: 'BE123892',
issued: 2023,
expires: 2027,
},
phone: '123-234-544',
};
If this object is logged as is, you will expose sensitive data such as the user's name, address, passport details and phone number:
logger.info({ user }, 'User updated');
{
"level": "info",
"time": 1677660968266,
"pid": 377737,
"hostname": "fedora",
"user": {
"id": "johndoe",
"name": "John Doe",
"address": "123 Imaginary Street",
"passport": {
"number": "BE123892",
"issued": 2023,
"expires": 2027
},
"phone": "123-234-544"
},
"msg": "User updated"
}
To prevent this from happening, you must set up your logger instance in advance to redact the sensitive fields. Here's how:
const pino = require('pino');
module.exports = pino({
level: process.env.PINO_LOG_LEVEL || 'info',
formatters: {
level: (label) => {
return { level: label };
},
},
redact: ['user.name', 'user.address', 'user.passport', 'user.phone'],
});
The redact option above is used to specify an array of fields that should be redacted in the logs. The above configuration will replace the name, address, passport, and phone fields in any user object supplied at log point with a [Redacted] placeholder:
{
"level": "info",
"time": 1677662887561,
"pid": 406515,
"hostname": "fedora",
"user": {
"id": "johndoe",
"name": "[Redacted]",
"address": "[Redacted]",
"passport": "[Redacted]",
"phone": "[Redacted]"
},
"msg": "User updated"
}
This way, only the id field is present in the logs. You can also change the placeholder string using the following configuration:
module.exports = pino({
redact: {
paths: ['user.name', 'user.address', 'user.passport', 'user.phone'],
censor: '[PINO REDACTED]',
},
});
{
"level": "info",
"time": 1677663111963,
"pid": 415221,
"hostname": "fedora",
"user": {
"id": "johndoe",
"name": "[PINO REDACTED]",
"address": "[PINO REDACTED]",
"passport": "[PINO REDACTED]",
"phone": "[PINO REDACTED]"
},
"msg": "User updated"
}
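If you'd rather mask values partially instead of replacing them outright, the censor option also accepts a function that receives the value being redacted and returns the replacement. A sketch that keeps only the last three characters of the phone number (the masking regex is illustrative):
module.exports = pino({
  redact: {
    paths: ['user.phone'],
    // replace every character except the last three with an asterisk
    censor: (value) => String(value).replace(/.(?=.{3})/g, '*'),
  },
});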
Finally, you can decide to remove the fields entirely by specifying the remove option. This might be preferable to reduce the verbosity of your logs so they don't take up storage resources unnecessarily:
module.exports = pino({
redact: {
paths: ['user.name', 'user.address', 'user.passport', 'user.phone'],
censor: '[PINO REDACTED]',
remove: true,
},
});
{
"level": "info",
"time": 1677663213497,
"pid": 419647,
"hostname": "fedora",
"user": {
"id": "johndoe"
},
"msg": "User updated"
}
While this is a handy way to reduce the risk of sensitive data being included in your logs, it can be easily bypassed if you're not careful. For example, if the user object is nested inside some other object, or its properties are placed at the top level, the redaction filter will no longer match the fields, and the sensitive data will make it through.
// the current redaction filter will match
logger.info({ user }, 'User updated');
// the current redaction filter will not match
logger.info({ nested: { user } }, 'User updated');
logger.info(user, 'User updated');
You'll have to update the filters to look like this to catch these three cases:
module.exports = pino({
redact: {
paths: [
'name',
'address',
'passport',
'phone',
'user.name',
'user.address',
'user.passport',
'user.phone',
'*.user.name', // * is a wildcard covering a depth of 1
'*.user.address',
'*.user.passport',
'*.user.phone',
],
remove: true,
},
});
Of course, you should enforce during code review that objects are logged in a consistent manner throughout your application. But since you can't account for every variation that may slip through, it's best not to rely on this technique as the primary solution for keeping sensitive data out of your logs.
Log redaction should be used more as a backup measure that can help catch problems missed in the review process. Ideally, don't log any objects that may contain sensitive data in the first place. Extracting only the non-sensitive fields needed to provide context about the event being logged is the best way to reduce the risk of sensitive data making it into your logs.
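For instance, rather than logging the entire user object from earlier, extract just the identifier:
// log only the non-sensitive identifier instead of the whole object
logger.info({ userId: user.id }, 'User updated');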
Logging HTTP requests with Pino
You can use Pino in your Node.js web application no matter which framework you're using. Fastify users should note that while logging with Pino is built into the framework, it is disabled by default, so you must enable it first:
const fastify = require('fastify')({
logger: true
})
Once enabled, Pino will log all incoming requests to the server in the following manner:
{"level":30,"time":1675961032671,"pid":450514,"hostname":"fedora","reqId":"req-1","res":{"statusCode":200},"responseTime":3.1204520016908646,"msg":"request completed"}
If you use some other framework, see the Pino ecosystem page for the specific integration that works with your framework.
The example below demonstrates how to use the pino-http package to log HTTP requests in Express:
const express = require('express');
const logger = require('./logger');
const axios = require('axios');
const pinoHTTP = require('pino-http');
const app = express();
app.use(
pinoHTTP({
logger,
})
);
app.get('/crypto', async (req, res) => {
try {
const response = await axios.get(
'https://api2.binance.com/api/v3/ticker/24hr'
);
const tickerPrice = response.data;
res.json(tickerPrice);
} catch (err) {
logger.error(err);
res.status(500).send('Internal server error');
}
});
app.listen('4000', () => {
console.log('Server is running on port 4000');
});
Also, ensure your logger.js file is set up to log to both the standard output and a file like this:
const pino = require('pino');
const transport = pino.transport({
targets: [
{
target: 'pino/file',
options: { destination: `${__dirname}/server.log` },
},
{
target: 'pino-pretty',
},
],
});
module.exports = pino(
{
level: process.env.PINO_LOG_LEVEL || 'info',
timestamp: pino.stdTimeFunctions.isoTime,
},
transport
);
Then install the required dependencies using the command below:
npm install express axios pino-http
Start the server on port 4000 and make a GET request to the /crypto route through curl:
node index.js
curl http://localhost:4000/crypto
You'll observe the following prettified log output in the server console, which corresponds to the HTTP request:
[15:30:54.508] INFO (291881): request completed
req: {
"id": 1,
"method": "GET",
"url": "/crypto",
"query": {},
"params": {},
"headers": {
"host": "localhost:4000",
"user-agent": "curl/7.85.0",
"accept": "*/*"
},
"remoteAddress": "::ffff:127.0.0.1",
"remotePort": 36862
}
res: {
"statusCode": 200,
"headers": {
"x-powered-by": "Express",
"content-type": "application/json; charset=utf-8",
"content-length": "1099516",
"etag": "W/\"10c6fc-mMUyGYJwdl+yk7A7N/rYiPWqFjo\""
}
}
responseTime: 2848
The server.log file will contain the raw JSON output:
cat server.log
{"level":30,"time":"2023-03-03T14:30:54.508Z","pid":291881,"hostname":"fedora","req":{"id":1,"method":"GET","url":"/crypto","query":{},"params":{},"headers":{"host":"localhost:4000","user-agent":"curl/7.85.0","accept":"*/*"},"remoteAddress":"::ffff:127.0.0.1","remotePort":36862},"res":{"statusCode":200,"headers":{"x-powered-by":"Express","content-type":"application/json; charset=utf-8","content-length":"1099516","etag":"W/\"10c6fc-mMUyGYJwdl+yk7A7N/rYiPWqFjo\""}},"responseTime":2848,"msg":"request completed"}
You can further customize the output of the pino-http module by taking a look at its API documentation.
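As one example, pino-http can map response outcomes to log levels through its customLogLevel option (the signature shown matches recent versions of the package; treat the exact thresholds as illustrative):
app.use(
  pinoHTTP({
    logger,
    // log server errors at error, client errors at warn, everything else at info
    customLogLevel: (req, res, err) => {
      if (res.statusCode >= 500 || err) return 'error';
      if (res.statusCode >= 400) return 'warn';
      return 'info';
    },
  })
);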
Centralizing and monitoring your Node.js logs
One of the main advantages of logging in a structured format is the ability to ingest your logs into a centralized logging system where they can be indexed, searched, and analyzed efficiently. By consolidating all log data into a central location, you will gain a holistic view of your systems' health and performance, making it easier to identify patterns, spot anomalies, and troubleshoot issues.
Centralizing logs also simplifies compliance efforts by providing a single source of truth for auditing and monitoring purposes. It helps ensure that all logs are properly retained and easily accessible for any regulatory or legal requirements.
Furthermore, with the right tools in place, centralizing logs can enable real-time alerting and proactive monitoring, allowing you to detect and respond to issues before they become critical. This can greatly reduce downtime and minimize the impact on your organization's operations.
Now that you've configured Pino in your Node.js application to output structured logs, the next step is to centralize your logs in a log management system so that you can reap the benefits that come with structured logging. Logtail is one such solution that can tail your logs, analyze and visualize them, and help with alerting when certain patterns are detected.
There are several ways to get your logs from your Node.js application into Logtail, but one of the easiest is to use its Pino transport like this:
const transport = pino.transport({
targets: [
{
target: 'pino/file',
options: { destination: `${__dirname}/app.log` },
},
{
target: '@logtail/pino',
options: { sourceToken: '<your_logtail_source_token>' },
},
{
target: 'pino-pretty',
},
],
});
With this configuration in place, your logs will be centralized in Logtail, and you can view them as they come through on the Live Tail page. You can also filter them using any of the attributes in the logs, or create automated alerts to notify you of important occurrences (such as a spike in errors).
Final thoughts
In this article, we've provided a comprehensive overview of Node.js logging with Pino. We've discussed the benefits of using Pino for logging, its key features, and how to configure and customize it for your specific needs. We hope this guide has been helpful in demystifying logging with Pino and showing how to use it effectively in your Node.js applications.
It is impossible to learn everything about Pino and its capabilities in one article, so we highly recommend consulting its official documentation for more help with its basic and advanced features.
Thanks for reading, and happy coding!