Caching is one of the most effective optimizations that you can apply to an
application. It involves storing some data in a temporary location called a
cache so that it can be retrieved much faster when it is requested in the
future. This data is often the result of some earlier computation, API request,
or database query.
The main goal of caching is to improve application performance. Since data can
often be retrieved much faster from the cache, the application no longer needs
to repeat a network request or database query for the same data, which can
significantly lower the latency associated with a particular operation. Caching
also reduces network costs since less data is transferred, and makes application
performance much more reliable and predictable as you're less susceptible to the
effects of network congestion, service downtime, load spikes, and other
challenges.
In this article, we'll discuss how to set up caching in a Node.js application
using Redis, a popular and versatile in-memory database
that is often used as a distributed cache for web applications. It can
be used with a wide variety of programming languages and environments, and it
has many other uses besides caching.
By following through with this tutorial, you will learn the following
aspects of Redis caching in Node.js:
How to measure the performance of a web application through benchmarking.
How to install Redis and connect to it from a Node.js application.
How to cache API responses in Redis using the cache-aside pattern.
Techniques to improve the effectiveness of caching.
Prerequisites
Before you proceed with the remainder of this tutorial, ensure that you have met
the following requirements:
Basic experience with building server applications in Node.js.
A recent version of Node.js and
npm installed on your computer or server.
Step 1 — Setting up the Node.js application
In this tutorial, we will demonstrate the concept of caching in Node.js by
modifying the
Hacker News Search
application from the earlier tutorial on Express and
Pug. You don't need to follow the
tutorial to build the application; you can clone it to your machine using the
command below:
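The repository URL isn't reproduced here, so substitute the actual URL of the
Hacker News Search repository for the placeholder below, then install the
dependencies:
git clone https://github.com/<your-account>/hackernews-search.git
cd hackernews-search
npm install
For reference, the heart of the application is a /search route in server.js
along the lines of the following sketch (searchHN() and the search.pug template
belong to the cloned application; the q parameter name and other details are
assumptions, so the exact code may differ):
server.js
. . .
app.get('/search', async (req, res, next) => {
  try {
    // The search term arrives as a query string parameter
    const searchQuery = req.query.q;

    // Query the Hacker News API provided by Algolia for matching top stories
    const results = await searchHN(searchQuery);

    // Render the JSON response with the search.pug template in the views folder
    res.render('search', { searchResults: results, searchQuery });
  } catch (err) {
    next(err);
  }
});
. . .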
The /search route above expects a query, which is subsequently passed on to the
searchHN() function for querying the
Hacker News API provided by Algolia to get the
top stories for that search term. Once the JSON response from the API is
retrieved, it is used to render an HTML document through the search.pug
template in the views folder.
You can test the application by starting the development server through the
command below. Note that the server will automatically restart whenever a change
is detected in any project files.
npm run dev
Afterward, head over to http://localhost:3000 (or
http://<your_server_ip>:3000) in your web browser and make a search request
for a popular term. You should observe that relevant results from Hacker News
are fetched and displayed for your query.
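If you're working on a headless server, you can exercise the route with curl
instead (assuming the q query parameter from the sketch in Step 1):
curl "http://localhost:3000/search?q=javascript"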
Now that our demo application has been set up, let's create and run a benchmark
to determine how quickly it can resolve requests to the /search route. This
will give us a baseline measurement of our current performance before we
implement caching in an attempt to improve it.
Step 2 — Benchmarking the application with Artillery
In this step, we will utilize the
Artillery package to measure our
application's performance in its current state so that we can easily quantify
the differences after adding caching through Redis. Having a baseline
measurement is an essential step before carrying out any performance
optimization. It helps you determine if the optimization had the desired effect
and if the trade-offs are worth it.
Ensure that you are in the project directory, then install the artillery
package globally through npm:
npm install -g artillery
Afterward, the artillery command should be accessible. Ensure that running
artillery --version yields a version number of 2.0.0 or higher.
Artillery utilizes
test definition files
to determine the configuration parameters for a test run. They are YAML files
that consist of two main sections: config and scenarios. The former
specifies settings for the test, such as target URL, HTTP headers, virtual
users, requests per user, and more, while the latter describes the actions that
each virtual user must take during the test.
Create an artillery.yml file in the root of your project directory and open it
in your text editor:
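Then add a test definition along the following lines (the search term in the
URL is an example; any popular term works):
artillery.yml
config:
  target: "http://localhost:3000"
  phases:
    - duration: 30
      arrivalRate: 10
scenarios:
  - flow:
      - get:
          url: "/search?q=javascript"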
In the config section, the target is the URL of the server. The phases
block describes a load phase that lasts 30 seconds, where 10 new virtual users
are created every second. When a virtual user is created, they execute the
scenario that is defined in the scenarios block which is making a GET request
to the /search route. After all the virtual users have completed their
scenarios, the test will complete and a summary of the results will be printed
to the console.
Although this test probably isn't representative of a real-world scenario, it
will give us valuable data about the performance of the /search endpoint that
we can refer back to after carrying out the planned optimizations. You should
check out the
Artillery docs to
discover how to create realistic user flows for your application.
Let's go ahead and run the test defined in artillery.yml through the command
shown below. Ensure that the Hacker News application is running in a separate
terminal before executing this command.
artillery run artillery.yml
Once the test completes, Artillery prints a summary of the results at the
bottom of its output. The exact numbers will likely differ in your test run,
but here is the explanation of the key figures from our run:
10 requests were made to the server every second, and 600 requests in total.
All 600 requests were successful (200 OK).
The minimum and maximum response times were 187ms and 2534ms respectively.
The average response time was 584.2ms.
The 95th and 99th percentile figures were 1002.4ms and 1587.9ms respectively.
This means that 95% of the requests were fulfilled in under 1002.4ms, and 99%
in under 1587.9ms.
Now that we have some quantifiable data on the current performance of the
/search route in our Node.js application, let's go ahead and install Redis in
the next section.
Step 3 — Installing and setting up Redis
This section will describe how to install and set up Redis on Ubuntu 20.04. If
you're on a different operating system, head to the
download page to get the latest version for your
system. Note that Redis is not officially supported on Windows, but you can
install and set it up through
Windows Subsystem for Linux (WSL)
in Windows 10 or later.
Although Redis is already available in the default Ubuntu repositories,
installing it from there is somewhat discouraged as the available version is
usually not the latest. To ensure that we get the latest stable version, we'll
use the
official Ubuntu PPA
which is maintained by the Redis team.
Run the following command in a new terminal instance to add the repository to
the apt index. The local package index should update immediately after adding
the repository.
sudo add-apt-repository ppa:redislabs/redis -y
Afterward, install the redis package through the command below:
sudo apt install redis
Once the command finishes, verify the version of Redis that was installed:
redis-server --version
Output
Redis server v=6.2.6 sha=00000000:0 malloc=jemalloc-5.1.0 bits=64 build=9c9e426e2f96cc51
After installing Redis, open its configuration file in your text editor:
sudo nano /etc/redis/redis.conf
Inside the file, find the supervised directive and change its value from no
to systemd. This directive declares the init system that manages Redis as a
service. It's being set to systemd here as that's what Ubuntu uses by default.
/etc/redis/redis.conf
. . .
# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
# supervised no - no supervision interaction
# supervised upstart - signal upstart by putting Redis into SIGSTOP mode
# supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
# supervised auto - detect upstart or systemd method based on
# UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
# They do not enable continuous liveness pings back to your supervisor.
supervised systemd
. . .
Save and close the file after making the changes. Since the Redis service
starts automatically after installation on Ubuntu, restart it so that the
configuration change takes effect:
sudo systemctl restart redis
Go ahead and confirm that the Redis service is running through the following
command:
sudo systemctl status redis
You should observe the following output:
Output
● redis-server.service - Advanced key-value store
Loaded: loaded (/lib/systemd/system/redis-server.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-01-07 10:47:24 UTC; 4s ago
Docs: http://redis.io/documentation,
man:redis-server(1)
Main PID: 793992 (redis-server)
Status: "Ready to accept connections"
Tasks: 5 (limit: 1136)
Memory: 3.0M
CGroup: /system.slice/redis-server.service
└─793992 /usr/bin/redis-server 127.0.0.1:6379
Jan 07 10:47:24 ubuntu-20-04 systemd[1]: Starting Advanced key-value store...
Jan 07 10:47:24 ubuntu-20-04 systemd[1]: Started Advanced key-value store.
This indicates that Redis is up and running, and it is set up to automatically
start every time the server is rebooted.
As a final way to confirm that your Redis installation is functioning correctly,
launch the redis-cli prompt:
redis-cli
In the resulting prompt, enter the ping command. You should receive a PONG
output:
127.0.0.1:6379> ping
PONG
You can type exit afterwards to exit the redis-cli prompt.
Now that your Redis instance is fully operational, let's go ahead and install
the necessary packages for working with Redis in Node.js in the next section.
Step 4 — Installing and configuring the Redis package for Node.js
Utilizing Redis as a caching solution for Node.js applications is made easy
through the redis package maintained by the
core team. Go ahead and install it in your application through npm as
shown below:
npm install redis
Once the installation is completed, open the server.js file in your text
editor:
nano server.js
Import the redis package at the top of the file below the other imports, and
create a new Redis client as shown below:
server.js
. . .
const redis = require('redis');

const redisClient = redis.createClient({ url: 'redis://localhost:6379' });

(async () => {
  // Listen for at least the error and ready events before issuing commands
  redisClient.on('error', (err) => console.error('Redis client error:', err));
  redisClient.on('ready', () => console.log('Redis is ready'));

  await redisClient.connect();
  await redisClient.ping();
})();
. . .
The default port for Redis is 6379, so that's what is supplied to the
createClient() method. Other configuration options for this method can be
accessed through its
documentation page.
After creating a Redis client, you should listen for at least the ready and
error events before proceeding.
Once you save the file, the application will restart, and you will see 'Redis is
ready' in the output, provided that your Redis instance is up and running.
Output
. . .
[rundev] App server restarted
server-0 | Hacker news server started on port: 3000
server-0 | Redis is ready
In the next section, we'll implement a caching strategy for the /search route
in our Hacker News application so that the speed of resolving search queries is
greatly improved.
Step 5 — Caching API responses in Redis
In this step, you'll cache the responses for each search term in Redis so that
they can be reused for subsequent requests if the exact search term is repeated.
We'll utilize the popular
Cache-Aside Pattern,
in which the application first attempts to retrieve the requested data from the
cache, and only reaches out to the original data source if the item does not
exist in the cache. The retrieved data is subsequently stored in the cache so
that repeated requests for the same data can be resolved more quickly.
When a request is fulfilled by successfully retrieving the requested data from
the cache, it is known as a cache hit. If the original data store has to be
accessed to fulfill a request, it is known as a cache miss. A good caching
strategy will ensure that most requests will result in a cache hit. However, the
occasional cache miss cannot be avoided, especially for data that is updated
frequently.
Start by opening the server.js file in your text editor:
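nano server.js
Then update the /search handler so that it consults Redis before calling the
API. The handler below is a sketch of the change described next; it reuses the
redisClient from the previous step and the application's searchHN() function,
and variable names are illustrative:
server.js
. . .
app.get('/search', async (req, res, next) => {
  try {
    const searchQuery = req.query.q;

    // The key concatenates the 'search:' prefix and the lowercased search term
    const cacheKey = `search:${searchQuery.toLowerCase()}`;

    // Check the cache first; get() resolves to null if the key doesn't exist
    const cachedResults = await redisClient.get(cacheKey);
    if (cachedResults) {
      // Cache hit: reuse the stored JSON response
      return res.render('search', {
        searchResults: JSON.parse(cachedResults),
        searchQuery,
      });
    }

    // Cache miss: query the Hacker News API as before
    const results = await searchHN(searchQuery);

    // Store the response under the key with a five-minute (300-second) expiry
    await redisClient.setEx(cacheKey, 300, JSON.stringify(results));

    res.render('search', { searchResults: results, searchQuery });
  } catch (err) {
    next(err);
  }
});
. . .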
This code uses the Redis client that we created in the previous step to cache
and retrieve the JSON response received from the Algolia API. The Redis key
consists of a concatenation of the search: prefix and a lowercase version of
the search term so that each key is unique to the specified search term. The
first step is to use the get() method to check the cache for the specified
key. If this key doesn't exist, this method will return null so that you'll
know to query the source for the data.
After retrieving the data, you can then use set() or setEx() to store the
data in the cache under the key name. The setEx() method is preferred here so
that a timeout of five minutes (300 seconds) is set on the key. Therefore, each
cached result is reused for a maximum of five minutes before it expires and is
refreshed, which helps us avoid serving stale results for a specific search term.
Now that you've integrated the Redis library to implement a basic caching
strategy, let's go ahead and rerun the earlier benchmark to see if the changes
have had the desired effect.
Step 6 — Rerunning the benchmark
Return to the terminal and execute the command below to send virtual users to
your server once again:
artillery run artillery.yml
You should see a summary of the results once the command exits.
Compared to the previous run, we get much lower minimum and median response
times (24ms and 26.3ms respectively), and 95% of all requests were completed
within 48.9ms. This is a massive improvement over the earlier numbers (187ms,
584.2ms, and 1002.4ms respectively). The response times are much lower in this
run because only the first few requests hit the API directly, while the vast
majority of the requests were fulfilled using the cached data.
Aside from the primary benefit of reduced latency and faster response times for
users, caching also minimizes our costs, especially when we're working with a
paid API, since we can reuse a response several times before it needs to be
refreshed. APIs often impose rate limits or suffer occasional downtime as well,
so fulfilling repeat requests from the cache also helps prevent resource
starvation.
Now that we've seen an example of how effective caching can be at improving the
speed of request completion, let's discuss a few considerations for deciding
what to cache and how we can achieve a high cache hit rate in our applications.
Step 7 — Achieving a high cache hit rate
When deciding on a caching strategy for your Node.js application, you need to
find the optimal way to cache data so that you achieve a high cache hit rate.
The ideal candidates for caching are pieces of data that can be reused across
several requests before they need to be updated. If the data changes so
frequently that it cannot be reused for a subsequent request, then it is not a
good candidate for caching.
In the above example, the results for a Hacker News search term are unlikely
to change significantly within five minutes, so it makes sense to keep reusing
the response from the Hacker News API for that duration. Depending on how
frequently the data changes, you can potentially cache it for more extended
periods, or even forever if the data is never going to change and can thus be
reused indefinitely.
Another consideration for caching data is how frequently it is requested. Data
that is not requested often should probably not be cached even if it can be
reused. This is because cache storage is usually limited, so you want it to be
used only for frequently accessed resources in your application's hotspots.
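Redis can also help you measure how well your strategy is working: the INFO
stats command exposes cumulative keyspace_hits and keyspace_misses counters,
and dividing hits by total lookups yields your overall hit rate:
redis-cli INFO stats | grep keyspace_
The two lines printed (keyspace_hits:<n> and keyspace_misses:<n>) cover every
lookup the server has handled since it last started.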
The cache-aside pattern discussed and implemented above is just one of many
patterns that you can employ for caching. Here's a brief overview of some other
patterns that you can investigate further:
Read-through pattern: data is always read from the cache. When there's a
cache miss, the data is loaded from the data source, stored in the cache, and
returned to the application (see the sketch after this list).
Write-behind (Write-back) pattern: data is always written to the cache
first before it is updated in the data store sometime afterward.
Write-through pattern: similar to Write-behind, but data store updates are
made synchronously in the same transaction so that the cached data is never
stale.
Refresh-ahead pattern: frequently accessed cached data is refreshed before
it expires so that data staleness is minimized or eliminated entirely. It is
commonly used in latency-sensitive workloads.
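To make the contrast with cache-aside concrete, here is a minimal sketch that
approximates the read-through pattern at the application level by hiding the
miss handling behind a single function (the getOrSet() helper is hypothetical,
not part of the redis package; a true read-through cache performs the load
itself):
async function getOrSet(client, key, ttlSeconds, loader) {
  // Always read from the cache first
  const cached = await client.get(key);
  if (cached !== null) return JSON.parse(cached); // cache hit

  // Cache miss: load from the data source, populate the cache, then return
  const fresh = await loader();
  await client.setEx(key, ttlSeconds, JSON.stringify(fresh));
  return fresh;
}
With such a helper, the /search handler from Step 5 reduces to a single call:
getOrSet(redisClient, cacheKey, 300, () => searchHN(searchQuery)).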
Before we wrap up this tutorial, let's discuss another important aspect of
caching that you should be aware of.
Step 9 — Maintaining the cache
"There are only two hard things in Computer Science: cache invalidation and
naming things." -- Phil Karlton
Cache invalidation and cache eviction are important considerations when
implementing a caching strategy in your application. The former deals with how
cache entries are refreshed or removed when they go stale, while the latter
deals with removing items from the cache to free up space, regardless of
whether they are stale or not.
Cache invalidation
In the example used for this tutorial, we're using a Time To Live (TTL) value
to invalidate our cached objects after five minutes. When an application
attempts to read an expired key, it is treated as though the key is not found
and the original data store is queried once again. This approach guarantees that
even if the cached value goes stale, it won't be stale for more than five
minutes. Depending on the data being cached, the tolerance for staleness can be
lower or higher. For example, a trending news stories site might tolerate only a
few seconds of staleness, but Covid-19 statistics may only need to be updated
once or twice per day.
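You can watch TTL-based invalidation happen in redis-cli: the TTL command
reports how many seconds a key has left, and returns -2 once the key has
expired and been removed. For example, shortly after a search result is cached
(the key follows the naming scheme from Step 5; the exact number depends on how
long ago the key was written):
127.0.0.1:6379> TTL search:javascript
(integer) 294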
Cache eviction
Cache eviction refers to a policy by which older items are removed from the
cache as newer ones are added. Since cache storage is usually limited compared
to the primary data store, having such a policy will ensure that only relevant
items are present in the cache at all times. We won't be able to cover the
different eviction policies in this article, but you should investigate the
following: Least Recently Used (LRU), Least Frequently Used (LFU), Most Recently
Used (MRU), First In First Out (FIFO).
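In Redis, eviction behavior is controlled by the maxmemory and maxmemory-policy
directives in /etc/redis/redis.conf. A minimal example that caps the memory
used for data and evicts the least recently used keys (the values shown are
illustrative):
/etc/redis/redis.conf
. . .
maxmemory 100mb
maxmemory-policy allkeys-lru
. . .
Redis also provides volatile-* variants that only consider keys with an expiry
set, along with allkeys-lfu, allkeys-random, and other policies.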
Conclusion
In this article, we discussed the what and why of caching in Node.js, then
demonstrated how to benchmark an endpoint to measure its performance before
applying any optimizations. Subsequently, we set up Redis and integrated it with
our Node.js application before implementing the cache-aside pattern for caching
API responses. We then repeated the benchmark to illustrate how caching can
measurably improve application performance before rounding off with a discussion
on some important caching concepts to be aware of.
The entire code used in this tutorial can be downloaded from
GitHub. Thanks for
reading, and happy coding!
Article by
Ayooluwa Isaiah
Ayo is a technical content manager at Better Stack. His passion is simplifying and communicating complex technical ideas effectively. His work has been featured in several esteemed publications, including LWN.net, DigitalOcean, and CSS-Tricks. When he's not writing or coding, he loves to travel, bike, and play tennis.