Node.js has built-in web server capabilities that are perfectly capable of being
used in production. However, the conventional advice that has persisted since its
inception is that you should always place a production-ready Node.js application
behind a reverse proxy server.
In this tutorial, you will learn why placing a reverse proxy in front of a
Node.js server is a recommended practice worth following, and how you can set
one up quickly with only a few lines of configuration. We'll start by discussing
what a reverse proxy is and the benefits it provides before you get some
hands-on practice by setting up a reverse proxy for a Node.js application
through NGINX, one of the most popular options for
this purpose.
By following through with this tutorial, you will learn the following aspects of configuring an NGINX reverse proxy server:
Installing NGINX and editing its configuration files.
Redirecting traffic from NGINX to a Node.js application.
Setting up load balancing with NGINX.
Prerequisites
Before you proceed with the remainder of this tutorial, ensure that you have met
the following requirements:
SSH access to a Linux server that includes a non-root user with sudo privileges.
An Ubuntu 20.04 server will be used in this tutorial.
A recent version of Node.js and
npm installed on your server.
Optionally, you should have a domain name pointing to your server's IP
address.
What is a reverse proxy, and why should you use it?
A reverse proxy is a special kind of web server that accepts requests from
various clients, forwards each request to the appropriate server that can handle
it, and returns the server's response to the originating client. It is usually
positioned at the edge of the network to intercept client requests before they
reach the origin server. It is often configured to modify the request in some
manner before routing it appropriately.
Once a response is sent back by the origin server, it also goes through the
reverse proxy where further processing may occur. For example, the response body
may be subjected to gzip compression or encryption for security purposes.
Another common use case for a reverse proxy is to enable SSL or TLS in
situations where the underlying server does not support it.
The use of a reverse proxy provides several benefits for web applications:
It increases the security of your backend servers by preventing information
about them (such as their IP addresses, programming language, etc.) from
leaking to the outside world. This makes it much harder for malicious actors
to launch a targeted attack (such as a
Distributed Denial of Service (DDoS)
attack) on the server. Instead, the attack will be directed at the reverse
proxy, and many reverse proxies provide features to help fend off such attacks,
such as blacklisting a particular client IP or limiting the number of network
connections from a specific client.
Compressing responses before they are delivered back to the client helps save
bandwidth and data costs for end-users.
Caching responses at the reverse proxy means that they can be served directly
without consulting the origin server, which can decrease response times
significantly. An added benefit is that the load on the origin server is much
reduced, which will increase performance.
It can handle encrypting and decrypting the communications between client and
server (TLS termination) so that resources on the origin server are freed up
for the application's business logic. In addition, dedicated reverse proxy
tools like NGINX are typically able to outperform Node.js at SSL (or TLS)
encryption and decryption.
Load balancing is an everyday use case for reverse proxies. They can
distribute the load evenly across an application's multiple back-end servers to
achieve an optimal user experience and ensure high availability. A reverse
proxy can also redirect traffic to the server that's geographically closest to
the originating client to decrease latency.
There are many options to select from when it comes to reverse proxy
servers—Apache, HAProxy,
NGINX, Caddy and
Traefik to name a few. NGINX is chosen here because of
its track record as the
most popular
and performant option in its category with lots of features that should satisfy
most use cases.
NGINX can be used as a reverse proxy, load balancer, mail proxy and HTTP cache.
It is also often used to serve static files from the filesystem, an area in
which it particularly excels compared to Node.js (over twice as fast as
Express's static middleware).
Before we install and set up NGINX on our Linux server, let's create a Node.js
application in the next step.
Step 1 — Setting up a Node.js project
In this step, you will set up a basic Node.js application that will be used to
demonstrate the concepts discussed in this article. This application will
provide a single endpoint for retrieving price change statistics for various
cryptocurrencies in the last 24 hours. It utilizes a free API from
Binance as the data source.
Create a directory on your filesystem for this demo Node.js project and change
into it:
mkdir crypto-stats && cd crypto-stats
Initialize your project with a package.json file:
npm init -y
Afterwards, install the necessary dependencies:
fastify as the web server framework,
got for making HTTP requests, and
node-cache for in-memory caching.
npm install fastify got node-cache
Once the installation completes, create a new server.js file in the root of
your project directory and open it in a text editor:
nano server.js
Go ahead and populate the file with the following code, which sets up a
/crypto endpoint for retrieving the price change statistics and caching it for
five minutes.
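Here is a minimal sketch of such a server. It assumes the CommonJS APIs of Fastify v3 and got v11 (newer major versions of both packages changed their import and listen() signatures), and it uses Binance's public 24-hour ticker endpoint as the data source:
// server.js
const fastify = require('fastify')({ logger: true });
const got = require('got');
const NodeCache = require('node-cache');

// Cache API responses in memory for five minutes (300 seconds)
const appCache = new NodeCache({ stdTTL: 300 });

fastify.get('/crypto', async (request, reply) => {
  // Serve the cached statistics if they haven't expired yet
  const cachedStats = appCache.get('24hrStats');
  if (cachedStats) {
    return cachedStats;
  }

  // Otherwise, fetch fresh 24-hour price change statistics from Binance
  const stats = await got('https://api.binance.com/api/v3/ticker/24hr').json();
  appCache.set('24hrStats', stats);
  return stats;
});

fastify.listen(3000, (err) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});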
Save and close the file, then return to your terminal and run the following
command to start the server on port 3000:
node server.js
You should see the following output, indicating that the server started
successfully:
Output
{"level":30,"time":1638163169765,"pid":3474,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3000>"}
Now that a running Node.js application is in place, let's go ahead and install
the NGINX server in the next section.
Step 2 — Installing and setting up NGINX
In this step, you will install NGINX on your server through its package manager.
Since NGINX is already in the default Ubuntu repositories, you should first
update the local package index and install the nginx package.
Run the following commands in a separate terminal instance:
sudo apt update
sudo apt install nginx
After the installation is complete, run the following command to confirm that it
was installed successfully and see the installed version.
nginx -v
You should observe the following output:
Output
nginx version: nginx/1.18.0 (Ubuntu)
If you cannot install NGINX successfully using the method described above, try
the alternative procedures listed on the
NGINX installation guide
and ensure that you're able to install NGINX before proceeding.
After installing NGINX, Ubuntu should enable and start it automatically. You can
confirm that the nginx service is up and running through the command below:
sudo systemctl status nginx
The following output indicates that the service started successfully:
Output
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-11-28 12:17:36 UTC; 6s ago
Docs: man:nginx(8)
Process: 532819 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/S>
Process: 532829 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 532831 (nginx)
Tasks: 2 (limit: 1136)
Memory: 5.7M
CGroup: /system.slice/nginx.service
├─532831 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─532832 nginx: worker process
Nov 28 12:17:36 ubuntu-20-04 systemd[1]: Starting A high performance web server and a reverse proxy server...
If you're running a system firewall, don't forget to allow access to NGINX
before proceeding:
sudo ufw allow 'Nginx Full'
You can now open your server's IP address in a browser to verify that
everything is working; you should see the default NGINX landing page.
If you're not sure about your server's public IP address, run the command below
to print it to the standard output:
curl -4 icanhazip.com
Now that you've successfully installed and enabled NGINX, you can proceed to the
next step where it will be configured as a reverse proxy for your Node.js
application.
Step 3 — Configuring NGINX as a Reverse Proxy
In this step, you will create a server block configuration file for your
application in the NGINX sites-available directory and set up NGINX to proxy
requests to your application.
First, change into the /etc/nginx/sites-available/ directory:
cd /etc/nginx/sites-available/
Create a new file in this directory, named after the domain on which you wish
to expose your application, and open it in your text editor. This tutorial uses
your_domain, but be sure to replace it with your actual domain (if available).
nano your_domain
Once open, populate the file with an NGINX server block along the lines of the minimal sketch below, which assumes that the Node.js application from Step 1 is listening on port 3000 (replace your_domain with your domain name or server IP address):
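server {
    listen 80;
    server_name your_domain;

    location / {
        # Forward all requests for the root location to the Node.js app
        proxy_pass http://localhost:3000;
    }
}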
The server block above defines a virtual server used to handle requests of a
defined type. The server_name directive specifies the domain name or IP address
that the virtual server responds to, while the location block is used to define
how NGINX should handle requests for the specified URI. Finally, the
proxy_pass directive directs all requests matching the root location to the
specified address, in this case the Node.js application listening on port 3000.
Once you've saved the file, head back to your terminal and create a symbolic
link (symlink) of this your_domain file in the /etc/nginx/sites-enabled
directory:
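sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/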
The difference between the sites-available and sites-enabled directories is
that the former stores all of your virtual host (website) configurations,
whether or not they're currently enabled, while the latter contains symlinks to
files in the sites-available folder so that you can selectively disable a
virtual host by removing its symlink.
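For example, if you later want to disable this site without deleting its configuration, you can remove the symlink and reload NGINX:
sudo rm /etc/nginx/sites-enabled/your_domain
sudo nginx -s reload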
Before your changes can take effect, you need to reload the nginx
configuration as shown below:
sudo nginx -s reload
In the next step, we'll test the NGINX reverse proxy by making requests to the
running app through the server's public IP address or connected domain to
confirm that it works as expected.
Step 4 — Testing your application
At this point, you should be able to access your Node.js application via the
domain or public IP address of the Ubuntu server. Run the command below to
access the /crypto endpoint with curl:
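curl http://your_domain/crypto
Replace your_domain with your domain name or the server's public IP address if you haven't connected a domain. If the proxy is working, the command returns the JSON price change statistics served by the /crypto endpoint.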
Once you can access your Node.js application in the manner described above,
you've successfully set up NGINX as a reverse proxy for your application.
Step 5 — Load balancing multiple Node.js servers
Load balancing refers to the process of distributing incoming traffic across
multiple servers so that the workload is spread evenly between them. The main
benefit of load balancing is that it improves the responsiveness and
availability of the application.
In this step, you'll use the pm2 process manager
to create many independent instances of your Node.js application and configure
NGINX to distribute incoming requests evenly between them.
Return to your Node.js project directory in the terminal, and run the following
command to install the pm2 package:
npm install pm2@latest
Afterward, open the server.js file in your text editor:
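nano server.js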
Update the port assignment so that it is derived from the NODE_APP_INSTANCE
environment variable, which pm2 sets to a number that differentiates the
running processes. Since no two instances of an app spawned by pm2 can have the
same number, each one will listen on a different port on the server. A sketch
of the updated fastify.listen() call is shown below (the rest of the file from
Step 1 remains unchanged):
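// Derive the port from pm2's NODE_APP_INSTANCE variable so that each
// instance listens on its own port: 3000, 3001, 3002, and so on
const port = 3000 + Number(process.env.NODE_APP_INSTANCE || 0);

fastify.listen(port, (err) => {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});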
Save and close the file, then kill the previous server instance with Ctrl-C
before running the command below to start the application in cluster mode using
the total number of available CPU cores on your server.
npx pm2 start server.js -i max --name "cryptoStats"
pm2 will print a table listing each application instance along with its status.
Afterward, check the logs to see the ports where the Node.js application
instances are running:
npx pm2 logs
A subset of the output for the above command is shown below:
Output
. . .
0|cryptoSt | {"level":30,"time":1638172796810,"pid":29333,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3000>"}
0|cryptoSt | {"level":30,"time":1638172796859,"pid":29340,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3001>"}
0|cryptoSt | {"level":30,"time":1638172796917,"pid":29349,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3002>"}
0|cryptoSt | {"level":30,"time":1638172797000,"pid":29362,"hostname":"Kreig","msg":"Server listening at <http://127.0.0.1:3003>"}
In this case, the application has four instances on ports 3000, 3001,
3002, and 3003. Armed with this information, we can now configure NGINX as a
load balancer. Return to the /etc/nginx/sites-available directory:
cd /etc/nginx/sites-available
Open the your_domain file in your text editor:
nano your_domain
Update the file as shown below:
upstream cryptoStats {
    server localhost:3000;
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}

server {
    server_name <your_domain_or_server_ip>;

    location / {
        proxy_pass http://cryptoStats;
    }
}
In the example above, there are four instances of the Node.js application
running on ports 3000 to 3003. All requests are proxied to the cryptoStats
server group, and NGINX applies load balancing to distribute the requests. Note
that when the load balancing method is not specified, it defaults to
round-robin.
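For example, to use the least-connections method instead, which forwards each new request to the upstream server with the fewest active connections, you would add the least_conn directive to the upstream block:
upstream cryptoStats {
    least_conn;
    server localhost:3000;
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}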
Be sure to reload the NGINX configuration once again to apply your changes:
sudo nginx -s reload
At this point, incoming requests to the domain or IP address will now be evenly
distributed across all specified servers in a round-robin fashion.
Conclusion and next steps
In this tutorial, you learned how to set up NGINX as a reverse proxy for a
Node.js application. You also utilized its load balancing feature to distribute
traffic to multiple servers, another recommended practice for production-ready
applications. Of course, NGINX can do a lot more than what we covered in this
article, so be sure to read through its documentation
to learn more about how you can use its extensive features to achieve various
results.
Thanks for reading, and happy coding!
Article by
Ayooluwa Isaiah
Ayo is a technical content manager at Better Stack. His passion is simplifying and communicating complex technical ideas effectively. His work was featured on several esteemed publications including LWN.net, Digital Ocean, and CSS-Tricks. When he's not writing or coding, he loves to travel, bike, and play tennis.