NGINX — Upstream Module (Part 01)

NGINX is a web server widely used in the IT industry that can also act as a reverse proxy, mail proxy, HTTP cache, and load balancer. Upstream is the NGINX module used to define the servers to be load balanced. This article explains what an “upstream” is and how to use it.


If you haven’t looked at basic NGINX configuration yet, I recommend my article: Basics of configuring NGINX. Now let’s dig into upstream. In this article, I will discuss basic upstream configurations. Part 02 will cover more advanced topics such as zone, state, and hash, which are also used to configure upstream servers.

What is Upstream?

When configuring NGINX, you need to define servers for load balancing or proxying. The servers defined in an upstream are referred to by proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, and grpc_pass directives in a server.

Look at the code snippet below.

upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;

    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}

server {
    location / {
        proxy_pass http://backend;
    }
}

Here, backend is the name of the upstream. The upstream name is followed by curly braces {..}, and the configuration goes inside them.

upstream <upstream_name> {

}

Inside the server block, traffic is routed to the upstream using the proxy_pass directive. This is the basic scenario inside an NGINX configuration file.

As shown in the configuration file, you can add parameters for each server, such as weight, backup, and max_conns, to manage the traffic load. Let’s dig into these parameters.

1. server

The server directive is used to define servers in an upstream. A server is defined with the keyword server, a space, and the server address. This is NOT the same as the server block defined after the upstream block.

server <server_name>;

The server definition must end with a semicolon (;).

The server should be defined inside the curly braces of upstream.

upstream <upstream_name> {
    server <server_name>;
}

The address can be specified as a domain name or IP address, with an optional port, or as a UNIX-domain socket path specified after the “unix:” prefix. If a port is not specified, the port 80 is used. A domain name that resolves to several IP addresses defines multiple servers at once.
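As a sketch of these address forms in one upstream (the host names, IP, and socket path are placeholders, not real servers):

```nginx
upstream backend {
    server example.com;              # domain name; port defaults to 80
    server 192.168.1.10:8081;        # IP address with an explicit port
    server unix:/tmp/backend.sock;   # UNIX-domain socket
}
```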

2. weight

By default, requests are distributed between the servers using a weighted round-robin balancing method. The weight parameter sets the load weight of a server; by default, it is 1. It is used as weight=number. Let me explain with an example.

upstream backend {
    server backend1.example.com weight=3;
    server backend2.example.com;
    server backend3.example.com;
}

According to this example, out of every 5 requests that come to the upstream, 3 are sent to the first server, backend1.example.com, and one each to the other two servers. (NGINX interleaves the requests across the servers rather than sending them in strict sequence.) This is similar to traffic shifting.

If an error occurs during communication with a server, the request will be passed to the next server, and so on until all of the functioning servers have been tried. If a successful response could not be obtained from any of the servers, the client will receive the result of the communication with the last server.

3. max_conns

This parameter limits the maximum number of simultaneous active connections to the proxied server. By default, the value is zero, which means unlimited connections. If the server group does not reside in shared memory, the limitation works per worker process.

If idle keepalive connections, multiple workers, and the shared memory are enabled, the total number of active and idle connections to the proxied server may exceed the max_conns value.

max_conns is defined as max_conns=<no_of_connection>.
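A minimal sketch of limiting connections to one server (the limit of 100 is an arbitrary illustrative value):

```nginx
upstream backend {
    server backend1.example.com max_conns=100;  # at most 100 active connections
    server backend2.example.com;                # no limit
}
```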

4. max_fails

As mentioned above, if a server is unavailable when a request comes in, the request is passed to the next server. Using the max_fails parameter, you can set the number of unsuccessful attempts to communicate with the server that must happen within the duration set by fail_timeout for the server to be considered unavailable; the server is then considered unavailable for the duration also set by the fail_timeout parameter.

The default value of max_fails is 1; a value of zero disables the accounting of attempts. An unsuccessful attempt is one where the proxy could not connect to the server or get a valid response, for example, due to a connection error, a timeout, or server unavailability.

max_fails is defined as max_fails=<no_of_fail_attempts>.
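For example, a sketch where three failed attempts mark a server unavailable (the value 3 is illustrative):

```nginx
upstream backend {
    server backend1.example.com max_fails=3;  # 3 failures within fail_timeout mark it unavailable
    server backend2.example.com;
}
```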

5. fail_timeout

This parameter sets the time during which the specified number of unsuccessful attempts to communicate with the server must happen for the server to be considered unavailable, and the period of time the server will then be considered unavailable.

By default, the parameter is set to 10 seconds.

fail_timeout is defined as fail_timeout=<time>.
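Since max_fails and fail_timeout work together, here is a combined sketch (the values 3 and 30s are illustrative):

```nginx
upstream backend {
    # mark backend1 unavailable for 30s after 3 failures within a 30s window
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com;
}
```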

6. backup

When you need one or more servers as backups, you can use the backup parameter; requests are passed to such a server only when the primary servers are unavailable.

The backup parameter is used as follows:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com backup;
}

7. down

This parameter marks a server as permanently unavailable; NGINX will not route traffic to it until the configuration is changed.
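A sketch of taking one server out of rotation, for example during maintenance:

```nginx
upstream backend {
    server backend1.example.com;
    server backend2.example.com down;  # excluded from load balancing
}
```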

These are the most common and basic configurations used in NGINX. For more information, you can refer to the NGINX documentation: Module ngx_http_upstream_module.

In a nutshell, we learned the different parameters used to configure an NGINX upstream and how to use them. We will catch up in another article on more advanced NGINX upstream topics. Keep in touch, and good luck!



NGINX — Upstream Module (Part 01) was originally published in FAUN on Medium, where people are continuing the conversation by highlighting and responding to this story.