How to Limit Network Bandwidth in NGINX Web Server

Previously, in our NGINX traffic management and security controls series, we discussed how to limit the number of connections the same client can make to your web resources, using client identification parameters such as the IP address. We also covered how to limit the rate of requests a client can make to your web resources.

To ensure that a single client cannot exhaust your application's bandwidth, you need to control the upload and download speeds per client. This is a common NGINX security control against DoS (Denial of Service) attacks from malicious users who are simply trying to degrade site performance.

In this third part of the series, we will explain how to limit network bandwidth in the NGINX web server.

Limiting Bandwidth in NGINX

To limit bandwidth in NGINX, use the limit_rate directive, which limits the rate at which a response is transmitted to a client. It is valid in the http, server, and location contexts, as well as in an if statement within a location block. The rate is specified in bytes per second by default; you can also use the k suffix for kilobytes or the m suffix for megabytes.

limit_rate 20k;
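The directive follows NGINX's usual inheritance rules: a value set in the http context applies to every server and location below it, and a more specific setting overrides it. Here is a minimal sketch to illustrate this (the server name and paths are just placeholders):

http {
    limit_rate 1m;    # default cap of 1 megabyte per second for all responses

    server {
        listen 80;
        server_name example.com;

        location /downloads {
            limit_rate 100k;    # stricter 100 KB/s cap for this location only
        }
    }
}

Setting limit_rate 0; disables the limit for a given context.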

Another related directive is limit_rate_after, which sets the initial amount of data to transfer before the connection is rate-limited. It can be set in the same contexts: http, server, location, and an if statement within a location block.

limit_rate_after 500k;

Here is an example configuration that limits a client downloading content over a single connection to a maximum speed of 20 kilobytes per second.

upstream api_service {
    server 10.1.1.10:9051;
    server 10.1.1.77:9052;
}

server {
    listen 80;
    server_name testapp.tecmint.com;
    root /var/www/html/testapp.tecmint.com/build;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
    location /api {
        proxy_pass http://api_service;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /documents {
        limit_rate 20k;
        limit_rate_after 500k;
    }
}

Once you have added the required settings explained above, save the changes and close the file. Then check that the NGINX configuration syntax is correct, like so:

$ sudo nginx -t
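If the syntax is valid, you should see output similar to the following (the configuration file path may differ depending on your distribution):

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful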

If all is OK, reload the NGINX service to apply the latest changes:

$ sudo systemctl reload nginx
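To verify the limit, download a file from the /documents location and check the transfer speed reported by curl. The following is a quick sketch, assuming a file such as sample.pdf exists under /documents on your server (adjust the URL to match a real file):

$ curl -s -o /dev/null -w 'Average speed: %{speed_download} bytes/sec\n' http://testapp.tecmint.com/documents/sample.pdf

Because of limit_rate_after 500k, the first 500 KB are sent at full speed, so the reported average will only approach 20 KB/s for files considerably larger than that.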

Limiting Bandwidth and Number of Connections in NGINX

With the above configuration, a client can still open several connections to increase its total bandwidth. Therefore, you can additionally limit connections per client using a parameter such as the IP address, as we looked at before.

For example, you can limit clients to a single connection per IP address for the /documents location, and to five connections for the /api location.

upstream api_service {
    server 127.0.0.1:9051;
    server 10.1.1.77:9052;
}

limit_conn_zone $binary_remote_addr zone=limitconnbyaddr:20m;
limit_conn_status 429;

server {
    listen 80;
    server_name testapp.tecmint.com;
    root /var/www/html/testapp.tecmint.com/build;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
    location /api {
        limit_conn   limitconnbyaddr  5;

        proxy_pass http://api_service;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /documents {
        limit_rate 50k;
        limit_rate_after 500k;
        limit_conn limitconnbyaddr 1;
    }
}
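A simple way to test the connection limit is to start several simultaneous downloads from the same IP address; once the limit is exceeded, the extra requests should be rejected with the 429 (Too Many Requests) status configured via limit_conn_status. This is a rough sketch, again assuming a file such as sample.pdf exists under /documents:

$ for i in 1 2 3; do curl -s -o /dev/null -w '%{http_code}\n' http://testapp.tecmint.com/documents/sample.pdf & done; wait

With limit_conn limitconnbyaddr 1; in place, only one of the overlapping requests should return 200, while the others return 429.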

Limiting Bandwidth Dynamically in NGINX

The limit_rate directive also accepts variables as its parameter value, which lets you limit bandwidth dynamically. This is particularly useful in situations where the rate should depend on a certain condition.

In this example, we are using a map block, which creates a new variable ($limit_rate) whose value depends on the value of the source variable ($slow) given as the first parameter. The $slow variable itself must be set elsewhere in the configuration; see the sketch after the example below.

upstream api_service {
    server 10.1.1.10:9051;
    server 10.1.1.77:9052;
}

map $slow $limit_rate {
    1     20k;
    2     30k;
}

server {
    listen 80;
    server_name testapp.tecmint.com;
    root /var/www/html/testapp.tecmint.com/build;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html =404;
    }
    location /api {
        proxy_pass http://api_service;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /documents {
        limit_rate $limit_rate;
        limit_rate_after 500k;
    }
}
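Note that $slow is not a built-in NGINX variable; it has to be set elsewhere in the http context for the map above to have any effect. Below is a minimal sketch of one way to define it using a geo block, assuming you simply want to throttle clients coming from particular networks (the addresses are placeholders); any other mechanism that sets $slow, such as another map keyed on a cookie, works the same way.

# Hypothetical example: classify clients by source network
geo $slow {
    default         0;
    192.168.10.0/24 1;   # mapped to limit_rate 20k above
    10.20.0.0/16    2;   # mapped to limit_rate 30k above
}

Clients in the default group do not match any entry in the map; if you also want a fallback rate for them, add a default line (for example, default 100k;) to the map block.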

Here is another sample configuration that illustrates dynamic bandwidth limiting in NGINX, this time based on the TLS version of the connection. The limit_rate_after 512; directive applies the rate limit only after the first 512 bytes of a response, roughly the response headers, have been sent.

upstream api_service {
    server 10.1.1.10:9051;
    server 10.1.1.77:9052;
}

map $ssl_protocol $response_rate {
    "TLSv1.1" 50k;
    "TLSv1.2" 100k;
    "TLSv1.3" 500k;
}

server {
    listen 443 ssl;
    ssl_protocols       TLSv1.1 TLSv1.2 TLSv1.3;
    ssl_certificate     /etc/ssl/testapp.crt;
    ssl_certificate_key   /etc/ssl/testapp.key;

    location / {
        limit_rate       $response_rate; # Limit bandwidth based on TLS version
        limit_rate_after 512;
        proxy_pass       http://api_service;
    }
}
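To confirm that the applied rate really depends on the negotiated TLS version, you can force curl to use a specific protocol version and compare the reported speeds. A quick sketch, assuming a reasonably large resource is reachable through the proxied location (adjust the URL, and drop -k if your certificate is trusted):

$ curl --tlsv1.2 --tls-max 1.2 -k -s -o /dev/null -w 'TLSv1.2: %{speed_download} bytes/sec\n' https://testapp.tecmint.com/largefile
$ curl --tlsv1.3 -k -s -o /dev/null -w 'TLSv1.3: %{speed_download} bytes/sec\n' https://testapp.tecmint.com/largefile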

That’s all we had for you in this part of the series. We shall continue to cover more topics concerning NGINX traffic management and security controls. But as usual, you can ask questions or share your thoughts on this guide via the feedback form below.

Reference: security controls guide on the NGINX website.
