Why I Find Nginx Practically Better Than Apache

According to the latest web server survey by Netcraft, carried out in November 2017, Apache and Nginx are the most widely used open-source web servers on the Internet.

Apache is a free, open-source HTTP server for Unix-like operating systems and Windows. It was designed to be a secure, efficient and extensible server that provides HTTP services in sync with the prevailing HTTP standards.

Apache has been the most popular web server on the Internet since 1996. It is the de facto standard for web servers in the Linux and open-source ecosystem, and new Linux users normally find it easier to set up and use.

Nginx (pronounced ‘Engine-x’) is a free, open-source, high-performance HTTP server, reverse proxy, and an IMAP/POP3 proxy server. Just like Apache, it also runs on Unix-like operating systems and Windows.

Well known for its high performance, stability, simple configuration, and low resource consumption, Nginx has become increasingly popular over the years, and its usage on the Internet keeps growing. It is now the web server of choice among experienced system administrators and webmasters of top sites.

Some of the busy sites powered by:

  • Apache are: PayPal, BBC.com, BBC.co.uk, SSLLABS.com, Apple.com plus lots more.
  • Nginx are: Netflix, Udemy.com, Hulu, Pinterest, CloudFlare, WordPress.com, GitHub, SoundCloud and many others.

Numerous “Apache vs. Nginx” comparisons have already been published on the web, many of which explain in detail their top features and behavior under various scenarios, including performance in lab benchmarks. That will therefore not be addressed here.

Instead, I will simply share my experience and thoughts about the whole debate, having tried out both Apache and Nginx in production environments, based on requirements for hosting modern web applications.

Reasons Why I Find Nginx Practically Better Than Apache

Following are reasons why I prefer Nginx web server over Apache for modern web content delivery:

1. Nginx is Lightweight

Nginx is one of the most lightweight web servers out there. It has a small footprint on a system compared to Apache, which implements a vast scope of functionality necessary to run an application on its own.

Because Nginx provides only a handful of core features, it relies on dedicated third‑party upstream servers such as an Apache backend, FastCGI, Memcached, SCGI, and uWSGI servers, or on language-specific application servers such as Node.js, Tomcat, and so on.

Its memory usage is therefore far better suited to limited-resource deployments than Apache’s.

2. Nginx is Designed for High Concurrency

As opposed to Apache’s process- or thread-oriented architecture (a process‑per‑connection or thread‑per‑connection model), Nginx uses a scalable, event-driven (asynchronous) architecture. It employs a predictable process model that is tailored to the available hardware resources.

It has a master process (which performs the privileged operations such as reading configuration and binding to ports) and which creates several worker and helper processes.

The worker processes can each handle thousands of HTTP connections simultaneously, read and write content to disk, and communicate with upstream servers. The helper processes (cache manager and cache loader) can manage on‑disk content caching operations.

This makes its operation scalable, resulting in high performance. This design approach also makes it fast and well suited to modern applications. In addition, third‑party modules can be used to extend Nginx’s native functionality.
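The process model described above is controlled by a few top-level directives. The following is a minimal sketch (the values shown are common defaults, not recommendations for any particular workload):

worker_processes  auto;	## spawn one worker per CPU core

events {
	worker_connections  1024;	## max simultaneous connections per worker
}

With this configuration, the theoretical connection capacity is roughly the number of workers multiplied by worker_connections, which is why a few worker processes can serve thousands of clients at once.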

3. Nginx is Easy to Configure

Nginx has a simple configuration file structure, making it super easy to configure. It consists of modules which are controlled by directives specified in the configuration file. In addition, directives are divided into block directives and simple directives.

A block directive is delimited by braces ({ and }). If a block directive can have other directives inside its braces, it is called a context; examples include events, http, server, and location.

http {
	server {
		
	}
}

A simple directive consists of the name and parameters separated by spaces and ends with a semicolon (;).

http {
	server {
		location / {
			## this is a simple directive called root
			root  /var/www/html/example.com/;
		}
	}
}

You can also include custom configuration files using the include directive, for example:

http {
	server {

	}
	## examples of including additional config files
	include  /path/to/config/file/*.conf;
	include  /path/to/config/file/ssl.conf;
}

A practical example for me was how easily I managed to configure Nginx to run multiple websites with different PHP versions, which was a bit of a challenge with Apache.
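To illustrate, here is a minimal sketch of that setup: two server blocks, each passing PHP requests to a different PHP-FPM version over its own Unix socket. The server names, document roots, and socket paths are hypothetical placeholders; actual socket paths depend on your distribution and PHP-FPM configuration.

http {
	## site running PHP 7.4 (hypothetical paths)
	server {
		listen       80;
		server_name  site1.example.com;
		root         /var/www/html/site1;

		location ~ \.php$ {
			include        fastcgi_params;
			fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
			fastcgi_pass   unix:/run/php/php7.4-fpm.sock;
		}
	}

	## site running PHP 8.1 (hypothetical paths)
	server {
		listen       80;
		server_name  site2.example.com;
		root         /var/www/html/site2;

		location ~ \.php$ {
			include        fastcgi_params;
			fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
			fastcgi_pass   unix:/run/php/php8.1-fpm.sock;
		}
	}
}

Because each server block picks its own fastcgi_pass target, the two sites run side by side on the same Nginx instance with completely independent PHP versions.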

4. Nginx is an Excellent Frontend Proxy

One of the common uses of Nginx is setting it up as a proxy server: it receives HTTP requests from clients and passes them, over various protocols, to the proxied or upstream servers mentioned above. You can also modify the client request headers sent to the proxied server, and configure buffering of the responses coming from the proxied servers.

It then receives responses from the proxied servers and passes them on to clients. Nginx is much easier to configure as a proxy server than Apache, since the required modules are in most cases enabled by default.
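A basic reverse-proxy setup can be sketched as follows; the backend address and server name are hypothetical, and you would adjust them to match your application server:

server {
	listen       80;
	server_name  app.example.com;	## hypothetical domain

	location / {
		proxy_pass        http://127.0.0.1:3000;	## upstream application server
		proxy_set_header  Host             $host;
		proxy_set_header  X-Real-IP        $remote_addr;
		proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
		proxy_buffering   on;	## buffer upstream responses
	}
}

The proxy_set_header directives forward the original client details to the backend, which would otherwise only see connections coming from Nginx itself.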

5. Nginx is Remarkable for Serving Static Content

Static content or files are typically files stored on disk on the server computer, for example CSS files, JavaScript files, or images. Let’s consider a scenario where you are using Nginx as a frontend for Node.js (the application server).

Although a Node.js server (or, specifically, Node frameworks) has built-in features for static file handling, serving non-dynamic content does not require any intensive processing, so it is practically beneficial to configure the web server in front to serve static content directly to clients.

Nginx does a much better job of serving static files from a specific directory, and it can prevent requests for static assets from choking upstream server processes. This significantly improves the overall performance of the backend servers.
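The scenario above can be sketched with two location blocks: one that serves static assets straight from disk, and a catch-all that proxies everything else to the Node.js backend. The paths, cache lifetime, and backend port are illustrative assumptions:

server {
	listen       80;
	server_name  app.example.com;	## hypothetical domain

	## serve static assets directly from disk
	location /static/ {
		root        /var/www/html/app;	## hypothetical document root
		expires     30d;	## let clients cache static files
		access_log  off;
	}

	## everything else goes to the Node.js backend
	location / {
		proxy_pass  http://127.0.0.1:3000;
	}
}

With this split, a request for /static/logo.png never touches the Node.js process at all; only dynamic requests reach the application server.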

6. Nginx is an Efficient Load Balancer

Realizing high performance and uptime for modern web applications may call for running multiple application instances on a single HTTP server or across distributed servers. This may in turn necessitate setting up load balancing to distribute load between your HTTP servers.

Today, load balancing has become a widely used approach for optimizing operating system resource utilization, maximizing flexibility, cutting down latency, increasing throughput, achieving redundancy, and establishing fault-tolerant configurations – across multiple application instances.

Nginx uses the following load balancing methods:

  • round-robin (default method) – requests to the upstream servers are distributed in a round-robin fashion (in order of the list of servers in the upstream pool).
  • least-connected – here the next request is proxied to the server with the least number of active connections.
  • ip-hash – here a hash-function is used to determine what server should be selected for the next request (based on the client’s IP address).
  • Generic hash – under this method, the system administrator builds the hash key from given text, variables of the request or runtime, or a combination of these. For example, the key may be a source IP and port, or a URI. Nginx then distributes the load amongst the upstream servers by generating a hash for the current request and mapping it to one of the upstream servers.
  • Least time (Nginx Plus) – assigns the next request to the upstream server with the least number of current connections but favors the servers with the lowest average response times.
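Putting one of these methods to work is a matter of declaring an upstream group and proxying to it. Here is a minimal sketch using the least-connected method; the server addresses and weights are placeholders:

http {
	upstream backend {
		least_conn;	## pick the server with the fewest active connections
		server 10.0.0.1:8080;
		server 10.0.0.2:8080;
		server 10.0.0.3:8080 weight=2;	## receives proportionally more requests
	}

	server {
		listen  80;

		location / {
			proxy_pass  http://backend;
		}
	}
}

Omitting the least_conn line falls back to the default round-robin method, and replacing it with ip_hash gives you the IP-hash method described above.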

7. Nginx is Highly Scalable

Furthermore, Nginx is highly scalable, and modern web applications, especially enterprise applications, demand technology that provides high performance and scalability.

One company benefiting from Nginx’s amazing scalability is CloudFlare: it has managed to scale its web applications to handle more than 15 billion monthly page views with a relatively modest infrastructure, according to Matthew Prince, co-founder and CEO of CloudFlare.

For a more comprehensive explanation, check out this article on the Nginx blog: NGINX vs. Apache: Our View of a Decade-Old Question.

Conclusion

Apache and Nginx cannot simply replace each other; each has its strong and weak points. However, Nginx offers a powerful, flexible, scalable, and secure technology for reliably and efficiently powering modern websites and web applications. What is your take? Let us know via the feedback form below.

Aaron Kili
Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.
