LFCA: Learn Cloud Availability, Performance, and Scalability – Part 14

In the previous topic of our LFCA series, we introduced cloud computing, the different types of clouds and cloud services, and walked you through some of the benefits associated with cloud computing.

If your business is still running on a traditional IT computing environment, it's time to level up and shift to the cloud. It was estimated that by the end of 2021, over 90% of enterprise workloads would be handled in the cloud.

Among the major benefits of embracing cloud computing are improved performance, high availability, and scalability. In fact, we touched on these in the previous topic as some of the major benefits of using cloud technology.

In this topic, we focus on Cloud availability, performance, and scalability and seek to understand how these three coalesce to meet customer demands and ensure users access their data as they need it from any part of the world.

1. Cloud Availability

An organization's IT applications and services are critical, and any service disruption can have a profound effect on revenue. Customers expect services to be accessible round the clock from any location, and that is what cloud technology seeks to provide.

High availability is the ultimate goal of cloud computing. It seeks to provide the maximum possible uptime of a company's services, even in the face of disruption occasioned by unexpected server downtime or network degradation.

High availability is made possible by redundant and failover systems. This typically happens in a cluster environment where multiple servers or systems perform the same tasks, thus providing redundancy.

When a server goes down, the rest continue running and take over the services handled by the affected server. A perfect example of redundancy is data replication across multiple database servers in a cluster. If the primary database server in the cluster experiences an issue, another database server can still serve the data required by users despite the failure.

Redundancy eliminates a single point of failure and helps achieve availability targets such as 99.999% ("five nines") uptime for services and applications. Clustering also enables load balancing among servers, ensuring the workload is equitably distributed and no single server is overwhelmed.
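The failover idea described above can be sketched in a few lines of Python. The hostnames and the `connect` function below are illustrative assumptions, not a real database driver; the point is simply that a client tries replicas in turn until one answers:

```python
# Hypothetical replica endpoints; in a real cluster these would be
# the addresses of your replicated database servers.
REPLICAS = ["db-primary.example.com", "db-replica-1.example.com",
            "db-replica-2.example.com"]

def query_with_failover(query, connect, replicas=REPLICAS):
    """Try each replica in turn and return the first successful result.

    `connect` is a caller-supplied function that runs `query` against
    one endpoint and raises ConnectionError when that node is down.
    """
    last_error = None
    for host in replicas:
        try:
            return connect(host, query)
        except ConnectionError as err:
            last_error = err  # this node is down, try the next one
    raise RuntimeError("all replicas unavailable") from last_error
```

Real clustering software (Patroni, Galera, managed database services) automates this kind of failover for you, but the underlying principle is the same.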

2. Cloud Scalability

Another hallmark of cloud computing is scalability. Scalability is the ability to adjust cloud resources to meet changing demands. Simply put, you can seamlessly increase or decrease resources as and when needed to meet demand, without compromising service quality or incurring downtime.

Suppose you are running a blog that is beginning to attract more traffic. You can easily add compute resources such as storage, RAM, and CPU to your cloud instance to handle the additional workload. Conversely, you can scale the resources down when demand drops. This ensures that you only pay for what you need, underscoring the economies of scale that the cloud provides.
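The "pay only for what you need" idea boils down to a simple capacity calculation. The sketch below is illustrative; the capacity figures are assumptions you would replace with measurements of your own workload:

```python
import math

def instances_needed(requests_per_sec, capacity_per_instance, minimum=1):
    """Return how many instances the current load requires.

    Keeps at least `minimum` instances running so the service never
    disappears entirely when traffic drops to zero.
    """
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(minimum, needed)

# If each instance handles ~200 requests/second, a spike to 950 req/s
# calls for 5 instances, while quiet traffic drops you back to 1.
peak = instances_needed(950, 200)   # 5
quiet = instances_needed(10, 200)   # 1
```

Cloud autoscaling groups apply essentially this logic automatically, adding and removing instances as metrics cross your thresholds.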

Scalability is twofold: Vertical scaling and horizontal scaling.

Vertical Scaling

Also referred to as 'scaling up', vertical scaling involves adding more resources such as RAM, storage, and CPU to your existing cloud compute instance to accommodate additional workload. This is the equivalent of powering down your physical PC or server to upgrade the RAM or add an extra hard drive or SSD.

Horizontal Scaling

Horizontal scaling, also known as 'scaling out', involves adding more servers to your pool of pre-existing servers so that the workload is distributed across multiple servers. Unlike vertical scaling, you are not limited to the capacity of a single server, which provides more scalability and less downtime.

Scaling out is generally more desirable than scaling up

Here's why. With horizontal scaling, you add more resources such as servers or storage to your existing pool. This lets you combine the power and performance of multiple compute instances, yielding better results than simply piling resources onto a single server. Additional servers also mean you are far less likely to run into a resource deficit.

Additionally, horizontal scaling provides redundancy and fault tolerance: even if one server is impacted, the rest carry on providing access to the required services. Vertical scaling, by contrast, is associated with a single point of failure. If the compute instance crashes, everything goes down with it.

Horizontal scaling also offers greater flexibility than vertical scaling, where applications are typically built as one large unit. Such a monolith is more challenging to manage, upgrade, or partially change without rebooting the entire system. Scaling out allows applications to be decoupled and upgraded seamlessly with minimal downtime.

3. Cloud Performance

Ensuring application performance meets customer demands can be quite an uphill task, especially if you have multiple components sitting in different environments that need to constantly communicate with each other.

Issues like latency are likely to manifest and impact performance. It is also hard to predict performance where resources are shared by various tenants. Regardless, you can still achieve high performance by implementing the following measures.

1. Cloud Instance

Use the right cloud instances with enough resources to handle the workloads of your applications and services. For resource-intensive applications, provision enough RAM, CPU, and storage to your cloud instance to avert a possible resource deficit.

2. Load Balancer

Implement a load balancer to equitably distribute network traffic among your resources. This ensures that none of your applications is overwhelmed by demand. Suppose your web server is getting so much traffic that it is causing delays and impacting performance.

One practical solution would be to implement horizontal scaling with, say, four web servers sitting behind a load balancer. The load balancer distributes incoming traffic across the four web servers and ensures none is overwhelmed by the workload.
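The simplest distribution strategy a load balancer can use is round-robin: each incoming request goes to the next server in rotation. A minimal sketch, with four hypothetical backend names standing in for real web servers:

```python
import itertools

# Four hypothetical web servers sitting behind the balancer.
SERVERS = ["web1", "web2", "web3", "web4"]

def round_robin(servers):
    """Yield servers in endless rotation so requests spread evenly."""
    return itertools.cycle(servers)

pool = round_robin(SERVERS)
# The first eight requests land on each server exactly twice.
assignments = [next(pool) for _ in range(8)]
```

Production load balancers (NGINX, HAProxy, cloud-managed balancers) offer round-robin plus smarter algorithms such as least-connections and health-check-aware routing.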

3. Caching

Use caching solutions to speed up access to files by applications. Caches store frequently read data and thereby eliminate constant data lookups which can impact performance. They reduce latency and workload as the data is already cached, thereby improving response times.

Caching can be implemented at various levels, such as the application and database levels. Popular caching tools include Redis, Memcached, and Varnish Cache.
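The core pattern behind these tools can be shown with a tiny in-process cache. This is a simplified sketch for illustration, not a substitute for Redis or Memcached: entries expire after a time-to-live (TTL), and a cache hit skips the expensive lookup entirely.

```python
import time

class TTLCache:
    """Minimal in-process cache with per-entry expiry."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        """Return the cached value, or call loader() and cache the result."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry and entry[1] > now:
            return entry[0]          # cache hit: no expensive lookup
        value = loader()             # cache miss: fetch and store
        self._store[key] = (value, now + self.ttl)
        return value
```

Dedicated caches add what this sketch lacks: shared access across servers, eviction policies, and persistence options.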

4. Performance Monitoring

Lastly, be sure to monitor the performance of your servers and applications. Cloud providers offer native tools that help you keep an eye on your cloud servers from a web browser.

Additionally, you can take your own initiative and install free and open-source monitoring tools that can help you keep tabs on your applications and servers. Examples of such applications include Grafana, Netdata, and Prometheus, to mention a few.

Conclusion

We cannot emphasize enough how crucial availability, scalability, and performance are in the cloud. These three factors determine the quality of service you will get from your cloud vendor and can ultimately make the difference between the success and failure of your business.


James Kiarie
This is James, a certified Linux administrator and a tech enthusiast who loves keeping in touch with emerging trends in the tech world. When I'm not running commands on the terminal, I'm listening to some cool music, taking a casual stroll, or watching a nice movie.

