Scalability, high availability, containers, fault tolerance and eventual consistency. Tech terms can be confusing to those new to server administration or development. In the coming weeks, we’ll be breaking down common — yet potentially confusing — terms you will undoubtedly come across in your learning journey.
Scalability is often referenced alongside its partner in quality systems, high availability, but these two concepts can become entangled for those just starting out. While high availability is a program, application, or server system’s ability to remain continuously operational despite any failures, scalability is a system or application’s ability to handle growth.
When an application or network scales, it allows for more throughput, or work, to be managed at a time. Whether this means more concurrent web server connections or more database queries, scaling is about an application maintaining its functionality under growing loads. You can also scale “down” during slow or low-load periods to save money and increase efficiency. Ideally, scaling should be configured to happen automatically, without manual intervention from an administrator.
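The automatic scaling described above usually boils down to a simple decision loop: watch a load metric, add capacity when it runs high, and remove capacity when it runs low. Here is a minimal sketch of that logic in Python; the function name, thresholds, and node limits are all illustrative assumptions, not any particular platform's API.

```python
def desired_node_count(current_nodes, cpu_utilization,
                       high=0.75, low=0.25,
                       min_nodes=1, max_nodes=10):
    """Decide how many nodes to run, given average CPU utilization (0.0-1.0).

    Scale out by one node when load is high, scale in by one when it is
    low, and always stay within fixed bounds. The thresholds and step
    size here are placeholder values for illustration.
    """
    if cpu_utilization > high:
        return min(current_nodes + 1, max_nodes)  # busy: add a node
    if cpu_utilization < low:
        return max(current_nodes - 1, min_nodes)  # idle: remove a node
    return current_nodes  # comfortable: leave the pool alone
```

A real autoscaler would run this check on a timer against live metrics, but the core trade-off is visible even here: the `low`/`high` band keeps the system from thrashing between sizes on every small load fluctuation.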
Scaling is more than adding nodes, RAM, or CPU cores, however. An application or server that scales well does so efficiently and practically, without undue cost and without affecting its availability. Scaling needs to fit both the budget of the company or creator and the needs of the system.
But how does one get a system to scale? For this, we have two options, known as horizontal and vertical scaling.
Horizontal scaling, or being able to scale “in and out,” refers to a process where, when more power is needed to manage throughput, more nodes are added to the system. Similarly, if less power is required, these nodes can be removed. This might seem like an expensive process, but given the advent of cloud computing and virtualization, horizontal scaling is often a cost-effective solution.
Horizontal scaling goes hand-in-hand with the idea of cluster computing. Cluster computing involves multiple nodes working together in such a way that they can be seen almost as a single system. These nodes are all configured to work the same way on the same task or tasks.
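One common way to make identically configured nodes behave like a single system is to put a load balancer in front of them that spreads requests across the pool. The sketch below shows the simplest such policy, round-robin, in Python; the class, node names, and request labels are hypothetical and stand in for real servers and traffic.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Spread incoming requests evenly across a pool of identical nodes.

    Adding a node to the pool raises total throughput (scaling out);
    removing one shrinks it (scaling in). Node names are placeholders.
    """

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._rotation = cycle(self.nodes)  # endlessly loop over the pool

    def route(self, request):
        """Assign the request to the next node in the rotation."""
        node = next(self._rotation)
        return node, request

# Three identical nodes appear to callers as one service.
balancer = RoundRobinBalancer(["node-a", "node-b", "node-c"])
```

Because every node is configured the same way, it does not matter which one handles a given request, and growing the cluster is as simple as adding another entry to the pool.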
Vertical scaling, or scaling “up and down,” is when a system scales not by adding more nodes, but by giving the existing system more power, often through the addition of more RAM or CPU cores. Vertical scaling uses fewer nodes and allows for an overall less complex system. It is also ideal for applications or programs that were not designed to “scale out” across multiple machines.
If you are hosting on virtual servers, vertical scaling can prove to be an expensive option, as prices increase along with server power; if you are working with your own bare-metal servers, however, it can be a cost-effective solution.
As those new to server management learn to administer bigger and more complex systems, knowing how to let servers expand with an application or network is paramount. Don’t leave your work to stagnate under heavy loads; instead, let it grow.