Load Balancing – Basic Concepts of Network Load Division

Load balancing is a common term in computing for dividing the amount of work a computer has to do between two or more computers, so that more work gets done in the same amount of time and, in general, all users are served faster.

Load balancing is usually implemented with hardware, software, or a combination of both. It is also the main reason for computer server clustering.

On the Internet, businesses whose websites receive heavy traffic usually rely on load balancing, and there are several approaches to balancing Web traffic. In one, each Web page request is sent to a “manager” server, which then determines which of several identical or very similar Web servers should handle the request. Another approach is to rotate requests among the individual servers’ host addresses listed in a Domain Name System table, in round-robin fashion. Having a Web farm (as such a configuration is sometimes called) allows traffic to be handled more quickly. Because load balancing requires multiple servers, it usually also requires backup services. In some cases, the servers are dispersed across different geographic locations.
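The round-robin rotation described above can be sketched as follows. This is a minimal illustration, not a DNS implementation: the server addresses are hypothetical, and the rotation logic stands in for what a DNS resolver or “manager” server would do when spreading requests across a Web farm.

```python
from itertools import cycle

# Hypothetical backend pool; in real round-robin DNS these would be
# the host addresses listed for a single domain name.
SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# cycle() yields the servers in order and wraps around indefinitely --
# the essence of round-robin distribution.
_rotation = cycle(SERVERS)

def next_server():
    """Return the next backend in round-robin order."""
    return next(_rotation)

# Six requests are spread evenly: each server handles exactly two.
assignments = [next_server() for _ in range(6)]
print(assignments)
```

Each request goes to the next server in the list, so over time every server handles an equal share of the traffic, regardless of how long any individual request takes.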

Load balancing differs from channel bonding: load balancing divides traffic between network interfaces on a per-network-socket (OSI layer 4) basis, whereas channel bonding divides traffic between physical interfaces at a much lower level, either per packet (OSI layer 3) or per data link (OSI layer 2) with a protocol such as shortest path bridging.

The most widely applied use of load balancing is providing a single Internet service from multiple servers, sometimes known as a server farm. Commonly load-balanced systems include popular web sites, large Internet Relay Chat networks, high-bandwidth Network News Transfer Protocol servers, data servers, File Transfer Protocol sites, and Domain Name System servers.

Hardware and software load balancing systems offer a wide range of special features. The most common are:

  • Asymmetric load: a ratio is manually assigned so that some servers receive a greater share of the workload than others.
  • Priority activation: when the load becomes too high, standby servers can be brought online.
  • Distributed Denial of Service (DDoS) attack protection: load balancers can provide features such as SYN cookies and delayed binding to mitigate such attacks.
  • HTTP compression: reduces the amount of data transferred for HTTP objects by using gzip compression, which is available in all modern web browsers.
  • TCP offload: normally, every HTTP request from every client is a separate TCP connection; with TCP offload, the load balancer consolidates requests from multiple clients into a smaller number of connections to the backend servers.
  • TCP buffering: the load balancer buffers the responses from the server and sends them out to the clients, freeing the web server's threads for other tasks sooner.
  • Firewall: for network security reasons, direct connections to the backend servers are prohibited; a firewall enforces a set of rules that decide whether traffic may pass through an interface or not.
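The asymmetric-load feature in the list above can be sketched as a weighted random selection. This is a minimal illustration under assumed names: the server identifiers and the 2:1 ratio are hypothetical, and real load balancers typically use deterministic schemes such as weighted round-robin rather than random draws.

```python
import random

# Hypothetical weighted pool: server-a is assigned twice the workload
# share of server-b (the manually assigned ratio that the "asymmetric
# load" feature refers to).
WEIGHTED_POOL = {"server-a": 2, "server-b": 1}

def pick_server():
    """Pick a backend with probability proportional to its weight."""
    servers = list(WEIGHTED_POOL)
    weights = [WEIGHTED_POOL[s] for s in servers]
    return random.choices(servers, weights=weights, k=1)[0]

# Over many picks, server-a should receive roughly two thirds
# of the requests.
counts = {s: 0 for s in WEIGHTED_POOL}
for _ in range(30000):
    counts[pick_server()] += 1
print(counts)
```

Raising a server's weight shifts a proportional share of traffic onto it, which is useful when some machines in the pool are more powerful than others.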