June 24, 2012, 7:37 PM — Poor man's load balancing is properly called Round Robin DNS. Using this technique, you set up a series of A records in DNS, each with the same name but pointing to a different IP address. As an example, say we want to balance web traffic among five separate servers. We could set up five A records like this:
www IN A 192.168.0.3
www IN A 192.168.0.4
www IN A 192.168.0.5
www IN A 192.168.0.6
www IN A 192.168.0.7
Requests for www within this domain might be sent to any of the addresses listed. In fact, DNS will naturally spread the load across any systems defined this way -- whenever a single name is associated with multiple IP addresses.
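To see why the load spreads, here's a small Python sketch that simulates what a round-robin name server does: it hands back the full address list rotated by one position on each query, and since most clients connect to the first address in the answer, successive clients land on successive servers. (This is an illustration of the behavior, not actual DNS code; the addresses are the example ones above.)

```python
from itertools import cycle

# The five example A records for www, as above.
addresses = [
    "192.168.0.3",
    "192.168.0.4",
    "192.168.0.5",
    "192.168.0.6",
    "192.168.0.7",
]

def rotations(addrs):
    """Yield the address list rotated by one position per successive query,
    the way a name server answering round robin does."""
    n = len(addrs)
    for start in cycle(range(n)):
        yield addrs[start:] + addrs[:start]

# Five successive clients each take the first address in their answer,
# so the five queries hit all five servers in turn.
queries = rotations(addresses)
for _ in range(5):
    answer = next(queries)
    print("client connects to", answer[0])
```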
This method of spreading the traffic works fairly well and is easy to implement. If you have a web server that is suffering under the demands of its visitors, adding a second server -- after cloning your original system -- is as easy as adding a second record. Almost instantly, your original server will be handling only half the traffic. This technique works as well for internal web servers as for public ones, can be used for other types of services, and is, of course, independent of operating system and server type. Add your new record(s), restart your naming services (e.g., /etc/init.d/named restart) and you're there. Expect a short lag, if public IPs are involved, while the new addresses propagate.
Round robin DNS strategies are not perfect, but they're obviously simple and cheap to implement. The problems are 1) that they don't necessarily distribute the load evenly (some studies suggest that the initial and final addresses in a round robin list get a bit more of the traffic), 2) that they don't take record caching into consideration, 3) that there's no accommodation for matching requests to the geographical regions from which they originate (which might be done with more sophisticated load balancing technologies), and 4) that there is no attempt to gauge how busy each server is before sending more requests its way. Dedicated load balancers monitor the load on each server; after all, not every request requires the same amount of processing.
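The caching problem, at least, has a common partial mitigation: publish the balanced records with a short TTL so that resolvers re-query frequently instead of reusing one cached address for hours. A sketch in BIND zone-file syntax (the 60-second TTL here is just an illustrative choice; pick a value that balances freshness against query load):

```
www 60 IN A 192.168.0.3
www 60 IN A 192.168.0.4
www 60 IN A 192.168.0.5
www 60 IN A 192.168.0.6
www 60 IN A 192.168.0.7
```

Short TTLs don't eliminate caching -- some resolvers impose their own minimums -- but they keep any one cached answer from monopolizing a client for long.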
More complex schemes for load balancing have been proposed as well. Randal Schwartz, for example, suggested back in 2000 that a system could monitor the load on each of the servers sharing the data processing task before determining where to send new traffic, and he provided a Perl script using redirects to make this happen.
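Schwartz's actual script polled real machines; as a rough sketch of the underlying idea only (the hostnames and load figures below are invented, standing in for values a monitor would gather from the servers themselves, e.g. via uptime), a picker that directs each new request to the currently least-loaded server might look like:

```python
# Hypothetical monitoring snapshot: server names and load averages
# are made up for illustration, not taken from any real system.
current_load = {
    "web1.example.com": 0.42,
    "web2.example.com": 0.15,
    "web3.example.com": 0.87,
}

def least_loaded(loads):
    """Return the server reporting the lowest load average."""
    return min(loads, key=loads.get)

# A redirect-based balancer would now send the client to this host.
target = least_loaded(current_load)
print("redirect next request to", target)
```

The design point, versus round robin DNS, is that the choice is made per request from fresh measurements, so a server bogged down by a few expensive requests stops receiving new traffic until it recovers.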