Poor man's load balancing is properly called Round Robin DNS. Using this technique, you set up a series of A records in DNS, each with the same name, but pointing to a different IP address. As an example, say we want to balance web traffic among five separate servers. We could set up five A records like this:
www IN A 192.168.0.3
www IN A 192.168.0.4
www IN A 192.168.0.5
www IN A 192.168.0.6
www IN A 192.168.0.7
Requests for www within this domain might be sent to any of the addresses listed. In fact, DNS will naturally spread the load across any systems defined in this way -- whenever a single name is associated with multiple IP addresses.
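To see how that rotation spreads the load, here's a small sketch -- not actual name server code -- approximating the cyclic ordering that a server like BIND applies to a multi-address record set by default: each query gets the list rotated one step, so clients that take the first answer end up spread across the pool.

```python
from collections import deque

# The five addresses from the zone example above.
ADDRESSES = ["192.168.0.3", "192.168.0.4", "192.168.0.5",
             "192.168.0.6", "192.168.0.7"]

def make_resolver(addresses):
    """Return a resolve() function that hands back the address list
    rotated one step per query, cyclic-order style."""
    pool = deque(addresses)
    def resolve(name="www"):
        answer = list(pool)
        pool.rotate(-1)  # the next query will start with the next address
        return answer
    return resolve

resolve = make_resolver(ADDRESSES)
print(resolve()[0])  # 192.168.0.3
print(resolve()[0])  # 192.168.0.4 -- a different "first" server each time
```

Every query still sees all five addresses; only the order changes, which is what makes the first-listed server vary from client to client.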
This method of spreading the traffic works fairly well and is easy to implement. If you have a web server that is suffering under the demands of its visitors, adding a second server -- after cloning your original system -- is as easy as adding a second record. Nearly instantly, your original server will be handling only half the traffic. And this technique works as well for your public web servers as for internal ones. Plus, it can be used for other types of services as well and is, of course, independent of operating systems and server types. Add your new record(s), restart your naming services (e.g., /etc/init.d/named restart) and you're there. If public IPs are involved, expect a short lag while the new addresses propagate.
Round robin DNS strategies are not perfect, but they're obviously simple and cheap to implement. The problems are 1) that they don't necessarily distribute the load evenly (some studies suggest that the initial and final addresses in a round robin list might get a bit more of the load), 2) that they don't take record caching into consideration, 3) that there's no accommodation for matching requests to the geographical regions from which they originate (which might be done with more sophisticated load balancing technologies), and 4) that there is no attempt to gauge how busy each server is before sending more requests its way. Dedicated load balancers monitor the load on each server; after all, not every request requires the same amount of processing.
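The per-request decision a dedicated load balancer makes can be sketched very simply (the load numbers here are illustrative -- real balancers track active connections, response times, or agent-reported load):

```python
def pick_least_busy(loads):
    """Given a mapping of server address -> current load (e.g., active
    connections), return the least-busy server. This is the kind of
    informed choice round robin DNS cannot make."""
    return min(loads, key=loads.get)

loads = {"192.168.0.3": 12, "192.168.0.4": 3, "192.168.0.5": 7}
print(pick_least_busy(loads))  # 192.168.0.4
```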
More complex schemes for load balancing have been proposed by the likes of Randal Schwartz, who suggested back in 2000 that a system could be set up to monitor the load on each of the systems sharing in the data processing task before determining where to send new traffic. In fact, he provided a Perl script with the redirects to make this happen. Check out:
While Round Robin DNS may not be perfect, I've managed some corporate web sites that used this technique either to increase the responsiveness of a busy web server or to implement what I've come to call "Poor Man's Failover".
Poor Man's Failover
In Poor Man's Failover, you set up two identical web servers, but have one redirecting to the other. Your Round Robin DNS setup will continue to send half the requests to each of the two servers, but the redirecting server will simply turn around and send them to the first. So far, this isn't very exciting. However, in addition to the DNS setup, the redirecting server (which we will call the "secondary") is also monitoring the primary. If ever it determines that the primary is down, it stops redirecting and starts web services as an independent web server.
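The core of the secondary's monitoring logic might look like the sketch below. The hostname and the HTTP health check are assumptions for illustration; any reliable "is the primary alive?" test would do.

```python
import urllib.request

PRIMARY_URL = "http://primary.example.com/"  # hypothetical primary address

def primary_is_up(url=PRIMARY_URL, timeout=5, fetch=urllib.request.urlopen):
    """Return True if the primary answers an HTTP request successfully.
    The fetch function is injectable so the check can be tested."""
    try:
        with fetch(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False

def next_mode(primary_up):
    """The secondary redirects while the primary is healthy and serves
    content itself (independent mode) once the primary stops answering."""
    return "redirect" if primary_up else "independent"
```

Run from cron or a loop, the script would call primary_is_up() periodically and reconfigure the local web server whenever next_mode() disagrees with the current mode.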
This type of setup is suited to situations in which the primary and secondary servers are not synchronized. Otherwise, straight load balancing would be a better choice. If you send updates to the primary and synchronize with the secondary once or twice a day, you'll want your primary to be the public-facing server and your secondary to be a toasty (i.e., not quite "hot") backup.
In cases in which your secondary mistakenly concludes that your primary is down and puts itself into independent server mode, the only "cost" would be that the two systems won't necessarily be exactly in sync with each other. The content on one could be slightly newer than that on the other. On the other hand, this setup can be automated so that the failover (the secondary going from a redirecting to a normal web service mode and back) happens without your having to be on hand to make it happen. Make sure your monitoring script keeps a log showing when it goes in and out of independent mode so that you can accurately assess how and when this setup is being called into play.
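That log can be as simple as a timestamped line per transition. A minimal sketch (the log path is an assumption; put it wherever your logs live):

```python
import datetime

LOG_PATH = "/var/log/failover-monitor.log"  # hypothetical location

def log_transition(new_mode, path=LOG_PATH):
    """Append a timestamped line each time the secondary changes mode,
    so you can later see how often failover actually kicked in."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(path, "a") as log:
        log.write(f"{stamp} entering {new_mode} mode\n")
```

A quick scan of the log (or a grep for "independent") then tells you exactly when and how often the secondary took over.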