If you’ve been following my posts lately you’ll have noticed that I’ve been writing a lot about server equipment and data centers. The reason is simple: that’s what I’m up to at the moment. Over the past few months my team and I have gone from evaluating co-location options to building our own rack server, going live in a new datacenter, and finally, bringing our production sites online in our new virtualized environment. It’s been quite the experience, but the actual virtualization of our hardware has been the most challenging part so far.
The concept behind server virtualization is simple: you have a powerful piece of hardware that no single application or service can fully utilize, so you run many smaller servers (virtual machines) simultaneously on that one device. Each virtual machine runs independently of the others, and each thinks it’s a real server with hardware all to itself. What’s more, each VM can have its own operating system, processors, hard drives, network, and RAM allocated to it.
The number of VMs you need and how they are configured will depend entirely on your needs. For example, if your purpose is to let developers load specific system configurations for testing, you might have several dozen very small VMs available, each with just one CPU and maybe 512MB of RAM. If you’re hosting a few large-scale applications, you may have only one or two very large VMs, each with maybe 12 CPUs and 48GB of RAM.
In our case, and in many others, we’re hosting websites for clients along with mission-critical applications. Virtualization is perfectly suited to this type of situation, as multiple servers can coexist alongside one another without any risk of cross-communication. Leaving aside the specialty application servers, I’d like to discuss some general-purpose web hosting strategies.
There are two philosophies for building a hosting infrastructure: single server and multi-server.
In a single server setup, you allocate a large amount of resources to one (or many) self-contained VMs, each of which has every service needed to host your sites. A single VM would commonly contain a web server, database server, mail server, FTP server, and DNS server. This one VM can host many sites, and its encapsulated nature simplifies management and configuration.
In this setup, an abundance of resources is available to the VM, shared amongst the services. This is both a good and a bad thing. It’s good because a few services under heavy load will have access to the resources they need. It’s bad because a runaway or poorly configured service could bring down all of the other services by consuming every available resource.
If you outgrow the single server, you can either allocate more resources to the VM, or you can clone the server, divvy up the hosted sites between the two, and repeat as needed.
In a multi-server setup, you create a dedicated VM for each service. That means you’ll have a VM for the web/FTP server, another for the database server, another for the mail server, and perhaps two more for DNS. Each server performs a dedicated task, and each can be allocated a specific amount of resources.
The complexity of this setup increases quite a bit over the single server. Each of these standalone VMs needs to communicate with the others to work properly, which can be a headache. Each must also be configured and maintained individually. The benefits should be clear, however: no one service can bring down the others, at least not directly; each service can scale its resources up or down as needed; each VM can be optimized for the task at hand; and so forth.
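To give a concrete feel for that cross-VM wiring, here is a sketch of what letting a web VM reach a dedicated database VM might look like with MySQL. The IP addresses, database name, and account are hypothetical, not our actual configuration:

```
# /etc/mysql/my.cnf on the database VM: listen on the private
# internal network instead of only localhost
# (10.0.0.20 is a made-up internal address)
[mysqld]
bind-address = 10.0.0.20

-- Then, inside MySQL, allow the web VM (10.0.0.10, also hypothetical)
-- to connect with a dedicated account:
GRANT ALL PRIVILEGES ON sitedb.* TO 'webapp'@'10.0.0.10' IDENTIFIED BY 'secret';
```

Every one of those pairings (web to database, mail to DNS, and so on) needs a similar conversation, which is where the headache comes from.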
Just like the single server setup, if you outgrow a service VM, you can allocate more resources to that VM, or you can spin up another and have multiple web or database servers in your cluster.
Aside from the complexity, a big negative to this strategy is that each VM requires a meaningful amount of resources just to operate: resources it may never use but which must be allocated nonetheless. You are, after all, running five operating systems in this example rather than one. That’s also five OSes that need to be security hardened, patched, and upgraded.
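The patching burden can at least be scripted. Here is a minimal sketch of rolling updates across the service VMs over SSH; the hostnames are hypothetical, key-based root SSH access is assumed, and the DRY_RUN flag just prints the commands instead of running anything:

```shell
#!/bin/sh
# Sketch: run package updates on every service VM over SSH.
# Hostnames are hypothetical; assumes key-based SSH access as root.
DRY_RUN=1
for host in web1 db1 mail1 ns1 ns2 proxy1; do
  cmd="ssh root@$host 'apt-get update && apt-get -y upgrade'"
  if [ "$DRY_RUN" = "1" ]; then
    echo "$cmd"        # just show what would run
  else
    eval "$cmd"
  fi
done
```

It doesn’t remove the burden of five separately hardened systems, but it keeps routine maintenance to one command.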
We chose to go with the multi-server setup. We’re using the free and excellent Microsoft Hyper-V Server 2012 as our bare-metal OS. We’re currently running six VMs, unified into a single hosting platform managed with ISPConfig. The basics of our setup are:
Debian Wheezy on all VMs
Nginx - Web Server
PHP-FPM - Web Server
MySQL - Database Server
BIND9 - DNS1 and DNS2
Postfix/Dovecot - Mail Server
Squid - Proxy Server
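For a sense of how two of those pieces meet, here is a hedged sketch of an Nginx server block handing PHP requests to PHP-FPM on the web VM. The domain, web root, and socket path are illustrative rather than our production values (Debian Wheezy’s php5-fpm package defaults to a Unix socket, but your path may differ):

```nginx
# Illustrative only: pass .php requests to a local PHP-FPM pool.
server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php5-fpm.sock;  # socket path may vary
    }
}
```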
From a resources standpoint, we’re happy with the ability to control each service individually. That said, dedicating a full CPU to each of the DNS and proxy servers is overkill. That can be balanced out with proper virtual CPU weighting, however.
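Hyper-V exposes this weighting through its PowerShell cmdlets on the host. As a sketch (the VM names and weight values are hypothetical, and the default weight is 100), something like the following tells the scheduler to favor the web VM over a DNS VM when CPUs are contended:

```powershell
# Sketch: bias the CPU scheduler without changing virtual CPU counts.
Set-VMProcessor -VMName "DNS1" -RelativeWeight 50
Set-VMProcessor -VMName "Web1" -RelativeWeight 200
```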
From a configuration point of view, it was a struggle. It took a significant amount of time, research, effort, and hair pulling to finally get everything working together in harmony. In the end we learned a lot and are happy with the choice to go multi-server. We’ll be running eight total VMs on the rack server we built from scratch, and as I predicted, the first limitation we’re going to hit is CPU power.
If you’ve got your own experiences with server virtualization, I’d be very interested to hear about the choices you made in the comments. What bare-metal virtualization server did you choose? What VM strategy did you go with? What were the biggest hurdles?