There are countless ways to fill a server rack with hardware. Unless you're Google, Amazon, or Microsoft, though, you don't have infinite space or money. That being the case, what's the best approach to using your limited space and budget most efficiently?
As I wrote about previously, my small company embarked on the journey to our own data center with our own equipment about 18 months ago. We've only got a small 1/3 private rack to fill, so density was important to us. Our strategy was to build just a couple of extremely powerful servers and virtualize as needed. This has worked out well so far from a functional perspective, but it has us a little worried when it comes to resiliency.
Today we've got 2 x 2U servers, each with 24 processor cores (with hyper-threading) and 96 GB of RAM. We've also got a single 1U Synology RackStation for extra storage, backups, and the like. This lets us comfortably run about 15 VMs of various sizes from 4U of rack space, which is pretty good. While we've still got room, we need to plan for additional capacity.
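For rough planning, the numbers above work out like this (a back-of-the-envelope sketch; the per-VM averages are derived from the totals, not measured, and real VMs are oversubscribed unevenly):

```python
# Back-of-the-envelope capacity check for the current setup.
# Figures from the post: 2 x 2U hosts, 24 logical cores and 96 GB RAM each,
# comfortably running about 15 VMs. Per-VM averages are illustrative only.

hosts = 2
cores_per_host = 24       # logical cores, with hyper-threading
ram_gb_per_host = 96

total_cores = hosts * cores_per_host     # 48 logical cores
total_ram_gb = hosts * ram_gb_per_host   # 192 GB

vms = 15
avg_cores_per_vm = total_cores / vms     # 3.2 logical cores per VM
avg_ram_per_vm = total_ram_gb / vms      # 12.8 GB per VM

print(total_cores, total_ram_gb)         # 48 192
print(avg_cores_per_vm, avg_ram_per_vm)  # 3.2 12.8
```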
So here's the question: if we're going to add capacity, should we stick with another beefed-up 1U or 2U server, or break that out into several physical servers?
The product that got me thinking about this choice is the SuperMicro MicroCloud platform. It's got some shortcomings for sure (single processor, Xeon E3 only in this model; see here for the E5 v2 version), but it's got a major advantage: dense isolation (™ Matt Mombrea). In just 3U of rack space you get 8 physical servers, or 12 if you jump up a level, but I dislike low-profile stuff.
Just as there are benefits to having a few high-powered servers, there are advantages to having many isolated servers.
Few High Power Servers:
- Least rack space consumed
- More resources available to each VM
- Simplified server management (fewer servers)
- Less hardware to maintain and replace
- Simplified networking
- Hardware failure affects many VMs
- Expensive to build
- Expensive to make redundant
- Becomes less dense as you add high availability
Many Isolated Servers (MicroCloud):
- Still very dense
- Hardware failure on one blade doesn't affect VMs on other blades
- Easy to make redundant
- Inexpensive to add blades
- More complicated server management
- More complicated networking
- A lot of hardware to maintain and replace
- Fewer resources per server
- Older CPU platform
Regardless of the decision, I'll be sticking with a hypervisor on every box. I'm not sure I'll ever set up a bare metal server without a hypervisor again. The ability to move a VM to any machine, virtualize the network, and adjust resources as needed is just too valuable.
What I'm thinking is that a combination of both might make a lot of sense. If we deploy an 8-blade MicroCloud, we can strategically place VMs to provide redundancy for any critical virtual machine. A single VM requiring 8 CPU cores can be hosted on a single blade, or easily on a larger 2U system, with its live migration copy hosted on another. Since most of our VMs require 4 - 6 cores (some only need 1 or 2), a single blade can host both a primary VM and the live migration copy of another VM cooperatively.
If a VM needs more than 8 cores, it can be hosted on one of the larger 2U servers with the other 2U box as its live migration target, provided we move the smaller VMs to blades. Of course, you could accomplish the same strategy with a couple more 2U beasts, but you'd still have the problem of one failed CPU potentially knocking out almost a dozen VMs.
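The placement rule I'm describing is essentially an anti-affinity constraint: a VM and its live migration copy must never share a physical host. Here's a minimal first-fit sketch of that idea; the host names, VM names, and sizes are all hypothetical, and real hypervisor management tools implement this far more completely:

```python
# Minimal first-fit VM placement with one anti-affinity rule:
# a VM's replica may not land on the same host as its primary.
# All hosts and VM sizes below are hypothetical examples.

def place(vms, hosts):
    """vms: list of (name, cores, replica_of); hosts: dict of name -> free cores."""
    placement = {}
    for name, cores, replica_of in vms:
        for host, free in sorted(hosts.items()):
            if free < cores:
                continue
            # Anti-affinity: skip the host already running this VM's primary.
            if replica_of and placement.get(replica_of) == host:
                continue
            placement[name] = host
            hosts[host] -= cores
            break
        else:
            raise RuntimeError(f"no host can fit {name}")
    return placement

# Three hypothetical 8-core blades, a 4-core primary with its replica, and a 6-core VM.
hosts = {"blade1": 8, "blade2": 8, "blade3": 8}
vms = [("web1", 4, None), ("web1-replica", 4, "web1"), ("db1", 6, None)]
print(place(vms, hosts))
# {'web1': 'blade1', 'web1-replica': 'blade2', 'db1': 'blade3'}
```

The replica is pushed to blade2 even though blade1 still has 4 free cores, which is exactly the behavior we want: spare capacity never trumps failure isolation.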
So is it better to have a bunch of isolated servers, which reduces the VM domino effect in exchange for increased hardware maintenance? Or just a few massive servers, and be ready for the 4 a.m. call to replace a CPU at any given moment? If you've got experience with this type of planning, I'm all ears. The key is that, like most companies, we can't afford to do what we really want: buy two of everything.