A few months ago I embarked on a journey to build an enterprise-grade, rack-mount server from scratch for our business. The project was a success on all counts, and we’re happily running a slew of virtual machines on the box today. That’s not to say I wouldn’t make some changes on the next build, which, as it happens, is being completed as we speak.
The first server is a beast. It’s fully maxed out in every way, which was a conscious decision. Like anything, compromises must be made where cost/performance is concerned, so being maxed out in this case does not mean sitting at the technical top of what is available today.
In the last post, I detailed a list of regrets that I had when the project was complete. On this new build, I kept those points front and center as we planned the components. The most important distinction has to do with the processors.
CPU Core Ceiling
My main regret and worry was that we would exhaust our CPU resources before anything else on the server, and that turned out to be the case. That’s not to say the server is running at 100% CPU utilization; it’s actually only at around 25%. The issue has more to do with the number of VMs running on the machine and the requirement (OK, best practice) that even the smallest VM be designated one logical CPU core. That means that even when a single-core VM is using only a fraction of its core, that core is unavailable to another six-core VM that may be spiking to 100% utilization at times. Since all logical cores on the box are allocated, a core cannot be added to a VM in need without taking one away from another multi-core VM, which in our case is also in need. Such is the case when you have 9 VMs and 24 logical cores.
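The allocation squeeze above can be sketched in a few lines. The 24 logical cores and 9 VMs are from our box; the per-VM vCPU mix below is hypothetical, just to show how a handful of multi-core VMs plus several mostly-idle single-core VMs spoken for one core each can consume the whole pool:

```python
# Illustrative sketch of the vCPU allocation problem described above.
# 24 logical cores and 9 VMs match our setup; the per-VM counts are
# made-up examples, not our actual workload.

TOTAL_LOGICAL_CORES = 24

# Hypothetical mix: a few multi-core workhorses plus small single-core VMs,
# each of which still claims at least one full logical core.
vm_vcpus = {
    "app1": 6, "app2": 6, "db": 4, "build": 3,
    "dns": 1, "mail": 1, "monitor": 1, "backup": 1, "test": 1,
}

allocated = sum(vm_vcpus.values())
free = TOTAL_LOGICAL_CORES - allocated

print(f"VMs: {len(vm_vcpus)}, vCPUs allocated: {allocated}, free cores: {free}")
# With every core spoken for, growing any one VM means shrinking another.
```

Even though the single-core VMs sit mostly idle, their reserved cores are off the table, so the pool shows zero headroom.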
Since hardware is cheap and our experience has grown, we decided to add a second 2U server to our rack to give us breathing room before CPU exhaustion becomes a real problem, and ample room to grow.
On the last build we went with a pair of six-core Xeon E5645 processors. While those chips offer tremendous value for the performance, they left us stranded on the aging LGA 1366 platform. This time around, we’ve opted for the more modern LGA 2011 platform and a single processor.
Yep, you heard that right: just one for now. For this box, we’re planning for expansion while catering to our current needs. This allows us to build a less expensive box that can be upgraded easily and cheaply to over 3x its current performance. Upgrading the socket also led to an upgrade in the motherboard. We stuck with Supermicro for the chassis and motherboard, as we’ve had nothing but good things to say about their products so far. Our first motherboard is loaded with 96 GB of RAM, its max capacity. This new motherboard, the Supermicro MBD-X9DRL-3F-O, can accept a whopping 256 GB of RAM.
Once again, we sourced our parts from Newegg.com, except for the CPU heatsink, which they were out of. The total cost came in at about $2,500, nearly half the cost of our first server. The components we used on this build:
So we’ve trimmed down a bit here on volume, but bulked up on potential. For the time being we’re going to run both our two SSDs and two SATA drives in RAID 1 for redundancy. As this server is filled out, two additional SSDs will be added and the array migrated to RAID 5. Further SSDs can be added in the same fashion.
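The capacity payoff of that migration is easy to quantify. A quick sketch of the usable-space math, using a hypothetical 256 GB SSD size (our actual drive sizes may differ):

```python
# Usable-capacity math for the RAID 1 -> RAID 5 migration described above.
# Drive size is illustrative, not our actual hardware.

def raid1_usable(drives: int, size_gb: int) -> int:
    # RAID 1 mirrors everything: usable space is one drive's worth,
    # regardless of how many mirrors you run.
    assert drives >= 2
    return size_gb

def raid5_usable(drives: int, size_gb: int) -> int:
    # RAID 5 stripes data with one drive's worth of distributed parity.
    assert drives >= 3
    return (drives - 1) * size_gb

SSD_GB = 256  # hypothetical drive size
print(raid1_usable(2, SSD_GB))  # today: two SSDs mirrored
print(raid5_usable(4, SSD_GB))  # after adding two SSDs and migrating
```

Going from two mirrored drives to a four-drive RAID 5 triples usable space while still surviving a single-drive failure, which is the whole appeal of filling the array out later.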
Likewise, as our utilization grows, an additional CPU of the same variety can be added along with additional RAM. We also have quite a bit of headroom to upgrade both CPUs to higher frequencies and eight-core models.
A quick note on those ICYDOCK SSD adapters: I can’t recommend them enough when you want to fit an SSD into a hot-swap drive bay built for 3.5" drives. They work perfectly.
This strategy gives us relief in both resources and finances, making it a wise business decision in our estimation. We could have taken measures to optimize the software running on the original server to conserve its resources, but that would surely have cost far more in development time than the 1–2 day effort and $2,500 cost of a new machine.
Aside from growing into our latest server, our next project will probably be a dedicated SAN server. I’m pretty excited for that one, whenever it may come.