Have you decided to dump your leased server and hosting provider in favor of co-locating your own hardware or hosting it in-house? The next step is to get your new hardware in order. There are a few main choices you have when it comes to obtaining a new server:
1) Buy one from the big guys
You can save yourself loads of work and gain the peace of mind that comes with top notch support by buying a server from Dell, HP, or other big brands. One thing you won’t save loads of is money.
2) Buy one from the little guys
There are smaller (relatively) operations like http://www.pogolinux.com/ that basically save you the legwork of sourcing compatible components when building a server from scratch. You can configure your server and pay a bit more for the convenience. Not a bad route to take.
3) Build your own from scratch
You’ll do all of the research yourself and you’ll buy every component individually. The chance for error is high. You might choose incompatible components, parts that don’t fit your chassis, or completely wrong equipment. But you’ll have the most control, the biggest bang for your buck, and the most fun/misery.
If you haven’t guessed already, this post is about option 3. There are very valid points against option 3, but so what. In the end, if you’re successful, you’ll have made the best choice in my opinion because you’ll have saved the most money and learned a lot in the process.
The first two decisions you need to make are the chassis and the motherboard. It's not as simple as it might seem at first.
When choosing a chassis, it's important to consider your overall strategy for your servers. Will you be building a powerhouse with the intention of virtualizing the hardware, or will you be building many single-purpose devices instead? In our case, we were planning on virtualizing a single powerful box, with an expansion plan of adding additional powerful boxes and virtualizing those. Because of that, we chose a 2U chassis so that we could fit more equipment. If your strategy is to build several less powerful boxes instead, you're probably better off with several 1U chassis to make the best use of your rack space.
You should also know your disk requirements before choosing a chassis. A 1U box typically holds only around 4 disks, while a 2U might hold 10-12, and so on. You should also consider any expansion cards you might need, such as RAID controllers or extra NICs, because a 1U gives you very restrictive space to work with.
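To make the rack-space trade-off concrete, here's a minimal Python sketch. The 42U rack height and the per-chassis bay counts (4 bays per 1U, 12 per 2U) are illustrative assumptions, not our actual figures:

```python
# Rough rack-planning sketch. All figures below are assumptions
# for illustration, not a spec for any particular chassis.
RACK_UNITS = 42  # assumed full-height rack


def max_boxes(rack_units: int, chassis_u: int) -> int:
    """How many chassis of a given height fit in the rack."""
    return rack_units // chassis_u


def total_bays(rack_units: int, chassis_u: int, bays_per_box: int) -> int:
    """Total drive bays across a rack filled with identical chassis."""
    return max_boxes(rack_units, chassis_u) * bays_per_box


# Assumed bay counts: ~4 bays in a 1U box, ~12 in a 2U box.
print(max_boxes(RACK_UNITS, 1), total_bays(RACK_UNITS, 1, 4))   # 42 boxes, 168 bays
print(max_boxes(RACK_UNITS, 2), total_bays(RACK_UNITS, 2, 12))  # 21 boxes, 252 bays
```

Under these assumed numbers, a rack of 1U boxes maximizes the number of machines, while a rack of 2U boxes maximizes total drive bays; which matters more depends on the strategy above.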
When choosing your motherboard, the primary considerations are the number of sockets, the socket type (which determines processor support), and RAM capacity / DIMM slots. Before comparing motherboards, it's obviously helpful to know what processor(s) you'd like to go with in your server. From there you can compare models and hunt down the features that are important to you, such as multiple integrated NICs, IP-based management, chipsets, and so forth.
Once you choose the motherboard, pay close attention to the supported components (RAM and CPU especially). Some have very specific requirements for compatibility.
There are what seem like infinite combinations of components when building a server, but there are some basic requirements that every build will have:
- Chassis
- Motherboard
- Processor(s) and coolers
- RAM
- Hard drives / SSDs
- Power supply
If you’re planning on setting up your disks in a RAID configuration, you’ll probably want to add a proper RAID controller to that list.
We built our server using parts sourced from NewEgg.com because we've been long-time customers and have nothing but positive things to say about their service. The total cost came in at about $4,500 for the server equipment (leaving out some networking and power devices). The exact list of components that we used is:
We've got the 4 Corsair Neutrons set up in a RAID 10 configuration on the 3ware controller card and the 2 Seagates in RAID 1 using the motherboard's Intel-based RAID controller. Amazingly, the system powered up on the first try and we were off and running.
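As a reference point, usable capacity works out very differently for the two arrays. Here's a minimal sketch of the arithmetic; the drive sizes are stand-in values for illustration, since the exact models aren't restated here:

```python
def usable_capacity(raid_level: str, drive_count: int, drive_size_gb: int) -> int:
    """Usable space for a few common RAID levels, assuming identical drives."""
    if raid_level == "RAID0":
        return drive_count * drive_size_gb         # striping: no redundancy
    if raid_level == "RAID1":
        return drive_size_gb                       # mirroring: one drive's worth
    if raid_level == "RAID10":
        return (drive_count // 2) * drive_size_gb  # striped mirrors: half the total
    if raid_level == "RAID5":
        return (drive_count - 1) * drive_size_gb   # one drive's worth of parity
    raise ValueError(f"unsupported level: {raid_level}")


# Assumed sizes for illustration: 256 GB SSDs, 2000 GB spinning disks.
print(usable_capacity("RAID10", 4, 256))   # 512 GB usable on the SSD array
print(usable_capacity("RAID1", 2, 2000))   # 2000 GB usable on the mirror
```

The takeaway: RAID 10 and RAID 1 both cost you half your raw capacity, which is worth factoring in when you size the drives and count the controller ports you'll need.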
Although I'm very pleased with the results of our first production server build, there are many opportunities for improvement. The three primary items that are bothering me are:
1) RAID controller card
For some crazy reason I thought it would be fine to use a 4-port RAID controller card for the 4 SSD drives that would make up the primary array. The extended drives would be attached to the motherboard SATA controller. Dumb. Now if I need to extend that primary array, I need to either buy a new RAID controller and try to rebuild the array on it, or replace the existing drives with larger ones and rebuild.
An 8-port RAID controller would have served us much better.
2) Separate RAID array for Database VM
Having all of the VMs run on the same disk array could end up being a performance issue when it comes to I/O-heavy applications powered by SQL servers. If that becomes the case, having a separate disk array for the hungry application could make a big difference at a low cost.
With the current setup we're not totally boned: we can move non-priority VM disks to the extended drive array to free up I/O on the SSDs, or we can slap a couple more disks in the box and create a new array for the database VMs.
3) More powerful processors
I chose the motherboard and processor models mainly as a cost-saving measure. The price/performance ratio for the Intel E5645 processor is fantastic, but it is significantly outpaced by the latest chips. Consequently, I chose a motherboard which supports this processor but tops out with the Xeon 5500/5600 series. That means there is really no room for improvement in the CPU space.
Seeing as the server is loaded up with 96GB of RAM, it's likely that we'll exhaust the CPU resources before the rest of the system. In hindsight, I probably should have stepped up to Socket B2 (LGA 1356) and grabbed a pair of Intel E5-2430 chips. While those processors are on the lower end of the E5 spectrum, that would leave room to grow into the mighty 8-core processors down the line.
You live and you learn. This was a really interesting and fun experience for our build team. In the end, we made some mistakes, but overall we came out on top with a massive performance gain and impressive cost savings. When this server pays for itself in the spring of 2014, you can bet we’ll be on the lookout to build the next one bigger, better, and smarter.
If you’re on the fence about building your own server, it’s not something to take lightly. There are a lot of subtleties to the project and a lot of research is required. For those who take the leap, it will most likely be a genuinely rewarding experience.