When it comes to the hardware you are going to virtualize your infrastructure on, you basically have two choices: buy it or build it yourself. Obviously you can run a few machines on a 6-core desktop with 16GB of memory, but that's only suitable for testing purposes -- not production use.
If your company or client has the money, then I would definitely say buy prebuilt hardware certified for your choice of VMware or Hyper-V (read: HP or IBM), backed by an enterprise-class SAN. Most small to midsize companies do not have the budget to accommodate those kinds of infrastructure purchases. That leaves us with building our own hypervisor host from parts. Luckily it's not as complicated as it sounds, but there are a few gotchas to keep in mind along the way. (I will be documenting the project we are working on at work towards that end in this post as time permits.)
Case: SUPERMICRO CSE-836TQ-R800B Black 3U Rackmount Server Case $809.99
At $810 this might seem a bit steep for a 3U case, but keeping in mind that it has 16 hot-swap 3.5" SATA/SAS bays with backplane and dual/redundant 800W power supplies, it's a good investment. All the fans and the screwless rail mount kit come included as well. Overall a nice case.
Motherboard: SUPERMICRO MBD-X9DR3-F-O $469.99
There are comparable Intel motherboards for dual Sandy Bridge-EP Xeon chips, but if you are buying a Supermicro case you'll probably get easier installation. This board uses the same Intel C600-series chipset that the others do. It has two 6Gb/s SATA ports, four 3Gb/s SATA ports, and two SCU ports (SFF-8087 SAS plugs) for a total of 14 SATA connections.
Memory: Kingston 4GB 240-Pin DDR3 SDRAM DDR3 1333 (PC3 10600) ECC Registered Server Memory, Model KVR1333D3D8R9S/4GHB -- $279.92 for 32GB (we intend to expand to 64GB later)
RAID card: Areca ARC-1223-8I $470.00
Yes, I suppose you could stick with RAID 10 on the onboard Intel SCU chipset, but you will get better performance out of the Areca card. In our case, we are using one motherboard SATA connection for the DVD drive and two motherboard SATA connections for the two-drive boot mirror (software RAID), leaving us room to build an 8-drive RAID 5 or RAID 6 array for VM file storage. The six leftover drive bays can be populated with larger, slower drives and used for disk backup purposes or what have you.
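For a quick sense of the capacity trade-off between RAID 10 on the onboard SCU and RAID 5/6 on the Areca card, here's a minimal sketch. The 2TB drive size is just an assumed example, not part of this build; plug in whatever drives you actually buy:

# Rough usable-capacity math for an 8-drive array at each RAID level.
# The 2TB drive size below is a hypothetical example.
def usable_tb(drives: int, size_tb: float, level: str) -> float:
    if level == "raid10":
        return (drives // 2) * size_tb   # half the drives hold mirror copies
    if level == "raid5":
        return (drives - 1) * size_tb    # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * size_tb    # two drives' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid10", "raid5", "raid6"):
    print(f"{level}: {usable_tb(8, 2, level)} TB usable")

With 8 x 2TB drives that works out to 8TB usable for RAID 10 versus 14TB for RAID 5 and 12TB for RAID 6, which is a big part of why the dedicated card is worth it for VM storage.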
NIC card: Intel E1G44HTBLK 10/100/1000Mbps PCI-Express 2.0 Server Adapter I340-T4 $230
The Supermicro motherboard comes with two Gigabit Ethernet ports, but since we have a Cisco router that supports 802.3ad link aggregation, we bonded the four ports on this PCIe x4 card to get decent bandwidth for all the VMs that are going to run on this server.
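To be clear about what bonding buys you: 802.3ad hashes each conversation onto a single member link, so any one stream still tops out at 1Gb/s, while many VMs together can approach 4Gb/s aggregate. Here's a toy sketch of that behavior; the hash function and the IP addresses are made up purely for illustration (real switches and OS teaming drivers use their own hash of MACs/IPs/ports):

import zlib

LINKS = 4  # the four bonded ports on the I340-T4

def member_link(src_ip: str, dst_ip: str) -> int:
    # Toy flow hash; actual hashing differs, but the idea is the same:
    # a given flow always lands on the same single member link.
    return zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % LINKS

flows = [("10.0.0.11", "10.0.0.50"), ("10.0.0.12", "10.0.0.50"),
         ("10.0.0.13", "10.0.0.60"), ("10.0.0.14", "10.0.0.70")]
for src, dst in flows:
    print(f"{src} -> {dst} uses link {member_link(src, dst)}")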
CPUs: 2 x Intel Xeon E5-2620 Sandy Bridge-EP 2.0GHz $420 each
Twelve cores between the two chips (six per CPU), plus Hyper-Threading and Turbo Boost; what more can you ask for? Incidentally, the E5 models below the 2620 are cheaper but do not have Turbo Boost or Hyper-Threading, so I would say this is the bare minimum CPU you want to go with.
Grand Total (excluding tax and shipping): $2,819.98 -- not bad at all for what it's capable of.
Obviously, given the price of hard drives right now, your final cost will go up, but not by a ridiculous amount if you stick with 3.5" SATA drives. (If you have the money for a server full of 2.5" enterprise-quality SAS hard drives or an SSD array, you would probably be going with an HP or IBM prebuilt server anyway.)
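If you want to ballpark the finished cost, something like this works; the drive count and per-drive price here are purely hypothetical placeholders, not quotes:

# Hypothetical drive pricing added on top of the parts total above.
PARTS_TOTAL = 2819.98    # grand total quoted above, before drives
SATA_DRIVE_PRICE = 120.00  # assumed price per 3.5" SATA drive (example only)
drives_needed = 10         # e.g. 8 for the VM array + 2 boot drives

estimate = PARTS_TOTAL + drives_needed * SATA_DRIVE_PRICE
print(f"Estimated total with drives: ${estimate:,.2f}")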
Notes on installation:
The motherboard went into the case pretty smoothly. All the cables worked out except for one of the case fans, which we had to buy a separate fan power cable extension for. Memory: no issues. CPUs: make sure you get the exact (fanless) heatsink or it won't fit.
The SATA/SAS breakout cables included were all of sufficient length to reach the backplane. The bottom SATA connectors of the backplane are a little hard to reach with big hands.
Windows Server 2008 R2 will not boot from the SCU. At least we couldn't figure out how to get it to, even using the Intel SCU array drivers included on the CD. You can install to and boot from a SATA RAID 1 with no problems though, so we went with two 500GB boot drives connected to the motherboard SATA ports.
You will have to install the Intel NIC drivers before the Ethernet connections will work in 2008 R2 (no big deal, you can download the latest from Intel online).
We had a moment of panic when (after installing everything and doing Windows updates) starting up a CentOS 5.2 VM put the server into an infinite blue-screen-of-death crash loop. There is a hotfix available from Microsoft that cleared up the issue, but it made crystal clear that Hyper-V is only half-baked when it comes to Linux support. (At least that's how it feels coming from a VMware setup where almost everything just works for any guest OS.)
I'll be updating this entry as we move our production workload over to the server. I'll probably do a separate entry on the Veeam replication & backup solution that is necessitated by not using an enterprise-grade SAN to store your VMs on iSCSI targets. If you have any issues or questions, feel free to drop me a line.






