How to build a VMware ESXi 5.0 whitebox for your lab
May 29th, 2012 | Posted in Cisco | Lab | Lync
My ESXi server is the heart of my lab, so before buying anything I did some research to get the best deal. The easiest way to build one is to follow the VMware hardware compatibility guide and buy a supported server, but that means a higher price and, possibly, features that are pointless for our purpose, like redundant PSUs. On the other hand, unlike me, you will save yourself some headaches by taking the supported path. I like challenges!
In my case, I wanted something small with support for VMDirectPath and, of course, all the hardware integrated on the motherboard. Based on those prerequisites, I chose the ASRock Z68 Pro3-M, a micro-ATX form factor motherboard.
– Motherboard: ASRock Z68 Pro3-M (graphics and NIC on-board)
– CPU: Intel Core i5-2500 (note that this is the non-K version, to take advantage of the virtualization features)
– RAM: 4x8GB G.Skill Ares
– Case: Cooler Master Elite 342 MicroATX
– PSU: Tacens Radix V 450W
– HD: 1x Samsung SSD 830 128GB – 4x2TB Seagate ST2000DL003
– Extra: Intel PCI NIC
I boot ESXi 5.0 from a USB pen drive and, without doing anything special, everything is recognized, including the on-board Realtek RTL8111E NIC.
I haven’t deployed any RAID with the hard disks yet; I bought them with the idea of somehow tricking ESXi into running FreeNAS as a VM and building a RAID5 array to serve as a datastore over iSCSI. With VMCI activated in each VM it should perform well, but I haven’t tried it yet. For now, each disk is attached as a single datastore, with the VMs spread across them to improve IOPS.
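As a quick sanity check on what that planned array would give me, here is a minimal sketch of the RAID5 capacity arithmetic (the 4x2TB figures come from the parts list above; raw capacity only, ignoring filesystem and iSCSI overhead):

```python
def raid5_usable_tb(disks: int, disk_tb: float) -> float:
    """RAID5 spends one disk's worth of space on parity,
    so usable capacity is (n - 1) * disk size."""
    if disks < 3:
        raise ValueError("RAID5 needs at least 3 disks")
    return (disks - 1) * disk_tb

print(raid5_usable_tb(4, 2.0))  # 4x2TB drives -> 6.0 TB usable
```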
Any virtualization system is RAM hungry, so I went for a 32GB kit (the maximum capacity of the motherboard) to meet my objective.
The CPU is not a problem; you could go for a lower speed, it is entirely up to you. I went for Intel mainly because CUCM and AMD are not good friends.
For the network configuration, I added another NIC just to separate the DMZ and LAN networks, although with a VLAN-capable switch that is not strictly necessary. Load-balancing traffic would also be a good reason to have a second NIC, but considering that most of the heavy traffic flows among the VMs themselves, the best option is to activate VMCI in each VM to get great throughput, bypassing the network layer, which adds overhead to the communication.
I have deployed a Cisco environment and a Lync environment, which I will explain in great detail in upcoming entries. I have never run all the VMs at the same time because I don’t need to at the moment, but one day I’ll look into it.
To avoid unnecessary power consumption, I only power up the server when I am going to use it. If I am away from home and decide to check something through my VPN, I just need to leave the pfSense server powered up so it can send the Wake-on-LAN magic packet to start the ESXi host.
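For reference, the magic packet itself is simple: 6 bytes of 0xFF followed by the target NIC's MAC address repeated 16 times, usually sent as a UDP broadcast. A minimal sketch (the MAC address shown is a placeholder, not my server's):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6x 0xFF + MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the LAN (UDP port 9 is conventional)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# Example (placeholder MAC): send_wol("aa:bb:cc:dd:ee:ff")
```

The same thing can be done on the pfSense box itself via its built-in Wake-on-LAN service; the sketch just shows what goes over the wire.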
After all that text, I will post some pictures in a few days to make it easier to follow.
If you want to know more, be it a specific detail or anything else, don’t hesitate to drop a comment or send an email.