
How to build a VMware ESXi 5.0 whitebox for your lab

May 29th, 2012 | Posted by Jaime Diez in Cisco | Lab | Lync

My ESXi server is the heart of my lab, so before buying anything I did some research to get the best deal. The easiest way to build one is to follow the VMware hardware compatibility guide and buy a supported server, but that means a higher price and, for our purpose, possibly pointless features such as redundant PSUs. On the other hand, unlike me, you will save yourself some headaches by taking the supported path. I like challenges!

In my case, I wanted something small with support for VMDirectPath and, of course, all the hardware embedded on the motherboard. Based on those requirements, I chose the ASRock Z68 Pro3-M, a micro-ATX form factor motherboard.

Hardware list:

– Motherboard: ASRock Z68 Pro3-M (graphics and NIC on-board)

– CPU: Intel Core i5-2500 (note that this is the non-K version, to take advantage of the virtualization features such as VT-d)

– RAM: 4x8GB G.Skill Ares

– Case: Cooler Master Elite 342 MicroATX

– PSU: Tacens Radix V 450W

– HD: 1x Samsung SSD 830 128GB – 4x 2TB Seagate ST2000DL003

– Extra: Intel PCI NIC

Considerations

I boot ESXi 5.0 from a USB pen drive, and everything is recognized out of the box, including the on-board Realtek RTL8111E NIC.

I haven’t deployed any RAID with the hard disks yet. I bought them with the idea of tricking ESXi into running FreeNAS as a VM, building a RAID5 volume there and serving it back as a datastore over iSCSI. With VMCI activated in each VM it should perform well, but I haven’t tried it yet. For now, each disk is attached as a separate datastore, with the VMs spread across them to improve IOPS.
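When spreading VMs by hand like this, it helps to see how full each datastore is. Below is a minimal sketch, assuming pyVmomi (the Python vSphere SDK) is available; the host name and credentials are placeholders for your own, and the unverified SSL context is only acceptable because this is a lab host with a self-signed certificate.

    # List every datastore on the host with its capacity and free space,
    # to help decide where the next VM should go.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab host, self-signed certificate
    si = SmartConnect(host="esxi.lab.local", user="root", pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            s = ds.summary
            print("%s: %.1f GB free of %.1f GB" % (
                s.name, s.freeSpace / 1024**3, s.capacity / 1024**3))
    finally:
        Disconnect(si)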

Any virtualization system is RAM hungry, so I went for a 32GB kit (the maximum capacity the motherboard supports) to meet my objective.

The CPU is not a problem; you could go for a lower speed, it is entirely up to you. I went with Intel mainly because CUCM and AMD are not good friends.

For the network configuration, I added another NIC just to separate the DMZ and LAN networks, although with a VLAN-capable switch it is not strictly necessary. Load balancing traffic would also be a good reason to have a second NIC, but considering that most of the heavy traffic flows among the VMs, the best option is to activate VMCI in each VM to get amazing throughput, bypassing the network layer, which adds overhead to the communication.
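If you want to put numbers on those paths, a quick way is a plain TCP sender/receiver run in two VMs. This is just a generic sketch, not VMware-specific, and it only measures the regular network path (VMCI uses its own socket family); the file name, port and transfer size are arbitrary placeholders.

    # Rough inter-VM throughput test: run "python speedtest.py server" in one VM
    # and "python speedtest.py client <server-ip>" in another.
    import socket, sys, time

    PORT = 5201
    CHUNK = b"\0" * 65536
    TOTAL = 1 * 1024**3  # send 1 GB

    def server():
        with socket.socket() as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind(("", PORT))
            srv.listen(1)
            conn, addr = srv.accept()
            with conn:
                received, start = 0, time.time()
                while True:
                    data = conn.recv(65536)
                    if not data:
                        break
                    received += len(data)
                secs = time.time() - start
                print("%.0f MB in %.1f s = %.1f MB/s" % (received / 1e6, secs, received / 1e6 / secs))

    def client(host):
        with socket.socket() as sock:
            sock.connect((host, PORT))
            sent = 0
            while sent < TOTAL:
                sock.sendall(CHUNK)
                sent += len(CHUNK)

    if __name__ == "__main__":
        if len(sys.argv) >= 3 and sys.argv[1] == "client":
            client(sys.argv[2])
        else:
            server()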

I have deployed a Cisco environment and a Lync environment, which I will explain in detail in upcoming entries. I have never run all the VMs at the same time because I don’t need to at the moment, but one day I’ll look into it.

To avoid unnecessary power consumption, I only power up the server when I am going to do something. If I am not at home and I decide to check something through my VPN, I just need to leave the pfSense server powered up so it can send the Wake-on-LAN magic packet to start the ESXi host.
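For reference, the magic packet is trivial to build: six 0xFF bytes followed by the target MAC address repeated sixteen times, sent as a UDP broadcast. Here is a minimal sketch using only the Python standard library; the MAC address is a placeholder for your ESXi host’s NIC.

    # Send a Wake-on-LAN magic packet to power on a host over the LAN.
    import socket

    def send_magic_packet(mac, broadcast="255.255.255.255", port=9):
        # Magic packet = 6 bytes of 0xFF followed by the MAC repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    send_magic_packet("00:11:22:33:44:55")  # replace with the ESXi NIC's MAC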

After this wall of text, I will post some pictures in a few days to make it easier to follow.

If you want to know more, whether it’s a specific detail or anything else, don’t hesitate to drop a comment or send me an email.


4 Responses

  • Hey Jaime,

    Nice post. I thought I would mention to you and your readers: if you need to rebuild your VMware lab quickly, check out the automated build we put together, called AutoLab, on labguides.com

    Cheers,
    Nick Marshall

  • Carlos says:

    Hi Jaime,

    Great blog and post! I’m also planning to become a CCIE Voice and would like to build a home lab as you did. I already got the VGs for the HQ, BR1, BR2 and PSTN, but I am still running on an older ESXi hardware server.
    What is the point of using the Samsung SSD in the ESXi whitebox setup?
    Good luck on your journey to become a CCIE Voice 🙂

    Thanks,
    Carlos

    • j.diez says:

      Hi Carlos,

      Thank you for your comment. I am using an SSD because most of the VMs I run are kept there, so I want to avoid any I/O bottleneck. Of course this is not strictly necessary; you can use SATA disks, but then you should try to spread the VMs among the disks as much as possible.

      Cheers,


