Home Lab Build: vSpecialistLabsv2 Hardware

Choosing your home lab hardware can and will be a trade-off between many factors. Capacity, speed / performance, cost, intended use and physical space available all play a part in considerations for what you want vs. what you can get away with for your home lab. There are also some other factors that are maybe less obvious to consider too, like noise, power draw and what your wife says you can put in the home office! As I said before, it’s all a trade-off, but here’s what I decided on.

1) Compute Power.

For the compute power, I went for a split approach. I’ve always advocated running ESXi on bare metal tin where you can (although you can run it as a nested VM given the right hardware), so I decided to split my compute power between physical and virtual ESXi. Why? Flexibility and cost. Being able to move instances around the lab will be useful in the future, but ultimately it comes down to cost. Physical hardware is expensive and consumes power. So, I went for:

  • 2 x HP ProLiant MicroServer N36L servers, each with a dual-core AMD processor, a local 250GB drive and an ESXi-compatible USB key fitted internally. I picked these up really cheaply (£45 each) from a popular auction site, and for the money you can’t really go wrong. They may only have 2 cores and a maximum of 8GB RAM (2 x 4GB), but they punch above their weight in home labs and offer good expansion options: local storage bays, a PCI slot, a DVD drive bay, USB and eSATA ports and on-board gigabit LAN. (The updated bigger brother of this server is the ProLiant N40L, which HP and resellers almost constantly offer cashback on!)
  • ServersPlus Business PC. Again, here you can’t really go wrong. The linked version is the updated (and slightly more expensive) version of the one I got, but essentially the highlights are: 16GB RAM (max. 32GB supported by the motherboard I have), an Intel i7-2700K with 8 logical processors and OEM Windows 7 64-bit for £600. Upgrades to the standard spec were: 1) maxing the RAM with a 32GB kit (4 x 8GB DDR3 DIMMs) from Crucial UK for £340, 2) 1 x OCZ Vertex 3 64GB SSD for OS / applications plus 1 x OCZ Vertex 4 128GB SSD for ‘local’ VMs, both from eBuyer for £55 and £90 respectively, and 3) a dual-SSD drive caddy from Overclockers for about £5.
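
Since part of the plan is running ESXi as a nested VM on the workstation, the nested guest needs hardware virtualisation exposed to it. As a minimal sketch (setting names are from VMware Workstation 8 / hardware version 8 – check the documentation for your version), the relevant lines in the nested ESXi guest’s .vmx file look something like:

```
# Illustrative .vmx fragment for a nested ESXi 5.x guest in Workstation
guestOS = "vmkernel5"    # identifies the guest OS as ESXi 5.x
vhv.enable = "TRUE"      # passes Intel VT-x through so the nested ESXi can run its own VMs
```

With VT-x exposed like this, the i7-2700K’s virtualisation extensions are visible inside the nested ESXi instance, which is what makes running VMs-within-VMs possible.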

2) Networking.

For networking, I decided to take the easy route. I already have a decent home router that connects us to the interweb, so I decided to piggyback off this for the lab. The router has 4 x 1Gb LAN ports, so extending this was easy. The only addition to the networking was:

  • HP ProCurve 1410-8G switch, £45 from eBuyer. This is a dumb layer-2 8-port gigabit switch that’s essentially plug-and-play. Simply plug in the cables from the various devices and away you go. No configuration is necessary to connect it to the router either – router port 1 to switch port 1 via a standard cat-5 cable is all you need. All the rest of the kit in the lab connects to this switch, then goes to the router as needed. (More on this in a lab networking post.) To be honest, this will be the first thing I upgrade – simply from a capacity and management perspective when I want to start labbing VLANs etc. – but it does fine for now.
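
For what it’s worth, once a managed switch replaces the 1410-8G, tagging traffic on the ESXi side is straightforward. A hedged sketch using the standard esxcli commands on ESXi 5.x (the port group name, vSwitch name and VLAN ID 10 here are just example values):

```
# Create a port group on the default vSwitch and tag it with VLAN 10
esxcli network vswitch standard portgroup add --portgroup-name=Lab-VLAN10 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=Lab-VLAN10 --vlan-id=10
```

Note the unmanaged 1410-8G can’t participate in VLANs itself – hence it being first in line for an upgrade.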

3) Storage.

I went a bit mad on the storage specification for the lab, going for capacity and decent performance. In the end, I went for:

  • QNAP TS-459 Pro 2 (about £650). This is a good but expensive choice, and ultimately worth the investment. Highlights of the spec include official VMware HCL support, multiple 1Gb NICs, multipath support, iSCSI support, replication and a host of other features. I installed 2 x 3TB Seagate Barracuda SATA 7.2k and 2 x 2TB Seagate Barracuda SATA 7.2k drives for a total raw capacity of 10TB. Drives were a maximum of £100 each, but fluctuate wildly in price, so if you are looking for drives, go with what’s best and cheapest at the time! (More on this in a lab storage post.)
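
To give a feel for how the ESXi hosts will consume this, here’s a hedged sketch of pointing the ESXi 5.x software iSCSI initiator at the QNAP. The adapter name vmhba33 and the address 192.168.0.50 are assumptions for illustration – yours will differ:

```
# Enable the software iSCSI initiator and point it at the NAS target portal
esxcli iscsi software set --enabled=true
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.0.50:3260
esxcli storage core adapter rescan --adapter=vmhba33
```

After the rescan, any iSCSI LUNs presented by the QNAP should appear as storage devices ready to be formatted as VMFS datastores.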

4) Accessories.

Just having a computer, switch and a NAS is never the full story when putting it all together. To complete the hardware section, here is a list of ‘bits’ I also purchased or used for the lab:

  • Eizo CE210W monitor. One I already had and am re-using, but a good one because it has multiple DVI inputs.
  • A USB hub, to connect multiple devices to the PC via the desk (I hate trailing wires).
  • A DVI switch (because the monitor doesn’t have enough DVI inputs!).
  • Multiple DVI-VGA cables (a couple of pounds each from a popular auction site).
  • Cat-5 patch cables. DON’T go to PC World to buy these (you don’t need to pay £8.99 or something for a single 1m patch cable) – get them online and cheap instead.
  • Spare keyboard and mouse. If you don’t have a decent KVM, having a spare keyboard and mouse means you can quickly move around physical kit without much grief. Make sure it’s USB though…
  • Freecom 1TB USB external drive. Useful for one touch backups, and for moving big stuff between hardware – especially useful during set-up.

So, there you have it – a quick run down of the hardware I’ll be using in the new vSpecialistLabsv2 set-up. As I go on, I’ll add more posts on the configuration of all this.

Any questions, please ask!

Jeremy loves all things technology! Has been in IT for years, loves Macs (but doesn't preach to others about their virtues), loves virtualization (and does shout about its virtues), and sometimes skis, bikes and directs amateur plays!

Comments

  1. Calypso Craig says

    Very jealous of your setup! I just bought the N40L last week and when my fat tax return arrives, I’ll purchase a second.

    I am curious though as to why you opted for a mix of Intel and AMD procs?

  2. jeremyjbowman says

    Hi there,

    To be honest, it was mostly a decision around cost – I took what I could get at the time at the best price. It also dovetails nicely with some design principles though, i.e. separating the management tier from the compute / production tier. As most of my work is around cloud, what I’m planning is separate clusters: one managing the cloud tier and the other providing the compute resource for any tenants. In this model, it doesn’t matter too much what’s underneath the cloud compute as it’s abstracted anyway. The other benefit of having the powerful workstation is I can virtualise ESXi and nest VMs within Workstation. VMware Inception anyone?

    Cheers.
    Jeremy.

  3. Craig M says

    Cool cool. I have a laptop that is similar in spec (2 x quad i7, 16GB) and use it for the wonderful nested ESXi, but I was reluctant to go for an AMD because I thought I needed like-for-like CPUs for SRM testing (just realised I was wrong) and to cluster my nested ESXi and N40L. But that’s just a minor inconvenience compared to the awesomeness the N40L provides :)

    Though, if only it did pass-through!
