Homelab: Compute

A little while ago, I wrote up some goals for the Homelab. The idea behind these goals was not so much to build a lab that compares in power to a normal DC, but to build something that can match the way one is configured. This is for my own continued education as well as for use in demos of products to customers.

So let’s take a look at one of the areas mentioned in more specific terms: compute. In my Goals post, I listed the following items as goals to look for. I must admit, I already had a product family in mind by the time I made these goals, just not a specific model chosen. Nevertheless, let’s revisit that list.

Compute:

  • 1U rack height
  • 4 or more Cores
  • Dual NICs
  • 1-2 PCIe slots
  • Greater than 32GB RAM Max
  • IPMI (Dedicated or Shared NIC)


What Matters Most?:

In the case of my lab, it matters more to be able to match a configuration than to have the most powerful one ever. As an engineer at heart and a Pre-Sales Systems Engineer, it’s important to work through configurations that accurately reflect how customers deploy their infrastructure, and more specifically, how customers deploy ESXi. Everything from the VMkernel IPs to advanced settings on processor performance.

Now, as most people would point out, I could have done this in a nested environment. Nested is a great idea when you are studying for a test or wanting to do functional testing, but long term it has its limits. For me, nested was too underpowered, and giant dual-socket, 6-8 core servers were too power hungry! As a side note, this was going to sit in my office, so noise also needed to be taken into consideration quite a bit.
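To make the config-matching point concrete, here is a minimal sketch (not my actual lab tooling) of pulling each host’s VMkernel adapters and IPs with pyVmomi so they can be compared against what a customer runs. The vCenter address and credentials are placeholders, not my lab’s.

```python
# Minimal sketch: list each ESXi host's VMkernel adapters and IPs via pyVmomi.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # homelab hosts use self-signed certs
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
content = si.RetrieveContent()

# Walk every HostSystem in the inventory and print its vmk interfaces.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vnic in host.config.network.vnic:
        print(host.name, vnic.device, vnic.spec.ip.ipAddress)

Disconnect(si)
```

The same session can be reused to inspect other per-host settings, which is where configuration drift between lab and customer environments tends to show up.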


Core Count:

I decided that a 4-core machine with a single socket was plenty powerful. Take a look at all of the Intel NUC blog posts and you’ll see that they have what it takes in CPU power, but are a little light on RAM. Spoiler alert: the only time I’ve hit almost 100% CPU was during the vRA deployment. Even now, I have one box at 40% and two at 20%. Four cores are more than enough, even without right-sizing the VMs down.


Network Connectivity:

As most labs go, 1G networking is sufficient. It’s hard to saturate a 1G switch in a homelab by most people’s standards. When I started looking at hardware, the prices of 10G networking put me off. At the time, a 10G network switch would have set me back about $1,500, and those only had 8-12 ports. Though the cost has come down since then, it was insane. My plan was to buy hosts with PCIe slots available for 10G cards later. My major concern was making sure that any server I purchased had two physical NICs onboard. I’ll hand it to William Lam for building that into the NUCs, but this needed to be clean, built in, and ready to go with minimal firmware work.

Around the same time I was looking, Supermicro had just announced their X10SDV line of motherboards. These are embedded Xeon boards with 10G Ethernet or SFP+ built in. Let me repeat that really quick… 10G networking built into the motherboard! That was potentially going to save me about $200 down the line, or in my case, cost me about $200 more now instead of later if I bought them. This played into my goal of upgradability: I could start with 1G network switches now and only need to swap the switches and cables later to upgrade. Tempting. It wasn’t necessary, but it would definitely get me as close to a “real world server config” as possible.


Memory:

The downside to the Intel NUCs is the 32GB RAM limit. RAM will always be the bottleneck in the DC, and it’s no different in the homelab. I needed to make sure that any server I run could handle more than 32GB. It’s a common complaint among the vExperts and one I wanted to avoid. When I started, I really liked the Shuttle PCs. Their form factor has been well known for years, and they have made some advances in max RAM capacity. Most of the models with higher limits also had dual NICs, so that was a plus. When the Supermicro boards came out, though, they blew the competition out of the water: 128GB max RAM capacity across 4 DIMM slots. The downside there is cost; 32GB DIMMs at the time ran about $250 per DIMM, ouch! Either way, I still needed to make sure I wasn’t limited.


OOB Management:

In a previous role, I used both iDRAC and IPMI. I was leaning towards IPMI, only because to get something with iDRAC, I would have had to sacrifice on noise levels and power consumption. That wasn’t going to happen, and there are plenty of boards out there with IPMI now. An added bonus of going with IPMI was all of the open source central management solutions out there. In the past I’ve used xCAT, developed by IBM/Lenovo engineers and made available as open source to their server users. It gave me a CLI for managing a whole datacenter’s worth of hardware and uploading firmware to hosts from a single point of management. Ideally, I would want to do the same here. Whether IPMI shares a network port or the board has a dedicated port didn’t matter. What mattered was getting ISOs to hosts over the network without using any KVM equipment.
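As a rough illustration (not the exact xCAT workflow), the sketch below drives ipmitool from Python to check power state and point the next boot at the network for a group of nodes; the BMC hostnames and credentials are placeholders. Actually attaching an ISO as virtual media goes through Supermicro’s own IPMI web interface and tools, which I’m not showing here.

```python
# Rough sketch of scripted out-of-band control over IPMI using ipmitool.
# BMC hostnames and credentials are placeholders, not my lab's.
import subprocess

NODES = ["esx01-ipmi.lab.local", "esx02-ipmi.lab.local", "esx03-ipmi.lab.local"]

def ipmi(node, *args):
    """Run one ipmitool command against a BMC over the network (lanplus)."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", node,
           "-U", "ADMIN", "-P", "ADMIN"] + list(args)
    return subprocess.check_output(cmd).decode().strip()

for node in NODES:
    # Report power state, then force the next boot to PXE so an installer
    # can be reached over the network instead of through a crash cart.
    print(node, ipmi(node, "chassis", "power", "status"))
    ipmi(node, "chassis", "bootdev", "pxe")
```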


End Result?:

When I started looking into what I would use for compute, the X10SDVs were still just a marketing promise. While I was working on a couple of ideas for whitebox configs, Supermicro took them from marketing to production. For a short time, I considered an open-air deployment, something like the Ikea Helmer render farm, or something closer to how some larger DCs place bare motherboards on rack trays (an approach that seemed particularly interesting to me).

In the end, I went with the SYS-5018D-FN8T (built on the X10SDV-TP8F motherboard). I chose it specifically for:

  • 35W TDP
    • This plays into low power and how quiet the server runs.
  • 6x Dedicated 1G NICs
    • Useful if you want to start with multiple 1G NICs and put off 10G networking as long as possible.
  • 2x 10G SFP+ ports
    • BOOM! Favorite reason #1
  • 128GB RAM Max
    • Favorite reason #2
  • Small form factor
    • Comes in the same 10″ deep server chassis that I was already looking at for whitebox configs.
  • PCIe Expandability
    • This is more for future use; I’m looking at a Supermicro storage controller on the HCL.
  • Dedicated IPMI
    • Also, Supermicro has central management tools in abundance, depending on how you want to access them and what you want to manage.


The server is a little more expensive than I initially wanted to spend, but it meets my power and noise requirements. Noise and power were huge concerns for me when originally looking at used servers on eBay. I also had to consider adding a new electrical line to the office; since the main breaker panel was full from when the house was built, it would have to be run from the main box outside. So the SYS-5018D-FN8T’s low power draw really worked for me, while still providing the upgradability I knew I would need. I started with two servers initially, with 64GB of RAM each (2x 32GB DIMMs, with an internal discount for Kingston), then held out a little longer for a third node for HA and potentially vSAN later on. All in all, I love these servers. They are doing a great job and holding up really well.

When building a homelab, just remember to think about what you truly want out of it. I chose low power and quiet cooling over pure functional testing, especially since I use this lab to show customers. Depending on your personal “business value”, that could be different.


As a side note:

I attempted to purchase all of the same parts that go into the 5018D-FN8T, and it’s not worth the time and effort to assemble it yourself for the small savings. Buy the complete system, and only add the active cooling fan if you think you will put a little extra stress on the machines.
