Recently I decided to throw together a one-stop shop for managing frequently performed actions in my homelab. The goal was to let me create VMs, test software, tear them down, and rebuild faster. This is where I started with LabMan - HomeLab Manager. It is very alpha. Very, very alpha. It consists of two components: a Rails application that handles background jobs and the web UI, and an agent written in Go that runs on the servers, checks them in, and reports some general info such as installed packages.
HAProxy is my homelab loadbalancer of choice due to its versatility and general ease of configuration. Whether it's HTTP or just plain TCP traffic I want to land within my lab, a few tweaks to an HAProxy config is all it takes. However, as I deploy more and more random services that I want available from the internet, having to remote into my various HAProxy ingress servers becomes a pain. And since I like to keep isolated HAProxy instances depending on what I'm doing, remoting into boxes to make changes becomes even more tiresome.
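For context, the kind of tweak involved is usually just a frontend/backend pair. The sketch below shows both modes mentioned above: HTTP routing by Host header and a raw TCP pass-through. All hostnames, ports, and addresses are made up for illustration, not my actual config.

```
# Hypothetical HTTP frontend: route by Host header to different backends.
frontend http_in
    bind *:80
    mode http
    acl is_git hdr(host) -i git.lab.example
    use_backend git_servers if is_git
    default_backend web_servers

backend web_servers
    mode http
    server web1 10.0.10.21:8080 check

backend git_servers
    mode http
    server git1 10.0.10.31:3000 check

# Hypothetical plain-TCP frontend: no HTTP parsing, just forward the stream.
frontend ssh_in
    bind *:2222
    mode tcp
    default_backend ssh_servers

backend ssh_servers
    mode tcp
    server ssh1 10.0.10.41:22 check
```

Simple enough one box at a time; the pain is repeating edits like this across several isolated instances, which is exactly what I want to manage centrally.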
From nothing to OpenShift in a bunch of steps!
Complete with vCenter 6.5 support
Current network diagram

You can view the current HomeLab diagram here:

For those who have never seen this before: yes, I did run fiber out to my shed. I live in Minnesota, and last winter I took advantage of the ridiculously cold winters we have and housed some servers out there.

Major changes

Here is a list of big changes made in the current revision:

- Dedicated pfSense firewall was replaced with a virtual Cisco ASAv firewall
- Replacing the firewall means I'm connecting my cable modem to my Cisco 3750 core switch
- Also replaced the virtual pfSense firewall on my dedicated server in Atlanta with a pair of active/standby ASAv firewalls
- Upgraded host-to-storage networking to 10Gb, which included adding 3x 10Gb interfaces to my FreeNAS box
- Moved from NFS to iSCSI for the VMware datastore share in order to achieve a network configuration where I can direct-connect ESXi hosts to FreeNAS while still having a single common datastore (with working vMotion, DRS, etc.)
- Added an HP DL380 G7 to the lab (2x Xeon L5630, 96GB RAM, 4x 300GB 10k drives)
- Started migrating hosted projects from Kubernetes to OpenShift Origin

Wish list

Of course these additions sparked some additions to my wish list:
Perhaps the most frustrating thing about vCenter 6 is the hard 8GB RAM requirement just to install the application on a Windows server. For a lot of people, myself included, carving out an 8GB VM in a home lab just to run vCenter for a host or two is a big ask. But beyond that, even if you're using a physical computer with exactly 8GB of installed RAM, if any of that RAM goes to an integrated video card, and thus doesn't appear available to Windows, the vCenter installer will refuse to run because your system reports slightly less than a full 8GB of RAM.