Numerous blogs already detail how to install ESX, vCenter, and so on. I don’t need to add one more to the ever-growing list of community experts, nor do I want to invalidate any of them. Besides, I use a number of those blogs for my own purposes, in addition to using them in my day job.
You will see, though, that my home lab build deviates from the normal infrastructure reference architecture you would likely see in your own environment. I try to incorporate architectural best practices where I can; however, the first layer of this design has quite a few caveats.
The only thing constant is change. Change is the backbone of any IT organization. New widgets, software, and hardware seem to come out daily, and our job as IT professionals is to stay aware of these new products. While we try to stay cutting-edge and ahead of all this change, we always seem to fall behind at some point. What we ought to do, though, is not fall so far behind that we lose sight of the pack; otherwise, we become obsolete and expendable.
Recently, I went to a vCloud Director 9.x Design Workshop. Yes, my friends, vCloud Director is not DEAD. While the software is aimed primarily at service providers, it is still a mighty tool that gives many IT groups the ability to rapidly deploy internal, isolated “pods”. This training got me thinking: why am I not using vCD in my lab?
That’s why, once again, I am updating my homelab. Over the last few years, I’ve torn down and rebuilt my lab numerous times, and each rebuild has taken weeks or even months just to get back to a state where I could test something. This time around, I’m going to explore rebuilding my lab around vCloud Director 9.x.
Over the years, I have gone from a full 42U rack of Dell PowerEdge servers that consumed massive amounts of power, cooling, and my personal manpower to maintain. This hurt my wallet (as well as my time) a lot, which also caused numerous problems with finance (aka: the wife). A while ago, I replaced the Dell PowerEdge servers with a Supermicro Super Server, and it has been working out great for me. In fact, this past year I made a few hardware modifications to the lab. I ran out of space and had to upgrade the hard drives in my Synology box from five 2TB drives to five 3TB drives. To expand the lab’s capabilities, I also acquired additional hardware: a new Intel NUC was added as a payload target, and another Supermicro Super Server arrived at the end of the year (Merry Christmas, right?).
Future blog posts will detail my rebuild journey; I fully intend to share what I learn.
C.R.I.B. stands for Computer Room In a Box. It is the name I have given my homelab. I’ve used my C.R.I.B. to educate myself, experiment with new products, and run demos for my customers.
As you’ve read in previous posts, my homelab has evolved over the years. Currently, I run one physical Supermicro Super Server, which I am going to call pESX, attached to a Synology DS1512+ array and connected to a gigabit switch. Many of my friends and co-workers have asked how I run everything on one physical box. I’ve tried to explain it and draw it out on a whiteboard, but unless you’re already familiar with nested virtualization, the explanation gets confusing until you’ve seen it drawn out.
I use nested virtualization to expand the capabilities of the C.R.I.B. without additional physical hardware. If you are unfamiliar with it, nested virtualization is the ability to run ESX as a virtual machine, which I will call vESX. (William Lam has written numerous articles on how to do it; just Google “William Lam” and “nested virtualization” if you want more info.) The entire C.R.I.B. is accessible from my home network, which is a lifesaver, as I do not have to work in my office; I can access it over VPN or from the couch.
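For anyone curious how the vESX guests are possible at all: the key is exposing hardware-assisted virtualization (VT-x/AMD-V) to the guest. As a minimal sketch, the relevant lines in a vESX VM’s .vmx file look something like the following; the guestOS value shown is what I’d expect for an ESXi 6.x guest, so check William Lam’s articles for the specifics of your vSphere version.

```
# Pass hardware-assisted virtualization (VT-x/AMD-V) through to the guest
vhv.enable = "TRUE"
# Identify the guest as ESXi (value shown assumes a 6.x guest)
guestOS = "vmkernel6"
```

The same vhv setting is exposed in the vSphere Web Client as the “Expose hardware assisted virtualization to the guest OS” checkbox, so you don’t have to hand-edit the file.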
A pfSense virtual machine (GATEWAY) acts as the firewall and router for the entire virtualized environment, including the nested layer. The pESX host sits on the base network, with a vmkernel port attached to the pfSense VM’s network to allow manipulation of the firewall rules and network configuration. All traffic in and out of the virtual environment passes through the pfSense VM, whose firewall provides both isolation and controlled communication between the various networks.
All of the infrastructure virtual machines sit on the first virtualization layer, which is considered the “Management Cluster”. However, this cluster is made up of only the one physical ESX host (pESX). Normally, we would want multiple hosts for HA redundancy, but this is a lab, and I’m on a budget. The vESX virtual machines sit in this layer as well and have a direct connection to the base network for access to the iSCSI storage array; they make up the ESX compute resources of the “Payload Cluster”. These two clusters, the Management Cluster and the Payload Cluster, reflect a VMware architectural design best practice. The infrastructure is made up of your basic VMs: a Domain Controller (DC), a SQL Server (SQL), the Management vCenter (vCSA6), a Log Insight server (LOG), and a VMware Data Protection VM (VDP) for backups. In addition to these, the vRealize Automation (vRA) VMs and the Payload vCenter (PAYVC01) also sit in the management cluster. This self-service portal deploys to the vCenter (PAYVC01) endpoint controlling the compute resources of the Payload Cluster.
The Payload Cluster is made up of three virtual ESX hosts (vESX) and provides the various resources (network, CPU, RAM, and storage) for consumption by vRA or other platform products. An Ultimate Deployment Appliance (UDA) VM provides the ability to deploy scripted ESX images, so I can quickly rebuild the hosts if needed.
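For context on what “scripted ESX images” means here: UDA PXE-boots the host and feeds it an ESXi kickstart file. A minimal kickstart sketch is below; the password, IP addresses, and hostname are illustrative placeholders, not my actual lab config.

```
vmaccepteula
install --firstdisk --overwritevmfs
# placeholder root password
rootpw ChangeMe123!
# placeholder static network config for one vESX host
network --bootproto=static --ip=192.168.1.21 --netmask=255.255.255.0 --gateway=192.168.1.1 --nameserver=192.168.1.10 --hostname=vesx01.lab.local
reboot
```

With one of these per vESX host, a rebuild is just a PXE boot away instead of an afternoon of clicking through the installer.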
This is just the base. I am in the process of deploying NSX into the environment to enable multi-machine blueprints within vRA. In addition, I intend to explore SRM integration with vRA.
Just this week, VMware released vRealize Automation 7.0.1 (vRA). It contains many bug fixes and some enhancements to the vRA platform. I was excited for it to come out and was anxious to perform an upgrade in my home lab.
I advise caution and planning in any upgrade of your environment, but I would stress the planning most heavily. Know your dependencies before you attempt an upgrade, and always, ALWAYS, read the release notes before you start the upgrade process.
The following process is for a simple vRA instance: the Proof of Concept build, sometimes referred to as a “Lab” or “Sandbox” build. However, these steps can be adapted for a fully distributed vRA instance.
Here is how I upgraded my lab.
1) Take snapshots of the vRA Cafe Appliance, IaaS VM, and SQL VM.
2) Shut down the vRA services
SSH into the vRA Cafe Appliance and stop the vcac-server, apache2, rabbitmq-server, and vco-server services.
Run the below commands to stop the above listed services:
#service vcac-server stop
#service apache2 stop
#service rabbitmq-server stop
#service vco-server stop
You can check that a service has stopped using the status subcommand: #service vco-server status
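If you find yourself doing this often, the four stop commands can be wrapped in a small loop. This is just a sketch assuming the stock SysV service wrapper on the appliance; the SVC_CMD variable defaults to echo here so you can preview the sequence safely, and you would set SVC_CMD=service on the appliance itself to actually stop them.

```shell
# Preview-by-default: prints the stop commands in order.
# On the vRA appliance, run with SVC_CMD=service to execute them for real.
SVC_CMD="${SVC_CMD:-echo service}"
for svc in vcac-server apache2 rabbitmq-server vco-server; do
    $SVC_CMD "$svc" stop
done
```

Swapping stop for status in the loop gives you a quick way to confirm everything is down before moving on to the IaaS side.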
Log into the IaaS Virtual Machine and stop the below listed vRA services.
Woot! Ok, that was cheesy, I know. However, my home lab has undergone significant improvement, redirection, and an overall evolution. And that calls for celebration! So Woot it up!
Over the last two years, I went from individual rack-mounted Dell PowerEdge 1950s and Synology storage, to a four-blade Dell C6100 Cloud Server chassis, to rented colo space from OVH, and now to the all-new mobile lab. You are probably asking yourself, “why all the changes?” Well, it started with a need to be more energy efficient, then came a random act of God (a lightning strike), and eventually it came down to money.
While the Dell Cloud Server averaged approximately 700 watts fully loaded, keeping my electric bill down, my new mobile lab will average less than 120 watts. (I’m not exactly sure how much lower, as I haven’t put the meter on it yet.) OVH was a very good colo, especially for the price; I just don’t want to pay them rent any longer. The Synology will remain the in-home media server and backup location for the family and is no longer used by the lab.
It doesn’t have to be. I have a VPN to my home, so I can connect to the lab remotely. Plus, the new server supports remote control, so I can power it on and off if needed without calling on the “break/fix” wife. But given the size of the server, I can also take this bad boy on the road. The downsides to that are TSA, theft, and potential travel damage. The upside is being able to plug it in and demo on the spot. In addition, when a power or internet outage at home would prevent me from connecting from a hotel or a customer’s workplace, I can still work in the lab without that dependency.
What makes up this mobile lab?
Over the last year, several co-workers and I have been discussing building small labs. We had several ideas for what we wanted to use, but we were also very particular about what we wanted. We pinned down our requirements and started searching. One friend decided to run an Intel NUC, another is utilizing a Gigabit server, and one more suggested the hardware I am using now.
Our requirements (or I should say, my requirements) were the following:
It had to have a small footprint.
It had to be energy efficient, low power, and put out little to no heat and noise.
It had to be able to run 64GB of RAM or more.
It had to be inexpensive.
With that list, you would think, “oh, an Intel NUC or a Mac Mini.” But neither of those can use more than 16GB of RAM; once you install vCenter, a nested ESX host, and a couple of VMs, you are out of memory. In addition, the NUC is the only one of the two that is inexpensive, and you would have to purchase several of them to do anything worthwhile in a lab environment.
This brings us to the setup of the mobile lab. All of my storage was salvaged from equipment I had lying around from earlier versions of my lab.
The unit is tiny compared to many other options with the same feature set. I can pack it into my backpack along with the power supply, a couple of Ethernet cables, and a small five-port switch, and still have room for a bag of Reese’s Pieces. 🙂
The mobile lab is based on a SuperMicro Super Server Mini-ITX motherboard, which is the foundation of the entire lab. On this base runs a complete nested vSphere 6 environment with almost a full VMware SDDC stack (I’m not running NSX).
As time allows, I will detail the buildout of this lab in the future.