Building the lab 1: Starting from scratch

…..again.

Numerous blogs exist that detail how to install ESX, vCenter, etc. I don’t need to add one more to the ever-growing list from community experts, nor do I want to invalidate any of them. Besides, I use a bunch of these blogs for my own purposes, in addition to using them in my day job.

You will see, though, that my home lab build deviates from the standard infrastructure architecture reference you would likely see within your own environment. I try to incorporate architectural best practices where I can; however, the first layer of this design has quite a few caveats.

Change…

The only thing constant is change. Change is the backbone of any IT organization. New widgets, software, and hardware seem to come out daily, and our job as IT professionals is to stay aware of these new products. However, while we try to stay cutting-edge and ahead of all this change, we always seem to fall behind at some point. What we ought to do is avoid falling so far behind that we lose sight of the pack, because that is when we become obsolete and expendable.

Recently, I went to a vCloud Director 9.x Design Workshop. Yes, my friends: vCloud Director is not DEAD. While the software is aimed primarily at service providers, it is still a mighty tool that gives many IT groups the ability to rapidly deploy internal, isolated “pods”. This training got me thinking: why am I not using vCD in my lab?

That’s why, once again, I am updating my homelab. Over the last few years, I’ve torn down and rebuilt my lab numerous times, and each rebuild has taken weeks or months just to get back to a point where I could test something. Most of that time is simply wasted. This time around, I’m going to explore rebuilding my lab around vCloud Director 9.x.

Home Lab

Over the years, my lab has evolved from a full 42U rack of Dell PowerEdge servers that consumed massive amounts of power, cooling, and my personal manpower to maintain. That hurt my wallet (as well as my time) a lot, which also caused numerous problems with finance (aka the wife). A while ago, I replaced the Dell PowerEdge servers with a Supermicro Super Server, and it has been working out great for me. As a matter of fact, this past year I made a few hardware modifications to the lab: I ran out of space and had to upgrade the hard drives in my Synology box from (5) 2TB drives to (5) 3TB drives, and to expand the lab’s capabilities I added a new Intel NUC as a payload target and picked up another Supermicro Super Server at the end of the year (Merry Christmas, right?).

Further blog posts will detail my rebuild journey. I fully intend to share what I learn.

C.R.I.B. The Logical Layout of My HomeLab

C.R.I.B. stands for Computer Room In a Box, and it is the name I have given my homelab. I’ve used my C.R.I.B. to educate myself, experiment with things, and demo products to my customers.

As you’ve read in previous posts, my homelab has evolved over the years. Currently, I run one physical Supermicro Super Server, which I will call pESX, attached to a Synology DS1512+ array over a gigabit switch. Many of my friends and co-workers have asked how I run everything on one physical box. I’ve tried to explain it and draw it out on a whiteboard, but unless you’re already familiar with nested virtualization, the explanation gets confusing until you’ve seen it drawn out.

I use nested virtualization to expand the capabilities of the C.R.I.B. without additional physical hardware. If you are unfamiliar with nested virtualization, it is the ability to run ESX as a virtual machine, which I will call vESX. (William Lam has written numerous articles on how to do it; just google “William Lam” and “nested virtualization” if you want more info.) The entire CRIB is accessible from my home network, which is a lifesaver because I do not have to work in my office: I can reach it over VPN or from the couch.
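If you want to try nested virtualization yourself, there are really only two host-side tweaks that matter: expose hardware-assisted virtualization to the vESX guest, and relax the security policy on the physical vSwitch so traffic from the nested VMs can pass. Here is a minimal sketch run over SSH on the physical host; the datastore, VM, and vSwitch names are placeholders from my lab, and both settings can also be made from the vSphere client.

      # 1) With the vESX VM powered off, expose hardware virtualization to it
      #    (datastore and VM names below are placeholders).
      echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/vESX01/vESX01.vmx

      # 2) Allow nested VM traffic on the physical host's standard vSwitch.
      esxcli network vswitch standard policy security set -v vSwitch0 --allow-promiscuous true --allow-forged-transmits true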

Computer Room In a Box
A pfSense virtual machine (GATEWAY) acts as the firewall and router for the entire virtualized environment, including the nested layer. The pESX host sits on the base network with a VMkernel port connected to the pfSense VM’s network, which lets me manage the firewall rules and network configuration. All traffic in and out of the virtual environment passes through the pfSense VM, and its firewall provides both isolation and controlled communication between the various networks.

All of the infrastructure virtual machines sit on the first virtualization layer, which is considered the “Management Cluster”. However, this cluster is made up of only the one physical ESX host (pESX); normally you would want multiple hosts for HA redundancy, but this is a lab and I’m on a budget. The vESX virtual machines sit in this layer as well, with a direct connection to the base network for access to the iSCSI storage array, and they make up the ESX compute resources of the “Payload Cluster”. Having separate Management and Payload clusters is a VMware architectural design best practice.

The infrastructure is made up of the basic VMs: a Domain Controller (DC), a SQL Server (SQL), the Management vCenter (vCSA6), a Log Insight server (LOG), and a VMware Data Protection VM (VDP) for backups. In addition to these VMs, the vRealize Automation (vRA) VMs and the Payload vCenter (PAYVC01) also sit in the Management Cluster. The vRA self-service portal deploys to the PAYVC01 endpoint, which controls the compute resources of the Payload Cluster.

The Payload Cluster is made up of the three virtual ESX hosts (vESX) and provides the resources (network, CPU, RAM, and storage) consumed by vRA and other platform products. An Ultimate Deployment Appliance (UDA) VM provides scripted ESX image deployment, which lets me quickly rebuild the hosts if needed.
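For context, the UDA boots the vESX hosts over PXE and hands them a kickstart script, so a host rebuild is mostly hands-off. Below is a hedged sketch of the kind of minimal ks.cfg it serves; the root password and NIC name are placeholders rather than my real values.

      # Sketch only: write a minimal ESXi kickstart file of the sort the UDA serves.
      # The root password and NIC name are placeholders.
      printf '%s\n' \
        'vmaccepteula' \
        'install --firstdisk --overwritevmfs' \
        'rootpw ChangeMe123!' \
        'network --bootproto=dhcp --device=vmnic0' \
        'reboot' > ks.cfg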

This is just the base. I am in the process of deploying NSX into the environment to provide the ability to deploy multi-machine blueprints within vRA. In addition, I intend to explore SRM integration with vRA.

How to upgrade a simple vRA 7.0 instance to vRA 7.0.1

Just this week, VMware released vRealize Automation 7.0.1 (vRA). It contains many bug fixes and some enhancements to the vRA platform. I was excited for it to come out and was anxious to perform an upgrade in my home lab.

I advise caution and planning for any upgrade of your environment, and I would stress the planning heavily. Know your dependencies before you attempt an upgrade, and always, ALWAYS, read the release notes before you start the upgrade process.

The following process is for a simple vRA instance. This is the Proof Of Concept build, sometimes referred to as a “Lab” or “Sandbox” build. However, these steps can be modified for a fully distributed vRA instance.

Here is how I upgraded my lab.

1) Take snapshots of the vRA Cafe Appliance, IaaS VM, and SQL VM.
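If you prefer to script the snapshots, govc (the govmomi CLI) can take them from a shell. This is only a hedged sketch: the VM names below are placeholders for my lab, and it assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD already point at your vCenter (plus GOVC_INSECURE=1 if you use self-signed certificates).

      # Sketch only: snapshot the three VMs before touching anything.
      # VM names (vra-cafe, iaas-01, sql) are placeholders -- use your own.
      for vm in vra-cafe iaas-01 sql; do
        govc snapshot.create -vm "$vm" "pre-vRA-7.0.1-upgrade"
      done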

2) Shut down the vRA services
     SSH into the vRA Cafe Appliance and shut down the vco-server, vcac-server, apache2, and rabbitmq-server services.

  1. Run the following commands to stop the services listed above:
    • #service vcac-server stop
    • #service apache2 stop
    • #service rabbitmq-server stop
    • #service vco-server stop


    You can check that each service has stopped using the status syntax, e.g. #service vco-server status. A scripted stop-and-verify sketch follows this list.

  2. Log into the IaaS Virtual Machine and stop the vRA services listed below.
    • All VMware vCloud Automation Center agents
    • All VMware DEM Workers
    • VMware DEM Orchestrator
    • VMware vCloud Automation Center Manager Service
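If you would rather not run the appliance-side stop commands from sub-step 1 one at a time, the same sequence can be scripted in the SSH session. A minimal sketch, using the same services in the same order listed above:

      # Stop the four appliance services in order, then confirm each is down.
      for svc in vcac-server apache2 rabbitmq-server vco-server; do
        service "$svc" stop
        service "$svc" status
      done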


3) Download the vRealize Automation Appliance 7.0.1 Update Repository Archive ISO.

4) Upload the ISO to a datastore and mount it to the vRA Cafe Appliance’s CD-ROM drive.

VM Settings
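Step 4 can also be scripted with govc if you have it installed. Treat this as a rough sketch under placeholder names: the datastore (datastore1), VM name (vra-cafe), CD-ROM device name (cdrom-3000), and ISO filename are all assumptions, and the exact flags may differ slightly between govc versions (run govc device.ls -vm vra-cafe to find the actual CD-ROM device).

      # Sketch only: upload the update-repo ISO and attach it to the appliance.
      # All names below are placeholders from my lab.
      govc datastore.upload -ds datastore1 vra-7.0.1-updaterepo.iso iso/vra-7.0.1-updaterepo.iso
      govc device.cdrom.insert -vm vra-cafe -device cdrom-3000 -ds datastore1 iso/vra-7.0.1-updaterepo.iso
      govc device.connect -vm vra-cafe cdrom-3000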
5) Open a browser and log into the vRA Cafe Appliance VAMI. Then navigate to the “Update” tab and select “Settings”.

6) Change the Update Repository to “Use CDRom Updates”. Click on “Save Settings”.

Use CDRom Updates

7) Select the “Status Tab”.

8) Click on “Check For Updates”.

Check For Updates

9) An update should be found (as shown in the photo above). Click on “Install Updates”.

10) Wait for the update to complete. This took approx 30 minutes for my lab.

Install Updates

11) Once the updates complete, you will be notified to reboot the vRA Cafe Appliance.
Reboot Notice

12) Once the vRA Cafe Appliance has completed the reboot, log back into the vRA VAMI and verify the version.
Updated Version

This completes the vRA Cafe Appliance upgrade. Now it is time to focus on the IaaS Server.

13) Open a console or RDP session into the IaaS Server and log into the machine with the vRA Administrator Service Account.

14) Open a web browser and browse to the vRA Cafe Installer page. “https://[vRA Appliance FQDN]:5480/installer”

15) Download the “DBUPGRADE SCRIPTS”.

16) Verify the Java path in the environment variables (the JAVA_HOME variable should point to a valid Java installation).

Java Path

17) Open File Explorer, browse to the folder where you downloaded the DBUpgrade.zip scripts file, and extract it.

18) Open an elevated Command Prompt.

19) Change the directory to the location of the DBUpgrade Extraction Folder.

20) NOTE: Verify that the vRA Administrator Service Account has the SQL sysadmin role enabled.

21) Run the following command to update the SQL Database:
      # dbupgrade -S sql.dwarf.lab -d vra -E -upgrade

Replace sql.dwarf.lab with the FQDN of your SQL server.

DBUpgrade Script

The process may take a few minutes to complete.

22) Return to the vRA Cafe Installer page. “https://[vRA Appliance FQDN]:5480/installer”. Download “IaaS_Setup”.

23) Browse to the downloaded file in File Explorer, right-click the file, and select “Run as Administrator”.

vRA 7.0.1 IaaS Installation – 1

vRA 7.0.1 IaaS Installation – 2

vRA 7.0.1 IaaS Installation – 3

24) Select “Upgrade”.
vRA 7.0.1 IaaS Installation – 4

vRA 7.0.1 IaaS Installation – 5

25) Fill in the Blanks.
vRA 7.0.1 IaaS Installation – 6

NOTE: For the SQL connection, if you are not using SSL, uncheck the “Use SSL for Database Connection” option; otherwise you will experience the following error.
vRA 7.0.1 IaaS Installation – 7

26) For my lab, I had to remove the SSL connection between the IaaS Server and the SQL Database Server.
vRA 7.0.1 IaaS Installation – 8

vRA 7.0.1 IaaS Installation – 9

vRA 7.0.1 IaaS Installation – 10

vRA 7.0.1 IaaS Installation – 11

27) The upgrade installation will take some time to complete; I recommend grabbing a drink. The process took approx 30 minutes to complete for me.
vRA 7.0.1 IaaS Installation – 12

28) The upgrade finishes.
vRA 7.0.1 IaaS Installation – 13

29) Click finish and reboot the IaaS Server.

30) When the server comes back online, log back in and verify that all vRA services have restarted.

vRA Services

31) Log back into the vRA Cafe Appliance and check that all services have returned to “Registered”.
Cafe Services

If everything completed without any issues, then you have successfully upgraded vRA from 7.0 to 7.0.1. Go log into your portal and check it out!

vRA Portal Login

Project: Home Lab v4.0 – The Home Lab goes MOBILE!!

Woot! Ok, that was cheesy, I know. However, my home lab has undergone significant improvement, redirection, and an overall evolution. And that calls for celebration! So Woot it up!

SuperMicro Super Server X10SDV-TLN4F
Over the last two years, I went from individual rack-mounted Dell PowerEdge 1950s with Synology storage, to a four-blade Dell C6100 Cloud Server chassis, to rented colo space from OVH, and now to the all-new mobile lab. You are probably asking yourself, “why all the changes?” Well, it started with a need to be more energy efficient, continued with a random act of God (a lightning strike), and eventually came down to money.

While the Dell Cloud Server averaged approx 700 watts fully loaded, the new mobile lab will average less than 120 watts, which keeps my electric bill down. (I’m not exactly sure how much lower, as I haven’t put the meter on it yet.) OVH was a very good colo, especially for the price; I just don’t want to pay them rent any longer. The Synology will remain the in-home media server and backup location for the family and is no longer used by the lab.

Why mobile?
It doesn’t have to be. I have a VPN to my home, so I can connect to it remotely. Plus, the new server has remote management, so I can power it on and off if needed without relying on the “break/fix” wife. But the server is small enough that I can take this bad boy on the road. The downsides to that are TSA, theft, and potential travel damage. The upside is being able to plug it in and demo on the spot. In addition, when power or internet outages prevent me from connecting to my home from the hotel or a customer’s workplace, I can still work in the lab without that dependency.

What makes up this mobile lab?
Over the last year, several co-workers and I have been discussing building small labs. We had several ideas for what we wanted to use, but we were also very particular about what we wanted. We pinned down our requirements and started searching. One friend decided to run an Intel NUC, another is using a Gigabit server, and one more suggested the hardware I am using now.

Our requirements (or I should say, my requirements) were the following:

  • It had to have a small footprint.
  • It had to be energy efficient, low power, and put out little to no heat and noise.
  • It had to be able to run 64GB of RAM or more.
  • It had to be inexpensive.

With that list, you would think, oh, an Intel NUC or a Mac Mini. But neither of those can use more than 16GB of RAM; once you install vCenter, a nested ESX host, and a couple of VMs, you are out of RAM. In addition, the NUC is the only one that is inexpensive, and even then you would have to purchase a few of them to be able to do anything worthwhile within a lab environment.

This brings us to the setup of the mobile lab. All of the storage I salvaged from equipment I had lying around from earlier versions of my lab.

Measurements: 7.5″ x 7.5″ x 3″

HOME LAB v4.0 (Mobile Lab)
Motherboard:
     SuperMicro Super Server Motherboard X10SDV-TLN4F
     (1) Intel Xeon D-1540 8-Core 2GHz CPU
     UPDATE: 128GB RAM (Max)
     (2) 10Gb NICs
     (2) 1Gb NICs
     (1) IPMI Port
Case:
     Travla C299 Mini-ITX Case
Internal Storage:
     (1) 250GB Samsung 840 EVO SSD
     (1) 1TB Western Digital Blue 5400 RPM SATA III HDD
External Storage:
     (1) 2TB Seagate Backup Plus Slim External USB 3.0 Drive
     (1) SanDisk Cruzer Fit CZ33 32GB USB 2.0 Low-Profile Flash Drive
Software:
     Hypervisor: vSphere ESXi 6.0 U1b build 3380124
     Backups: VMware Data Protection 6

The unit is tiny compared to many other options with the same feature set. I can pack it into my backpack along with the power supply, a couple of Ethernet cables, and a small five-port switch, and still have room for a bag of Reese’s Pieces. 🙂

The mobile lab is based on the SuperMicro Super Server Mini-ITX series of motherboards, which is the foundation of the entire lab. On top of this base is a complete nested vSphere 6 environment running almost a full VMware SDDC stack (I’m not running NSX).

As time allows, I will detail the buildout of this lab in the future.