Building the lab 4: Stand up vCloud Director

First, I would like to declare: “vCloud Director is NOT dead!” I can say emphatically that this product did not die, never died, and I don’t believe it is going to die. It is still actively being developed by VMware.

With this clarified, let’s move on to getting vCD stood up. Again, I followed along with the wonderful guide from Sysadmin Tutorial.

This guide has a very good walk-through for standing up vCloud Director 8.0 as a proof of concept (it also works well for 9.0). It is broken into multiple parts, one for each milestone of the installation/deployment. You can follow along with each part, as I did. Along the way, I will point out the things I did differently for my environment.

Part One is self-explanatory. The walkthrough shows you how to set up a SQL database. Yes, MS SQL is still supported with vCD 9.0. While you may eventually want to move to a PostgreSQL database, this guide sets you up on MS SQL. (I will cover how to set up PostgreSQL and migrate the database sometime in the future; you may need or want this down the road when you get ready to upgrade.)
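One small addition of my own here, not part of the guide: before running the vCD configure script later on, I like to confirm that the future cell can actually reach the database built in Part One. Below is a minimal sketch in Python, assuming a hypothetical SQL Server name, database, and login, plus pyodbc and the Microsoft ODBC driver installed on the machine doing the checking:

```python
# Sanity check: can this machine reach the vCD database built in Part One?
# The server, database, and credentials below are placeholders for whatever
# you actually created; this is my own habit, not a step from the guide.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sql01.lab.local,1433;"   # hypothetical SQL Server host and port
    "DATABASE=vcloud;"               # database created for vCD
    "UID=vcloud;"                    # SQL login created for vCD
    "PWD=VMware1!"
)

try:
    # timeout applies to the login attempt, so a blocked port fails fast
    with pyodbc.connect(conn_str, timeout=5) as conn:
        row = conn.cursor().execute("SELECT @@VERSION").fetchone()
        print("Connected OK:", row[0].splitlines()[0])
except pyodbc.Error as exc:
    print("Connection failed:", exc)
```

If this fails, sort out the database, login, or firewall side before moving on to the cell itself.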

Part Two – setting up a RabbitMQ server – I skipped. Why, you ask? Well, the answer is selfish. My environment is small and is designed for one thing: quick deployment and stand-up of an SDDC environment for play and discovery. Unlike many vCD environments found in the wild, I will not be interfacing or integrating with any outside services, nor will I be standing up multiple cells. So I have no need of a RabbitMQ server at this time. You and your environment may very well need one.

Part Three of this guide is very good. I like how they dig into certificate creation and the details of what to do with the certificates. This portion of the walkthrough also covers how to create the certificates with a Microsoft CA server. These are details that I would like to see VMware include in their documentation. This is one area that plagues many installations; certificates always seem to be problematic, and a good walkthrough really goes a long way.
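One thing I do on my own, once the cell is eventually up and running, is pull the certificates off the wire and compare their fingerprints against what keytool -list reports for the keystore built in this part. The sketch below is just an illustration, with placeholder addresses for the HTTP and console proxy interfaces (in vCD these two services typically sit on separate IPs):

```python
# Fetch the certificate each vCD interface presents and print a SHA-256
# fingerprint to compare with keytool -list output. Addresses are placeholders.
import hashlib
import ssl

ENDPOINTS = {
    "http": "vcd-http.lab.local",          # hypothetical HTTP service address
    "consoleproxy": "vcd-proxy.lab.local", # hypothetical console proxy address
}

for name, host in ENDPOINTS.items():
    pem = ssl.get_server_certificate((host, 443))  # cert presented on the wire
    der = ssl.PEM_cert_to_DER_cert(pem)            # convert PEM to DER bytes
    print(f"{name:13s} SHA-256: {hashlib.sha256(der).hexdigest()}")
```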

Once you complete these steps, you are ready to configure vCloud Director for consumption. As with all VMware products, you should have a good idea of what you want to do with it. Setting this up to play with is one thing, but if you are trying to use it for anything beyond “how do I install it?”, you need to have an idea of what you are trying to accomplish. If you haven’t taken the time to do this, you should.

For me, as I said previously, I want to stand up vCloud Director as a mechanism for quickly deploying full SDDC environments to manipulate and play with. I want to use these environments to learn, discover, and grow my skillset. I do not want to destroy and rebuild my lab environment every time I have a different scenario to test. My goal is to ‘mimic’ the Hands On Lab environment. Ambitious? Yes.

I’m going to stop here, as the next part of the SysAdmin Tutorial walkthrough was already covered when I stood up NSX in “Building the lab 3: NSX”. Before I continue on with the SysAdmin Tutorial and kick off Part 5, I want to set up more storage.

Building the lab 3: NSX

Now that vCenter is installed and configured, I am ready to move on to the installation of NSX. Installing NSX for vCloud Director (vCD) is a tad simpler than implementing a standard NSX deployment for vSphere. Luckily, the good folks over at SysadminTutorials have a most excellent walkthrough on NSX for vCloud Director. Networking is my Achilles’ heel, so I struggle with it; when I write about networking, I will try to detail the areas that are confusing to me.

For my environment, I have installed the following components:

  • vSphere 6.5 (vCenter & ESX 6.5)
  • NSX 6.3.5


TIP JAR

I followed the Sysadmin Tutorial to perform the NSX installation in my lab. The tutorial was spot on (even for version 6.3.5); however, there are some things to note about the installation in my environment.

Placement: Remember, in my environment the vCenter manages the compute cluster. The NSX Manager is installed on the management host alongside the vCenter Server. When I deploy the NSX Controllers, however, each controller goes into the compute cluster, not onto the management host (as the tutorial suggests).
NSX Controller IP Pool: I consider the NSX Controllers an extension of the management plane. I also knew I would only be installing two controllers, which goes against best practice and VMware’s recommended three. Therefore, the IP pool I created for my controllers contained just two IP addresses. During the install, I assigned a controller to each host within the compute cluster. (I show a quick post-deployment check of the controllers a little further down.)
VXLAN IP Pool: When configuring VXLAN (steps 32-36), I again created a pool of only two IP addresses, one for each of my ESX hosts within the compute cluster. Since these are VMkernel NICs on the ESX hosts, I kept them on the management network.
MTU Size: I cannot stress enough how important this is. If you can enable jumbo frames throughout the environment, you will save yourself a lot of heartache. NSX absolutely requires an MTU of at least 1600 for VXLAN, but if you are going to implement jumbo frames, go all the way and give it 9000.


In my experience, I’ve seen this be the issue that killed connectivity and created fragmentation where it didn’t need to be, among other things. On one of my previous engagements, the customer used encrypted Active Directory traffic. During a domain join, machines would throw errors. When we troubleshot it, we found that the encrypted traffic could not be fragmented: the packet size was 1538 bytes, but the MTU on their network was 1500. That authentication packet was tossed out every single time, preventing the machines from joining the domain. This is just one example of where MTU has shown its ugly face. My recommendation: check end-to-end that your MTU is set appropriately.
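If you want a quick way to do that end-to-end check, here is a rough sketch of the kind of probe I mean. It is my own illustration, not from the tutorial, and assumes a Linux machine with the standard iputils ping and a placeholder target address:

```python
# Probe whether the path to a target supports a given MTU by sending
# don't-fragment pings. The target address below is a placeholder.
import subprocess

TARGET = "192.168.110.51"   # placeholder: a VTEP or far-end address to probe

for mtu in (1500, 1600, 9000):
    payload = mtu - 28      # subtract 20 bytes IP + 8 bytes ICMP header
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "2", "-s", str(payload), TARGET],
        capture_output=True,
    )
    status = "OK" if result.returncode == 0 else "failed (fragmentation needed?)"
    print(f"MTU {mtu}: {status}")
```

On the ESXi hosts themselves, the equivalent check is typically a vmkping with the don't-fragment option against the far-end VTEP addresses.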

After the installation of NSX, this is what my environment looks like. The green indicates that the vCenter is managing the compute resources. As you can see, it is a simple installation so far.
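One extra check I like to run at this point, again my own addition rather than part of the tutorial, is to ask NSX Manager over its REST API whether the controllers I dropped into the compute cluster are actually up. A minimal sketch, assuming a placeholder NSX Manager address and lab credentials:

```python
# Query NSX Manager (NSX-v) for the deployed controllers and their status.
# Host and credentials are placeholders; certificate verification is disabled
# only because this is a lab with a self-signed NSX Manager certificate.
import requests
import urllib3

urllib3.disable_warnings()

NSX_MANAGER = "nsxmgr.lab.local"           # placeholder NSX Manager FQDN
USERNAME, PASSWORD = "admin", "VMware1!"   # placeholder credentials

resp = requests.get(
    f"https://{NSX_MANAGER}/api/2.0/vdn/controller",
    auth=(USERNAME, PASSWORD),
    verify=False,
    timeout=30,
)
resp.raise_for_status()
print(resp.text)   # XML: one <controller> element per node, including status
```

Each controller element in the returned XML should report a running status before you move on.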

Up next, I will build a CentOS machine and install the vCloud Director Cell.

Change…

The only constant is change. Change is the backbone of any IT organization. New widgets, software, and hardware seem to come out daily, and our job as IT professionals is to stay aware of these new products. However, while we try to stay ‘cutting-edge’ and ahead of all this change, we always seem to fall behind at some point. What we ought to do, though, is not fall so far behind that we lose sight of the pack; otherwise, we become obsolete and expendable.

Recently, I went to a vCloud Director 9.x Design Workshop. Yes, my friends: vCloud Director is not DEAD. While the software is aimed primarily at service providers, it is still a mighty tool that gives many IT groups the ability to rapidly deploy internal, isolated “pods”. The training got me thinking: why am I not using vCD in my lab?

That’s why, once again, I am updating my homelab. Over the last few years, I’ve torn down and rebuilt my lab numerous times, and each rebuild has taken weeks or months just to test something; more often than not, the rebuild itself wastes most of the time. This time around, I’m going to explore rebuilding my lab around vCloud Director 9.x.

Home Lab

Over the years, I have scaled down from a full 42U rack of Dell PowerEdge servers that consumed massive amounts of power, cooling, and my personal manpower to maintain. That setup hurt my wallet (as well as my time) quite a bit, which also caused numerous problems with finance (aka: the wife). A while ago, I replaced the Dell PowerEdge servers with a Supermicro Super Server, and it has been working out great for me. As a matter of fact, this past year I made a few hardware modifications to the lab. I wound up running out of space and had to upgrade the hard drives in my Synology box from (5) 2TB drives to (5) 3TB drives. To expand the lab’s capabilities, I also acquired additional hardware: a new Intel NUC was added as a payload target, and another Supermicro Super Server was obtained at the end of the year (Merry Christmas, right?).

Future blog posts will detail my rebuild journey; I fully intend to share what I learn.