NSX Manager SSL Certificate

I am in the process of yet another homelab rebuild. (Yep, it’s that time again.) During this process, I have wiped the entire lab and am restarting from scratch.

A new vCenter 6.7 U3 appliance has been deployed, and the focus has moved on to the deployment and setup of NSX Data Center for vSphere 6.4.6 (formerly known as NSX-V). The deployment of the appliance itself was textbook; this article focuses on something that seemed really odd to me – the application, or lack thereof, of the SSL certificate.

For this environment and scenario, I am utilizing a Linux-based Certificate Authority — not a Microsoft Certificate Authority. This particular CA does not accept a product-generated CSR when creating the certificate for the product, so I created a PKCS#12 SSL chain cert for NSX Manager. This is not the issue I am writing about.
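
For reference, a PKCS#12 bundle along these lines can be built with openssl; the file names below are placeholders for your own cert, key, and CA chain:

```bash
# Bundle the signed cert, its private key, and the CA chain into one
# PKCS#12 file. All file names below are placeholders.
openssl pkcs12 -export \
  -in nsxmanager.crt \
  -inkey nsxmanager.key \
  -certfile ca-chain.crt \
  -name nsxmanager \
  -out nsxmanager.p12
```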

However, I discovered that when I went to import the PKCS#12 cert, NSX Manager would fail to replace the built-in self-signed certificate – even though it showed that the certificate was successfully uploaded. (Yes, subsequent reboots still did not change the status.) This is the issue, and the reason for this article.

I figured there had to be a way to import this cert via the command line somehow. (Unfortunately, Google did not turn up a method.) I reached out to a few of my NSX colleagues, who suggested I look at implementing the cert via the NSX API.

Just for reference, here are the links I used:

NOTE: While this should not be needed, proceed with caution. 

I’m not one for digging into the API, so I was hesitant. But hey, this is my lab, and it’s here for my destruction… er, learning. I would recommend that you do not attempt this type of work ‘laissez-faire’.

On page 166 of the NSX 6.4 API doc, I found what I was looking for. The doc provided the command that I needed to run in order to import the cert via the API.

While you can utilize Postman, or another API tool, to run this command, I did it in the Terminal on my Mac (with a small assist from Postman). To take a shortcut, instead of going through the rigmarole of building the authorization token via the command line, I used Postman to retrieve it. I then ran the following command to force the import using the NSX API:
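
Based on the certificate-manager endpoint in the NSX 6.4 API guide, the call takes roughly this shape; the hostname, file name, token, and password below are placeholders for your own values:

```bash
# Import the PKCS#12 keystore via the NSX Manager appliance-management
# API. Hostname, basic-auth token, file name, and keystore password are
# all placeholders -- substitute your own values.
curl -k -X POST \
  -H "Authorization: Basic <token-from-postman>" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @nsxmanager.p12 \
  "https://nsxmanager.lab.local/api/1.0/appliance-management/certificatemanager/pkcs12keystore/nsx?password=<p12-password>"
```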

After the certificate was imported, I rebooted the appliance and checked the status of the certificate.
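
A quick way to confirm the new certificate is actually being served (the hostname is a placeholder):

```bash
# Pull the cert NSX Manager presents on 443 and print its details.
echo | openssl s_client -connect nsxmanager.lab.local:443 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```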

All was well. 

Building the lab 5: Storage I

This part of my homelab rebuild will touch on something interesting… storage options. Knowing that my lab is going to be a couple levels of nesting, I wanted to look at the different options that are out there.

For years, I have had and used a Synology DS1512+ storage array. This thing has been running like gangbusters. It serves multiple purposes, from backups and file shares to NFS storage for the virtual environment.

Over the years, I have upgraded the drives from 1TB to 2TB, to 3TB. Because of this, I have a few drives lying around not in use. I thought that maybe I could spin up a FreeNAS or OpenFiler box for iSCSI within the environment. By creating differing storage arrays, I could introduce Storage Policies within the environment for potential feature consumption down the road.

As I explored the various options out there, I discovered many simulators from various vendors: 3PAR, EMC, NetApp. In addition to these, you have the free options mentioned above: OpenFiler, FreeNAS, etc. But I also stumbled across this jewel… XPEnology.

I’m sure you are wondering — what is XPEnology?
XPEnology is a bootloader for Synology’s operating system, which is called DSM (Disk Station Manager) and is what they use on their NAS devices. DSM runs on a custom Linux version developed by Synology, optimized for a NAS server with all of the features you often need in a NAS device. XPEnology makes it possible to run Synology DSM on any x86 device, like any PC or self-built NAS.

You read that right, it is possible to run XPEnology on bare metal. XPEnology runs as a faux Synology NAS.

Now, before you continue, you should know this. XPEnology is not supported or owned by Synology, micronauts, or anyone in their right mind. The links provided are ONLY for experimentation purposes and should not be used in any production or development environment. It is very possible you could lose all data and put yourself in jeopardy. If you need reliable, dependable storage, then buy a Synology NAS.

PROCEED AT YOUR OWN RISK!

Alex Lopez at iThinkVirtual has a good write-up on how to create a Synology storage VM:
https://ithinkvirtual.com/2016/04/30/create-a-synology-vm-with-xpenology/

Alex’s write-up was based on Erik Bussink’s build, found here:
https://www.bussink.ch/?p=1672

The original XPEnology website also had a walkthrough on how to create a Synology storage VM:
http://xpenology.me/installing-dsm-5-1-vmware-esxi5-5u1/

The original XPEnology website has become a ghost town. I’m not sure if it is being maintained, or if the original creator(s) just don’t have the time to update it any longer. The last updates came out around DSM 5.2-5644.5 (so a while ago). However, the XPEnology forums provide all kinds of glorious information from the wealth of knowledge within the community.

Additionally, you can get more information from this newer XPEnology info site. They also have a pretty good walkthrough for a storage VM. The video tutorial even covers how to set up ESXi 5.1 (http://xpenology.org/installation/).

I chose to build on bare metal.
While having a storage VM is great, I think having XPEnology on bare metal is even better. As you read and research how to do this, you are going to discover that it involves grabbing files stashed all over the internet — files ranging from a bootloader to PAT files. Make sure that you read EVERYTHING. I reutilized some hardware and some of my old Synology drives and built an XPEnology server on bare metal.

I bookmarked this site (https://github.com/XPEnology/JunLoader/wiki/Installing-DSM-6.0-Bare-Metal) as it provides a pretty good walkthrough on how to create a bootable USB drive for the XPEnology OS. I also found this one (http://blog.pztop.com/2017/07/24/Make-Xpenology-boot-loader-1.02b-for-DSM-6.1-on-Ubuntu/). For those of you, like myself, who are on a Mac… you may need this nugget (https://xpenology.com/forum/topic/1753-create-a-bootable-usb-on-os-x/).
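
As a rough sketch of the Mac side of that process (the disk identifier and image name below are placeholders; triple-check the disk number before running dd, since writing to the wrong disk is destructive):

```bash
# Identify the USB drive's disk identifier (e.g. /dev/disk2) -- verify!
diskutil list
# Unmount the USB drive (placeholder disk number).
diskutil unmountDisk /dev/disk2
# Write the bootloader image to the raw device. Image name is a placeholder.
sudo dd if=synoboot.img of=/dev/rdisk2 bs=1m
# Eject the drive when dd finishes.
diskutil eject /dev/disk2
```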

Again, I would like to say: if you need reliable, dependable storage, go purchase a real storage array.

Building the lab 4: Stand up vCloud Director

First, I would like to declare: “vCloud Director is NOT dead!” I can say emphatically, this product did not die, never died, and I don’t believe that it is going to die! It is still actively being developed by VMware.

With this clarified, let’s move on to getting vCD stood up. Again, I followed along with the wonderful guide from SysAdminTutorials.

This guide has a very good walk-through for standing up vCloud Director 8.0 for a proof of concept (it also works well for 9.0). There are multiple steps that break out each milestone of the installation/deployment. You can follow along part by part, as I did. Along the way, I will point out the various things that I did or changed for my environment.

Part One is self-explanatory. The walkthrough shows you how to set up a SQL database. Yes, MS SQL is still supported with vCD 9.0. While you may want to migrate to a PostgreSQL database eventually, this guide sets you up for MS SQL. (I will cover how to set up PostgreSQL and migrate the database sometime in the future. You may need or want this down the road when you get ready to upgrade.)

Part Two – setting up a RabbitMQ server – I skipped. Why, you ask? Well, the answer is selfish. My environment is small and is designed for one thing – quick deployment and stand-up of an SDDC environment for play and discovery. Unlike many vCD environments found in the wild, I will not be interfacing or integrating with any outside services, nor will I be standing up multiple cells. So I have no need of a RabbitMQ server at this time. You and your environment may very well need one.

Part Three of this guide is very good. I like how they dig into the certificate creation and the details of what to do with the certs. This portion of the walkthrough also includes how to create the cert with a Microsoft CA server. These are details that I would like to see VMware include in their documentation. Certificates always seem to be problematic and plague many installations, so a good walkthrough really goes a long way.

Once you complete these steps, you are ready to configure vCloud Director for consumption. As with all VMware products, you should have a good idea of what you want to do. Setting this up to play with is one thing. But if you are trying to utilize it beyond “how do I install it?”, then you need to have an idea of what you are trying to accomplish. If you haven’t taken the time to do this, you should.

For me, as I said previously – I want to stand up vCloud Director to be a mechanism where I can quickly deploy full SDDC environments to manipulate and play with. I want to utilize these environments to learn, discover, and grow my skillset. I do not want to destroy and rebuild my lab environment every time I have a different scenario I want to test. My goal is to ‘mimic’ the Hands On Lab environment. Ambitious? Yes.

I’m going to stop here, as the next part of the SysAdminTutorials walkthrough was already covered when I stood up NSX in “Building the lab 3: NSX”. Before I continue with the SysAdminTutorials guide and kick off Part 5, I want to set up more storage.

Building the lab 3: NSX

Now that vCenter is installed and configured, I am ready to move on to the installation of NSX. NSX for vCloud Director (vCD) is a tad simpler than a standard NSX deployment for vSphere. Luckily, the good folks over at SysAdminTutorials have a most excellent walkthrough on NSX for vCloud Director. Networking is my Achilles heel, so I struggle with it. When I write about networking, I will try to detail the areas that are confusing to me.

For my environment, I have installed the following components:

  • vSphere 6.5 (vCenter & ESX 6.5)
  • NSX 6.3.5


TIP JAR

I followed the SysAdminTutorials guide to perform the NSX installation in my lab. The tutorial was spot-on (even for version 6.3.5); however, there are some things to note regarding the installation in my environment.

Placement: Remember, in my environment the vCenter manages the compute cluster. The NSX Manager will be installed on the management host next to the vCenter server. When I deploy the NSX controllers, each controller will be installed in the compute cluster — not the management host (as the tutorial suggests).
NSX Controller IP Pool: I consider the NSX Controllers an extension of the management plane. I also knew that I would only be installing two controllers, which goes against best practice and the recommended three from VMware. Therefore, the IP pool I created for my controllers was a pool of two IP addresses. During the install, I assigned a controller to each host within the compute cluster.
VXLAN IP Pool: When configuring VXLAN (steps 32-36), I again created a pool of only two IP addresses, one for each of my ESX hosts within the compute cluster. Since these are VMkernel NICs on the ESX hosts, I kept them on the management network.
MTU Size: I cannot stress enough how important this is. If you can run jumbo frames throughout the environment, you will save yourself heartache. The minimum MTU required for NSX is 1600, but if you are going to implement jumbo frames, go all the way and give it 9000.


In my experience, I’ve seen this be the issue that killed connectivity and created fragmentation where it didn’t need to be, among other things. On one of my previous engagements, the customer’s Active Directory traffic was encrypted. During a domain join, machines would throw errors. When we troubleshot it, we found that the encrypted traffic could not be fragmented. The packet size was 1538; the MTU on their network was 1500. That authentication packet was tossed out every single time, preventing the machines from joining the domain. This is just one example of where this has shown its ugly face. My recommendation: check end-to-end that your MTU is set appropriately.
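
A quick way to validate MTU end-to-end from an ESXi host is a vmkping with fragmentation disabled (the destination VTEP IP below is a placeholder; 1572 accounts for header overhead on a 1600 MTU path):

```bash
# From the ESXi shell: ping another VTEP over the VXLAN netstack with
# "don't fragment" set. The destination IP is a placeholder.
vmkping ++netstack=vxlan -d -s 1572 192.168.10.52
# For a jumbo-frame (9000 MTU) path, test with a larger payload:
vmkping ++netstack=vxlan -d -s 8972 192.168.10.52
```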

After the installation of NSX, this is what my environment looks like. The green highlight indicates that the vCenter is managing the compute resources. As you can see, it is a simple installation so far.

Up next, I will build a CentOS machine and install the vCloud Director Cell.

Building the lab 2: vCenter Server Appliance

I’m sure this comes as old news to some, but as a consultant in Professional Services, I can tell you that the Windows vCenter platform is still alive and kicking in many production environments. (More than I would expect.)

The vCenter Server Appliance has been out since vSphere 5.0 (2011-ish). VMware has been trying to drop the Windows platform version of vCenter for some time. For the longest time, the component holding up the move away from Windows was VMware Update Manager (VUM). Now that VUM is fully implemented within the Web Client, there should be no reason to expect another version of the Windows vCenter platform. (That is purely my opinion and NOT one from VMware.) Personally, I was surprised to see the vSphere 6.5 release ship with a Windows platform. If, like me, you believe this is the direction, then the writing is on the wall — it is time to switch to the vCenter Server Appliance (vCSA).

In preparation for my rebuild, one of my design decisions was to utilize the vCSA. There were other factors that I had to consider as well:

  • 1) Do I follow best practice and have a management vCenter?
  • 2) Embedded or external PSC?
  • 3) What deployment size do I choose for the appliance?
  • 4) Install on Local or Shared Storage?
  • 5) 1 or more NICs?
For me, my answers were:

  • 1) No. I would only have a payload vCenter. Since I only had the one host, there was no need to install and configure a management vCenter. If I wind up purchasing a second NUC, then I may consider setting up a cluster. For now, installing more than one vCenter would consume resources that I didn’t have.
  • 2) & 3) Because resources were at a premium, I chose embedded and tiny for my deployment type and size. Additionally, I will not be creating anything that needs a multi-site or complex configuration. This vCenter is strictly for managing payload resources for vCloud Director. So nothing fancy to be done here.
  • 4) & 5) Initially, I installed and ran with the one built-in NIC that the NUC provides, and I had the vCSA VM on shared storage. At first, this was not a problem. Over time, though, once I started building out vCD and uploading ISOs and OVAs, it became one. My recommendation would normally be to go with shared storage; in this case, however, I had to move the VMs to local storage. I also went ahead and purchased a second NIC: the StarTech USB NIC that William Lam has recommended on a number of occasions. If I do obtain another NUC, I will be moving back to shared storage. And yes, I will be installing another StarTech USB NIC.

I would also add that the StarTech NIC was super simple to install. However, if you run into trouble, read the comments on the page I linked. I found the tip from twuhabro to be something I needed to do.

The installation and configuration of the vCSA has been covered by so many people. Too many, in fact. If you are looking for how I installed the vCSA, use this website; it is the method I followed during my installation.

VMware has also provided access to a nice Product Walkthrough Repository. This site provides a nice step-by-step walkthrough of many of our products for lab installations. I say “lab installations” because these walkthroughs are simple installs, not something I would spin up for a production environment.

My friend Steve Flanders put together a small list of gotchas to consider when installing the vCSA 6.5 in your environment. You can check out his list here.

I would be remiss if I did not mention that you can find a plethora of content online to learn more about vSphere and vCenter. Click here for VMware’s education courses.

TIP JAR

Now, let’s talk about vCenter Tips.

Passwords: Today, many people are trying their best to be security conscious and coming up with some off-the-wall password combinations. Bravo! I commend you! However, while you are doing your best to be security conscious, you also need to know the limitations of what you can and cannot do. During the installation of the vCSA, the maximum password length you can use Out Of The Box (OOTB) is 20 characters. This can be adjusted to 32, but OOTB you are stuck with 20. My recommendation… use something easy for the install, then adjust the limit and change the password afterwards. Additionally, while you can rock special characters in that amazing password of yours, if they aren’t ASCII characters, you’ll have trouble. Specifically, the trouble revolves around the System Admin account (administrator@vsphere.local). You can use this website to double-check your special characters to see if they are acceptable.
Password Policy: Staying with passwords. Once you start setting up your Identity Source, you will notice a password policy configuration screen. By default, OOTB, the password policy will expire the System Admin password in 90 days. This can be adjusted to match the password policies within your company. Personally, I don’t want my System Admin account to be affected by this policy; I would rather have Active Directory manage password policies, so I would prefer to disable this. Unfortunately, I cannot. Therefore, I set it to the maximum configuration possible — 9999 days, which turns out to be roughly 27 years.
vCSA 6.5 Password Policy (screenshot)
Again, staying with password policies: in the vCSA Virtual Appliance Management Interface (VAMI), the root account has a password policy applied out of the box as well. This policy expires the root password after 365 days. Again, my personal opinion is to disable this. If you want to change it, log into the VAMI as root, go to Administration (located in the left menu), and under the Password Expiration Settings section select NO for Root Password Expires. Submit and log out.
vCSA VAMI Password Policy (screenshot)
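
If you prefer the command line, a sketch of the same change from a root BASH shell on the appliance, assuming the standard Linux chage utility:

```bash
# Disable root password expiry by removing the maximum-age limit.
chage -M -1 root
# Verify the aging settings for root.
chage -l root
```
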
User Account Locked Out: God forbid that you run into the issue where the root account gets locked out. If you are receiving the error message “User locked out due to failed logins”, it could be that your password policy has gone into effect. I will admit that in the past, with the first versions of SSO, this was a bitch. But VMware has felt our pain and made it a little easier to get this issue resolved. This KB article can be used to help you get into the vCSA if you have been locked out.
Sysprep: For those of you (like myself) who still need to sysprep some OLD guest OSes like Windows 2003, you may need these two links.
Sysprep File Locations
SCP files to vCSA
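
For the SCP piece, the general flow looks something like this (the hostname and destination path are placeholders; the linked articles have the exact locations for each guest OS):

```bash
# On the vCSA, from the appliancesh: enable BASH access and give root a
# BASH login shell so SCP connections are accepted.
shell.set --enabled true
shell
chsh -s /bin/bash root
# From your workstation: copy the sysprep files up. Paths are placeholders.
scp sysprep/* root@vcsa.lab.local:/etc/vmware-vpx/sysprep/svr2003/
```
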
Start/Stop Services: Since the vCSA is a Photon-based appliance, I forget all the time how to start and stop services from SSH or the CLI. If you are like me, bookmark this website. Eventually, we will remember how to do it.
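
The short version, from the appliance shell (vmware-vpxd below is just an example; service names vary by version):

```bash
# List all services and their current state.
service-control --status --all
# Stop and start a single service (vpxd, the vCenter Server service).
service-control --stop vmware-vpxd
service-control --start vmware-vpxd
```
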
HA Datastore Heartbeats: In a production environment, you should not see this warning. However, in smaller labs, it may be more prevalent than you want. vCenter will flash a warning that “The number of heartbeat datastores for host is 1, which is less than required: 2”. I hate this warning. It’s as bad as the “Suppress SSH” warning. However, you can easily add a setting and it goes away. Just jump over to this KB article to see how to do it.
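
For reference, the setting in question is an HA advanced option on the cluster. If memory serves, it is das.ignoreInsufficientHbDatastore set to true, but verify the exact key against the KB article.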

Compute vCSA (screenshot)
In summary, there really is nothing fancy about setting up vCenter to manage the payload resources for my vCloud Director lab. Once vCenter was stood up, I created a datacenter, created a cluster, and added hosts to the cluster. I’m ready to install NSX.

My next post will explore the NSX installation for vCloud Director.