PSA: vSphere 5.5 End Of General Support

End of the Road for vSphere 5.5
Please take note of this PSA.
I got a note in my email this week asking to help spread the word and remind people that the end will be here quicker than you realize. Don't forget to plan ahead and upgrade to the latest version. In my experience, I have yet to come across an environment where an upgrade was as simple as it sounds, so please do plan ahead.

The End of General Support for vSphere 5.5 is September 19, 2018. To maintain your full level of support and subscription, VMware recommends upgrading to vSphere 6.5 or newer. VMware has extended the general support for vSphere 6.5 to a full five years from its date of release, which means general support for vSphere 6.5 will end November 15, 2021. For more information on the benefits of upgrading and how to upgrade, visit the VMware vSphere Upgrade Center.

If you would like assistance in moving to a newer version of vSphere, VMware’s vSphere Upgrade Service is available. This service delivers a comprehensive guide to upgrading your virtual infrastructure. It includes recommendations for planning and testing the upgrade, the actual upgrade itself, validation guidance and rollback procedures. For more information contact your Technical Account Manager or visit VMware Professional Services.

In the event you are unable to upgrade before the End of General Support (EOGS) and are active on Support and Subscription, you have the option to purchase extended support in one year increments for up to two years beyond the EOGS date. The price of Extended Support is $300,000 per product per year. Visit VMware Extended Support for more information.

Technical Guidance for vSphere 5.5 is available until September 19, 2020 primarily through the self-help portal. During the Technical Guidance phase, VMware does not offer new hardware support, server/client/guest OS updates, new security patches or bug fixes unless otherwise noted. For more information, visit VMware Lifecycle Support Phases.

VMware KB Article #51491

I discovered a site from Virten.net that maintains a growing, automatically updated list of additional VMware products and their EOL dates. You may want to bookmark it and check back on it from time to time.
VMware’s Official Page is here.

Building the lab 3: NSX

Now that vCenter is installed and configured, I am ready to move on to the installation of NSX. NSX for vCloud Director (vCD) is a tad simpler to install than a standard NSX deployment for vSphere. Luckily, the good folks over at SysadminTutorials have a most excellent walkthrough on NSX for vCloud Director. Networking is my Achilles heel, so I struggle with it. When I write about networking, I will try to detail the areas that confuse me.

For my environment, I have installed the following components:

  • vSphere 6.5 (vCenter & ESX 6.5)
  • NSX 6.3.5


TIP JAR

I followed the Sysadmin Tutorial to perform the NSX installation in my lab. The tutorial was spot on (even for version 6.3.5); however, there are a few things to note about the installation in my environment.

Placement: Remember, in my environment the vCenter manages the compute cluster. The NSX Manager will be installed on the management host next to the vCenter server. When I deploy the NSX controllers, each controller will be installed in the compute cluster — not the management host (as the tutorial suggests).
NSX Controller IP Pool: I consider the NSX Controllers an extension of the management plane. I also realized that I would only be installing two controllers, which goes against best practice and VMware's recommended three. Therefore, the IP Pool I created for my controllers was a pool of two IP addresses. During the install, I assigned a controller to each host within the compute cluster.
VXLAN IP Pool: When configuring VXLAN (steps 32-36), I again created a pool of just two IP addresses, one for each of my ESX hosts within the compute cluster. Since these are VMkernel NICs on the ESX hosts, I kept them on the management network.
MTU Size: I cannot stress enough how important this is. If you can run jumbo frames throughout the environment, you will save yourself a lot of heartache. NSX absolutely requires an MTU of at least 1600, but if you are going to implement jumbo frames, go all the way and give it 9000.


In my experience, I've seen this be the issue that killed connectivity and created fragmentation where it didn't need to be, among other things. On one of my previous engagements, the customer's Active Directory traffic was encrypted. During domain joins, machines would throw errors. When we troubleshot it, we found that the encrypted traffic could not be fragmented: the packet size was 1538 bytes, while the MTU on their network was 1500. That authentication packet was tossed out every single time, preventing the machines from joining the domain. This is just one example of where this has shown its ugly face. My recommendation: check end-to-end that your MTU is set appropriately.
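If you want to sanity-check the path MTU yourself, the most direct test is vmkping from an ESX host with the don't-fragment flag set (something along the lines of vmkping ++netstack=vxlan -d -s 1572 against a VTEP address). The sketch below is the same idea run from a Linux box with plain ping instead; the VTEP addresses are hypothetical placeholders and the header math assumes standard IPv4/ICMP.

```python
# Minimal end-to-end MTU check, assuming a Linux box with iputils ping.
# It sends pings with the "don't fragment" flag sized for a 1600-byte IP
# packet (1600 - 20 IP header - 8 ICMP header = 1572 bytes of payload).
import subprocess

def path_supports_mtu(target: str, ip_packet_size: int = 1600) -> bool:
    payload = ip_packet_size - 28  # strip IPv4 (20) and ICMP (8) headers
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "3", "-s", str(payload), target],
        capture_output=True, text=True,
    )
    return result.returncode == 0

# Hypothetical VTEP addresses in my lab's management network
for vtep in ["192.168.10.61", "192.168.10.62"]:
    status = "OK" if path_supports_mtu(vtep) else "FAILED - check switch/uplink MTU"
    print(f"{vtep}: {status}")
```

If the pings fail with fragmentation errors, something between the two endpoints is still sitting at a smaller MTU.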





After the installation of NSX, this is what my environment looks like. The green indicates that the vCenter is managing the compute resources. As you can see, it is a simple installation so far.

Up next, I will build a CentOS machine and install the vCloud Director Cell.

Building the lab 2: vCenter Server Appliance

I'm sure that this comes as old news to some. As a Consultant in Professional Services, though, I can tell you that the Windows vCenter platform is still alive and kicking in many production environments. (More than I would expect.)

The vCenter Server Appliance has been out since vSphere 5.5 (2013-ish). VMware has been trying to drop the Windows version of vCenter for some time. For the longest time, the component holding up that move was VMware Update Manager (VUM). Now that VUM is fully implemented within the web client, there should be no reason to expect another version of the Windows vCenter platform. (That is purely my opinion and NOT one from VMware.) Personally, I was surprised to see the vSphere 6.5 release ship with a Windows platform at all. So if, like me, you believe that is where things are headed, then the writing is on the wall: it is time to switch to the vCenter Server Appliance (vCSA).

In preparation for my rebuild, one of my design decisions was to utilize the VCSA. There were other factors that I had to consider as well.

  • 1) Do I follow best practice and have a management vCenter?
  • 2) Embedded or external PSC?
  • 3) What deployment size do I choose for the appliance?
  • 4) Install on Local or Shared Storage?
  • 5) 1 or more NICs?

For me, my answers were:

  • 1) No. I would only have a payload vCenter. Since I only had the one host, there was no need to install and configure a management vCenter. If I wind up purchasing a second NUC, then I may consider installing one and setting up a cluster. For now, installing more than one vCenter would consume resources I didn't have.
  • 2) & 3) Because resources were at a premium, I chose embedded and tiny for my deployment type and size. Additionally, I will not be creating anything that needs a multi-site or complex configuration. This vCenter is strictly for managing payload resources for vCloud Director, so nothing fancy to be done here.
  • 4) & 5) Initially, I installed and ran with the one built-in NIC that the NUC provides, and I had the vCSA VM on shared storage. At first, this was not a problem. Over time though, once I started building out vCD and uploading ISOs and OVAs, it became one. My recommendation would be to always go with shared storage; however, in this case I had to move the VMs to local storage. I also went ahead and purchased a second NIC, the USB StarTech NIC that William Lam has recommended on a number of occasions. If I do obtain another NUC, I will be moving back to shared storage. And yes, I will be installing another StarTech USB NIC.

I would also add that the StarTech NIC was super simple to install. However, if you do run into trouble, read the comments on the page I linked; I found the tip from twuhabro to be something I needed to do.

The installation and configuration of the vCSA has been covered by so many people, too many in fact. If you are looking for how I installed the vCSA, then use this website; it is the same method I chose to follow during my installation.

VMware has also provided access to a nice Product Walkthrough Repository. This site provides a nice step-by-step walkthrough of many of our products for lab installations. I say "lab installations" because these walkthroughs are simple installs, not something I would spin up for a production environment.

My friend Steve Flanders put together a small list of gotchas to consider when installing vCSA 6.5 in your environment. You can check out his list here.

I would be remiss if I did not mention that you can find a plethora of content online to learn more about vSphere and vCenter. Click here for VMware's education courses.

TIP JAR

Now, let’s talk about vCenter Tips.

Passwords: Today, many people are trying their best to be security conscious and coming up with some off-the-wall password combinations. Bravo! I commend you! However, while you are trying your best to be security conscious, you also need to know the limitations of what you can and cannot do. During the installation of the vCSA, the maximum password length you can use Out Of The Box (OOTB) is 20 characters. This can be adjusted to 32 afterwards, but OOTB you are stuck with 20. My recommendation: use something easy for the install, then adjust the limit and change the password afterwards. Additionally, while you can rock special characters in that amazing password of yours, if they aren't ASCII characters, you'll have trouble. Specifically, the trouble revolves around the System Admin account (administrator@vsphere.local). You can use this website to double-check your special characters to see if they would be acceptable.
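Purely to illustrate those two limits (the 20-character OOTB cap at install time and ASCII-only characters for the System Admin account), here is a tiny sketch. The function name, the exact printable-ASCII range, and the sample password are my own assumptions for illustration, not anything shipped by VMware.

```python
# Illustrative only: checks a candidate SSO administrator password against the
# out-of-the-box install-time limits discussed above. The installer enforces
# its own rules; the printable-ASCII range here is an assumption.
def installer_safe(password: str, max_len: int = 20) -> list:
    problems = []
    if len(password) > max_len:
        problems.append(f"longer than {max_len} characters (OOTB limit)")
    if any(ord(ch) < 33 or ord(ch) > 126 for ch in password):
        problems.append("contains non-ASCII or non-printable characters")
    return problems

print(installer_safe("Sup3r-Löng-Pässword-That-Goes-On-Forever!") or "looks OK")
```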
Password Policy: Staying with passwords: once you start setting up your Identity Source, you will notice a password policy configuration screen. By default and OOTB, the password policy will expire the System Admin password after 90 days. This can be adjusted to match the password policies within your company. Personally, I don't want my System Admin account to be affected by this policy; I would rather have Active Directory manage password policies. Unfortunately, I cannot disable it, so I set it to the maximum value possible: 9999 days, which works out to roughly 27.3 years.
vCSA 6.5 Password Policy
Again, staying with password policies: in the vCSA Virtual Appliance Management Interface (VAMI), the root account has a password policy applied out of the box as well. This policy expires the root password after 365 days. Again, my personal opinion is to disable this. If you want to change it, log into the VAMI as root, go to Administration (in the left menu), then under the Password Expiration Settings section select NO for Root Password Expires. Submit and log out.
vCSA VAMI Password Policy
User Account Locked Out: God forbid you run into the issue where the root account gets locked out. If you are receiving the error message "User locked out due to failed logins", it could be that your password policy has gone into effect. I will admit that in the past, with the first versions of SSO, this was a bitch. But VMware has felt our pain and made it a little easier to get this issue resolved. This KB article can be used to help you get back into the vCSA if you have been locked out.
Sysprep: For those of you (like me) who still need to sysprep some old guest OSes like Windows 2003, you may need these two links.
Sysprep File Locations
SCP files to vCSA
Start/Stop Services: Since the vCSA is a Photon-based appliance, I forget all the time how to start and stop services from SSH or the CLI. If you are like me, then bookmark this website. Eventually we will remember how to do it without looking it up.
HA Datastore Heartbeats: In a production environment, you should not see this warning. However, in smaller labs, it may be more prevalent than you want. vCenter will flash a warning that "The number of heartbeat datastores for host is 1, which is less than required: 2". I hate this warning. It's as bad as the "Suppress SSH" warning. However, you can easily add a setting and it goes away; just jump over to this KB article to see how to do it, or see the sketch below.
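For reference, the setting the KB walks you through is an HA advanced option on the cluster; if I recall correctly the key is das.ignoreInsufficientHbDatastore, but verify that against the KB before relying on it. Here is a hedged pyVmomi sketch of applying it; the vCenter address, credentials, and cluster name are lab placeholders.

```python
# Hedged pyVmomi sketch: set the HA advanced option described in the KB so the
# single-heartbeat-datastore warning goes away. Lab-only placeholders below.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only, skips SSL verification
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Locate the cluster by name
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Compute-Cluster")

# Add the advanced option (key is my best recollection; confirm with the KB)
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        option=[vim.OptionValue(key="das.ignoreInsufficientHbDatastore",
                                value="true")]))
cluster.ReconfigureComputeResource_Task(spec, modify=True)

Disconnect(si)
```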

Compute vCSA
In summary, there really is nothing fancy about setting up vCenter to manage the payload resources for my vCloud Director lab. Once vCenter was stood up, I created a Datacenter, a cluster, and added hosts to the cluster. I'm ready to install NSX.
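If you would rather script those last steps than click through the web client, a rough pyVmomi sketch looks something like this; the hostnames, credentials, and the choice to skip certificate verification are lab-only assumptions on my part.

```python
# Rough pyVmomi sketch: create a datacenter, an empty cluster, and add one ESX
# host to it. Names, credentials, and disabled SSL verification are lab-only
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only, skips SSL verification
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Datacenter, then an empty cluster under its host folder
dc = content.rootFolder.CreateDatacenter(name="Payload-DC")
cluster = dc.hostFolder.CreateClusterEx(name="Compute-Cluster",
                                        spec=vim.cluster.ConfigSpecEx())

# Join an ESX host to the cluster; a real run should pass the host's SSL
# thumbprint via vim.host.ConnectSpec(sslThumbprint=...) rather than relying
# on defaults.
host_spec = vim.host.ConnectSpec(hostName="esx01.lab.local",
                                 userName="root", password="VMware1!")
cluster.AddHost_Task(spec=host_spec, asConnected=True)

Disconnect(si)
```

In practice you would also wait on the AddHost task to complete before disconnecting.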

My next post will explore the NSX installation for vCloud Director.

Bypass HSTS

While I am an avid supporter of encryption and security, I do have to admit that getting the dreaded HSTS error within the Chrome browser sucks. When this happens, one of the quickest and easiest "hacks" (tips!) to bypass it is to simply type "badidea" on your keyboard while the error page is showing. And voila! HSTS is bypassed.

I learned of this trick from the following blog post by Scott Helme. He details how to do this, but more importantly, why it's not really a good idea. If you are getting an HSTS error, then there is obviously something wrong with your SSL certificates and you should investigate.

Bypass HSTS

Building the lab 1: Starting from scratch

…..again.

Numerous blogs exist that detail how to install ESX, vCenter, and so on. I don't need to add one more to the ever-growing list of community experts, nor do I want to invalidate any of them. Besides, I use a bunch of these blogs for my own purposes, in addition to using them in my day job.

You will see, though, that my home lab build deviates from the typical reference architecture you would likely see in your own environment. I try to incorporate architectural best practices where I can; however, the first layer of this design has quite a few caveats.
Continue reading “Building the lab 1: Starting from scratch”