The error presents itself with two buttons: Reload and Details. The picture shows the details of the error I received. You can hit Reload, attempt to log back in, and rinse and repeat. Or you can select Details, where you can only close the error and repeat the process.
A quick check with Professor Google finds this issue mentioned in the VMware Communities. Luckily, the thread had an answer attached. (Thanks, “rshell”.) It’s not a fix, but it is an answer.
I can’t explain why the error occurs, but I can explain what causes it to pop up. It occurs when you land on the logon page, enter your credentials, and then hit Enter on the keyboard instead of clicking the Login button. If you click the Login button, you get no errors and vSphere loads normally.
C.R.I.B. stands for Computer Room In a Box; it’s the name I have given my homelab. I’ve used my C.R.I.B. to educate myself, experiment with things, and demo products to my customers.
As you’ve read in previous posts, my homelab has evolved over the years. Currently, I run one physical Supermicro SuperServer attached to a Synology DS1512+ array over a gigabit switch. Many of my friends and co-workers have asked how I run everything on one physical box (which I am going to call pESX). I’ve tried to explain it and draw it out on a whiteboard, but unless you’re already familiar with nested virtualization, the explanation gets confusing until you’ve seen it drawn out.
I utilize nested virtualization to expand the capabilities of the C.R.I.B. without additional physical hardware. If you are unfamiliar with nested virtualization, it is the ability to run ESX as a virtual machine, which I will call vESX. (William Lam has written numerous articles on how to do it; just google “William Lam” and “nested virtualization” if you want more info.) The entire C.R.I.B. is accessible from my home network, which is a lifesaver, as I do not have to work in my office. I can access it via VPN or from the couch.
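If you want to try it yourself, the key ingredient is exposing hardware-assisted virtualization to the guest. As a minimal sketch (assuming ESXi 5.1 or later, where this is a per-VM setting), the vESX VM’s .vmx file needs something like the lines below; the same thing can be done in the vSphere Web Client by checking “Expose hardware assisted virtualization to the guest OS” under the VM’s CPU settings:

vhv.enable = "TRUE"
guestOS = "vmkernel6"

The guestOS value depends on the ESX version you are nesting ("vmkernel5" for ESXi 5.x).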
A pfSense virtual machine (GATEWAY) was implemented to act as firewall and router for the entire virtualized environment, including the nested layer. The pESX host sits on the base network, with a vmkernel port attached to the pfSense virtual machine to allow for manipulation and modification of the firewall rules and network configuration. All traffic in and out of the virtual environment passes through the pfSense VM, whose firewall provides isolation as well as communication between the various networks.
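One way to wire this up (a sketch, not necessarily how my vSwitches are named) is to put the nested environment on an internal-only standard vSwitch with no physical uplink and attach the pfSense VM’s LAN interface to it. From the pESX shell, that looks roughly like this, where vSwitch1 and NestedLAN are hypothetical names:

#esxcli network vswitch standard add --vswitch-name=vSwitch1
#esxcli network vswitch standard portgroup add --portgroup-name=NestedLAN --vswitch-name=vSwitch1

Because vSwitch1 has no uplink, nothing on NestedLAN can reach the base network except through the pfSense VM, which has a second NIC on the regular VM network.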
All of the infrastructure virtual machines sit on the first virtualization layer; this is considered the “Management Cluster”. However, this cluster is only made up of the one physical ESX host (pESX). Normally, we would want multiple hosts for HA redundancy. (But this is a lab, and I’m on a budget.) The vESX virtual machines sit in this layer as well, with a direct connection to the base network for access to the iSCSI storage array, and they make up the ESX compute resources of the “Payload Cluster”. Separating these two clusters, Management and Payload, is a VMware architectural design best practice. The infrastructure is made up of your basic VMs: a Domain Controller (DC), a SQL Server (SQL), the Management vCenter (vCSA6), a Log Insight server (LOG), and a VMware Data Protection VM (VDP) for backups. In addition to these VMs, the vRealize Automation (vRA) VMs and the Payload vCenter (PAYVC01) also sit in the Management Cluster. The vRA self-service portal deploys to a vCenter endpoint (PAYVC01) controlling the compute resources of the Payload Cluster.
The Payload Cluster is made up of three virtual ESX hosts (vESX) and provides the various resources (network, CPU, RAM, and storage) for consumption by vRA or other platform products. There is also an Ultimate Deployment Appliance (UDA) VM for deploying scripted ESX images, which lets me quickly rebuild the hosts if needed.
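For anyone curious what a scripted ESX install looks like, here is a minimal sketch of the kind of kickstart file the UDA serves up (the root password and DHCP settings are placeholders, not my actual config):

vmaccepteula
install --firstdisk --overwritevmfs
rootpw VMware1!
network --bootproto=dhcp --device=vmnic0
reboot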
This is just the base. I am in the process of deploying NSX into the environment to enable multi-machine blueprints within vRA. In addition, I intend to explore SRM integration with vRA.
Just this week, VMware released vRealize Automation (vRA) 7.0.1. It contains many bug fixes and some enhancements to the vRA platform. I was excited for it to come out and was anxious to perform an upgrade in my home lab.
I advise caution and planning in any upgrade of your environment, but I would stress the planning heavily. Know your dependencies before you attempt an upgrade, and always, ALWAYS, read the release notes before you start the upgrade process.
The following process is for a simple vRA instance: the Proof of Concept build, sometimes referred to as a “Lab” or “Sandbox” build. However, these steps can be modified for a fully distributed vRA instance.
Here is how I upgraded my lab.
1) Take snapshots of the vRA Cafe Appliance, IaaS VM, and SQL VM.
2) Shut down the vRA services
SSH into the vRA Cafe Appliance and stop the vco-server, vcac-server, apache2, and rabbitmq-server services.
Run the below commands to stop the above listed services:
#service vcac-server stop
#service apache2 stop
#service rabbitmq-server stop
#service vco-server stop
You can check that each service has stopped by using the status syntax: #service vco-server status
Log into the IaaS virtual machine and stop the vRA Windows services listed below.
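The exact Windows service names vary a bit between vRA builds, so treat this as a sketch and verify against services.msc; on a typical vRA 7 IaaS server, run something like the following from an elevated command prompt, stopping the agents first, then the DEMs, then the Manager Service:

net stop "VMware vCloud Automation Center Agent"
net stop "VMware DEM-Worker"
net stop "VMware DEM-Orchestrator"
net stop "VMware vCloud Automation Center Service"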
Recently, I upgraded my homelab yet again. There will be an upcoming post about the hardware I chose and how it is set up. This time, my lab is mobile; I can take it with me on the road or leave it at home.
One of the features that I wanted to utilize was VMware Data Protection (VDP) to back up my infrastructure and important VMs. However, there was a small hurdle I needed to overcome: storage, and how to make it mobile. VDP requires, at minimum, a 2TB datastore for backup storage. Normally, you would carve out some space on your storage array for the backups; I wanted to see if I could do something different, something radical. I wanted to use a USB 3.0 external drive.
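For reference, here is the rough shape of the procedure (a sketch: the device name mpx.vmhba32:C0:T0:L0 and the datastore name USB-Datastore are examples, and END_SECTOR has to be calculated from the geometry that partedUtil getptbl reports for your drive). The key is that the USB arbitrator service, which normally claims USB devices for passthrough to VMs, must be stopped first:

#/etc/init.d/usbarbitrator stop
#chkconfig usbarbitrator off
#ls /dev/disks/
#partedUtil mklabel /dev/disks/mpx.vmhba32:C0:T0:L0 gpt
#partedUtil setptbl /dev/disks/mpx.vmhba32:C0:T0:L0 gpt "1 2048 END_SECTOR AA31E02A400F11DB9590000C2911D1B8 0"
#vmkfstools -C vmfs5 -S USB-Datastore /dev/disks/mpx.vmhba32:C0:T0:L0:1

That long GUID is the standard partition type identifier for VMFS.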
There is a movement to change the best practice regarding the BIOS power settings on an ESXi host. In the past, you would set the power settings in the BIOS to max and be done with it. You did this because previous versions of vSphere did not work well with the various C-states of the processor (i.e., when the workload on the host would drop, the CPU would drop to a lower power state, and when it tried to wake and draw the CPU back to full power, the host would crap all over itself).
vSphere 5.5 introduced a new feature that allows ESXi to better utilize the C-states of some of the newer processors, letting the CPUs change power and speed states without affecting the performance or behavior of the host. (YEAH!!)
I know we are happy about this, but not so fast…
This link describes the behavior and a couple of performance benefits from the change. However, it should also be noted that this change depends on the type of workloads in your environment. If you have a heavy, I/O-intensive workload or a time-sensitive workload, you may still want to keep that host on max power / high performance.
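If you want to check or change the host-side half of this (the ESXi power policy, as opposed to the BIOS setting), it can be done from the shell. This is a sketch using the /Power/CpuPolicy advanced option; as I recall, the accepted values are “High Performance”, “Balanced”, “Low Power”, and “Custom”:

#esxcli system settings advanced list -o /Power/CpuPolicy
#esxcli system settings advanced set -o /Power/CpuPolicy -s "High Performance"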