Unhandled Exception when logging into ESXi Host client

Unhandled Exception Error
Ran into this weird error after powering up the C.R.I.B. and logging into the ESXi Host client to boot up VMs. The error popped up in Safari, Chrome, and Firefox alike.

Unhandled Exception (1)
Cause: Error: [$rootScope:inprog] http://errors.angularjs.org/1.3.2/$rootScope/inprog?p0=%24digest

The error presents itself with two buttons: Reload and Details. The picture shows the details of the error I received. You can hit Reload, attempt to log back in, and rinse and repeat. Or you can select Details, where your only option is to close the error and repeat the process.

A quick check with Professor Google finds this issue mentioned in the VMware Communities. Luckily, the entry had an answer attached (thanks, “rshell”). It’s not a fix, but it is an answer.

I can’t explain why the error occurs, but I can explain what triggers it. The error appears when you land on the logon page, enter your credentials, and then hit Enter on the keyboard instead of clicking the Login button. If you click the Login button, you get no errors and vSphere loads normally.

VMware VirtualCenter Server Service hung on “starting”

Discovered something interesting earlier today. I went to work in vCenter and found that it was unresponsive. Since the machine had recently “autoinstalled” a patch and rebooted, I first thought that maybe the vCenter service hadn’t started. I opened services.msc and found the VMware VirtualCenter Server service stuck on “Starting”. I had no way to stop or restart it. I rebooted the machine and hoped that the ‘universal’ fix would work. No gas.
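
As an aside, when a Windows service is wedged on “Starting”, services.msc won’t let you stop it, but you can usually kill the underlying process by hand. A rough PowerShell sketch, assuming the service name is vpxd (verify with Get-Service before trusting my memory):

    # Look up the hung service and the PID behind it
    $svc = Get-WmiObject Win32_Service -Filter "Name='vpxd'"
    $svc.State
    $svc.ProcessId
    # Kill the hung process, then attempt a clean start
    Stop-Process -Id $svc.ProcessId -Force
    Start-Service -Name vpxd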


SVMotion does not rename files – Duncan Epping

Duncan Epping posted just a few days ago on his blog, Yellow-Bricks.com, about an issue with files not being renamed when you svmotion a VM. This is a royal pain.

Here’s a scenario of why this is aggravating. You have a VM named “TestVM”. When it was created, a folder named “TestVM” was created within a datastore, and inside that folder the files were labeled “TestVM.vmdk”, “TestVM.vmx”, etc. Say this VM was decommissioned and you wound up reusing it at a later time. Some admins would just rename the VM within vCenter and change the hostname of the virtual machine. Unfortunately, at the storage layer the VM would still be listed as “TestVM”. This can be confusing if you have to do some cleanup in your datastores: you would come across a folder labeled “TestVM” and not know which VM it belongs to without going through each VM or running a PowerShell script to identify it (a quick sketch of such a script follows). Like I said, a royal pain.
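
For the record, that script doesn’t have to be fancy. Here is a minimal PowerCLI sketch, assuming you are already connected to vCenter (the server name below is a made-up example); it lists each VM’s display name next to the path of its .vmx file, so a mystery “TestVM” folder can be matched to its real owner:

    # Connect to vCenter (hypothetical server name)
    Connect-VIServer -Server vcenter.example.local
    # Map each VM's display name to its .vmx path on disk
    Get-VM | Select-Object Name,
        @{Name='VmxPath'; Expression={ $_.ExtensionData.Config.Files.VmPathName }}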

In the past, you could svmotion the VM to another datastore, and the svmotion process would rename the files to match the new VM name. Unfortunately, that behavior got left out of vSphere 5.0. Duncan’s blog gives a fix so that you can get back to renaming files with svmotion.
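
If I recall Duncan’s post correctly, the fix boils down to an advanced vCenter Server setting (brought back in vSphere 5.0 Update 2) along these lines; check his post below for the authoritative steps:

    provisioning.relocate.enableRename = true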

Link: http://www.yellow-bricks.com/2013/01/25/storage-vmotion-does-not-rename-files/

Maximum Switchover Timeout

I recently ran into an issue where I had to svmotion some rather large VMs (1-2 TB) that stretched over multiple datastores. During the svmotion, the VMs would time out at various percentages, presenting this error:
[Screenshot: svmotion timeout error]

Consulting with Prof. G (Google) turned up VMware KB article 1010045, which states: “This timeout occurs when the maximum amount of time for switchover to the destination is exceeded. This may occur if there are a large number of provisioning, migration, or power operations occurring on the same datastore as the Storage vMotion. The virtual machine’s disk files are reopened during this time, so disk performance issues or large numbers of disks may lead to timeouts.” Yep, that was me. I was having to svmotion VMs from one datastore to another during a vSphere 5 upgrade.

The KB article discusses adding a timeout value, called “fsr.maxSwitchoverSeconds”, to the VM’s VMX file to prevent the timeout.
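
If you edit the VMX file directly (with the VM powered off), the entry is a single line; 200 seconds is the value I ended up using:

    fsr.maxSwitchoverSeconds = "200"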

To modify the fsr.maxSwitchoverSeconds option using the vSphere Client:

1.) Open vSphere Client and connect to the ESX/ESXi host or to vCenter Server.
2.) Locate the virtual machine in the inventory.
3.) Power off the virtual machine.
4.) Right-click the virtual machine and click Edit Settings.
5.) Click the Options tab.
6.) Select the Advanced: General section.
7.) Click the Configuration Parameters button.

Note: The Configuration Parameters button is disabled when the virtual machine is powered on.

8.) From the Configuration Parameters window, click Add Row.
9.) In the Name field, enter the parameter name:

fsr.maxSwitchoverSeconds

10.) In the Value field, enter the new timeout value in seconds (for example: 150).
(I chose a value of 200.)
11.) Click OK twice to save the configuration change.
12.) Power on the virtual machine.

From personal experience, this was a home run. It resolved my problem.
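
For anyone who would rather script the change than click through the vSphere Client, the same parameter can be set with PowerCLI. A minimal sketch, assuming an existing PowerCLI connection to vCenter; the VM name is a made-up example, and the VM should be powered off just as in the GUI steps:

    # Add fsr.maxSwitchoverSeconds to a powered-off VM's configuration
    $vm = Get-VM -Name "BigVM01"
    New-AdvancedSetting -Entity $vm -Name "fsr.maxSwitchoverSeconds" -Value 200 -Confirm:$false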

Unable to access file since it is locked

I came across this error earlier today and spent the entire day troubleshooting it.

So I wanted to share what my resolution was. I was getting this error on an MS cluster setup within my environment. I was working with a Systems Engineer; we were converting an existing MS cluster from using VMDKs for the quorum and MSDTC shared drives to RDMs. Once Node A was configured, I attached the drives and powered up Node B. Node B would make it to 95% and then present the error (shown above).

I checked and double-checked my settings: second SCSI controller set to the LSI Logic Parallel controller type with Physical SCSI bus sharing selected (the VMs were on separate hosts); SCSI IDs set to SCSI 1:1, 1:2, and 1:3 for the new RDM drives; and the RDMs set to virtual compatibility mode. (A scripted version of the same setup is sketched below.)
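
For reference, the same disk-and-controller arrangement can be laid down with PowerCLI. This is only a sketch of the configuration above; the VM name and the LUN’s device path are hypothetical:

    # Attach an RDM in virtual compatibility mode to a cluster node
    $vm   = Get-VM -Name "NodeB"
    $disk = New-HardDisk -VM $vm -DiskType RawVirtual `
              -DeviceName "/vmfs/devices/disks/naa.60000000000000000000000000000001"
    # Move the new disk onto its own LSI Logic Parallel controller
    # with physical bus sharing (the nodes sit on separate hosts)
    New-ScsiController -HardDisk $disk -Type VirtualLsiLogic -BusSharingMode Physical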

After numerous attempts at re-configuring (I was doubting that I had done it right), I consulted Google, which led me to an older white paper (one for ESX 3.5). This wasn’t too bad, as the steps are pretty much the same for our ESX 4.0 Update 2 environment. Since the white paper verified that I was doing the right thing, I dug deeper into Google for an answer. I came across VMware KB article 10051, which provides in-depth detail on the error. So now I was worried. I started troubleshooting according to the KB article, and it did allow me to find something rather odd: even though the VM was physically sitting in one datastore, the ESX host believed it to be in another datastore. A quick svmotion corrected this, but it was REALLY odd.
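
One handy bit from that KB, if memory serves: from the ESX console, vmkfstools can dump the lock holder of a suspect file, and the output includes the MAC address of the host holding the lock. The path below is a made-up example:

    vmkfstools -D /vmfs/volumes/datastore1/NodeB/NodeB.vmdk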

Almost on the verge of giving up, I consulted a co-worker. He started going down a list of tasks that was eerily familiar; they were the same steps from the white paper. Once those were triple-checked, he asked me the one question that I had already asked the SE three or four times: had the cluster been completely unconfigured? It turns out the cluster had been completely removed on Node A, but Node B had never been evicted. Once the cluster cleanup was done on Node B (a sketch of that cleanup follows), the two nodes booted up fine with the RDMs attached, no error. Eight hours later, the cluster was reconfigured and running fine.
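
For what it’s worth, on a Windows cluster of that era the leftover node state can be scrubbed from an elevated prompt with cluster.exe. A hedged sketch; the node name is a made-up example:

    # Evict the stale node, then force-clean its local cluster state
    cluster node NodeB /evict
    cluster node NodeB /forcecleanup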

The moral of the story: Rule #1 is always right. Do not trust what the user tells you. 🙂