Saturday, January 19, 2013

Powering on a virtual machine with RDMs after a datastore migration in vSphere 4 throws the error: “Thin/TBZ disks cannot be opened in multiwriter mode..”

Problem

Environment Information:

vCenter: 4.1.0, Build 345043
ESXi: 4.1, Build 348481

You need to move a virtual machine that is part of a Microsoft Cluster Services (MSCS) cluster to a new datastore. Since this virtual machine has RDM (Raw Device Mapping) hard disks, you shut down the virtual machine and use the Migrate… option to move the files:


Although the migration completes successfully without errors, you receive the following error when you attempt to power on the virtual machine:

Reason: Thin/TBZ disks cannot be opened in multiwriter mode..

Cannot open the disk '/vmfs/volumes/50f88947-7d389596-e168-0025b500001a/Some-Cluster-02/Some-Cluster-02_1.vmdk' or one of the snapshot disks it depends on.

VMware ESX cannot open the virtual disk "/vmfs/volumes/50f88947-7d389596-e168-0025b500001a/Some-Cluster-02/Some-Cluster-02_1.vmdk" for clustering. Verify that the virtual disk was created using the thick option.


What’s strange is that when you view the configuration of the virtual machine, you see that the RDM drives have now become Virtual Disks:


This differs from the virtual machine's original configuration prior to the migration, where these disks were listed as Mapped Raw LUNs.


Solution

Note that the demonstration in this post is for cluster node 2, whose configuration originally referenced cluster node 1 for the RDM files.

I’m not sure if this is a bug, but the migration appears to replace the RDM mappings with actual virtual disks that are not accessible. To correct this problem, begin by removing the bad disks from the virtual machine:
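One quick way to confirm what happened is to look at a disk's small descriptor .vmdk file: an RDM pointer carries a raw-device-mapping createType, while a disk that has been converted carries a regular VMFS type. The helper below is a minimal illustrative sketch (not a VMware tool; the function name and abbreviated descriptors are hypothetical) that classifies a descriptor by its createType line:

```python
# Illustrative sketch: classify a .vmdk descriptor by its createType line.
# classify_vmdk() is a hypothetical helper, not part of any VMware tooling.

def classify_vmdk(descriptor_text: str) -> str:
    """Return a rough disk type based on the descriptor's createType line."""
    for line in descriptor_text.splitlines():
        line = line.strip()
        if line.startswith("createType="):
            create_type = line.split("=", 1)[1].strip().strip('"')
            # Physical- and virtual-mode RDM pointers use the raw-device types
            if create_type in ("vmfsPassthroughRawDeviceMap", "vmfsRawDeviceMap"):
                return "rdm"
            if create_type == "vmfs":
                return "flat"   # a regular flat VMFS disk, no longer an RDM
            return create_type  # any other type (e.g. sparse/thin variants)
    return "unknown"

# Abbreviated example descriptors:
rdm_descriptor = 'version=1\ncreateType="vmfsPassthroughRawDeviceMap"\n'
flat_descriptor = 'version=1\ncreateType="vmfs"\n'
print(classify_vmdk(rdm_descriptor))   # rdm
print(classify_vmdk(flat_descriptor))  # flat
```

If the descriptor of a supposedly mapped disk comes back as a regular VMFS type rather than an RDM type, the mapping was lost during the migration.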


Note that you can delete the disks with the “Remove from virtual machine and delete files from disk” option, but if you want to be absolutely safe, just remove the disks without deleting the actual files, then perform a cleanup task once the VM is back up.

With the bad hard disks removed, proceed with re-adding the RDM drives to the virtual machine:


Browse to cluster node 1’s folder on the datastore and add each of the RDM disks back to node 2:


Remember to place the cluster shared disks on a different SCSI controller than the one used by the system drive:
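For reference, the relevant .vmx entries for this layout look roughly like the following. This is a sketch: the controller number, targets, and file names below are illustrative, not taken from this environment.

```
# System drive stays on the default controller (scsi0)
scsi0:0.present = "TRUE"
scsi0:0.fileName = "Some-Cluster-02.vmdk"

# Shared RDM pointer on its own controller (scsi1); the file name here
# is illustrative -- it points at the mapping file under node 1's folder
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "Some-Cluster-01_1.vmdk"
scsi1:0.deviceType = "scsi-hardDisk"
```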


Also remember to change the SCSI Bus Sharing setting from None to Physical:
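In the .vmx file this corresponds to the sharedBus setting on the controller that carries the shared disks (a sketch; scsi1 is assumed to be the shared controller here):

```
# Enable physical bus sharing on the controller carrying the shared RDMs
scsi1.sharedBus = "physical"
```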


Once the RDM disks have been properly added back, you should be able to power on the virtual machine.

