I'm currently writing a document, which I will publish soon, on building a low-cost lab with VI3. While doing some research on VMFS partition alignment, I found that I was unable to create (or delete) partitions on /dev/sda, the disk that ESX is installed on. I could make the changes, but when I tried to write them to disk, fdisk came back with the following error:
SCSI disk error : host 1 channel 0 id 0 lun 0 return code = be0000
I/O error: dev 08:00, sector 40355342
Device busy for revalidation (usage=8)
I know I could just use the VI client to create this partition, but since I'm writing a fairly detailed document, I wanted to show readers how to create an aligned local VMFS3 partition on /dev/sda using fdisk and vmkfstools.
Then I also had the question in my mind... "If the VI client can do it, why can't I?"
Anyway, after playing around with ESX a little, I found a fix. Use with caution though!
You need to remove the lock that ESX has on the device! The esxcfg-advcfg command can do this.
Before running fdisk, execute the following command at the service console:
esxcfg-advcfg -s 0 /Disk/PreventVMFSOverwrite
With PreventVMFSOverwrite now switched off, you can use fdisk to write the partition changes without the error. You may still see a "device or resource busy" warning, but the changes will be written.
After you have saved the partition changes with fdisk, re-enable the lock by running the following command:
esxcfg-advcfg -s 1 /Disk/PreventVMFSOverwrite
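Putting it all together, a full pass at creating an aligned local VMFS3 partition might look like the sketch below. The partition number, datastore label, and vmhba path are illustrative assumptions; substitute the values for your own host.

```shell
# Release the lock ESX holds on the boot disk (service console)
esxcfg-advcfg -s 0 /Disk/PreventVMFSOverwrite

# Create the partition with fdisk, then align it in expert mode:
#   n -> new primary partition
#   t -> set partition type to fb (VMware VMFS)
#   x -> expert mode, then b -> move start of data to sector 128
#        (128 x 512-byte sectors = 64 KB alignment)
#   w -> write the partition table
fdisk /dev/sda

# Format the new partition as VMFS3 (partition 4 and the label
# "LocalStorage" are examples, not values from the original post)
vmkfstools -C vmfs3 -S LocalStorage vmhba0:0:0:4

# Re-enable the overwrite protection
esxcfg-advcfg -s 1 /Disk/PreventVMFSOverwrite
```

These commands only apply on an ESX 3.x service console, so treat them as a guide rather than a copy-paste script.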
I've seen cases where a newly extended VMFS datastore fluctuates between the old size and the new size. Virtual machines running on the newly extended datastore prevent the correct size of the datastore from displaying.
I found the solution to the problem in VMware KB article 1002558.
Products: VMware ESX and ESXi
The solution is:
- Shut down the virtual machines running on the extended datastore, or VMotion them to the ESX host that created the extent.
- On all other ESX hosts other than the one that created the datastore, run vmkfstools -V to re-read the volume information.
- Power on or VMotion the Virtual Machines back to the original ESX hosts.
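Step two of the list above is a single service-console command on each of the other hosts; vmkfstools -V takes no arguments and simply re-reads the VMFS volume metadata:

```shell
# Run on every ESX host EXCEPT the one that created the extent
vmkfstools -V
```

After this, the extended datastore should report the same (new) size on all hosts.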
This issue applies not only to VI3 hosts but also to:
VMware ESX 2.0.x
VMware ESX 2.1.x
VMware ESX 2.5.x
VMware ESX 3.0.x
VMware ESX 3.5.x
VMware ESXi 3.5.x Embedded
VMware ESXi 3.5.x Installable
I had a problem in one of my DRS/HA Clusters where one ESX 3.0.2 host was unable to see an existing VMFS volume although the volume was in use by other ESX hosts in the cluster, and we confirmed that the SAN LUN zoning was done correctly.
I did have the option in the VI client to add more storage and then select the LUN, but that would reformat the LUN and I would lose all my existing VMs on that volume.
The fix for this issue was to resignature the LUN...
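For reference, resignaturing on ESX 3.x is controlled by the LVM.EnableResignature advanced setting. A sketch of the procedure from the service console follows; the adapter name vmhba1 is an example, not taken from my cluster:

```shell
# Allow the host to write a new signature to VMFS volumes it
# considers snapshots or mismatched LUNs
esxcfg-advcfg -s 1 /LVM/EnableResignature

# Rescan the storage adapter so the volume is resignatured and mounted
esxcfg-rescan vmhba1

# Switch resignaturing back off once the volume is visible
esxcfg-advcfg -s 0 /LVM/EnableResignature
```

Use this with the same caution as the PreventVMFSOverwrite trick earlier: resignaturing changes the volume's identity, so virtual machines registered against the old datastore name may need to be re-registered.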