Adding physical drives to VMware ESXi

I built a new lab environment at home using VMware ESXi 5.0, which is a very nice product, if we set aside the Windows-only GUI, a piece of bloatware that needs 1 GB of disk space to install. You can do pretty much anything from there, except one thing that seems so important I wonder why it’s not in the Windows GUI: mapping local disks to VMs.

I wrote this little post as a reminder for myself rather than as a full tutorial. You can get more info at http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi, on which this post is based.

In a nutshell (a consolidated shell sketch follows the list):

  1. Log in to the ESXi shell as root.
  2. Locate the device name of your physical disk in /vmfs/devices/disks/, e.g. “/vmfs/devices/disks/t10.ATA_____Hitachi_HDT725025VLA380_______________________VFL104R6CNYSZW”
  3. Go to where you want the mapping file to live. I suggest you create a directory in a datastore for this, like “/vmfs/volumes/datastore1/harddisks/”
  4. Create the RDM mapping file: vmkfstools -z /vmfs/devices/disks/t10.ATA_____Hitachi_HDT725025VLA380_______________________VFL104R6CNYSZW Hitashi250.vmdk
  5. In your VM’s settings, add a hard disk, choose “Use an existing virtual disk”, and browse to the harddisks directory on the datastore.
  6. On Linux guests, you will need “rescan-scsi-bus.sh” to have your new hard disk detected without a reboot.
  7. Profit
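
Here is the whole procedure as one ESXi shell sketch, using the example device name, datastore path and .vmdk name from the list above (substitute your own):

    # On the ESXi host, logged in as root
    ls -l /vmfs/devices/disks/                   # find the t10.* name of your physical disk
    mkdir -p /vmfs/volumes/datastore1/harddisks  # a home for the RDM mapping files
    cd /vmfs/volumes/datastore1/harddisks
    # -z creates a pass-through (physical compatibility) raw device mapping
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____Hitachi_HDT725025VLA380_______________________VFL104R6CNYSZW Hitashi250.vmdk

Then attach the .vmdk to the VM as an existing disk (step 5). Inside a Linux guest, the new disk can be picked up without a reboot; rescan-scsi-bus.sh ships with sg3_utils, and the sysfs echo below is the manual equivalent (the host number varies per system):

    rescan-scsi-bus.sh
    # or, without sg3_utils:
    echo "- - -" > /sys/class/scsi_host/host0/scan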

6 Replies to “Adding physical drives to VMware ESXi”

  1. You don’t really need to do all that to do raw device mappings.
    It’s actually built into the vSphere Client interface, but simply blocked for all local storage (although there’s plenty of hardware that works with it).

    In ESXi 5, go to:

    Configuration > Software > Advanced Settings > RdmFilter, and there you can disable the filter so LUNs can be RDM’d directly from the “add disk” dialogue (an esxcli sketch of this follows at the end of this reply).
    (Not supported on all hardware, but it does work on all LSI IT-flashable cards, SAS2008 and the like.)

    I’m guessing that for 4.x it is the same procedure.

    For ESX 3.5, you can use the procedure outlined in the following KB:

    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014513
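
    From the command line, something like this should reach the same setting on an ESXi 5.x host. The option path (/RdmFilter/HbaIsShared) and the value to set are assumptions based on the Advanced Settings dialog above, so verify them on your host before changing anything:

        # Show the current value of the RdmFilter option exposed in the GUI
        # (the option path here is an assumption; check the full
        #  "esxcli system settings advanced list" output if it is not found)
        esxcli system settings advanced list -o /RdmFilter/HbaIsShared

        # Flip it; use whichever integer value (0 or 1) matches unchecking
        # the RdmFilter box in the vSphere Client
        esxcli system settings advanced set -o /RdmFilter/HbaIsShared -i 0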

  2. Thanks for the tip, I’ll try this next time I add a new disk on the ESX. I found it quite strange that this wasn’t available from the vSphere interface.

  3. Yeah, I first stumbled on the same article you referenced, then went snooping around VMware’s KBs.

    I can understand why they hide it a little bit, since on unsupported hardware it can be disastrous. And their focus is on SAN/NAS storage anyway.

    But it is working just fine for me.

    470 MB/s writes and 530 MB/s reads on a ZFS system without a ZIL or L2ARC.

    Performance is almost at the level of my previous Gentoo-based software RAID6 setup. And I haven’t tweaked anything in ZFS yet (I have SSDs incoming for L2ARC/ZIL).

    I also tried Nexenta (I’m now running a clean, latest OpenIndiana), and under VMware with Nexenta performance was really bad, CPU load was always through the roof, etc. With clean Nexenta as the base OS, it wasn’t even half of what I’m getting with VMware running OpenIndiana + Napp-IT.

    The only problem I have now is that VMware is bitching about an unsupported NFS version :@

  4. Thank you guys for covering both ways to do it.
    The RdmFilter didn’t work for me, perhaps my h/w is not supported.
    The original one – mapping the device/disk as a .vmdk – worked well.
    Thanks again!
