| Looking to run NFS and VMFS ESX data stores on a single ESX host.
|
| Anybody else doing it?
|
| Gotchas?
We do it (total of 15 or so ESX hosts, 4 filers), though most of our ESX datastores are VMFS on LUNs from the filers. Works just as expected.
I haven't looked closely at relative performance, but we have a number of low-use VMs where disk performance is almost irrelevant, and we tend to put those on NFS datastores.
I like using NFS datastores:
- snapshots let us mount old .vmdk files for "easy" file restores;
  doing it with a LUN seems more of a pain
- easy to adjust the size of a datastore on the filer
- easier to create a machine; we lean towards LUN per VM, which
  means creating a new VM is more annoying
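As a rough sketch of the snapshot-based restore mentioned in the first point (the datastore, snapshot, and VM names here are hypothetical):

    # the filer's snapshots are visible from the ESX service console
    # under the NFS datastore's .snapshot directory
    ls /vmfs/volumes/nfs_datastore1/.snapshot/
    # copy an old copy of the disk back into the live datastore, then
    # attach it to a helper VM to pull individual files out
    cp /vmfs/volumes/nfs_datastore1/.snapshot/nightly.0/myvm/myvm*.vmdk \
       /vmfs/volumes/nfs_datastore1/restore_tmp/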
One inconvenience of NFS datastores is that they need to be added separately on each ESX host. I don't know if adding a datastore can be done from the service console shell, but we use the VI client, which is tedious. With LUNs, we configure them once on the filer, and then our ESX rescan scripts result in the storage being visible everywhere.
Hope that helps - cheers!
John
| I like using NFS datastores:
| - snapshots let us mount old .vmdk files for "easy" file restores;
|   doing it with a LUN seems more of a pain
| - easy to adjust the size of a datastore on the filer
| - easier to create a machine; we lean towards LUN per VM, which
|   means creating a new VM is more annoying

One thing to remember is dedupe.
Using thin-provisioned LUNs with dedupe returns space to the NetApp volume but not to the datastore. You can use the freed space to create new LUNs for use as new datastores (or use extents), but the VMware datastore does not see the free space.
With NFS, the space freed by dedupe is visible at the datastore level as free space.
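To see the difference, you can compare the filer's view with the datastore's (a sketch; the volume name is hypothetical):

    # on the filer: per-volume dedupe savings, then volume free space
    df -s /vol/vm_vol
    df -h /vol/vm_vol

    # on the ESX service console: the datastore's own view of free
    # space; for a LUN-backed VMFS datastore the deduped blocks will
    # not show up here
    vdf -h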
I've not heard of any interoperability issues when running both LUN and NFS datastores on the same host. Just make sure that you have a robust IP storage network, i.e. plenty of NICs, etherchannels, VIFs, etc.
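On the filer side, that might look something like this (a sketch under Data ONTAP 7G; the VIF name, ports, and address are hypothetical):

    # create a multi-mode (etherchannel) VIF over two GbE ports,
    # load-balancing on IP address
    vif create multi storage_vif -b ip e0a e0b
    ifconfig storage_vif 192.168.10.5 netmask 255.255.255.0 up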
-Tim-
Hi guys,

With LUNs, you have to at least rescan on the other hosts to see them.
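For reference, from the service console that rescan is (a sketch; the adapter name varies per host):

    # rescan a storage adapter for new LUNs (repeat on each ESX host)
    esxcfg-rescan vmhba0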
As for the command line:

    esxcfg-nas -a -o <filer_name_or_ip> -s /vol/something label

Don't forget the label. That's the human-readable name under /vmfs/volumes, symlinked to the UUID, and the name as you would see it in VC.
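For example (a sketch; the filer name, export, and label are hypothetical):

    # add the NFS export as a datastore (run on each ESX host)
    esxcfg-nas -a -o filer1.example.com -s /vol/vm_nfs nfs_datastore1

    # list the configured NAS datastores to confirm
    esxcfg-nas -l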
For the filer name or IP, be consistent across all ESX servers sharing the same datastore. The UUID, which is what is used internally for just about everything, including VMotion validation checks, is derived from the filer name (hostname, FQDN or IP) and the share/export name (/vol/whatever). If you use the IP on one host, the hostname on another and the FQDN on a third, even though they all resolve to the same IP, each host will compute a different UUID, the hosts will treat them as different datastores, and a lot of stuff like VMotion and VMware HA will break.
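One way to sanity-check this (a sketch assuming SSH access to each service console; the host names are hypothetical):

    # the filer name/IP and share reported by each host should match
    # exactly, so the computed datastore UUIDs agree
    for h in esx01 esx02 esx03; do
        echo "== $h =="
        ssh root@$h esxcfg-nas -l
    done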
Share and enjoy!
Peter