Oh, and when you add those other two nodes, you can non-disruptively move the volumes and the LIFs to the new nodes to scale your performance.
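A LIF move is a single command from the clustershell. A quick sketch (the SVM, LIF, node, and port names here are made up):

    network interface migrate -vserver svm1 -lif nfs_lif01 -destination-node node03 -destination-port e0c

Follow up with "network interface modify" to change -home-node/-home-port if you want the new node to be that LIF's permanent home.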
You could name the datastores with the requisite info: NFS01_BOB NFS02_JIM NFS03_SAM NFS04_FP1
If you plan on using dedupe, do not use clustered datastores (the VMware Storage DRS feature), as it will move VMs around between datastores when it detects performance issues, and that will undo deduplication that has already been done.
--tmac
*Tim McCarthy* *Principal Consultant*
On Fri, Jul 17, 2015 at 3:21 PM, tmac tmacmd@gmail.com wrote:
Get, install, and configure VSC (Virtual Storage Console 5.0!)
For each datastore:
- Create a LIF.
- USE VSC! -> it will mount the datastore on all hosts automatically.
- Provision the volume through VSC; it will mount to all hosts.
- BEFORE USE: verify the datastore mounted from the IP you expect. If not, unmount it and use the WEB CLIENT to mount it from the correct IP as needed.
All of the datastores could be mounted right from the top level of the SVM.
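Roughly, the per-datastore LIF and the mount check look like this (svm1, node01, e0c, and the IP are placeholders for your environment):

    # cluster shell: one data LIF for this datastore
    network interface create -vserver svm1 -lif nfs_ds01 -role data -data-protocol nfs -home-node node01 -home-port e0c -address 10.10.10.11 -netmask 255.255.255.0

    # on each ESXi host: confirm which IP the datastore actually mounted from
    esxcli storage nfs list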
You may need to go back and create a new export-policy that limits the NFS exposure to only the ESXi hosts (for ROOT access!). Also use VSC to TUNE the ESXi host settings (a reboot of the ESXi hosts is usually required for full effect).
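Something along these lines (policy name, SVM, volume, and subnet are examples only):

    vserver export-policy create -vserver svm1 -policyname esxi_only
    # superuser sys = root access for the ESXi hosts
    vserver export-policy rule create -vserver svm1 -policyname esxi_only -clientmatch 10.10.10.0/24 -protocol nfs -rorule sys -rwrule sys -superuser sys
    volume modify -vserver svm1 -volume ds01 -policy esxi_only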
This allows you to place VMs anyplace easily. Should an issue arise, you can use "vol move" on the NetApp to relocate volumes as needed for performance or capacity.
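For example (volume and aggregate names here are hypothetical):

    volume move start -vserver svm1 -volume ds01 -destination-aggregate aggr1_node02
    volume move show

The cutover is non-disruptive; the NFS clients never notice.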
--tmac
*Tim McCarthy* *Principal Consultant*
On Fri, Jul 17, 2015 at 3:06 PM, John Stoffel john@stoffel.org wrote:
Guys,
I'm starting to re-read TR-3749, but since I'm running cDOT 8.2p# on my main two-node cluster (soon to be four nodes), I wanted your opinions on how to provision NFS datastores.
Right now, when we add new datastores, we have to go to all 16 ESX hosts in the cluster and manually mount them. That's doable, but maybe not scalable over time, as we'd like to isolate groups and apps into their own NFS volumes if possible.
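(For reference, the manual step we repeat per host is basically this, with example names and IP:

    esxcli storage nfs add --host=10.10.10.11 --share=/datastore1 --volume-name=datastore1

times 16 hosts, for every new datastore.)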
So, knowing that cDOT allows me to setup a volume and then use junction mounts to add more volumes to that name space, does it make sense to do:
Create base volume, let's call it '/datastore1'
Create sub-volumes of various sizes and performance levels and mount them to:
/datastore1/bob /datastore1/jim /datastore1/sam ....
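On the cDOT side I'm picturing something like this (SVM, aggregate names, and sizes invented for the example):

    volume create -vserver svm1 -volume datastore1 -aggregate aggr1_node01 -size 10GB -junction-path /datastore1
    volume create -vserver svm1 -volume bob -aggregate aggr1_node01 -size 500GB -junction-path /datastore1/bob
    volume create -vserver svm1 -volume jim -aggregate aggr1_node02 -size 1TB -junction-path /datastore1/jim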
When we spin up VMs and assign the datastores, we only need to drill down into the correct area (bob, jim, or sam) and put the data there.
If I want to add a new FlashPool volume, I create it and junction-mount it at /datastore1/fp1.
I don't have to add any mounts to the ESX hosts, they just see more growth in the /datastore1/ mount point and keep working.
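i.e., after creating the volume on the Flash Pool aggregate, just (names hypothetical):

    volume mount -vserver svm1 -volume fp1 -junction-path /datastore1/fp1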
So is this a dumb idea? Or not a supported idea? I know it limits my throughput to just a single IP address for traffic, unless I spread out the load by having multiple /datastore#/ volumes spread across the nodes of the cluster, with various volumes junction-mounted to each of these master /datastore#/ volumes.
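Spreading it out would, I assume, look something like a root volume per node, each mounted on the ESX side from a LIF homed on that node (again, names and IPs invented):

    volume create -vserver svm1 -volume datastore2 -aggregate aggr1_node02 -size 10GB -junction-path /datastore2
    # on the ESXi hosts, mount from the LIF that lives on node02
    esxcli storage nfs add --host=10.10.10.12 --share=/datastore2 --volume-name=datastore2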
Right now we just create new volumes and mount them, but I'm looking for a more scalable, manageable method.
Thanks, John
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters