Guys,
I'm re-reading TR-3749, but since I'm running cDOT
8.2p# on my main two-node cluster (soon to be four nodes), I wanted
your opinions on how to provision NFS datastores.
Right now, when we add new datastores, we have to go to all 16 ESX
hosts in the cluster and manually mount them. That's doable, but
probably not scalable over time, since we'd like to isolate groups and
apps into their own NFS volumes if possible.
So, knowing that cDOT allows me to set up a volume and then use
junction mounts to add more volumes to that namespace, does it make
sense to do:
1. Create a base volume, let's call it '/datastore1'.
2. Create sub-volumes of various sizes and performance levels and
junction-mount them at:
/datastore1/bob
/datastore1/jim
/datastore1/sam
....
3. When we spin up VMs and assign the datastores, we only need to
drill down into the correct area (bob, jim, or sam) and put the data
there.
4. When I want to add a new Flash Pool volume, I just create it and
junction-mount it at /datastore1/fp1.
5. I don't have to add any mounts on the ESX hosts; they just see new
space appear under the /datastore1/ mount point and keep working.
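For reference, here's roughly what that workflow looks like at the clustershell. This is just a sketch; the SVM name (vs1), aggregate names (aggr1, aggr2, aggr_fp1), and sizes are placeholders for whatever you actually have:

```shell
# Base volume that the ESX hosts mount once, rooted at /datastore1
# (assumed SVM "vs1" and aggregate "aggr1")
volume create -vserver vs1 -volume datastore1 -aggregate aggr1 \
    -size 1g -security-style unix -junction-path /datastore1

# Sub-volumes of different sizes, junctioned below the base volume;
# no new mounts are needed on the ESX side
volume create -vserver vs1 -volume bob -aggregate aggr2 \
    -size 500g -security-style unix -junction-path /datastore1/bob

# An existing volume (e.g. on a Flash Pool aggregate) can be
# junctioned in later with "volume mount"
volume mount -vserver vs1 -volume fp1 -junction-path /datastore1/fp1
```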
So is this a dumb idea? Or an unsupported one? I know it limits my
throughput to a single IP address for traffic, unless I spread
out the load by having multiple /datastore#/ volumes spread across the
nodes of the cluster, with various volumes junction-mounted to each of
these master /datastore#/ volumes.
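Sketching the spread-out variant, again with placeholder names (node, port, and address would be whatever fits your network): each /datastore#/ root lives on a different node, with a data LIF homed on that node, so each namespace gets its own IP:

```shell
# Second namespace root on the other node's aggregate
# (assumed aggregate "node2_aggr1" on node "cluster1-02")
volume create -vserver vs1 -volume datastore2 -aggregate node2_aggr1 \
    -size 1g -security-style unix -junction-path /datastore2

# Data LIF homed on node 2, so mounts of /datastore2 use a different IP
# (address, netmask, and port are placeholders)
network interface create -vserver vs1 -lif nfs_lif2 -role data \
    -data-protocol nfs -home-node cluster1-02 -home-port e0c \
    -address 192.0.2.12 -netmask 255.255.255.0
```

The ESX hosts would then mount /datastore1 via one LIF and /datastore2 via the other, so traffic for each namespace lands on the node that (mostly) owns its volumes.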
Right now we just create new volumes and mount them, but I'm looking
for a more scalable, manageable method.
Thanks,
John
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters