John,
Your choice comes down to what type of proliferation (LIFs or mount points) you’d rather cope with. A LIF-per-datastore approach is no more scalable than managing the mounts of multiple datastores. Make your tradeoff carefully, after balancing your needs for workload grouping/isolation, data mobility, ease of management, etc. Take another look at your motivation for using separate volumes. Make sure it’s driven by deduplication opportunities or by vastly different workload requirements, not some arbitrary desire to isolate them. Data ONTAP 8.3.1 brings the long-awaited SVM DR into this discussion, which may be yet another factor to consider when thinking about workload grouping, but that’s for another thread...
Francis Kim | Engineer 510-644-1599 x334 | fkim@berkcom.com
BerkCom | www.berkcom.com | NetApp | Cisco | Supermicro | Brocade | VMware
On Jul 17, 2015, at 12:28 PM, tmac <tmacmd@gmail.com> wrote:
With cDOT, it's best to create a LIF for each datastore, not just one per node. If you ever need or want to move a volume, you run a high risk of leaving the path to the storage non-optimal (indirect), forcing traffic to traverse the cluster backend network.
With a separate LIF per datastore, you can move the volume, then modify and re-home the LIF to match wherever the volume ends up.
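For example, roughly (the SVM, volume, LIF, node, and port names here are made up, so substitute your own):

    volume move start -vserver vs1 -volume ds_bob -destination-aggregate aggr1_node2
    network interface modify -vserver vs1 -lif lif_ds_bob -home-node node2 -home-port e0c
    network interface revert -vserver vs1 -lif lif_ds_bob

The modify changes the LIF's home to follow the volume; the revert actually migrates it there.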
--tmac
Tim McCarthy, Principal Consultant
On Fri, Jul 17, 2015 at 3:23 PM, Francis Kim <fkim@berkcom.com> wrote: That’ll work just fine. Just create a LIF on each of your two nodes (and add more as you add two more nodes) for future volume creation. It’ll be more manageable to scale LIFs to four nodes than to manage all the volumes you want to create for isolation/grouping. Don’t forget to set up LS mirrors so you don’t lose your junction path tree if you should ever lose an SVM root volume.
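(For reference, LS-mirror setup looks roughly like this; the SVM, volume, and aggregate names are placeholders, and one mirror per node is the usual pattern:

    volume create -vserver vs1 -volume svm_root_m1 -aggregate aggr1_node1 -type DP
    volume create -vserver vs1 -volume svm_root_m2 -aggregate aggr1_node2 -type DP
    snapmirror create -source-path vs1:svm_root -destination-path vs1:svm_root_m1 -type LS
    snapmirror create -source-path vs1:svm_root -destination-path vs1:svm_root_m2 -type LS
    snapmirror initialize-ls-set -source-path vs1:svm_root

Then keep the LS set updated after namespace changes, either on a schedule or manually.)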
Francis Kim | Engineer 510-644-1599 x334 | fkim@berkcom.com
BerkCom | www.berkcom.com | NetApp | Cisco | Supermicro | Brocade | VMware
On Jul 17, 2015, at 12:06 PM, John Stoffel <john@stoffel.org> wrote:
Guys,
I'm starting to re-read TR-3749 again, but since I'm running cDOT 8.2p# on my main two-node cluster (soon to be four nodes), I wanted your opinions on how to provision NFS datastores.
Right now, when we add new datastores, we have to go to all 16 ESX hosts in the cluster and manually mount them. That's doable, but maybe not scalable over time, since we'd like to isolate groups and apps into their own NFS volumes if possible.
So, knowing that cDOT allows me to set up a volume and then use junction mounts to add more volumes to that namespace, does it make sense to do the following (rough commands sketched after the list)?
1. Create base volume, let's call it '/datastore1'
2. Create sub-volumes of various sizes and performance levels and mount them to:
/datastore1/bob /datastore1/jim /datastore1/sam ....
3. When we spin up VMs and assign the datastores, we only need to drill down into the correct area (bob, jim, or sam) and put the data there.
4. When I want to add a new Flash Pool volume, I create it and junction-mount it at /datastore1/fp1.
5. I don't have to add any mounts to the ESX hosts; they just see more growth in the /datastore1/ mount point and keep working.
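In command terms, that flow might look roughly like this (the SVM, aggregate, and size values are made up):

    volume create -vserver vs1 -volume datastore1 -aggregate aggr1_node1 -size 1TB -junction-path /datastore1
    volume create -vserver vs1 -volume ds_bob -aggregate aggr1_node1 -size 500GB -junction-path /datastore1/bob
    volume create -vserver vs1 -volume ds_fp1 -aggregate fp_aggr1 -size 2TB -junction-path /datastore1/fp1

And an existing volume can be re-junctioned later with volume unmount / volume mount:

    volume unmount -vserver vs1 -volume ds_jim
    volume mount -vserver vs1 -volume ds_jim -junction-path /datastore1/jim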
So is this a dumb idea? Or not a supported one? I know it limits my throughput to a single IP address, unless I spread the load by having multiple /datastore#/ volumes across the nodes of the cluster, with various volumes junction-mounted to each of these master /datastore#/ volumes.
Right now we just create new volumes and mount them, but I'm looking for a more scalable, manageable method.
Thanks, John
_______________________________________________ Toasters mailing list Toasters@teaparty.net http://www.teaparty.net/mailman/listinfo/toasters