Guys,
I'm starting to re-read TR-3749, but since I'm running cDOT 8.2p# on my main two-node cluster (soon to be four nodes), I wanted your opinions on how to provision NFS datastores.
Right now, when we add new datastores, we have to go to all 16 ESX hosts in the cluster and manually mount them. That's doable, but probably not scalable over time, since we'd like to isolate groups and apps into their own NFS volumes if possible.
So, knowing that cDOT allows me to set up a volume and then use junction mounts to add more volumes to that namespace, does it make sense to do the following (a rough CLI sketch follows the list):
1. Create a base volume, let's call it '/datastore1'.
2. Create sub-volumes of various sizes and performance levels and junction-mount them at:
/datastore1/bob /datastore1/jim /datastore1/sam ....
3. When we spin up VMs and assign the datastores, you only need to drill down into the correct area (bob, jim, or sam) and put the data there.
4. When I want to add a new FlashPool volume, I create it and junction-mount it at /datastore1/fp1.
5. I don't have to add any mounts to the ESX hosts; they just see more growth in the /datastore1/ mount point and keep working.
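For concreteness, the layout above would be built roughly like this from the cluster shell (a hedged sketch; the SVM, aggregate, and size values are placeholders, not anything from this thread):

    # base volume that carries the datastore mount point
    volume create -vserver vs1 -volume datastore1 -aggregate aggr1_node1 -size 1t -junction-path /datastore1
    # per-group volumes junctioned underneath it
    volume create -vserver vs1 -volume bob -aggregate aggr1_node1 -size 500g -junction-path /datastore1/bob
    volume create -vserver vs1 -volume jim -aggregate aggr2_node2 -size 500g -junction-path /datastore1/jim
    # later: a Flash Pool backed volume, junctioned the same way
    volume create -vserver vs1 -volume fp1 -aggregate aggr_fp_node1 -size 500g -junction-path /datastore1/fp1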
So is this a dumb idea, or an unsupported one? I know it limits my throughput to a single IP address for traffic, unless I spread the load by having multiple /datastore#/ volumes spread across the nodes of the cluster, with various volumes junction-mounted to each of these master /datastore#/ volumes.
Right now we just create new volumes and mount them, but I'm looking for a more scalable, manageable method.
Thanks, John
Could you use VMware Host Profiles [1]? It does cost $$$, however. This is what our compute team uses to do what you're describing.
Another approach would be to use PowerCLI or esxcli; we've done this to make broad changes to LUNs across many ESX hosts.
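For example, adding a new NFS datastore to every host can be scripted with esxcli (a rough sketch; the LIF address, export path, and datastore name are placeholders):

    # run once per ESX host, e.g. over SSH in a loop across all 16 hosts
    esxcli storage nfs add --host=10.0.0.10 --share=/datastore1 --volume-name=datastore1
    esxcli storage nfs list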
Or maybe use the NetApp VMware plugin, give your VM guys aggregates and let them deal with the problem. :)
Can't comment on your proposed approach. Still on 7-mode. :)
Ray
[1] http://www.vmware.com/products/vsphere/features/host-profiles
Get, install, and configure VSC (Virtual Storage Console 5.0).
For each datastore, create a LIF, then use VSC to provision the volume; VSC will mount it on all hosts automatically. Before use, verify the datastore mounted from the IP you expect. If not, unmount it and use the web client to mount it from the correct IP as needed. All of the VSC-provisioned datastores could be mounted right from the top level of the SVM.
You may need to go back and create a new export policy that limits NFS exposure to only the ESXi hosts (with root access!). Also use VSC to tune the ESXi host settings (a reboot of the ESXi hosts is usually required for full effect).
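An export policy restricted to the ESXi hosts might look roughly like this (a sketch; the policy name and client subnet are placeholders):

    vserver export-policy create -vserver vs1 -policyname esxi_only
    # allow the ESXi subnet read/write with root (superuser) access over NFSv3
    vserver export-policy rule create -vserver vs1 -policyname esxi_only -clientmatch 10.0.0.0/24 -protocol nfs3 -rorule sys -rwrule sys -superuser sys
    volume modify -vserver vs1 -volume datastore1 -policy esxi_only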
This allows you to place VMs anywhere easily. Should an issue arise, you can use "vol move" on the NetApp to relocate volumes as needed for performance or capacity.
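For reference, a non-disruptive volume move is roughly (names are placeholders):

    volume move start -vserver vs1 -volume bob -destination-aggregate aggr1_node3
    volume move show -vserver vs1 -volume bob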
--tmac
*Tim McCarthy* *Principal Consultant*
Oh, and when you add those other two nodes, you can non-disruptively move the volumes and the LIFs to the new nodes to scale your performance.
You could name the datastores with the requisite info: NFS01_BOB, NFS02_JIM, NFS03_SAM, NFS04_FP1.
If you plan on using dedupe, do not use clustered datastores (the VMware feature), as it will move VMs around between datastores when it detects performance issues and will undo deduplication that has already been done.
--tmac
*Tim McCarthy* *Principal Consultant*
Hi John. First, forget TR-3749 from a cDOT perspective. You want either TR-4068 or, if you're using the web client, TR-4333.
More comments below...
Share and enjoy!
Peter
-----Original Message----- From: toasters-bounces@teaparty.net On Behalf Of John Stoffel Sent: Friday, July 17, 2015 12:07 PM To: toasters@teaparty.net Subject: cDOT and NFS Volumes for VMDK datastores
Guys,
I'm starting to re-read TR-3749, but since I'm running cDOT 8.2p# on my main two-node cluster (soon to be four nodes), I wanted your opinions on how to provision NFS datastores.
Right now, when we add new datastores, we have to go to all 16 ESX hosts in the cluster and manually mount them. That's doable, but probably not scalable over time, since we'd like to isolate groups and apps into their own NFS volumes if possible.
So, knowing that cDOT allows me to set up a volume and then use junction mounts to add more volumes to that namespace, does it make sense to do the following:
1. Create a base volume, let's call it '/datastore1'.
2. Create sub-volumes of various sizes and performance levels and junction-mount them at:
/datastore1/bob /datastore1/jim /datastore1/sam ....
3. When we spin up VMs and assign the datastores, you only need to drill down into the correct area (bob, jim, or sam) and put the data there.

PeterL> vSphere workflows will not let you drill down below the datastore level to create VMs. You could automate this outside of the normal workflows, but now you're reinventing a very complicated wheel.

4. I want to add a new FlashPool volume, so I create it and junction-mount it at /datastore1/fp1.

PeterL> Yeah, but if you have /datastore1 as the vSphere datastore, the normal workflows will never create VMs in the Flash Pool or any other sub-junctioned volumes. Pretty sure I spell this out in 4068/4333.

5. I don't have to add any mounts to the ESX hosts; they just see more growth in the /datastore1/ mount point and keep working.

PeterL> That would be nice, but it doesn't work that way.

So is this a dumb idea, or an unsupported one? I know it limits my throughput to a single IP address for traffic, unless I spread the load by having multiple /datastore#/ volumes spread across the nodes of the cluster, with various volumes junction-mounted to each of these master /datastore#/ volumes.

Right now we just create new volumes and mount them, but I'm looking for a more scalable, manageable method.

PeterL> FlexVols can get pretty big these days as long as the underlying aggr is big enough. If you resize one, vSphere will happily use the new space, without rescan, extents, etc.
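Resizing the FlexVol behind a datastore is a one-liner, and vSphere simply sees the extra free space (a sketch; names and sizes are placeholders):

    volume modify -vserver vs1 -volume datastore1 -size 4t
    # or let it grow on its own up to a ceiling
    volume autosize -vserver vs1 -volume datastore1 -mode grow -maximum-size 6t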
Thanks, John
That’ll work just fine. Just create a LIF on each of your two nodes (and add more as you add the two new nodes) for future volume-creation purposes. It’ll be more manageable to deal with LIF scaling to four nodes than with all the volumes you want to create for isolation/grouping. Don’t forget to set up LS mirrors so you don’t lose your junction path tree if you should ever lose an SVM root volume.
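Setting up LS mirrors of the SVM root volume looks roughly like this (a sketch; the SVM, volume, and aggregate names are placeholders):

    # one DP destination volume per node for the root volume's LS mirror set
    volume create -vserver vs1 -volume vs1_root_m1 -aggregate aggr1_node1 -type DP -size 1g
    volume create -vserver vs1 -volume vs1_root_m2 -aggregate aggr1_node2 -type DP -size 1g
    snapmirror create -source-path vs1:vs1_root -destination-path vs1:vs1_root_m1 -type LS -schedule hourly
    snapmirror create -source-path vs1:vs1_root -destination-path vs1:vs1_root_m2 -type LS -schedule hourly
    snapmirror initialize-ls-set -source-path vs1:vs1_root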
Francis Kim | Engineer 510-644-1599 x334 | fkim@berkcom.com
BerkCom | www.berkcom.com NetApp | Cisco | Supermicro | Brocade | VMware
With cDOT, it's best to create a LIF for each datastore, not just one per node. If you ever need or want to move a volume, you run a (high) risk of leaving the path to the storage non-optimal (indirect), forcing traffic to traverse the cluster backend network.
With the separate LIF, you can move the volume, then modify and re-home the LIF to match wherever the volume ends up.
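A per-datastore LIF, and the re-home after a volume move, might look roughly like this (a sketch; LIF name, ports, and addresses are placeholders):

    network interface create -vserver vs1 -lif ds_bob -role data -data-protocol nfs -home-node node1 -home-port e0c -address 10.0.0.21 -netmask 255.255.255.0
    # after "vol move" lands the bob volume on node3, re-home the LIF to follow it
    network interface modify -vserver vs1 -lif ds_bob -home-node node3 -home-port e0c
    network interface revert -vserver vs1 -lif ds_bob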
--tmac
*Tim McCarthy* *Principal Consultant*
John,
Your choice comes down to what type of proliferation (LIFs or mount points) you’d rather cope with. A LIF-per-datastore approach is no more scalable than dealing with mount management for multiple datastores. Make your tradeoff carefully, after balancing your needs for workload grouping/isolation, data mobility, ease of management, etc. Take another look at your motivation to use separate volumes: make sure it's driven by deduplication opportunities or by vastly different workload requirements, and not some arbitrary desire to isolate them. Data ONTAP 8.3.1 brings into this discussion the long-awaited SVM DR, which may be yet another factor to consider when thinking about workload grouping, but that’s for another thread...
Francis Kim | Engineer 510-644-1599 x334 | fkim@berkcom.com
BerkCom | www.berkcom.com NetApp | Cisco | Supermicro | Brocade | VMware
vSphere 6 (VVols / protocol endpoints) and ONTAP 8.3 will eliminate the need for the "one LIF per datastore" recommendation going forward.
In that (brave new) world, you can do 1 LIF per node and call it a day.