I'm just starting to test SnapVault, and I have a few questions about its usage that I haven't found sufficient answers for in the documentation. I'm hoping someone might be able to explain them to me.
On the snapvault start command you can specify your source (primary) dataset in three different ways:

  1. filer:/vol/volname/qtree_name - for a particular qtree
  2. filer:/vol/volname/-          - for non-qtree data in a volume
  3. filer:/vol/volname            - for the contents of the entire volume, including all qtrees
My question is: is there any difference between options 2 and 3 if I don't have any qtrees in any of my volumes?
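For concreteness, I assume each form would be used in a snapvault start command run on the secondary, something like this (the secondary volume and destination qtree names below are just placeholders I've made up):

  1) secondary> snapvault start -S filer:/vol/volname/qtree_name /vol/sv_vol/qtree_name
  2) secondary> snapvault start -S filer:/vol/volname/-          /vol/sv_vol/volname_nonqtree
  3) secondary> snapvault start -S filer:/vol/volname            /vol/sv_vol/volname_all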
My second question is in regard to a note in the SnapVault documentation that states:
"7.2.3 Volume to Qtree SnapVault When issuing the snapvault start command, you are not required to specify a qtree name for the source; however, this practice is not recommended. This type of relationship increases the performance of the SnapVault transfer; however, it also increases the amount of time it takes to perform a backup. Since you must specify a qtree for the SnapVault destination, an entire volume then resides in a qtree on the destination. When it's time for the restore via the Data ONTAP CLI, the entire contents of the qtree, which contains all the data from the source volume, is restored to a qtree on the SnapVault primary system. Once the data is restored, you must then manually copy the data back to the appropriate location."
As I've alluded to in my question above, we don't use any qtrees in our environment. Is the above note saying that when I do a snapvault restore, it's going to restore back into a qtree and I'll have to manually copy the contents out of the qtree back into the root directory of the volume before things are back to the way they were before the restore? I'm hoping that's not what it's saying, though it sure sounds like it.
Thanks for any help on this, Romeo
On Fri, Jul 18, 2008 at 03:56:54PM -0400, Romeo Theriault wrote:
On the snapvault start command you can specify your source (primary) dataset in three different ways:
  1. filer:/vol/volname/qtree_name - for a particular qtree
  2. filer:/vol/volname/-          - for non-qtree data in a volume
  3. filer:/vol/volname            - for the contents of the entire volume, including all qtrees

My question is: is there any difference between options 2 and 3 if I don't have any qtrees in any of my volumes?
My understanding was that option 2 only backs up non-qtree data, and option 3 only backs up qtree data. We use option 2 mostly, as we have very few qtrees.
As I've alluded to in my question above, we don't use any qtrees in our environment. Is the above note saying that when I do a snapvault restore, it's going to restore back into a qtree and I'll have to manually copy the contents out of the qtree back into the root directory of the volume before things are back to the way they were before the restore? I'm hoping that's not what it's saying, though it sure sounds like it.
Yes, that's exactly what it's saying. In our tests we just did a mv (on a Solaris NFS mount) to put everything in the qtree back up a level, which is pretty much instantaneous.
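If it helps, the sequence is roughly the following; the volume and qtree names here are invented for the example, so double-check the exact syntax against the snapvault man page before relying on it:

  On the primary, pull the data back from the vault into a qtree:

    primary> snapvault restore -S secondary:/vol/sv_vol/mydata /vol/myvol/mydata

  Then, from a host that NFS-mounts the volume, move everything back up a level:

    client# cd /mnt/myvol
    client# mv mydata/* .       (plus any dot files)
    client# rmdir mydata        (removes the now-empty qtree directory)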
If you don't have a license or a filer on which you can experiment, I'd recommend installing the simulator for both the secondary and primary SnapVault instances and trying it all out. For us, it made more sense once we could play around with it than it did from just reading the docs.

--
Jeff Bryer  bryer@sfu.ca
Systems Administrator  (778) 782-4935
IT Infrastructure, Simon Fraser University
In our setup, for volumes with no qtrees, using /vol/volname/- or /vol/volname as the source made no difference; it still backed up all of the volume's data to the SnapVault qtree. You still have the issue that a snapvault restore will result in a directory with the name of the vault qtree in your primary volume, and you would have to mv that data out to its original location.
If you want to keep the "one-click" restore with SnapVault but still use individual FlexVols, you can create a somewhat redundant qtree and just not bother with the quotas and other qtree overhead.
For example, if you have an Oracle RAC cluster with 5 instances and you use individual data mounts for each SID, mounted as /oradata/SID, you could set up each primary volume with a single qtree, e.g. /vol/oradata_SID/oradata. Yes, it is redundant, but overall it's not a huge deal, and if a one-step restore from the vault is that important, it is required.
Your primary-to-NearStore relationships would then look like this, assuming you group multiple DB instances into a common vault volume. Obviously, if they have different retention and scheduling requirements, they would go in separate vaults:
  FILER:/vol/oradata_SID/oradata  -> NEARSTORE:/vol/oradata_vault/oradata_SID
  FILER:/vol/oradata_SID2/oradata -> NEARSTORE:/vol/oradata_vault/oradata_SID2
  ...
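A rough sketch of the commands involved, using the same made-up names as above (verify the syntax against your ONTAP version before trusting it):

  On each primary volume, create the one "redundant" qtree:

    FILER> qtree create /vol/oradata_SID/oradata

  On the NearStore, set up one relationship per instance:

    NEARSTORE> snapvault start -S FILER:/vol/oradata_SID/oradata  /vol/oradata_vault/oradata_SID
    NEARSTORE> snapvault start -S FILER:/vol/oradata_SID2/oradata /vol/oradata_vault/oradata_SID2

Then a snapvault restore of /vol/oradata_vault/oradata_SID back into /vol/oradata_SID/oradata lands in the same qtree path the clients already mount, so there's no mv step afterwards.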
Yeah, I don't like qtrees much either, but using one without the hassle of quota management is something I can live with; it doesn't limit my ability to shrink/grow the volume or add extra management overhead.
--daniel
--
Daniel Leeds
Manager, Storage Operations
Edmunds, Inc.
1620 26th Street, Suite 400 South
Santa Monica, CA 90404
310-309-4999 desk
310-430-0536 cell
Thank you, Daniel and Jeff, for your explanations of SnapVault. Both were very helpful to me in understanding it better and gave me some ideas on how we might best utilize it. I'm relieved a 'mv' will move the data files up out of the qtree quickly. I was afraid I would have to copy the data files out of the qtree, which would have been a lengthy process on large volumes and would have put a big damper on SnapVault usage.
I like the idea of creating qtrees in production volumes and just not using the quotas, etc., so SnapVault restores put everything back where the client expects it. But we have many volumes already created without qtrees, and it isn't feasible to start creating qtrees in them now and moving things around. So for us this will have to be something we move towards.
Also, the idea of putting all of Oracle's different partitions (datafiles, redo logs, etc.) into one volume with different qtrees is a tempting one, but we've been in the habit of splitting our redo log partitions across the filers in our cluster for failover capability. So I don't think that will work for us.
But, again, thank you for your responses; they were very helpful.
Romeo