If you are in a Windows environment, look at the Data ONTAP PowerShell Toolkit for creating your scripts. It will give you full API access via RPC without the need for rsh.
Keep in mind, when you cascade your snapvaults through your snapmirror destination, you will keep an extra set of snapshots locked on the source.
The way I do it is to set up all the snapvault schedules with no trigger time. Instead, I run scripts through a product called Control-M from BMC Software, so all of our backups are triggered by events rather than by time.
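For what it's worth, a time-less schedule can be set by using a dash in place of the hour list (command syntax from memory, and the volume/snapshot names here are made up):

```
# retain 30 copies of sv_nightly, but never fire on a clock schedule
sec> snapvault snap sched -x sv_vol sv_nightly 30@-

# later, triggered by the external scheduler (Control-M in our case);
# with -x on the schedule this pulls the qtrees before snapping
sec> snapvault snap create sv_vol sv_nightly
```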
On Fri, Oct 7, 2011 at 7:39 AM, Unnikrishnan KP
<krshnakp@gmail.com> wrote:
Hello all,
We are planning a snapvault backup of data from a snapmirror destination. Most of the volumes to be snapvaulted are LUNs in qtrees; the rest are NAS data.
I was looking at different ways to do this and came up with the following conclusion for block data:
1. Run a snapvault update on the _recent copy, then have a snapvault snap sched on the secondary fire after the update completes. This requires a bit of scripting and scheduling.
2. The best practices guide recommends using the SnapManager product for SV backups. Since that too requires a level of scripting, I am not sure what difference it would make. It also requires rsh access to the filers, which we do not permit.
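Option 1 above could be sketched roughly as follows. This is only an illustration, assuming ssh to the secondary in place of rsh; the hostname, qtree path, and snapshot name are hypothetical:

```python
#!/usr/bin/env python
"""Sketch of option 1: script a snapvault update on the secondary,
then roll the retention snapshot once the transfer is done."""

import subprocess

def build_commands(secondary, sv_volume, qtree_path, snap_name):
    """Return the ssh command lines for one backup cycle."""
    return [
        # pull the latest data from the snapmirror destination qtree
        ["ssh", secondary, "snapvault", "update", qtree_path],
        # create the retention snapshot on the secondary volume
        ["ssh", secondary, "snapvault", "snap", "create", sv_volume, snap_name],
    ]

def run_backup(secondary, sv_volume, qtree_path, snap_name, dry_run=True):
    cmds = build_commands(secondary, sv_volume, qtree_path, snap_name)
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.check_call(cmd)
    return cmds

if __name__ == "__main__":
    run_backup("sv-filer", "sv_vol", "/vol/sv_vol/q_luns", "sv_nightly")
```

An external scheduler (cron, Control-M, or similar) would invoke this per qtree, which keeps the trigger event-driven rather than clock-driven.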
As for the NAS volumes: the snapmirror source will need a snapvault snap sched schedule set up (something that fires before the snapmirror transfer). That snapshot will then be snapmirrored to the SM destination. The SV secondary will have a snapvault snap sched -x schedule that runs after the SM transfers complete.
The other option would be to follow the same pattern as option 1 used for block data.
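Put concretely, the NAS flow might look something like the following (command syntax from memory, names purely illustrative):

```
# on the snapmirror source: create the snapshot before the
# snapmirror transfer window (1 copy retained, fires at midnight here)
pri> snapvault snap sched src_vol sv_base 1@0

# the scheduled snapmirror transfer then carries sv_base to the SM
# destination, which also acts as the snapvault primary

# on the snapvault secondary: -x pulls the qtrees before snapping;
# time it (or trigger it manually) after the SM transfers complete
sec> snapvault snap sched -x sv_vol sv_base 30@2
```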
Just wondering if any of you have faced this issue and what was done in your environment? Any feedback on this will be appreciated.
Regards,
Unnikrishnan KP
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters