Perhaps I missed something, but does anyone know of any useful commands
for determining snapmirror performance? I'm currently running 5.3.5R2.
toaster1.iad> vol snapmirror status
Source Dest Status
toaster2.snv.xxxx:vol1 toaster1.iad.xxxx:vol2 Transferring (1% complete)
toaster1.snv.xxxx:vol1 toaster1.iad.xxxx:vol1 Transferring (0% complete)
This isn't exactly useful data, as they almost always seem to be at 0%,
though I rarely ever see 'mirror postponed' messages, which inclines me
to think that progress is being made in a timely manner. Updates are every
2 hours, and for two small-delta 60GB volumes, I figure it should be idle
a fair deal of the time.
I can watch recent snapshots show up on the SnapMirror destination, but
this is the only indication I have that it's working at all.
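In the meantime, the snapshot-watching can at least be scripted. A rough
sketch (the rsh access is an assumption, the sample names are made up, and
the snapshot naming just follows the ops guide's destination-volume-plus-
counter convention):

```shell
#!/bin/sh
# Pull the newest SnapMirror-generated snapshot out of a `snap list`
# style listing.  In real use the listing would come from something like
#   rsh toaster2.snv snap list vol1
# Here it's read from stdin so the parsing can be seen in isolation.

newest_sm_snap() {
    # $1 = destination volume name; SnapMirror snapshots are assumed
    # to be named "<destvol>.<counter>", newest listed first.
    awk -v vol="$1" '$NF ~ vol"\\.[0-9]+$" { print $NF; exit }'
}

# Canned sample listing (timestamps and names invented for the demo)
sample='  1% ( 1%)  0% ( 0%)  Jun 16 15:12  toaster1.iad.xxxx:vol2.47
  2% ( 1%)  1% ( 0%)  Jun 16 13:12  toaster1.iad.xxxx:vol2.46
  3% ( 1%)  1% ( 0%)  Jun 16 12:00  hourly.0'

echo "$sample" | newest_sm_snap 'toaster1.iad.xxxx:vol2'
```

Cron that against both ends and compare the counters, and you'd at least
know when the destination starts falling behind.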
The main point of concern is that the SnapMirror destination filer has
about 3300 miles between it and the source -- so we are much more
susceptible to quirks in the public network than one would encounter in a
lab environment or even a corporate WAN.
The best data I could find was from the snapmirror ops guide, which
suggests looking at the snap list on the source filer --
SnapMirror generated Snapshots should have the name of the
destination SnapMirror volume appended with a counter. By
coordinating the snapmirror.conf file with the time stamps shown
in the snapshot listing, you should be able to determine whether
or not SnapMirror is generating Snapshots as scheduled
..this seems pitifully inadequate -- something akin to calculating your
nfs ops/s by dividing the nfsstat -s totals by uptime whenever you wanted
to know.
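For what it's worth, even the timestamp-coordination approach can be
scripted into something slightly less painful. A sketch, assuming two
same-day HH:MM stamps pulled off consecutive SnapMirror snapshots in the
snap list output:

```shell
#!/bin/sh
# Interval between two consecutive SnapMirror snapshot timestamps,
# for sanity-checking against the schedule in snapmirror.conf.
interval_min() {
    # $1 = older "HH:MM", $2 = newer "HH:MM" (same day assumed)
    echo "$1 $2" | awk '{
        split($1, a, ":"); split($2, b, ":")
        print (b[1] * 60 + b[2]) - (a[1] * 60 + a[2])
    }'
}

interval_min 13:12 15:12    # a 2-hour schedule should print 120
```

Still an indirect measurement, but at least it catches a schedule that has
quietly stopped being honored.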
I've got syslog tweaked up to *.info, which hasn't given me much more than
statd reports.
Is there something better? Something that at least shows some simple
counters like:
toasterN> vol snapmirror status -v
Volume Source
vol1 toaster1:vol1 Current status: idle
Updates since reconfig: 94127
Failed updates: 5
Postponed updates: 17
Last successful update: Jun 16 15:12 (45m ago)
1251k, 15 minutes (1.39 KB/s)
Average: 4912k, 22 minutes (3.72 KB/s)
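The KB/s figures in that mock-up are just size over elapsed time, nothing
exotic. If the transfer sizes and durations were logged anywhere, the rate
would fall out with one line of awk:

```shell
#!/bin/sh
# KB/s from a transfer size in KB and a duration in minutes --
# the arithmetic behind the per-update rate lines in the mock-up.
rate_kbs() {
    awk -v k="$1" -v m="$2" 'BEGIN { printf "%.2f\n", k / (m * 60) }'
}

rate_kbs 1251 15    # the "1251k, 15 minutes" line -> 1.39
```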
This would actually give something that snapmirror.conf schedules could be
tuned with. Informative data to syslog would be good as well -- use a
local facility for it, and those of us with >1hr replications could
actually check progress via the logs.
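Something like this in the filer's /etc/syslog.conf is what I have in mind
(local5 and the log path are inventions of mine -- today there is no
SnapMirror facility to point at it, which is exactly the complaint):

```
# wish-list: route SnapMirror progress messages to their own log
local5.info     /etc/log/snapmirror
```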
Given the price tag on this option, I made the obviously foolish
presumption that there would be sufficient tools to monitor progress and
performance, especially in light of its branding as a 'Disaster Recovery'
solution..
Wishing I'd stuck to it and put the database on EMC w/ Oracle replication,
..kg..