Hi Jeff,

You could also consider filer-level throttling (options replication) and lock down the overall transfer rate, either outbound or inbound. It's less granular, but if you're concerned about impact to user traffic it's also the best way to catch multiple simultaneous mirroring jobs and prevent them from getting out of hand.
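For reference, the filer-wide replication throttle looks something like this in 7-mode (option names from memory; verify against your ONTAP release before relying on them):

```shell
# Enable global (filer-wide) replication throttling
options replication.throttle.enable on

# Cap all outgoing replication transfers at ~10 MB/s (value in KB/s)
options replication.throttle.outgoing.max_kbs 10000

# Cap all incoming replication transfers at ~10 MB/s
options replication.throttle.incoming.max_kbs 10000
```

Note this is a shared budget across every SnapMirror/SnapVault transfer on the controller, not per relationship.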
Colin Bieberstein

Hi Jeff,

I had the same issue with 8.1.2P4. The way I ended up working around it was the following:

1. Kick off a snapmirror initialize without the -k option.
2. Use the snapmirror throttle command to throttle the initialize down. snapmirror throttle adjusts the throttling of transfers currently in progress.
3. (Optional) If this will be an ongoing SnapMirror relationship, update the snapmirror.conf file with the rate at which you would like to throttle the transfer going forward.

Essentially, there is a brief moment between steps 1 and 2 when the snapmirror initialize is not throttled. Not the best solution, but it worked.

Phil

On Fri, Aug 2, 2013 at 6:36 PM, Jeff Cleverley <jeff.cleverley@avagotech.com> wrote:

Sebastian,
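Phil's three steps might look like this on the console, using the volume names from Jeff's command (throttle values are in KB/s; treat this as a sketch, not gospel):

```shell
# 1. Start the baseline transfer without any throttle
snapmirror initialize -S sm15_3 new_sm15_3

# 2. Immediately throttle the in-flight transfer to ~10 MB/s
snapmirror throttle 10000 new_sm15_3

# 3. (Optional) Persist the throttle for future scheduled updates by
#    adding kbs=10000 to the relationship's line in /etc/snapmirror.conf,
#    e.g.:  filer:sm15_3 filer:new_sm15_3 kbs=10000 0 1 * *
```

The window between steps 1 and 2 is unthrottled, so have the throttle command ready to paste.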
What I got from NetApp today was to change the snapmirror.volume.local_nwk_bypass.enable option to off. It should then be possible to throttle using the -k argument. I haven't had a chance to try it out yet. I'll have to see what type of impact it has on the network interface when these run.
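For anyone else wanting to try the same workaround, the change is a one-liner (as I understand it, this disables the fast path that bypasses throttling when source and destination are on the same controller; behavior may vary by release):

```shell
# Disable the local-network bypass so -k throttling applies to
# SnapMirror transfers between volumes on the same filer
options snapmirror.volume.local_nwk_bypass.enable off

# The per-transfer throttle (KB/s) should then take effect:
snapmirror initialize -S sm15_3 -k 10000 new_sm15_3
```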
Jeff

On Thu, Aug 1, 2013 at 3:18 AM, Sebastian Goetze <spgoetze@gmail.com> wrote:
Hi Jeff,
just remember:
system=... changes priority of system vs. user in this volume
Use this if users are working on this vol and are impacted (e.g. the source vol).
Use the volume level priority to prioritize between volumes, either giving higher priority to volumes being used (recommended), or by giving lower priority to the SnapMirrored volumes.
But beware: if you have 50 vols, they will all be in the default bucket (prio Medium=50) with possibly lower combined prio (effectively prio=1 for every volume in the default bucket) than the 2 SnapMirror vols (source & destination, e.g. VeryLow=8).
Therefore you'd better use
priority set default option=value [option=value...]
The priority set default command manages the default priority policy, which is applied to volumes without any specific priority policy. The following options may be specified:
level Set the priority level for operations that are sent to the volume, compared to other volumes. The value may be one of VeryHigh, High, Medium, Low, VeryLow, or a numeric value from 8 (VeryLow) to 92 (VeryHigh). A volume with a higher priority level receives more resources than a volume with a lower priority level. The default value is Medium.
system
Set the relative priority for system-related operations (such as SnapMirror transfers) that are sent to the volume, compared to user operations sent to the volume. The value may be one of VeryHigh, High, Medium, Low, VeryLow, or a numeric value from 4 (VeryLow) to 96 (VeryHigh).
The default value is Medium.
Also be aware that system operations include things like WAFL tree updates (metadata...). So if you observe negative effects (e.g. in a high-file-count volume), give this volume a higher system priority or switch priority off when not needed.
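Putting the default-bucket advice above together, the commands might look like this (levels are illustrative values, not a recommendation; adjust for your workload):

```shell
# Turn priority scheduling on (required before any policy takes effect)
priority on

# Raise the default bucket so the ~50 untouched volumes collectively
# outrank the two SnapMirror volumes
priority set default level=High system=Medium

# De-prioritize the SnapMirror source and destination volumes
priority set volume sm15_3 level=Low system=Low
priority set volume new_sm15_3 level=Low system=Low
```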
HTH
Sebastian
On 01.08.2013 10:04, Jeff Cleverley wrote:
Sebastian,
I had not thought about the priority option. I'll see if it makes sense tomorrow. The slowness seems to affect access to the entire filer. Changing the system priority to low on the source and destination volumes might do the trick though. Thanks for the idea.
Jeff
On Thu, Aug 1, 2013 at 12:58 AM, Sebastian Goetze <spgoetze@gmail.com> wrote:
Hi Jeff,
AFAIK, as previously mentioned, only the network speed is affected.
But did you think of the priority command?
There you can change relative (!) priorities, e.g. system (-> SnapMirror) vs. user.
priority set volume prodvol level=high system=low
Set the priority scheduling policy for volume prodvol to high compared to other volumes, and prioritize system operations on the volume low compared to user operations on the same volume. These options take effect only if 'priority on' has previously been issued.
So you would set 'level=high' on the volumes where the users are impacted, and 'system=low' (and maybe 'level=low', if the source isn't the one where the users are impacted) on the volumes involved in the SnapMirror.
Don't forget to set 'priority on'.
And maybe 'priority off' after the snapmirror is through if you want to go back to the previous behavior.
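As a sketch, the whole lifecycle for a one-off transfer window might be (prodvol as in the example above; mirror volume name assumed):

```shell
# Enable priority scheduling
priority on

# Protect the user-facing volume; de-prioritize SnapMirror I/O on it
priority set volume prodvol level=high system=low

# De-prioritize the mirror destination volume
priority set volume new_sm15_3 level=low

# ... run the snapmirror initialize/update here ...

# Revert to the previous behavior once the transfer completes
priority off
```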
HTH (Hi Oldtimers, still know this one? - Hope That Helps...)
Sebastian
On 01.08.2013 07:15, Jeff Cleverley wrote:
I did try different positions for the -k. It didn't seem to matter.
Jeff
On Wed, Jul 31, 2013 at 8:45 PM, steve klise <sklise@hotmail.com> wrote:
I am not sure, but you may want to change your syntax to put the -k before the -S. Not sure if that really matters, but this is what I found in one of the docs.
Good luck.
snapmirror update [-k n] -S source_system:source_volume
[dest_system:]dest_volume
Date: Wed, 31 Jul 2013 19:54:46 -0600
Subject: Snapmirror throttle not working
From: jeff.cleverley@avagotech.com
To: Toasters@teaparty.net
Greetings,
I'm running 8.1.2P4, 7-mode on some 6290s. I need to do some snapmirrors to re-balance some disk space. The -k option to throttle the transfer doesn't seem to be having any effect. I've tried modifying the placement of the -k, but it doesn't seem to matter. I also tried to modify it after it was running, and that doesn't seem to help either. The source and destination are on the same file system. Here is the command I'm running:

snapmirror initialize -S sm15_3 -k 10000 new_sm15_3

If I'm understanding correctly, this should be allowing 10 MB/s. Here is a cut of a sysstat 3 after starting it:
 CPU     NFS    CIFS    HTTP     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache
                                  in    out    read  write    read  write    age
 73%     598       0       0    2091 349080  338016    271       0      0     7
 71%    1019       0       0    2046 319892  324019  13114       0      0    0s
 71%    2800       0       0    3880 330527  343528  17379       0      0     7
 69%    1440       0       0    3405 330279  392647  22343       0      0    0s
 87%    1614       0       0    2128 320151  607753 168553       0      0    0s
 87%     827       0       0    5652 244701  584436 371689       0      0    0s
 91%     897       0       0    4242 344072  680454 386373       0      0    0s
As you can see, the disk read/write counts go way up. This is causing noticeable latency in NFS access for clients. While I really like that the new hardware can pump data around, I need to be able to control it.
What am I doing wrong?
Thanks,
Jeff
--
Jeff Cleverley
Unix Systems Administrator
4380 Ziegler Road
Fort Collins, Colorado 80525
970-288-4611
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters