Well, that obviously doesn’t work then – the vol move drives disk utilization on the destination aggregate to a constant 100%, and OCPM is sending me tons of notifications about slow data processing nodes
due to replication, etc. – so I definitely need a way to throttle this process, but I still haven’t found one :-/
Best,
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
Anschrift Hauptsitz Klagenfurt: Feldkirchnerstraße 140, 9020 Klagenfurt
Geschäftsführer: Alexander Windbichler
Firmenbuch: FN 289918a | Gerichtsstand: Klagenfurt | UID-Nummer: AT U63216601
Reading the documentation, it looks like -bypass-throttling applies to internal throttling performed by Data ONTAP:
A volume move operation might take more time than expected because moves are designed to
occur nondisruptively in the background in a manner that preserves client access and overall
system performance.
For example, Data ONTAP throttles the resources available to the volume move operation.
In other words, a volume move is expected not to impact normal client activity. Do you observe any slowdown during the volume move?
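For reference, that internal throttling stays in effect as long as -bypass-throttling is left at its default of false – an illustrative invocation (vserver, volume and aggregate names are placeholders):

*> volume move start -vserver vs1 -volume vol_data -destination-aggregate aggr_sata -bypass-throttling false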
---
With best regards
Andrei
Borzenkov
Senior system engineer
FTS WEMEAI RUC RU SC TMS FOS
FUJITSU
Zemlyanoy Val Street, 9, 105 064 Moscow, Russian Federation
Tel.: +7 495 730 62 20 ( reception)
Mob.: +7 916 678 7208
Fax: +7 495 730 62 14
Tony,
thanks – a QoS policy on the volume does not seem to work. I just set a QoS policy limiting throughput to 10MB/s, but the transfer was still running at 200MB/s+, so I’ve aborted it again.
I found a few websites mentioning `option replication.throttle.enable` and related options, but those no longer seem to apply to cDOT systems.
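For reference, the knobs those websites describe look like this (7-Mode only, as far as I can tell – these options do not exist in cDOT; the KB/s values are just examples):

options replication.throttle.enable on
options replication.throttle.incoming.max_kbs 10240
options replication.throttle.outgoing.max_kbs 10240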
AFAIK, vol move uses SnapMirror in the background, so I also checked the SnapMirror policies (only the default policies are available if you haven’t configured SnapMirror yet), and in the default policies
the only thing I can configure is the transfer priority:
*> snapmirror policy show -instance
Vserver: Cluster
SnapMirror Policy Name: DPDefault
Policy Owner: cluster-admin
Tries Limit: 8
Transfer Priority: normal
Ignore accesstime Enabled: false
Transfer Restartability: always
Comment: Default policy for DP relationship.
Total Number of Rules: 0
Total Keep: 0
Rules: Snapmirror-label Keep Preserve Warn
-------------------------------- ---- -------- ----
- - - -
Vserver: Cluster
SnapMirror Policy Name: XDPDefault
Policy Owner: cluster-admin
Tries Limit: 8
Transfer Priority: normal
Ignore accesstime Enabled: false
Transfer Restartability: always
Comment: Default policy for XDP relationship with daily and weekly rules.
Total Number of Rules: 2
Total Keep: 59
Rules: Snapmirror-label Keep Preserve Warn
-------------------------------- ---- -------- ----
daily 7 false 0
weekly 52 false 0
2 entries were displayed.
Doesn’t seem to be the right place either…
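(For completeness: lowering that priority would look roughly like the line below – untested, the policy name is just the default DP policy from the output above, and it’s unclear whether a vol move honors a SnapMirror policy’s transfer priority at all:

*> snapmirror policy modify -vserver Cluster -policy DPDefault -transfer-priority low

)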
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
I believe this is accomplished with the volume QoS policy tool, which is why you see an option to bypass throttling but no option to set a throttle on the operation itself.
I would have to test this in my lab environment to be 100% sure, but I am fairly confident that is where you should look next.
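A minimal sketch of that approach (untested; the policy-group and volume names are placeholders, and the limit is whatever you want to cap the move at):

*> qos policy-group create -policy-group pg_throttle -vserver vs1 -max-throughput 10MB/s
*> volume modify -vserver vs1 -volume vol_data -qos-policy-group pg_throttle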
Regards,
Anthony Bar
tbar@berkcom.com
Berkeley Communications
Hey there,
I did some research already but wasn’t able to find what I was looking for, so I’m trying a quick shot here:
Does anyone know if it’s actually possible to throttle a vol move on cDOT?
vol move start does not list an option for that, and once the move is running there is also no vol move modify or anything similar.
*> vol move start ?
(volume move start)
-vserver <vserver name> Vserver Name
[-volume] <volume name> Volume Name
[-destination-aggregate] <aggregate name> Destination Aggregate
[[-cutover-window] {30..300}] Cutover time window in seconds (default: 45)
[ -cutover-attempts {1..25} ] Number of Cutover attempts (default: 3)
[ -cutover-action {abort_on_failure|defer_on_failure|force|wait} ] Action for Cutover (default: defer_on_failure)
[ -perform-validation-only [true] ] Performs validation checks only (default: false)
[ -foreground {true|false} ] Foreground Process
[ -bypass-throttling {true|false} ] *Bypass Replication Engine Throttling
[ -skip-delta-calculation {true|false} ] *Skip the Delta Calculation
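As far as I can tell, the only throttling-related switch in there is the advanced-mode -bypass-throttling flag; for example, a validation-only dry run (placeholder names) would be:

*> vol move start -vserver vs1 -volume vol_data -destination-aggregate aggr_sata -perform-validation-only true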
I’m currently migrating quite a few big volumes from SAS to SATA across heads, and the SATA aggregate is of course experiencing some lag now, so I’d love to throttle that a bit if possible.
Any idea?
Would a QoS policy on the source volume help here, or does NetApp-internal traffic (like a vol move) override QoS limits?
Best,
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH