This cluster consists of two FAS8020s; the source aggregate is 5x24x900GB 10k SAS, and the destination aggregate is 3x20x2TB 7.2k SATA plus 12x200GB SSD configured as a Flash Pool.

The aggregate isn't really choking at 150-200MB/s; it's just that the normal workload is also there while the vol move is running. I'm not having any issues with the normal workload, and to my understanding the vol move should be a background process with lower priority. In practice, though, the vol move even causes back-to-back CPs (not frequently, but they're there), so it's not really intelligent about what it does.

 

Here's one block of sysstat output where you can also see the B2B CPs:

 

CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s

                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out

35%   4650      0      0    4860  214524  46061  130840 182852       0      0     2     93%  100%  :f  100%      11    199      0       5  11874       0      0

53%   4366      0      0    4817  124716  58833  226307 229832       0      0     0s    93%   99%  Hn   92%       0    451      0      11  28282       0      0

47%   4693      0      0    4766  195086  37085  217188 311210       0      0     0s    94%  100%  :f   94%       0     72      1       2   5056       0      0

34%   6188      0      0    6250  186516  18516  122901 220192       0      0     0s    95%  100%  :f  100%      10     52      0       1   3077       0      0

33%   7581      0      0    7680  219926  31833   97235 125602       0      0     0s    96%  100%  :f  100%       0     99      0       2   6613       0      0

62%   4575      0      0    4621  199941  76095  231460 329464       0      0     4     96%   99%  Hs   81%       0     45      1       1   3023       0      0

43%   3459      0      0    3624  287009  41318  184443 308187       0      0     4     95%  100%  :f  100%     142     23      0       1    329       0      0

29%   2138      0      0    2151  218232  31139  113884 208768       0      0     0s    94%  100%  :f  100%       3     10      0       0    655       0      0

33%   3467      0      0    3544  241201  52691   81984  62936       0      0     1     96%  100%  :v   95%      10     66      1       2   4207       0      0

69%   3740      0      0    3746  176796  73816  306508 421728       0      0     1     94%   98%  Hf   97%       6      0      0       0      0       0      0

29%   2222      0      0    2222  181108  32002  133584 178744       0      0     0s    95%  100%  :f  100%       0      0      0       0      0       0      0

28%   2373      0      0    2374  187496  34496  107384 177232       0      0     1     96%  100%  :f  100%       0      0      1       0      0       0      0

38%   4597      0      0    4638  223817  45700  136612 215112       0      0     2     94%  100%  :f  100%      41      0      0       0      0       0      0

31%   4848      0      0    4858  183867  73667  110072 139284       0      0     2     92%  100%  :f  100%      10      0      0       0      0       0      0

58%   3216      0      0    3217   84722  45097  250230 376368       0      0     2     96%   99%  Bs   78%       0      0      1       0      0       0      0

37%   3248      0      0    3249  156565  37255  179684 280940       0      0     2     96%  100%  :f  100%       0      1      0       4      0       0      0

26%   3834      0      0    3836   90953  29907  122150 181119       0      0     2     94%  100%  :f  100%       0      2      0       0      0       0      0

29%   3866      0      0    3867  166494  62617   93992 130188       0      0     2     96%  100%  :f  100%       0      0      1       0      0       0      0

24%   2560      0      0    2568  149435  34297  107012 164876       0      0     2     97%  100%  :f  100%       8      0      0       0      0       0      0

23%   3115      0      0    3272  146957  38905   76392  89962       0      0     4s    95%  100%  :f  100%       1    156      0   10021      4       0      0
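For anyone who wants to scan a longer sysstat capture for these automatically, here's a quick sketch in plain Python. The column position of the CP type field is taken from the header above (23 whitespace-separated fields per data row), so adjust it if your sysstat invocation prints a different layout; a back-to-back CP shows up as a "CP ty" value starting with "B":

```python
# Sketch: flag back-to-back CPs in sysstat-style output.
# Column layout assumed to match the capture above; "CP ty" is field 15.

CP_TY_COL = 14  # 0-based index of the CP type column

def b2b_rows(lines):
    """Return the rows whose CP type indicates a back-to-back CP ('B...')."""
    hits = []
    for line in lines:
        fields = line.split()
        if len(fields) < 16 or not fields[0].endswith('%'):
            continue  # skip headers, separators and blank lines
        if fields[CP_TY_COL].startswith('B'):
            hits.append(line)
    return hits

sample = """\
35%   4650      0      0    4860  214524  46061  130840 182852       0      0     2     93%  100%  :f  100%      11    199      0       5  11874       0      0
58%   3216      0      0    3217   84722  45097  250230 376368       0      0     2     96%   99%  Bs   78%       0      0      1       0      0       0      0
"""
print(len(b2b_rows(sample.splitlines())))  # prints 1
```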

 

Best,

 

Alexander Griesser

Head of Systems Operations

 

ANEXIA Internetdienstleistungs GmbH

 

E-Mail: ag@anexia.at

Web: http://www.anexia.at

 

Anschrift Hauptsitz Klagenfurt: Feldkirchnerstraße 140, 9020 Klagenfurt

Geschäftsführer: Alexander Windbichler

Firmenbuch: FN 289918a | Gerichtsstand: Klagenfurt | UID-Nummer: AT U63216601

 

From: Francis Kim [mailto:fkim@BERKCOM.com]
Sent: Saturday, September 26, 2015 9:09 PM
To: Alexander Griesser <AGriesser@anexia-it.com>
Cc: Jeffrey Mohler <jmohler@yahoo-inc.com>; Parisi, Justin <Justin.Parisi@netapp.com>; toasters@teaparty.net
Subject: Re: AW: AW: AW: Vol Move Throttling in cDOT

 

You can certainly spread the word on various forums.

 

Curious. What's your setup, other than it being a two-node switchless cluster? Controller class? Source and destination aggregates? Your destination aggregate choking at 150-200MB/s makes me suspect it's probably a small SATA aggregate?



On Sep 26, 2015, at 12:00 PM, Alexander Griesser <AGriesser@anexia-it.com> wrote:

Tell me how and I'll voice it loud and clear :-)

 

Alexander Griesser


 

From: Francis Kim [mailto:fkim@BERKCOM.com]
Sent: Saturday, September 26, 2015 8:59 PM
To: Jeffrey Mohler <jmohler@yahoo-inc.com>
Cc: Parisi, Justin <Justin.Parisi@netapp.com>; Alexander Griesser <AGriesser@anexia-it.com>; toasters@teaparty.net
Subject: Re: AW: AW: Vol Move Throttling in cDOT

 

Since 2008?!?! Maybe a critical mass of vol move fans needs to voice their complaint about this.



On Sep 26, 2015, at 11:26 AM, Jeffrey Mohler <jmohler@yahoo-inc.com> wrote:

I asked Product Management (Brooks) for this at Foresight, back in 2008 or maybe 2009, going over the impact that not having it has on user operations.

 

_________________________________

Jeff Mohler

Tech Yahoo, Storage Architect, Principal

(831)454-6712
YPAC Member

TW: @PrincipalYahoo

YM: Supra89ta

 

 

 

On Saturday, September 26, 2015 11:22 AM, "Parisi, Justin" <Justin.Parisi@netapp.com> wrote:

 

I would not recommend moving the cluster network to 1Gb links. That's unsupported and could affect the overall operation of the cluster.

 

My understanding is that product management is aware of the ask for vol move throttling.

 

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Francis Kim
Sent: Saturday, September 26, 2015 11:51 AM
To: Alexander Griesser
Cc: toasters@teaparty.net
Subject: Re: AW: AW: Vol Move Throttling in cDOT

 

Your problem is that the vol move is going too fast, pegging your target aggregate's disks.

 

Since there seems to be no sanctioned way to slow down the vol move, I would suggest the following hacks.

 

Option 1:

Move your cluster network links to a couple of 1Gb links; I assume they're currently on 10GbE links. You might even set the 1Gb ports to a 1500 MTU.

 

Option 2:

Introduce a bogus read workload on your source volume while the vol move is happening, maybe iometer with random reads.

 



On Sep 25, 2015, at 11:53 PM, Alexander Griesser <AGriesser@anexia-it.com> wrote:

Hey Tony,

 

thanks for your efforts, greatly appreciated.

We're using a switchless cluster here with 2x10G links, which are not saturated as far as I can see, but how could I tell?

Sysstat on the destination controller clearly shows 100% disk utilization while the vol move is running; once I stop it, disk utilization goes back down to 20%, so it's reproducible. The longest I let it run was about 35 minutes, hoping it would settle down and stop hammering the destination aggregate so hard, but after that I had to stop it because the lag was too much and I was getting latency issues on some clients.

 

I did not specify the -foreground flag when running the vol move; instead I monitored the progress periodically with vol move show, where I could see it replicating in the range of 150-200MBps, clearly above the QoS policy I had set. But I've read elsewhere over the last few days that QoS policies only apply to client-initiated workloads, not to system-initiated ones. With that in mind, I checked the system-defined QoS policy groups and found some interesting policies there, but I'm not sure whether I could simply create a new system-defined QoS policy group that would apply to the vol move (read: SnapMirror) operation, and I'm not keen enough to modify the existing system-defined policy groups :-)
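For reference, the 150-200MBps figure above is easy to derive from two successive vol move show samples; a minimal sketch in Python (the byte counters and the 60-second interval below are made-up example values, not real output):

```python
# Sketch: average transfer rate between two progress samples,
# e.g. byte counters read from two successive `vol move show` outputs.
# The sample values below are hypothetical.

def mb_per_s(bytes_first, bytes_second, seconds_apart):
    """Average MB/s between two byte-counter samples taken seconds_apart apart."""
    return (bytes_second - bytes_first) / seconds_apart / (1024 * 1024)

# Two samples taken 60 seconds apart: 120 GiB sent, then 131 GiB sent.
rate = mb_per_s(120 * 1024**3, 131 * 1024**3, 60)
print(round(rate, 1))  # prints 187.7
```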

 

The sad thing is that I thought I could take the burden of the storage-tier migration off the client by simply moving the volume for him, but if there's no way to throttle that process, I'll have to present new LUNs on the destination aggregate to the client and ask him to replicate the data himself; that way I can limit the available bandwidth with preset QoS policies on the volumes.

 

Best,

 

Alexander Griesser


 

From: Tony Bar [mailto:tbar@BERKCOM.com]
Sent: Friday, September 25, 2015 10:16 PM
To: Alexander Griesser <AGriesser@anexia-it.com>
Cc: toasters@teaparty.net
Subject: Re: AW: Vol Move Throttling in cDOT

 

Alexander -

 

I've been in touch with NetApp about this, and they're telling me that this shouldn't be happening unless you're using the -foreground flag on the command, and that it should never interfere with a workload already running on the destination aggregate as far as disk utilization goes. What can happen, though, is that if you don't have enough links for the cluster network, the interfaces can get bogged down. I guess the question is whether what you're seeing is a network issue or a disk issue. Can I ask how many cluster connections you're currently using, and do you have free ports on the filer and your cluster switches to add more links in case the network links are getting saturated?

 

I'm a little suspicious of their explanation and, like you, I believe there should be a way to set a throttle on vol move. What they're saying, however, is that the flag doesn't refer to a user-definable setting but to an internal mechanism that's supposed to manage the process intelligently and throttle automatically. The use case for the ignore option is then to ignore that internal mechanism and give the vol move as much I/O as possible.

 

It's interesting that you brought this up, though; if NetApp takes notice, perhaps they'll consider exposing the throttle as a user-tunable setting.

 

Anthony Bar
tbar@berkcom.com
Berkeley Communications


On Sep 24, 2015, at 12:37 AM, Alexander Griesser <AGriesser@anexia-it.com> wrote:

Well, that obviously doesn't work then. The vol move causes disk utilization on the destination aggregate to sit at 100% constantly, and OCPM is sending me tons of notifications about slow data-processing nodes due to replication, etc., so I definitely need to be able to throttle this process but still haven't found a way to do that :-/

 

Best,

 

Alexander Griesser


 

From: andrei.borzenkov@ts.fujitsu.com [mailto:andrei.borzenkov@ts.fujitsu.com]
Sent: Thursday, September 24, 2015 9:02 AM
To: Alexander Griesser <AGriesser@anexia-it.com>; Tony Bar <tbar@BERKCOM.com>
Cc: toasters@teaparty.net
Subject: RE: Vol Move Throttling in cDOT

 

Reading the documentation, it looks like -bypass-throttling applies to the internal throttling performed by Data ONTAP:

 

A volume move operation might take more time than expected because moves are designed to occur nondisruptively in the background in a manner that preserves client access and overall system performance. For example, Data ONTAP throttles the resources available to the volume move operation.

 

IOW, a volume move is expected not to impact normal client activity. Do you observe any slowdown during the volume move?

 

---

With best regards

 

Andrei Borzenkov

Senior system engineer

FTS WEMEAI RUC RU SC TMS FOS


FUJITSU

Zemlyanoy Val Street, 9, 105 064 Moscow, Russian Federation

Tel.: +7 495 730 62 20 ( reception)

Mob.: +7 916 678 7208

Fax: +7 495 730 62 14

Company details: ts.fujitsu.com/imprint


 

From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of Alexander Griesser
Sent: Wednesday, September 23, 2015 3:13 PM
To: Tony Bar
Cc: toasters@teaparty.net
Subject: AW: Vol Move Throttling in cDOT

 

Tony,

 

thanks. A QoS policy on the volume does not seem to work: I set a QoS policy down to 10MBps, but the transfer was still running at 200MBps+, so I've aborted it again.

I found a few websites talking about `option replication.throttle.enable` and the like, but that no longer seems to apply to cDOT systems.

 

The vol move uses SnapMirror in the background, AFAIK, so I also checked the SnapMirror policies (only the default policies are available if you haven't done anything with SnapMirror), and the only thing I can configure in the default policies is the transfer priority:

 

*> snapmirror policy show -instance

 

                   Vserver: Cluster

    SnapMirror Policy Name: DPDefault

              Policy Owner: cluster-admin

               Tries Limit: 8

         Transfer Priority: normal

Ignore accesstime Enabled: false

   Transfer Restartability: always

                   Comment: Default policy for DP relationship.

     Total Number of Rules: 0

                Total Keep: 0

                     Rules: Snapmirror-label                 Keep Preserve Warn

                            -------------------------------- ---- -------- ----

                            -                                   - -           -

 

                   Vserver: Cluster

    SnapMirror Policy Name: XDPDefault

              Policy Owner: cluster-admin

               Tries Limit: 8

         Transfer Priority: normal

Ignore accesstime Enabled: false

   Transfer Restartability: always

                   Comment: Default policy for XDP relationship with daily and weekly rules.

     Total Number of Rules: 2

                Total Keep: 59

                     Rules: Snapmirror-label                 Keep Preserve Warn

                            -------------------------------- ---- -------- ----

                            daily                               7 false       0

                            weekly                             52 false       0

 

2 entries were displayed.

 

Doesn’t seem to be the right place either…

 

Alexander Griesser


 

From: Tony Bar [mailto:tbar@BERKCOM.com]
Sent: Wednesday, September 23, 2015 1:43 PM
To: Alexander Griesser <AGriesser@anexia-it.com>
Cc: toasters@teaparty.net
Subject: Re: Vol Move Throttling in cDOT

 

Alexander -

 

I believe this is accomplished with the volume QoS policy tool, which is why you see an option to bypass throttling but no option to set a throttle on the operation itself.

 

I would have to test this in my lab environment to be 100% sure, but I'm pretty sure that's where you should look next.

Regards,
Anthony Bar 
tbar@berkcom.com
Berkeley Communications

 

On Sep 23, 2015, at 4:26 AM, Alexander Griesser <AGriesser@anexia-it.com> wrote:

Hey there,

 

I did some research already but wasn’t able to find what I was looking for, so I’m trying a quick shot here:

Does anyone know if it’s actually possible to throttle a vol move on cDOT?

 

vol move start does not list an option for that, and once the move is running, there's also no vol move modify or anything similar.

 

*> vol move start ?

  (volume move start)

    -vserver <vserver name>                                           Vserver Name

   [-volume] <volume name>                                            Volume Name

   [-destination-aggregate] <aggregate name>                          Destination Aggregate

  [[-cutover-window] {30..300}]                                       Cutover time window in seconds (default: 45)

  [ -cutover-attempts {1..25} ]                                       Number of Cutover attempts (default: 3)

  [ -cutover-action {abort_on_failure|defer_on_failure|force|wait} ]  Action for Cutover (default: defer_on_failure)

  [ -perform-validation-only [true] ]                                 Performs validation checks only (default: false)

  [ -foreground {true|false} ]                                        Foreground Process

  [ -bypass-throttling {true|false} ]                                 *Bypass Replication Engine Throttling

  [ -skip-delta-calculation {true|false} ]                            *Skip the Delta Calculation

 

I'm currently migrating some quite big volumes from SAS to SATA across heads, and the SATA aggregate is of course experiencing some lag now, so I'd love to throttle that a bit if possible.

Any idea?

Would a QoS policy on the source volume help here, or does NetApp-internal stuff (like a vol move) override QoS limits?

 

Best,

 

Alexander Griesser


 

_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters


 
