Hi,
In testing NDMP dumps over 10GbE, I am seeing them max out at 1.5Gb/s. Some here are thinking that NDMP should be using more of that pipe. Does anyone know about, or have numbers for, NDMP transfer rates over 10GbE? Are there any caps or thresholds in ONTAP that keep NDMP from gobbling up resources on the controller? I have run tests over the 10GbE connection from controller to controller, 10k drives to 10k drives, SATA drives to SATA drives, and 1.5Gb/s is always the ceiling. FTP'ing data from server to server across the same 10GbE network shows rates of 2.5Gb/s or higher.
My NDMP options:
ndmpd.access                 all
ndmpd.authtype               challenge
ndmpd.connectlog.enabled     off
ndmpd.data_port_range        all
ndmpd.enable                 on
ndmpd.fh_node_retry_interval 250
ndmpd.ignore_ctime.enabled   off
ndmpd.maxversion             4
ndmpd.offset_map.enable      on
ndmpd.password_length        16
ndmpd.preferred_interface    e1a (value might be overwritten in takeover)
ndmpd.tcpnodelay.enable      on
ndmpd.tcpwinsize             65534
IF e1a configuration:
e1a: flags=0x5f4e867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,NOWINS> mtu 1500
        inet x.x.x.x netmask 0xffffff00 broadcast x.x.x.x
        partner e1a (not in use)
        ether 00:07:43:08:98:ae (auto-10g_sr-fd-up) flowcontrol none
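For anyone reproducing this test, one quick check (not from the thread, just a standard 7-mode tool) is to watch sysstat on the filer while the dump runs to see whether CPU or disk utilization, rather than the network, is the limit:

    sysstat -x 1

High CPU or disk-util numbers during the transfer would point at the controller rather than a network cap.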
On Wed, Feb 20, 2013 at 9:31 AM, tmac <tmacmd@gmail.com> wrote:

DUMPing data is historically a slow process. There is much more involved than simply FTP'ing data to the NetApp: the filesystem is walked inode by inode, and archive bits are set, among other things.
--> A highly fragmented data set will also slow down any potential dumps.
--> Wide directories (large file counts in a single directory) significantly slow down dumps.
--> Small files slow down dumps.
--> qtree dumps seem to be a little better for dump speed.
--tmac
Tim McCarthy, Principal Consultant
On Wed, Feb 20, 2013 at 7:27 AM, Christopher S Eno <s.eno@me.com> wrote:
Thanks Tim!
The thought here was that NDMP was "built for speed" and should be taking all it can get from the 10GbE pipe. I was wondering if I was missing some hidden setting that was capping or throttling or "nice"-ing the NDMP process.
On Wed, Feb 20, 2013 at 11:21 AM, Jeff Mohler <speedtoys.racing@gmail.com> wrote:
NDMP has phases, and other work to do. I think that with the info here, it's far too early to declare NDMP at fault.
On 20 February 2013 16:31, tmac <tmacmd@gmail.com> wrote:
Adding to my list:
Collecting and disseminating File History (which takes significantly longer the more files there are in the file system).
As a point of reference, I *cannot* do any NDMP backups on nearly all of my filesystems.
I have so many files that it takes over 8 hours to do the file history phase, and the NDMP backup simply aborts.
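If the file history phase is the blocker, the NDMP spec defines a HIST environment variable that the backup application (the DMA) can set to turn file-history generation off; whether and where you can set it is product-specific, so the line below is illustrative only:

    HIST=N    (NDMP environment variable; skips file-history generation)

The trade-off is losing single-file restore from that backup, so it is mainly useful for full-volume protection.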
--tmac
On Feb 20, 2013, at 12:13 PM, Darren Sykes <Darren.Sykes@csr.com> wrote:
If someone could get >5Gb/s (say) backing up a large iSCSI LUN on a dedicated volume, then it'd be fairly easy to determine there isn't an inherent limit in NDMP itself.
On Feb 20, 2013, at 1:17 PM, Patrick Giagnocavo <xemacs5@gmail.com> wrote:
Stupid question, but isn't 10GbE a lot faster with, e.g., a 9000-byte MTU? I thought I saw a 1500-byte MTU in the original post:
From original post:
e1a: flags=0x5f4e867<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,NOWINS> mtu 1500
If the 1500-byte MTU can be changed to 9000 without interrupting service (depends on the switch and other configuration, most likely), you might see a speed bump.
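A sketch of what that change would look like on a 7-mode filer (interface name taken from the original post; the switch ports, any inter-switch links, and the partner node would all need matching jumbo-frame settings, and the clients too):

    ifconfig e1a mtusize 9000

plus the same mtusize on the e1a line in /etc/rc so it survives a reboot.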
Cheers
Patrick
On Wed, Feb 20, 2013 at 10:36 AM, Scott Eno <s.eno@me.com> wrote:

Likely a slight boost, but I am at the mercy of the network guys. I do what they tell me.
On Wed, Feb 20, 2013 at 1:38 PM, Klise, Steve <klises@sutterhealth.org> wrote:
Just a lame question here, but how many, and what type of, spindles are behind the volume you are dumping? If you only have a handful of drives, that could be the culprit.
You know, another way to speed up backup is to change the backup blocking factor. I think the default is 63, and if you are going over the network it is certainly feasible to make it 126. If I recall correctly, there is an environment variable that can be set/modified to change the blocking factor.
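For a dump run locally on the filer, the blocking factor is the b option in the classic 7-mode dump syntax (tape device and path below are placeholders); for NDMP-driven backups the same knob is normally exposed by the backup application rather than set on the filer:

    dump 0ufb rst0a 126 /vol/myvol

(level-0 dump, update the dumpdates file, to tape device rst0a, blocking factor 126).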
--tmac
On 20 February 2013 20:29, Scott Eno <s.eno@me.com> wrote:
The tests I performed were from an aggregate of 46 10k drives to another identical aggregate on the other controller. We used a 500GB tar.gz file. Should be comparable to a LUN, ya think?

1.5Gb/s was the steady transfer rate. 1500 MTU, of course.
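For anyone wanting to repeat a controller-to-controller test like this, it is typically driven with ndmpcopy; a hypothetical invocation (filer names, paths, and credentials are placeholders, not from the thread):

    ndmpcopy -sa root:password -da root:password srcfiler:/vol/testvol dstfiler:/vol/testvol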
---- Scott Eno s.eno@me.com
On Feb 21, 2013, at 4:59 AM, Darren Sykes <Darren.Sykes@csr.com> wrote:
Yes, that should behave similarly and rules out a lot of the previous suggestions as to the possible cause.
When you say server to server (in your original post), do you mean using a server as an intermediary but reading and writing from the same place as with NDMP?
We’ve tested single stream performance on a variety of storage platforms and were surprised by some of the results.
Do you have any information about the NDMP TCP stream that could help here? Looking at the TCP window size, packet delivery reliability, etc. may give you some more clues. Even using NFS on a local LAN, it's pretty easy to hinder performance through client mount options as well as the networking-stack configuration on the source and destination machines. I'm not sure how NDMP behaves on a high-speed network, but it'd be interesting to look at a packet trace to see. I'd do it myself, but just don't have the time.
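The window angle is worth a back-of-the-envelope check against the ndmpd.tcpwinsize 65534 shown in the original post. A single TCP stream is bounded by window size divided by round-trip time; assuming an RTT of roughly 0.35 ms (an assumption, but plausible for a switched 10GbE LAN):

    65,534 bytes x 8 = ~0.52 Mbit in flight per round trip
    0.52 Mbit / 0.00035 s = ~1.5 Gb/s

which lands suspiciously close to the observed ceiling. If the ONTAP release in use accepts a larger value, raising the window, e.g.

    options ndmpd.tcpwinsize 262144

would be the experiment to try (the valid range varies by release, so check before setting). A trace for the window analysis can be captured on the filer with pktt, e.g. "pktt start e1a -d /etc/log" then "pktt stop e1a".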
On 21 February 2013 11:50, Scott Eno <s.eno@me.com> wrote:
"server to server" in my original post referred to using FTP to transfer data from one linux host to another linux host across the same 10GbE network. It's faster than NDMP dump from head "A" to head "B" over the same 10GbE network.
On Feb 21, 2013, Darren Sykes <Darren.Sykes@csr.com> wrote:
Ah, OK. That's probably not the best indicator then.
If you can test single-stream performance over NFS to your client (we use a RAM disk to avoid local-disk bottlenecks), then that'd be a fairer test.
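A minimal sketch of that kind of test on a Linux client (assumes NFSv3 and a test volume export; hostnames, sizes, and paths are placeholders):

    mkdir -p /mnt/ram /mnt/filer
    mount -t tmpfs -o size=8g tmpfs /mnt/ram
    mount -o vers=3,rsize=65536,wsize=65536 filer:/vol/testvol /mnt/filer
    # single-stream read: filer -> RAM disk
    dd if=/mnt/filer/500g.tar.gz of=/mnt/ram/testfile bs=1M count=6144
    # single-stream write: RAM disk -> filer
    dd if=/mnt/ram/testfile of=/mnt/filer/writetest bs=1M

(count=6144 keeps the read inside the 8g tmpfs.)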
I know there is a lot of confusion around reallocation of vols and aggrs. Here are the steps I plan to follow for doing a reallocate on a vol.
Environment:

* DOT 8.1Px, 7-mode
* 6080 HA
* FCP
* FC/SAS aggr, 64-bit

Steps:
* Add disks to the aggr
* Break the snapmirror
* Reallocate the volumes (-p; yes, I have snapshots)
* Reallocate the aggr
* Resync the snapmirror
* Done (see the command sketch below)
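A sketch of those steps as 7-mode commands (volume, aggregate, filer names, and disk counts are placeholders; run the snapmirror commands on the destination, and check the reallocate man page for flag details on your release):

    aggr add myaggr 8                      (add disks to the aggregate)
    snapmirror quiesce dstfiler:dstvol
    snapmirror break dstfiler:dstvol
    reallocate start -f -p /vol/myvol      (physical reallocation; safe with snapshots)
    reallocate start -A myaggr             (aggregate free-space reallocation)
    reallocate measure -o /vol/myvol       (re-check the optimization value)
    snapmirror resync -S srcfiler:srcvol dstfiler:dstvol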
Here is one of the best explanations of reallocation I have read out there. Pictures help: http://www.theselights.com/2010/03/understanding-netapp-volume-and.html
Here is my question: why do I have to break the SM? Won't the resync sync up the blocks on the SM destination? I am thinking of doing all the steps and then just reinitializing the destination vols, but it's about 7TB of volumes and I don't want to if it's not needed.

I have not read anything on this other than "break and resync". I have Exchange jobs that I don't want to leave broken for a day or two while I am doing reallocates. The value from reallocate measure on some of the vols is 16, which I hear is not good, but others have seen larger numbers.
Thanks all!