Have you tried a reallocate measure? What does a statit look like during the xfer?
Did it ever run properly?
Bert Kiers kiersb@xs4all.net wrote:
Hi,
We have a problem on one head in a 6070 fabric metro cluster where snapmirror is slow. The maximum speed is 9.6 MB/s no matter how high the throttle is set; if the throttle is set lower, the transfer runs at that lower value instead. It was originally diagnosed going over the network, but snapmirror from one volume to another on the same head is also slow. Vol copy and NFS writes to volumes in the same aggregate are ten times faster, and snapmirror on the partner head in the FMC is fast.
The aggregate has 84 15k RPM 300 GB disks with a RAID group size of 21. I have tried destroying and recreating the aggregate, and also tried a smaller one; same result. There is nothing special in /etc/messages or /etc/log/snapmirror.
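For completeness, the same-head test was just a plain volume-to-volume snapmirror, roughly like this on the 7-mode console; the aggregate name and size are placeholders:

    filer> vol create testcopy aggr1 500g         # scratch destination flexvol in the same aggregate (name/size are placeholders)
    filer> vol restrict testcopy                  # snapmirror destinations must be restricted
    filer> snapmirror initialize -S filer:test1 filer:testcopy
    filer> snapmirror status -l testcopy          # shows transfer progress, from which throughput can be read
    filer> sysstat -x 1                           # CPU, disk and network load during the transfer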
For the in-crowd: this is NetApp case 2001426595, opened 21-Apr-2010, but there has been no progress there, so I am asking here.
Any ideas?
Grtnx,
--
Bert Kiers
XS4ALL UNIX system administrator, suspected terrorist
1984 was not meant as a manual
On Mon, May 31, 2010 at 01:33:38PM -0400, Page, Jeremy wrote:
Have you tried a reallocate measure? What does a statit look like during the xfer?
The source volume:
    Tue Jun 1 13:57:56 CEST [filer-hm6e: wafl.reallocate.check.value:info]: Allocation measurement check on '/vol/test1' is 1.
The destination volume, after breaking the mirror:
    Tue Jun 1 14:48:00 CEST [filer-hm6e: wafl.reallocate.check.value:info]: Allocation measurement check on '/vol/testcopy' is 1.
Statit shows all CPUs are > 90% idle. The busiest disk is 3% busy with 3 IO/s. It is mostly writing full stripes:
    121.65  18 blocks per stripe size 18
     19.85   1 blocks per stripe size 1
and the rest < 2.
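For reference, that output comes from something like the following on the console; I believe the -o flag does a one-shot measurement, but it may differ per ONTAP release:

    filer> reallocate measure -o /vol/test1       # one-shot layout measurement; the result shows up in /etc/messages
    filer> statit -b                              # begin collecting statistics
      ... let the snapmirror transfer run for a while ...
    filer> statit -e                              # end collection and print the per-CPU and per-disk report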
Did it ever run properly?
Well. Yes and no.
This used to be a stand-alone filer for several years. It must have been a fully functional snapmirror destination at some point, but I don't remember. I am sure that during those years it was a fast snapmirror source and NFS server, 24x7. Then we moved the head and disks to another location and converted two of these to a fabric metro cluster. It has never worked properly since the conversion, but its partner does.
But someone from NetApp just mailed me about the options replication.throttle.* settings, and that turned out to be the problem.
It was:
    replication.throttle.enable             on
    replication.throttle.incoming.max_kbs   10000
And now, with:
    replication.throttle.enable             off
    replication.throttle.incoming.max_kbs   125000
the problem is over. A 10000 KB/s incoming throttle works out to roughly 9.8 MB/s, which matches the 9.6 MB/s ceiling we were seeing.
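For anyone hitting the same ceiling, the settings can be listed and changed on the console like this (option names exactly as above):

    filer> options replication.throttle           # list all replication.throttle.* options and their values
    filer> options replication.throttle.enable off
    filer> options replication.throttle.incoming.max_kbs 125000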
We did not know about that option, certainly did not set it ourselves, and must have read over it several times.
Thanks anyway,