Are there any firewalls (or similar) between your two filers?
I've seen cases where firewalls drop long-running sessions.
So maybe the trigger isn't really 83GB, but N minutes?
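
If it is a timeout, the abort points should line up on wall-clock time
rather than on bytes transferred. A quick back-of-the-envelope check
(Python; the throughput figures below are illustrative guesses, not
measurements from your filers):

    # How many minutes does each abort point correspond to at a given
    # steady throughput? 83G was the VSM abort point, 65G the QSM one.
    def minutes(gb, mb_per_sec):
        return gb * 1024 / mb_per_sec / 60

    for rate in (20, 30, 50):  # MB/s, assumed rates
        print("%2d MB/s: 83G ~ %3.0f min, 65G ~ %3.0f min"
              % (rate, minutes(83, rate), minutes(65, rate)))

If the VSM was moving ~30 MB/s and the QSM ~23 MB/s, both aborts land
in the same 45-50 minute window, which would fit a fixed session
timeout. The timestamps in /etc/log/snapmirror should tell you the
actual elapsed time per transfer.
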
Davin.
At 13:09 on Nov 7, 2007, "Mike Partyka" wrote:
> Yes, although from the source logs you can see it doesn't hang in
> the "exact" same place; it's always around 83G transferred. I
> reconfigured the VSM to a QSM this morning, since it's really just qtree
> tree1 in the flexvol, and it hung even sooner, at the 65G mark. I'm
> trying to get downtime to run wafl_iron at this point. I just don't
> know what else to do.
>
> -----Original Message-----
> From: Mailing Lists [mailto:mlists@uyema.net]
> Sent: Wed 11/7/2007 12:06 PM
> To: Mike Partyka
> Cc: NetApp Toasters List
> Subject: Re: Snapmirror initialization aborts consistently
>
> It isn't clear whether you tried initializing again where it left off.
> Does it abort without continuing past 83GB?
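
(On that point: if memory serves, an aborted baseline leaves a restart
checkpoint that shows up in the long status output on the destination,
something like

    hci2> snapmirror status -l rcv_data

and re-issuing the initialize picks up from that checkpoint rather
than starting from zero. Worth watching whether yours resumes past 83G
or dies at the same mark again.)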
>
> On Nov 6, 2007, at 2:50 PM, "Mike Partyka" <mpartyka@acmn.com> wrote:
>
> > Hello,
> >
> > I'm having a problem with Async SnapMirror where I have a 500G
> > flexvol on both source and destination. When I initialize it, the
> > transfer fails at 83G consistently. I've destroyed the volume and
> > rebuilt it several times, but the problem recurs each time. I've run the
> > source snapmirror logs relating to the failure through the syslog
> > translator and all it really says is that "this is a generic
> > snapmirror error on source", which just isn't very helpful.
> >
> > Here are the logs from the source:
> > Tue Nov 6 16:04:46 CST [pipeline_3:notice]: snapmirror: Network
> > communication error
> > Tue Nov 6 16:04:46 CST [snapmirror.src.err:error]: SnapMirror
> > source transfer from data to hci2:rcv_data : transfer failed.
> > Tue Nov 6 16:26:36 CST [pipeline_3:notice]: snapmirror: Network
> > communication error
> > Tue Nov 6 16:26:36 CST [snapmirror.src.err:error]: SnapMirror
> > source transfer from data to hci2:rcv_data : transfer failed.
> >
> > The source 3050a is running DOT 7.0.5 and the destination is
> > running DOT 7.0.6.
> >
> > The volume options are identical as far as I can tell, based on the
> > "vol options -v data_vol" command.
> >
> > The filer sites are connected via a GbE MAN, so bandwidth isn't the
> > problem.
> >
> > I've checked for errors on the Ethernet interfaces on both ends,
> > and none of the error counters are incrementing.
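
(Clean interface counters don't rule out a middlebox, by the way: a
stateful firewall that times the session out drops it at layer 4, so
nothing increments as a link-level error on either filer. If there is
a firewall in the path and you can get at it, its session table or
logs for the SnapMirror connection, TCP port 10566, would be the
place to look.)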
> >
> > Has anyone on the list experienced something like this? Or have any
> > troubleshooting advice?
> >
> > Thx
> >
> > Mike Partyka
> >