Gents,
Let me first explain the overall layout. I've got a D2D2T design. Both
primary and secondary filers are v-series, and snapvaulted. We have 16TB of
CIFS home dir data volumes backing up to tape via NDMP. And the tape backup
is on LTO-3 running at a sustained throughput of 20 MB/s (F**king slow, I
know).
Does anybody have experience with how to speed up NDMP backup? At this
point, the LTO-3 is shoe-shining, as the minimum sustained throughput on
that drive is 40 MB/s. And also, does anybody have any experience with
NDMP to tell us why this is happening?
Thanks
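The shoe-shining follows directly from the numbers quoted above: the stream feeding the drive is below the drive's minimum sustained rate, so the drive has to stop, rewind, and reposition. A quick back-of-the-envelope check (using the figures from the post, not measured values):

```python
# Back-of-the-envelope numbers for the situation described above.
# Figures are the ones quoted in the post (20 MB/s stream, 40 MB/s
# minimum sustained LTO-3 rate, 16 TB of data), not measured values.

def backup_hours(data_tb, rate_mb_s):
    """Hours for one full pass at a given sustained rate (decimal units)."""
    return data_tb * 1_000_000 / rate_mb_s / 3600

STREAM = 20        # observed NDMP throughput, MB/s
DRIVE_MIN = 40     # minimum streaming rate of the LTO-3 drive, MB/s

# The drive shoe-shines whenever the stream can't keep it fed:
print("shoe-shining:", STREAM < DRIVE_MIN)                 # True
print("full backup: %.0f h" % backup_hours(16, STREAM))    # 222 h
```

At 20 MB/s a single full pass of 16 TB takes over nine days, which is why multiplexing more streams (or anything else that keeps the drive above its minimum rate) matters more than the drive's rated maximum.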
I'm pretty sure that the aggr snap reserve is not needed for core dump -
this is written to the 10% overhead space (or to spare drives, depending
on how you've configured your filer), then when the filer comes back up
and the 'savecore' is performed, it is written to the /etc/crash
directory.
As for the aggr copy command, that is news to me (you see how often I've
used it).
The one thing you did overlook: restoring an aggregate - not an
individual volume, mind you. This is the same concept as vol snap
restore.
Glenn
________________________________
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Michael Schipp
Sent: Monday, December 18, 2006 5:54 PM
To: toasters(a)mathworks.com
Subject: Aggregrate snap reserve (again)
Hi all,
We have had a lot of talk about this since DOT 7, but just to clear it
up some (I hope).
Aggregate snapshot reserve (default 5%) is required if you use:
* RAID SyncMirror and/or MetroCluster configurations
* the aggr copy command
* core dump
Is the core dump in aggr0's snap space only?
So if a filer/FAS has two aggregates and is not using or needing
SyncMirror or the aggr copy command, is it safe to turn off the
aggregate snap reserve for aggr1 (still leaving snap reserve on for
aggr0 for core dumps)?
NetApp - is aggregate snap reserve used for any other purpose?
Thanks
Michael
Core dump does not go into aggr snap reserve. During panic, core is
dumped into a reserved area (the space chopped off by rightsizing) on
one or more non-broken disks (outside of WAFL, since WAFL code runs in
memory, which we may no longer trust depending on what went wrong - that
memory is exactly what we're dumping). During reboot, savecore (see the
man pages and
System Admin Guide) moves the core from reserved area to /etc/crash,
where it can be picked up by the user or other process.
I could be wrong about which disks it uses, but I'm sure it doesn't use
aggr snap reserve.
Peter
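For anyone who wants to act on this: on 7G-era ONTAP, dropping the reserve on an aggregate that doesn't need it looks roughly like the following (a sketch - "aggr1" is a placeholder, and check the snap man page on your own version first):

```
filer> snap reserve -A aggr1 0      # set aggr1's snapshot reserve to 0%
filer> snap sched -A aggr1 0 0 0    # disable scheduled aggregate snapshots
filer> snap list -A aggr1           # verify which aggregate snapshots remain
```

The -A flag is what makes these operate on the aggregate rather than on a volume of the same name.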
________________________________
From: Michael Schipp [mailto:mschipp@asi.com.au]
Sent: Monday, December 18, 2006 2:54 PM
To: toasters(a)mathworks.com
Subject: Aggregrate snap reserve (again)
Hi all,
We have had a lot of talk about this since DOT 7, but just to clear it
up some (I hope).
Aggregate snapshot reserve (default 5%) is required if you use:
* RAID SyncMirror and/or MetroCluster configurations
* the aggr copy command
* core dump
Is the core dump in aggr0's snap space only?
So if a filer/FAS has two aggregates and is not using or needing
SyncMirror or the aggr copy command, is it safe to turn off the
aggregate snap reserve for aggr1 (still leaving snap reserve on for
aggr0 for core dumps)?
NetApp - is aggregate snap reserve used for any other purpose?
Thanks
Michael
This is completely supported. I would suggest putting loopback plugs on
the unused ports. This was best practice on FAS900, but I'm not sure
about FAS3000. The loopback plugs slightly improve takeover and reboot
times by not having the filer wait for timeout on an open loop.
LC loopback is X6521-R6
http://now.netapp.com/eservice/partDetails.do?partNumber=X6521-R6&productId=60420&rohsCompliant=R6
I know it says FAS270 on there, but it is used on any LC connector where
loopback is needed. You can even use it to test Ethernet ports,
although there is no value in terminating unused Eth ports.
Enjoy!
Peter
-----Original Message-----
From: Stephen C. Losen [mailto:scl@sasha.acc.virginia.edu]
Sent: Monday, December 18, 2006 4:33 AM
To: toasters(a)mathworks.com
Subject: CF Disk Configuration Question
We have a 3050c clustered pair where both filers have 4 shelves of 144G
FC disks each. We want to add two shelves of 500G SATA drives to one
filer. We are not allowed to mix FC and SATA on the same FC Loop, so
the two new shelves must be on a separate loop. Since this is a CF
configuration, we will connect these shelves to both filers. The filer
that owns the shelves will connect using the "A" port of its adapter
while the partner filer will use its "B" port. But since we are not
adding any shelves to the partner filer, this leaves us with a vacant
"B" port on the filer that owns the shelves and a vacant "A" port on the
partner. Does anyone know if this is a problem?
Steve Losen scl(a)virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
Worst case, put a fiber loopback plug in any vacant fiber ports.
-----Original Message-----
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Willeke, Jochen
Sent: Monday, December 18, 2006 8:59 AM
To: Stephen C. Losen; toasters(a)mathworks.com
Subject: RE: CF Disk Configuration Question
Hi,
We never ran this configuration for a long time, but during cluster
updates we had it several times, even if only for a couple of minutes.
From the technical point of view it works perfectly, but I do not know
if it is a supported scenario from NetApp.
Best Regards
Jochen
-----Original Message-----
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Stephen C. Losen
Sent: Monday, December 18, 2006 1:33 PM
To: toasters(a)mathworks.com
Subject: CF Disk Configuration Question
We have a 3050c clustered pair where both filers have 4 shelves of 144G
FC disks each. We want to add two shelves of 500G SATA drives to one
filer. We are not allowed to mix FC and SATA on the same FC Loop, so
the two new shelves must be on a separate loop. Since this is a CF
configuration, we will connect these shelves to both filers. The filer
that owns the shelves will connect using the "A" port of its adapter
while the partner filer will use its "B" port. But since we are not
adding any shelves to the partner filer, this leaves us with a vacant
"B" port on the filer that owns the shelves and a vacant "A" port on the
partner. Does anyone know if this is a problem?
Steve Losen scl(a)virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
Yeah - the default SCSI timeout in Windows is 10 seconds - that should
be increased to at least 180 seconds, as a failback could take up to 3
minutes.
If you have snapdrive, it does this for you automatically.
FYI - leaving it at only 10 seconds could be REALLY bad if you have a
performance hungry application like Exchange (verification plus normal
operation could create havoc with undersized spindle count).
Glenn
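If you don't have SnapDrive and want to make the change by hand, the setting involved is the standard Windows disk I/O timeout. A sketch of a .reg file, assuming the usual key location and a 190-second value (0xBE hex) - verify against the NOW articles linked in the thread before applying:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Disk]
"TimeOutValue"=dword:000000be
```

A reboot of the Windows host is typically needed for the new timeout to take effect.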
-----Original Message-----
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of rob-7704(a)austin.rr.com
Sent: Friday, December 15, 2006 7:31 PM
To: toasters(a)mathworks.com
Subject: iSCSI Cluster help
Hey you iSCSI wizards,
We are just getting into iSCSI for winders and have the following
scenario happen today:
We had a failover on a cluster member that hosts test/infrastructure
systems today, due to a faulty ESH2 (that was triggered by a background
FW update, but that is a different discussion).
This filer also has an iSCSI volume on a test Windows server.
During a failover, it reported errors, but the iSCSI reconnected.
However, during a giveback (after NetApp replaced the faulty ESH2), the
iSCSI connection gave errors and was lost altogether. We had to reboot
that test Windows box to recover the iSCSI volume.
We are working with NetApp to learn more about this. But just from
browsing on the NOW site about iSCSI, there are recommendations about
increasing the timeout values from the default 10 sec to at least 60
sec, and other registry tunings for iSCSI to make it more robust.
http://now.netapp.com/Knowledgebase/solutionarea.asp?id=ntapcs16249
http://now.netapp.com/Knowledgebase/solutionarea.asp?id=ntapcs17115
Anybody been there and done this or other tunings with success?
TIA
-Rob
[View Less]
Hi Mike
GX may be the AFS flavor from NetApp.
http://blogs.netapp.com/dave/TechTalk/?permalink=ONTAP-GX-151-Past-and-Futu…
On the other hand, if you're looking for something _free_,
try http://www.openafs.org/.
AFS is a killer replacement for NFS if you have lots of small files that get constantly updated. Not to mention doing away with stale file handles as well.
regards
rohit
Sphar, Mike wrote:
> Just throwing a question out there, curious to hear people's thoughts or
> experiences. Every time I end up dealing with hundreds of stale file
> handles because of a server move/change I become increasingly annoyed by
> the stateless nature of NFS and think to myself "Maybe this time I'll
> finally start seriously looking at AFS."
>
> Other than lots of other ways an AFS deployment could be complicated, I
> wonder how, if at all, a NetApp can be part of an AFS deployment?
>
> Also feel free to tell me how using AFS is crazy in general and I should
> just accept my stale file handles.
>