We have been using file folding on all volumes for two months now (F760,
2 TB, 3000 users, CIFS only). No noticeable impact on CPU usage, and no
trouble. Snapshot size has decreased a little.
Regards, Stefan Holzwarth
-----Original Message-----
From: Öberg Mats [mailto:mats.oberg@tietoenator.com]
Sent: Wednesday, 11 December 2002 14:20
To: toasters(a)mathworks.com
Subject: file folding impact
Hi, I'm thinking about enabling file folding on one of our filers.
Has anybody tried this? If so, what kind of performance decrease/size gain
can be expected from enabling it?
---- Mats
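As an aside, neither message spells out what file folding actually does.
As I understand it, on a CIFS overwrite the filer compares the incoming
data against the block already on disk and, if they match, keeps the
existing (possibly snapshot-shared) block instead of writing a new one.
A toy sketch of that idea in Python - names and structure are mine, not
ONTAP internals:

# Toy model of file folding (illustrative only; not ONTAP internals).
# On an overwrite, compare the incoming data with the block already on
# disk: if identical, keep the existing (possibly snapshot-shared) block
# instead of allocating a new one. The price is the compare (CPU); the
# payoff is that snapshots don't grow on no-op rewrites.

def write_block(active, idx, data, folding=True):
    """Write one block; return True if a new block was allocated."""
    if folding and active.get(idx) == data:
        return False              # folded: nothing new written
    active[idx] = data            # diverged: snapshot keeps the old copy
    return True

active = {0: b"hello", 1: b"world"}
print(write_block(active, 0, b"hello"))   # False: identical rewrite, folded
print(write_block(active, 1, b"WORLD"))   # True: real change, new block

That matches Stefan's observations: a little CPU spent comparing, a
little snapshot space saved.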
Hello Toasters.
I would very much like to talk with anyone using Net-SNMP to monitor
their Filers/Caches. I've been using simple SNMP gets for some time
now (you can see my work here: http://www.cuddletech.com/netapp/ which
is slightly outdated). I'd like to implement a trap handler in Perl
with Net-SNMP, but I'm having some trouble deciding how to tackle the
problem. If anyone has designed trap handlers, or is using custom trap
handlers with Net-SNMP, please let me know.
Ben Rockwood
UNIX Systems Team
Homestead Inc.
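For what it's worth, the traphandle hook that snmptrapd exposes is
language-agnostic: you register an executable in snmptrapd.conf (e.g.
"traphandle default /usr/local/bin/filer-traps.py") and snmptrapd feeds
each trap to it on stdin as the sending hostname, then the transport
address, then one "OID value" pair per line. A minimal sketch in Python
(the script path and log file are placeholders of mine, and the same
structure would carry over to Perl):

#!/usr/bin/env python
# Minimal snmptrapd traphandle sketch (paths are placeholders).
# snmptrapd feeds each trap on stdin: hostname, transport address,
# then one "OID value" pair per line.

import sys
import time

def main():
    lines = sys.stdin.read().splitlines()
    if len(lines) < 2:
        return                       # malformed trap, nothing to do
    host, addr = lines[0], lines[1]
    varbinds = []
    for line in lines[2:]:
        oid, _, value = line.partition(" ")
        varbinds.append((oid, value))
    # Placeholder action: append to a log. A real handler might page
    # someone when a filer sends a disk-failure trap.
    with open("/var/log/filer-traps.log", "a") as log:
        log.write("%s trap from %s (%s)\n" % (time.ctime(), host, addr))
        for oid, value in varbinds:
            log.write("    %s = %s\n" % (oid, value))

if __name__ == "__main__":
    main()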
Somebody at NetApp needs to look at this!
-----Original Message-----
From: Ambrose_Earle(a)shamrockfoods.com
[mailto:Ambrose_Earle@shamrockfoods.com]
Sent: Tuesday, December 10, 2002 9:00 PM
To: Kumar, Rahul
Cc: toasters(a)mathworks.com
Subject: Re: Dump issue
We are seeing very similar performance issues from our F840Cs. We have
opened a case with NetApp, but have received very little information
back from them.
In our case, a 3-drive volume backing up gzipped files maxes out our
SDLT drives at nearly 11 MB/sec. We have a 6-drive volume of highly
compressible information that will stream to tape at over 20 MB/sec.
However, a 7-drive volume that only contains a few dozen very large
Oracle datafiles at times struggles to write 4 MB/sec!
The only correlation that I can make is that when disk utilization goes
above 40%, the backup speed plummets below 10 MB/sec. Have you tried
running sysstat or statit during the backup?
My biggest concern is that this may be the first symptom of some major
problem. If 7 drives can't read faster than 4 MB/sec, then wouldn't you
be a little worried?
Hi Art,
Firstly, it won't create a _serious_ performance problem. WAFL stripes across
all the disks in the volume, regardless of the number of RAID groups. Yes,
you have another parity disk, but you won't have "single drive seek" issues
on the RAID group with one data drive in it. Remember, we're dealing with a
RAID-aware filesystem here, not dumb block-level RAID.
However, the space consumption will be an issue for you, at least until you
add more drives. Most of my customers run 14-disk RAID groups (MTTDL of
roughly 95000 years for the old 18GB drives; I can't imagine it would be
worse for the newer drives).
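For the curious, the usual back-of-envelope for a single-parity group is
MTTDL ~= MTTF^2 / (N * (N - 1) * MTTR). The disk MTTF and rebuild time
below are assumptions of mine, not NetApp's numbers, so don't expect them
to reproduce the ~95000-year figure:

# Back-of-envelope MTTDL for one single-parity RAID group:
#     MTTDL ~= MTTF^2 / (N * (N - 1) * MTTR)
# Inputs below are assumed, not NetApp figures.

HOURS_PER_YEAR = 24 * 365

def mttdl_years(n_disks, mttf_hours, mttr_hours):
    """Mean time to data loss, in years, for one single-parity group."""
    return mttf_hours ** 2 / (n_disks * (n_disks - 1) * mttr_hours) / HOURS_PER_YEAR

# 14-disk group, 500k-hour disk MTTF (assumed), 24-hour rebuild (assumed):
print("%.0f years" % mttdl_years(14, 500_000, 24))   # ~6500 years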
Yep, you need to back up, recreate the volume, and restore. Don't forget to
recreate your qtrees manually before you restore.
OR, if you can temporarily borrow another shelf from your FLNR (friendly
local NetApp reseller), you can create a new volume on the other shelf and
volcopy the data to and from it. Much quicker :o)
-----Original Message-----
From: Art Hebert [mailto:art@arzoon.com]
Sent: Tuesday, 10 December 2002 11:42 AM
To: 'toasters(a)mathworks.com'
Subject: Raid Group question
I have a current volume (vol0) that has a raidgroup size of 8. I had 3
spares that weren't being used, and I wanted to add them to the volume,
but like an idiot I didn't check the raidgroup size. Thus when I added
the 3 disks to the volume (vol0), 1 went to raidgroup 0 and the other 2
went to raidgroup 1 as a parity drive and a data drive.
From what I can tell this will create a performance problem, not to
mention another parity drive being used.
I'd like to get these two disks back under raid group 0 if I can.
Suggestions on the best way to do this would be appreciated.
I'm thinking the safest way is to back it up, recreate the volume, and
restore.
Thanks
art hebert
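The allocation Art describes falls out of the raidsize arithmetic: added
disks top up the last raid group until it reaches the raidsize limit, and
the first disk of any new group becomes its parity drive. A toy sketch of
that rule (illustrative only, not ONTAP code):

# Toy model of how added disks land in raid groups (illustrative only):
# each group holds at most raidsize disks, and the first disk of a new
# group becomes its parity drive. With raidsize=8 and rg0 already at 7
# disks, adding 3 disks puts 1 in rg0 and starts rg1 with parity + data,
# exactly what Art saw.

def add_disks(groups, n_new, raidsize):
    """groups: list of per-group disk counts; returns the updated list."""
    for _ in range(n_new):
        if not groups or groups[-1] >= raidsize:
            groups.append(1)      # new group: this disk is its parity
        else:
            groups[-1] += 1       # top up the last group
    return groups

print(add_disks([7], 3, 8))       # [8, 2] -> rg0 full, rg1 = parity + 1 data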
Rahul,
You're dumping to a remotely attached linear tape drive.
I have seen this kind of thing many times with DLT drives (of all flavours).
If the network link can't quite keep up with the drive for a moment due to
other network loads (or lots of seeking on the filer due to a large number
of inodes), the drive stops and rewinds ("backhitching"). The time it takes
to do this is hundreds of times longer than the momentary network congestion,
so this plays havoc with the I/O queuing on the host the tape is attached
to, which causes "dump" on the filer to back off and wait because it's not
getting RMT verifies. This results in even more sporadic gaps in the stream
to the tape drive, which backhitches more...
i.e. you end up in a vicious circle of performance degradation. The speed
variance available in SDLT is not enough to prevent backhitching.
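To make the arithmetic of that vicious circle concrete, here is a crude
model. Every number in it is invented for illustration, not an SDLT spec,
and it deliberately ignores the feedback (in reality each stall makes the
next one more likely, so the real penalty compounds):

# Crude backhitch model (all numbers invented, not SDLT specs): each
# brief stall costs a full stop/rewind/reposition cycle that dwarfs the
# stall itself, stretching total wall-clock time.

HICCUP_S = 0.5        # momentary network stall (assumed)
REPOSITION_S = 30.0   # backhitch reposition penalty (assumed)

def backup_time(data_mb, rate_mbs, hiccups):
    """Seconds to stream data_mb at rate_mbs with n hiccups, each
    triggering one backhitch."""
    streaming = data_mb / rate_mbs
    return streaming + hiccups * (HICCUP_S + REPOSITION_S)

# 40 GB at a nominal 11 MB/s:
print("no hiccups:  %6.0f s" % backup_time(40_000, 11.0, 0))    # ~3636 s
print("one per min: %6.0f s" % backup_time(40_000, 11.0, 60))   # ~5466 s
# 60 half-second stalls cut the effective rate from 11 to ~7.3 MB/s.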
The cures:
1) Attach the tape drive to the filer!
2) Provide dedicated bandwidth for the backup - i.e. a crossover gigabit
link between the Sun box and the filer. (private subnet IP range as well of
course)
3) Replace the SDLT with a non-linear tape drive that doesn't backhitch -
e.g. AIT3 or VXA2. AIT3 is IMHO superior to SDLT in many ways.
All of these options will cost you money - even (1), if you then need to buy
NetVault so you can back up all the other servers' local disks to the tape
attached to the filer. But obviously budget is a problem or you wouldn't be
using dump.
Hope this helps.
Alan.
-----Original Message-----
From: Kumar, Rahul [mailto:rahul.kumar@eds.com]
Sent: Tuesday, 10 December 2002 5:58 PM
To: toasters(a)mathworks.com
Subject: Dump issue
Importance: High
Hi
Any ideas why the dump is running so slowly? We are running the filer
dump onto an SDLT tape drive:
DUMP: creating "/vol/vol0/../snapshot_for_backup.0" snapshot.
DUMP: Using Full Volume Dump
DUMP: Dumping tape file 1 on /tmp/filer1tape
DUMP: Date of this level 0 dump: Mon Dec 9 23:02:06 2002.
DUMP: Date of last level 0 dump: the epoch.
DUMP: Dumping /vol/vol0/ to root
DUMP: mapping (Pass I)[regular files]
DUMP: mapping (Pass II)[directories]
DUMP: estimated 44482138 KB.
DUMP: dumping (Pass III) [directories]
DUMP: dumping (Pass IV) [regular files]
DUMP: Mon Dec 9 23:08:37 2002 : We have written 328738 KB.
DUMP: Mon Dec 9 23:13:37 2002 : We have written 813336 KB.
DUMP: Mon Dec 9 23:18:37 2002 : We have written 1248349 KB.
DUMP: Mon Dec 9 23:23:37 2002 : We have written 1780575 KB.
DUMP: Mon Dec 9 23:28:37 2002 : We have written 2307507 KB.
DUMP: Mon Dec 9 23:33:37 2002 : We have written 2782586 KB.
DUMP: Mon Dec 9 23:38:37 2002 : We have written 3220125 KB.
DUMP: Mon Dec 9 23:43:37 2002 : We have written 3751844 KB.
DUMP: Mon Dec 9 23:48:37 2002 : We have written 4190113 KB.
DUMP: Mon Dec 9 23:53:37 2002 : We have written 4641781 KB.
DUMP: Mon Dec 9 23:58:37 2002 : We have written 5080513 KB.
DUMP: Tue Dec 10 00:03:37 2002 : We have written 5606338 KB.
DUMP: Tue Dec 10 00:08:37 2002 : We have written 6088137 KB.
DUMP: Tue Dec 10 00:13:37 2002 : We have written 6572608 KB.
DUMP: Tue Dec 10 00:18:37 2002 : We have written 7016947 KB.
DUMP: Tue Dec 10 00:23:37 2002 : We have written 7542679 KB.
DUMP: Tue Dec 10 00:28:37 2002 : We have written 7984060 KB.
DUMP: Tue Dec 10 00:33:37 2002 : We have written 8503801 KB.
DUMP: Tue Dec 10 00:38:37 2002 : We have written 8958353 KB.
DUMP: Tue Dec 10 00:43:37 2002 : We have written 9390217 KB.
DUMP: Tue Dec 10 00:48:37 2002 : We have written 9828824 KB.
DUMP: Tue Dec 10 00:53:37 2002 : We have written 10260501 KB.
Rahul
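A quick sanity check on the log above: the counters are five minutes
apart, so the per-interval rates work out to only about 1.4-1.7 MB/s,
a small fraction of what an SDLT can stream. For example:

# Throughput from the dump log above: KB written at each 5-minute mark.
kb = [328738, 813336, 1248349, 1780575, 2307507, 2782586, 3220125,
      3751844, 4190113, 4641781, 5080513, 5606338, 6088137, 6572608,
      7016947, 7542679, 7984060, 8503801, 8958353, 9390217, 9828824,
      10260501]

INTERVAL_S = 300                                  # log lines are 5 min apart
deltas = [b - a for a, b in zip(kb, kb[1:])]
rates = [d / 1024 / INTERVAL_S for d in deltas]   # MB/s per interval

print("min %.2f  max %.2f  avg %.2f MB/s"
      % (min(rates), max(rates), sum(rates) / len(rates)))
# -> roughly 1.4 to 1.7 MB/s throughout: steady, but nowhere near
#    SDLT streaming speed.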
It would also be interesting to know
how many inodes there are on the file system.
Martin
Art,
There is no dynamic way to resize raid groups or to move a disk from one raid group to another. The safest way you mention is also the only one I have ever heard of (well, I guess volcopy would work: copy to another volume, destroy and rebuild the source, and volcopy back).
--sam