Hi Toasters,
right now I am thinking about performance indicators for my file
service. We supply storage to other departments of our company. Of
course, one thing I can sell is space, but there is another indicator,
which is performance. Does anybody have good ideas on how to define the
performance I can guarantee for a particular service?
I was thinking about I/Os per second but do not know what a good number
would be. Has anybody ever had to work out those values and can give me
something to think about? I am interested in any experience you have had
with this topic...
I/Os, data throughput, response time... And what are good values which I
can grant with NetApp systems? We use R200, FAS960, and FAS3050.
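As a starting point, aggregate random IOPS are often estimated from
spindle count. A minimal sketch, assuming roughly 120 random IOPS per
10k RPM disk and 28 data spindles (both figures are illustrative
assumptions, not NetApp-published numbers):

```shell
# Back-of-envelope IOPS estimate from spindle count.
SPINDLES=28          # e.g. two full shelves of data disks (assumption)
IOPS_PER_DISK=120    # assumed small-random-IO rate per 10k RPM disk
echo "Rough aggregate random IOPS: $((SPINDLES * IOPS_PER_DISK))"
```

Real numbers depend heavily on the workload mix (read/write ratio,
sequential vs. random, block size), so any granted figure should be
validated against measured sysstat data.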
Thanks in advance
Jochen
Folks,
Just an FYI to all: we've had to take the measure of restricting posting
to the 'toasters' list, meaning that the list will only accept posts
from an address which is subscribed. Sorry for taking this measure, but
the nature of email today has left us no alternative (not one that will
still allow us to get our jobs done, that is!); it's a way to keep the
spam to the list to a minimum. And for what it's worth, the spam to the
list is almost nil now. (knock wood and cross your fingers). Hopefully,
this measure along with the four (yep...four!) separate facilities we
have for filtering the gooey pink stuff will be sufficient to make the
list a friendlier place again.
For most, this will have no effect. But some may be subscribed via list
addresses which will be expanded on the destination domain, and for
these folks the ability to reply to or start new threads will be foiled,
unless the "From:" address matches the address which is subscribed to
the list. In other words:
Admin1 is one of 4 admins at XYZ Corp. who are all part of the alias
'toasters(a)xyzcorp.com'.
The admins at XYZ subscribe 'toasters(a)xyzcorp.com' to
'toasters(a)mathworks.com'.
An email to the 'toasters' list goes to XYZ and is expanded to all 4 admins.
Admin1 then won't be able to reply or create new posts because
'admin1(a)xyzcorp.com' is not subscribed to the 'toasters' list.
So...sorry for any inconvenience to individuals subscribed in this
manner, but you'll have to subscribe individually in order to post to
the list.
--
*------------------------------------------*-----------------------*
| Kevin Davis (UNIX Sysadmin) | Natick, Massachusetts |
| 508.647.7660 | 01760-2098 |
| mailto:kevin.davis@mathworks.com *-----------------------*
| http://www.mathworks.com | |
*------------------------------------------*-----------------------*
You are correct - the reclamation is a new feature - introduced around
7.0 I believe.
Unsure what version of ONTAP the issue was being seen on, but there were
some bugs related to that feature-set that were fixed in later versions
- some bugs were related to its efficiency, while some were related to
the timing (when the freed space was showing back up).
Glenn
-----Original Message-----
From: Chris Thompson [mailto:cet1@cus.cam.ac.uk]
Sent: Friday, September 08, 2006 12:07 PM
To: toasters(a)mathworks.com
Cc: Glenn Walker
Subject: Re: Snap status and wafl scan
ggwalker(a)mindspring.com (Glenn Walker) writes:
> Ok - those are other things than what I previously mentioned.
>
> Active Bitmap rearrangement is normal - I believe it has been around
> since 6.5 (pretty sure it wasn't a flexvol thing, that was 'deswizzler'
> scan). It's to keep memory 'pretty', if memory serves.
>
> The container block reclamation is a scanner responsible for ferreting
> out blocks that have been freed and marking them as usable to the system
> (delayed free scanner\delayed freer working here). This too is normal.
Just in case it wasn't clear, I did not mean to suggest that these
things were not "normal", or that they had any implications for filing
system integrity. It was the performance aspect of the "container block
reclamation" scan that originally brought it to my attention:
specifically that after
our dumps had finished (and the associated snapshots thus deleted) disc
I/O remained high and cache age very low for some time. I don't think
ONTAP used to behave like that in 6.5.
By contrast the "active bitmap rearrangement" scan (which is always
running: it just cycles around the block numbers indefinitely) has
no bad effect on performance that I have ever noticed.
--
Chris Thompson
Email: cet1(a)cam.ac.uk
Yes, jumbo frames are supported via the ifconfig command, which allows
you to set the mtusize. Although it's probably worth opening a support
case and having them run a perfstat on your system while you are
experiencing slow performance. If it's really as slow as you say, it's
probably not the mtusize. It might be, but there are probably other
things you can adjust in your environment first. But NGS has the
resources to work with you to figure those out.
Just a thought.
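For reference, enabling jumbo frames on the filer looks roughly like
this (the interface name e0a is a placeholder, and both ends of the
link must agree on the MTU):

```shell
ifconfig e0a mtusize 9000   # e0a is an illustrative interface name
# The Solaris end must be set to the same MTU (driver-dependent);
# with a direct connection there is no switch MTU to worry about.
```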
-- Adam Fox
adamfox(a)netapp.com
-----Original Message-----
From: kunle windapo [mailto:kwindapo@yahoo.com]
Sent: Friday, September 08, 2006 11:26 AM
To: toasters(a)mathworks.com
Subject: jumbo frames support
hello,
I have a FAS 270 with volumes NFS-mounted on a Solaris 9 SPARC server.
There is a direct connection from the Solaris server to the Ethernet
interface on the FAS 270 running at 1 Gb/s.
The problem is that I experience very slow performance on the system
(I/O related) and I'm troubleshooting. I want to know if NetApp supports
jumbo frames on the network interface.
thank you
__________________________________________________
Do You Yahoo!?
Tired of spam? Yahoo! Mail has the best spam protection around
http://mail.yahoo.com
hello,
I have a FAS 270 with volumes NFS-mounted on a Solaris 9 SPARC server.
There is a direct connection from the Solaris server to the Ethernet
interface on the FAS 270 running at 1 Gb/s.
The problem is that I experience very slow performance on the system
(I/O related) and I'm troubleshooting. I want to know if NetApp supports
jumbo frames on the network interface.
thank you
Ok - those are other things than what I previously mentioned.
Active Bitmap rearrangement is normal - I believe it has been around
since 6.5 (pretty sure it wasn't a flexvol thing, that was 'deswizzler'
scan). It's to keep memory 'pretty', if memory serves.
The container block reclamation is a scanner responsible for ferreting
out blocks that have been freed and marking them as usable to the system
(delayed free scanner\delayed freer working here). This too is normal.
Glenn
-----Original Message-----
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Chris Thompson
Sent: Thursday, September 07, 2006 4:01 PM
To: toasters(a)mathworks.com
Cc: ecoli82(a)msn.com; etraitel(a)gmail.com
Subject: Re: Snap status and wafl scan
Eyal Traitel writes:
> Note that snap status is an advanced command so its output shouldn't
> necessarily cause you concerns...
Well, while we're talking about things wot mere mortals were not meant
to wot of...
ec0li <ecoli82(a)msn.com> wrote:
> When I check snap status, I see next to recent snapshots : (xxxx/xxxx
> remaining).
> Also, It seems that this is in fact the wafl scan process "snap create
> summary update"
The effect worrying me is somewhat different. After a snapshot is
deleted the above state lasts only seconds, but there is then a much
more extended period where "snap status" looks normal but "wafl scan
status" shows e.g.
carina*> wafl scan status
Volume CUS:
Scan id Type of scan progress
1 active bitmap rearrangement fbn 3340 of 4461 w/
max_chain_len 3
925 container block reclamation block 382 of 4461
The "container block reclamation" scan lasts several minutes (and this
is
a volume & aggregate of only a few hundred GB), and while it's going on
disc read activity is high and severely impacts the cache - at least, if
you trust the "cache age" shown by sysstat, which I am not sure I do.
This is with ONTAP 7.1.1P1 (but similar effects observed in earlier
7.x's).
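(For anyone who wants to watch for the same effect, the extended
sysstat output includes the cache-age column; the one-second interval
below is just an example.)

```shell
sysstat -x 1   # extended output; the "Cache age" column is in minutes
```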
--
Chris Thompson
Email: cet1(a)cam.ac.uk
I'm running into a recurring issue with backing up my NearStore and a
lone F760 filer.
In the process of backing up some large filesystems (qtrees), it appears
that NDMP is basically timing out and fails. I'm running Veritas
NetBackup 6.0 (finally upgrading from 4.5 recently). I think the issue
has to do with NDMP starting the dump commands on the filer, which then
does a three-way backup to another filer with local tape drives.
The debug from ndmp logs shows for a manual full backup of a filer:
Sep 05 21:05:57 EDT [ndmpd:58]: Log message: DUMP: creating
"/vol/vol0/../snapshot_for_backup.370" snapshot.
Sep 05 21:06:05 EDT [ndmpd:58]: Log message: DUMP: Using Partial Volume
Dump with Exclude Lists
Sep 05 21:07:31 EDT [ndmpd:58]: Log message: DUMP: Date of this level 1
dump: Tue Sep 5 21:05:57 2006.
Sep 05 21:07:31 EDT [ndmpd:58]: Log message: DUMP: Date of last level 0
dump: Fri Aug 18 19:20:57 2006.
Sep 05 21:07:31 EDT [ndmpd:58]: Log message: DUMP: Dumping
/vol/vol0/files3.rt to NDMP connection
Sep 05 21:07:31 EDT [ndmpd:58]: Log message: DUMP: mapping (Pass
I)[regular files]
It never completes the mapping of the files in Pass I. It just sits there.
The filesystems I'm backing up are between 250-650GB with a LOT of small
files (millions) and many subdirs. Tons of mail and html files.
My guess is that the number of files and dirs is getting so large that
NDMP can't map them fully and is timing out after 8 hours. It shouldn't
take that long to map files for the size of data it's doing; I've seen
mapping times be much less for larger sets of data.
I know that the filesystem topology can cause NDMP backups to be slow,
depending on file sizes, data layout, filer load, network etc, but this is
happening on moderately busy filers or bone idle NearStores.
Anyone run across lengthy DUMP times?
Chewing through NOW and Veritas support site hasn't turned up anything
obvious.
Just curious if others have run into NDMP/dump issues like this.
-Scott
Hello,
When I check snap status, I see next to recent snapshots: (xxxx/xxxx
remaining).
Also, it seems that this is in fact the wafl scan process "snap create
summary update".
Does anyone know what this process is, what its purpose is, and what
happens if the system reboots during it...
I'm asking this because on one system with lots of snapshots (SnapVault)
this process seems to never finish. In fact, each time a new snap is
transferred the counter restarts from the beginning but does not have
enough time to finish before a new snapshot is created.
Consequence: in snap status I see more and more snapshots with (xxx/xxx
remaining) on the right.
Thanks in advance for your answers!
--
View this message in context: http://www.nabble.com/Snap-status-and-wafl-scan-tf2232129.html#a6187160
Sent from the Network Appliance - Toasters forum at Nabble.com.
It may have something to do with the fact that SnapVault 'stitches'
snapshots together on the back end - by that I mean that there is a
coalescing function that puts the multitude of individually transferred
snapshots together once they are all on the destination system and it
makes it one single destination snapshot.
Glenn
-----Original Message-----
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of ec0li
Sent: Thursday, September 07, 2006 5:26 AM
To: toasters(a)mathworks.com
Subject: Snap status and wafl scan
Hello,
When I check snap status, I see next to recent snapshots : (xxxx/xxxx
remaining).
Also, It seems that this is in fact the wafl scan process "snap create
summary update"
Does anyone know what this process is, what its purpose is, and what
happens if the system reboots during it...
I'm asking this because on one system with lots of snapshots (SnapVault)
this process seems to never finish. In fact, each time a new snap is
transferred the counter restarts from the beginning but does not have
enough time to finish before a new snapshot is created.
Consequence: in snap status I see more and more snapshots with (xxx/xxx
remaining) on the right.
Thanks in advance for your answers!
Hi
We noticed it after upgrading from ONTAP 6.3 to 6.5.
The fix in that case was to turn off ndmpd.offset_map.enable.
What happened (I think it was in 6.4) was that ndmpd had a slight
functionality change so it does a lot more processing up front.
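For anyone who wants to try the same workaround, the toggle looks
roughly like this (syntax from memory of the 6.x/7.x options command;
verify with `options ndmpd` on your release first):

```shell
options ndmpd.offset_map.enable off   # skip the up-front offset-map pass
options ndmpd.offset_map.enable       # print the option to confirm
```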
Andrew
-----Original Message-----
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Scott T. Mikusko
Sent: Thursday, 7 September 2006 9:20 AM
To: toasters(a)mathworks.com
Subject: NDMP backups, timeouts
I'm running into a recurring issue with backing up my NearStore and a
lone F760 filer.
In the process of backing up some large filesystems (qtrees), it appears
that NDMP is basically timing out and fails. I'm running Veritas
NetBackup 6.0 (finally upgrading from 4.5 recently). I think the issue
has to do with NDMP starting the dump commands on the filer, which then
does a three-way backup to another filer with local tape drives.
The debug from ndmp logs shows for a manual full backup of a filer:
Sep 05 21:05:57 EDT [ndmpd:58]: Log message: DUMP: creating
"/vol/vol0/../snapshot_for_backup.370" snapshot.
Sep 05 21:06:05 EDT [ndmpd:58]: Log message: DUMP: Using Partial Volume
Dump with Exclude Lists
Sep 05 21:07:31 EDT [ndmpd:58]: Log message: DUMP: Date of this level 1
dump: Tue Sep 5 21:05:57 2006.
Sep 05 21:07:31 EDT [ndmpd:58]: Log message: DUMP: Date of last level 0
dump: Fri Aug 18 19:20:57 2006.
Sep 05 21:07:31 EDT [ndmpd:58]: Log message: DUMP: Dumping
/vol/vol0/files3.rt to NDMP connection
Sep 05 21:07:31 EDT [ndmpd:58]: Log message: DUMP: mapping (Pass
I)[regular files]
It never completes the mapping of the files in Pass I. It just sits
there.
The filesystems I'm backing up are between 250-650GB with a LOT of small
files (millions) and many subdirs. Tons of mail and html files.
My guess is that the number of files and dirs is getting so large that
NDMP can't map them fully and is timing out after 8 hours. It shouldn't
take that long to map files for the size of data it's doing; I've seen
mapping times be much less for larger sets of data.
I know that the filesystem topology can cause NDMP backups to be slow,
depending on file sizes, data layout, filer load, network etc, but this
is happening on moderately busy filers or bone idle NearStores.
Anyone run across lengthy DUMP times?
Chewing through NOW and Veritas support site hasn't turned up anything
obvious.
Just curious if others have run into NDMP/dump issues like this.
-Scott
"This e-mail and any attachments to it (the "Communication") is, unless otherwise stated, confidential, may contain copyright material and is for the use only of the intended recipient. If you receive the Communication in error, please notify the sender immediately by return e-mail, delete the Communication and the return e-mail, and do not read, copy, retransmit or otherwise deal with it. Any views expressed in the Communication are those of the individual sender only, unless expressly stated to be those of Australia and New Zealand Banking Group Limited ABN 11 005 357 522, or any of its related entities including ANZ National Bank Limited (together "ANZ"). ANZ does not accept liability in connection with the integrity of or errors in the Communication, computer virus, data corruption, interference or delay arising from or in respect of the Communication."