Hello All:
There are a couple of possibilities on this one.
Data ONTAP GX can handle "lots" of objects very fast.
Data ONTAP GX would fix this problem and remove all the limitations,
but a lot of people aren't ready for GX yet. (Sebastian might consider it.)
"Lots" is very subjective but the original case of using the "ls" command
and taking a coffee break is typical of a case where the directory structure is
just too large to fit into memory. Without more information, none of use can be …
[View More]sure
that is really the problem. (We'd have to see statistics and look at lots of factors).
If anyone needs to access millions of files in a single directory and doesn't want to move to GX, then working with NetApp support to change the directory structure is the best option. You can hopefully get the performance back with small tweaks.
"Metadata" is sounding like a subjective term as well. There is no evidence that the metadata is a problem.
The directory file is probably the problem (The directory file is a special file that has inode numbers and maps those numbers to names. It has to be read into system memory to send the information back for an "ls")
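To make "change the directory structure" concrete, here is a rough, hypothetical sketch of hashing one huge flat directory into subdirectories from an NFS client (plain Python; the hash choice, depth, and width are arbitrary, and this is not a NetApp-specific procedure):

import os
import hashlib
import shutil

def shard_path(root, name, levels=2, width=2):
    # Map a file name to a nested subdirectory based on a hash prefix,
    # e.g. "report123.dat" -> root/a3/f1/report123.dat
    digest = hashlib.md5(name.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, name)

def reshard(flat_dir, new_root):
    # Move every regular file out of the flat directory into hashed
    # subdirectories under new_root.
    for name in os.listdir(flat_dir):
        src = os.path.join(flat_dir, name)
        if not os.path.isfile(src):
            continue
        dst = shard_path(new_root, name)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.move(src, dst)

Applications then compute the same shard_path() to locate a file, so no single directory ever holds millions of entries.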
Just my $0.02.
--April
----- Original Message ----
From: Blake Golliher <thelastman(a)gmail.com>
To: Peter D. Gray <pdg(a)uow.edu.au>
Cc: toasters(a)mathworks.com
Sent: Monday, October 22, 2007 6:26:56 PM
Subject: Re: WAFL metadata files
I'd argue that this is a general file system issue, not so much a WAFL
issue. I don't think WAFL is particularly slow at this workload
either; it does far better than most other NAS gear I've used for this
workload. There are trickier things out there; BlueArc, for example,
has some preloaded cache for metadata to help speed things along, but
that's just fixing the problem by tossing it all in memory. If you
compare file system operation to file system operation between NetApp
and BlueArc, I'm sure you'd find similar performance issues for a
directory with millions of objects.
But I do hope some of those WAFL guys can figure out a way to make
lots of objects in a file system faster. It can be a huge pain.
-Blake
On 10/22/07, Peter D. Gray <pdg(a)uow.edu.au> wrote:
> On Mon, Oct 22, 2007 at 12:55:48PM -0700, Blake Golliher wrote:
> > I have to deal with millions of objects in filesystems, I highly
> > recommend subdirectories. look at your nfs_hist output. First do
> > nfs_hist -z. Then count to 30 and run nfs_hist again. It's a
> > histogram of all NFS ops, and how long they took in millisecond
> > buckets. I'd bet lookup is taking a very long time. When dealing
> > with a large number of objects, sensible directory structures are
> > key.
> >
>
> Yes, but to be fair, this is a weakness in the WAFL filesystem.
> You cannot have everything, and WAFL has made a trade-off
> in the way it stores file metadata that makes it slow to
> handle large numbers of files in a directory.
>
> I am not sure if netapp is planning any enhancements in this area
> or even what would be possible.
>
> Anybody care to comment?
>
> Regards,
> pdg
>
> --
>
> See mail headers for contact information.
>
>
I have just gone through the upgrade guide and drawn up my plan of attack, so
to speak. However, it does not state how long the WAFL conversion will take (I
appreciate this is hard to gauge), or whether you have to wait for the conversion
to finish before you can create new aggregates and flex volumes and start
migrating data into them from existing trad vols - can anyone advise?
Thanks
The WAFL file system stores its metadata in three files:
- inode file
- block-map file
- inode map
Is it possible to copy these files?
I would like to gather precise stats on this file system.
Thanks in advance,
Séb.
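If the goal is the statistics rather than the metadata files themselves, one hypothetical, client-side alternative is to walk an NFS mount of the volume; a minimal Python sketch (the mount point is a placeholder, and this only sees what NFS exposes, not WAFL's internal files):

import os
from collections import Counter

def directory_stats(mount_point):
    # Walk an NFS mount of the volume and count entries per directory.
    entries_per_dir = Counter()
    total_files = 0
    for dirpath, dirnames, filenames in os.walk(mount_point):
        entries_per_dir[dirpath] = len(dirnames) + len(filenames)
        total_files += len(filenames)
    return total_files, entries_per_dir

if __name__ == "__main__":
    total, per_dir = directory_stats("/mnt/vol0")  # placeholder mount point
    print("total files:", total)
    print("largest directories:", per_dir.most_common(5))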
Have you tried the disk sanitize command? I haven't tried this, but
the man page seems to talk about what you want to do. Let us know if
this works for you.
The disk sanitize start, disk sanitize abort, and disk sanitize status
commands are used to start, abort, and obtain status of the disk
sanitization process. This process runs in the background and sanitizes
the disk by writing the entire disk with each of the defined patterns.
The set of all pattern writes defines a cycle; both pattern and cycle
count parameters can be specified by the user. Depending on the capacity
of the disk and the number of patterns and cycles defined, this process
can take several hours to complete. When the process has completed, the
disk is in the sanitized state. The disk sanitize release command allows
the user to return a sanitized disk to the spare pool.
-G
On 10/22/07, Stefan Funke <bundy(a)arcor-online.net> wrote:
> Andrew Siegel wrote:
>
> > In my opinion (as a 12-year customer), the 16TB limit is the number one
> > deficiency in NetApp software at the moment, and perhaps their biggest
> > deficiency ever. I would be very surprised if they weren't losing
> > customers over this issue.
>
> Acknowledged. I think they changed the limit from 10TB to 16TB with
> Ontap7, correct? Disc sizes are growing and growing - unless NetApp
> changes to solid state disks (flash), a higher limit would be nice to
> have.
>
>
> BTW, I'm searching for a way to securely wipe out all data at our R200
> disks. (ATA, 154x274gig) I wanted to build a big volume, but the 16TB
> hit me too. Has anyone ever played with 'dd' at Ontap side? Is there a
> hidden random device (/dev/random) I can use to create random data? Any
> hints? :)
>
Andrew Siegel wrote:
> In my opinion (as a 12-year customer), the 16TB limit is the number one
> deficiency in NetApp software at the moment, and perhaps their biggest
> deficiency ever. I would be very surprised if they weren't losing
> customers over this issue.
Acknowledged. I think they changed the limit from 10TB to 16TB with
Ontap7, correct? Disc sizes are growing and growing - unless NetApp
changes to solid state disks (flash), a higher limit would be nice to
have.
BTW, I'm searching for a way to securely wipe out all data at our R200
disks (ATA, 154x274gig). I wanted to build a big volume, but the 16TB
limit hit me too. Has anyone ever played with 'dd' on the ONTAP side? Is there a
hidden random device (/dev/random) I can use to create random data? Any
hints? :)
Is there an easy way to query a DFM server remotely? I'd like to grab
some data on demand (Current CPU). I could do that via SNMP without too
much trouble, but I'd rather not let all the clients directly poll the
filers.
The DFM server already has that information, and I can poll it on the
machine itself with something like 'dfm report filers-ops <filer>'. I'd
rather let the clients hit this machine so there's no performance impact
on the filers if something goes wrong. Is there a way of getting that
same data from a remote host without setting up accounts and automated
logins? I thought I might be able to run the dfm software on the client
directly, but if I can, I'm missing how to specify the server that it
contacts...
The web reports are fine, but I don't want to scrape them through the
normal display. If I can get the raw data from the DFM server via
http/snmp/other, that would be good.
Thanks.
--
Darren Dunham ddunham(a)taos.com
Senior Technical Consultant TAOS http://www.taos.com/
Got some Dr Pepper? San Francisco, CA bay area
< This line left intentionally blank to confuse you. >
Mellanox
------Original Message------
From: Albert Chin
Sender:
To: toasters(a)mathworks.com
ReplyTo: toasters(a)mathworks.com
Sent: Oct 20, 2007 01:07
Subject: FAS6070 NVRAM PCIe card
According to
http://www.netapp.com/go/techontap/fas6070.html?fmt=print, the FAS6070
has an NVRAM PCIe card. It's probably an OEM card and I'm curious if
anyone knows who manufactures the card.
--
albert chin (china(a)thewrittenword.com)
Sent via Blackberry - please excuse my typing
Or SSH or RSH the command you want.
**Sent using wireless handheld...
please excuse any typo's or brevity**
Kevin Parker - NWN Corporation
kparker(a)nwnit.com
(m) 919.830.5819
(o) 919.653.4489
-----Original Message-----
From: "Hackworth, Brian" <brian.hackworth(a)netapp.com>
To: "A Darren Dunham" <ddunham(a)taos.com>; "toasters(a)mathworks.com" <toasters(a)mathworks.com>
Sent: 10/19/2007 5:48 PM
Subject: RE: Query DFM server?
You can hit the DFM server with an HTTP request to send you the data it
would have shown you in the filers-ops report, but in a different form,
such as comma-separated values.
Use a URL like
http://{dfm-server}:8080/dfm/report/filers-ops?output-format=xls
You can pass arguments to filter the list by filer, or by group, however
you like.
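As a quick illustration, a minimal client-side fetch might look like this (a hypothetical sketch: the hostname is a placeholder, and the assumption that the response body parses as delimited text may need adjusting for the actual output format):

import csv
import io
import urllib.request

# Placeholder hostname; the port and report name come from the URL above.
URL = "http://dfm-server.example.com:8080/dfm/report/filers-ops?output-format=xls"

def fetch_filer_ops(url=URL):
    # Fetch the filers-ops report and split it into rows, assuming the
    # body is delimited text; adjust the parsing if it is not.
    with urllib.request.urlopen(url, timeout=30) as resp:
        text = resp.read().decode("utf-8", errors="replace")
    return list(csv.reader(io.StringIO(text)))

for row in fetch_filer_ops()[:10]:
    print(row)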
- Brian
| -----Original Message-----
| From: A Darren Dunham [mailto:ddunham@taos.com]
| Sent: Friday, October 19, 2007 2:13 PM
| To: toasters(a)mathworks.com
| Subject: Query DFM server?
|
| Is there an easy way to query a DFM server remotely? I'd
| like to grab some data on demand (Current CPU). I could do
| that via SNMP without too much trouble, but I'd rather not
| let all the clients directly poll the filers.
|
| The DFM server already has that information, and I can poll
| it on the machine itself with something like 'dfm report
| filers-ops <filer>'. I'd rather let the clients hit this
| machine so there's no performance impact on the filers if
| something goes wrong. Is there a way of getting that same
| data from a remote host without setting up accounts and
| automated logins? I thought I might be able to run the dfm
| software on the client directly, but if I can, I'm missing
| how to specify the server that it contacts...
|
| The web reports are fine, but I don't want to scrape them
| through the normal display. If I can get the raw data from
| the DFM server via http/snmp/other, that would be good.
|
| Thanks.
|
| --
| Darren Dunham
| ddunham(a)taos.com
| Senior Technical Consultant TAOS
| http://www.taos.com/
| Got some Dr Pepper? San Francisco,
| CA bay area
| < This line left intentionally blank to confuse you. >
|
I will be out of the office starting 10/13/2007 and will not return until
11/01/2007.
I will respond to your message when I return. If you need some work done,
contact Steve Bengtson