Our latest server has been configured such that the paths are *almost*
identical for UNIX and Windows clients. Essentially, on Windows the UNC
paths are \\servername\path\to\files and on UNIX (using the automounter)
the paths are /servername/path/to/files. Our build group loves it because
their build scripts now only have to adjust the slashes for whichever
client they run on, while the actual paths can be set in master config
files without regard to which client will read them.
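For illustration, here's a minimal sketch (Python, with made-up path
names) of the kind of slash adjustment such a build script can do; the
config file stores one slash-neutral path and the client fixes it up:

    import os

    def client_path(config_path):
        # Normalize a slash-neutral config entry like
        # "servername/path/to/files" into the local client's form.
        parts = config_path.strip("/\\").replace("\\", "/").split("/")
        if os.name == "nt":
            # Windows client: UNC path \\servername\path\to\files
            return "\\\\" + "\\".join(parts)
        # UNIX client: automounter path /servername/path/to/files
        return "/" + "/".join(parts)

    # client_path("servername/builds/rel1") returns
    # \\servername\builds\rel1 on Windows, /servername/builds/rel1 on UNIX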
--
Mike Sphar - Sr Systems Administrator - Engineering Support Services, BOFH,
GWP, MCP, MCP+I, MCSE, BFD
-----Original Message-----
From: Jeffrey Krueger [mailto:jkrueger@qualcomm.com]
Sent: Tuesday, September 19, 2000 10:36 AM
To: Louis Brune
Cc: toasters@mathworks.com
Subject: Re: Optimal volume size
Hi Louis!! =)
Our biggest volumes are 500GB. That is NetApp's recommendation for the
largest volume size in DOT 6.0, which supports volumes up to 1.5 TB. The
catch is that the smaller the volumes, the more administrative overhead.
It's nice to have one volume on smaller-capacity machines because NFS
exports and CIFS shares all point at "/some_data" rather than
"/vol/vol0/some_data". Multiple volumes also cause difficulties with
naming schemes, since we like to see qtree == CIFS share (i.e.
/some_project == \\filer\some_project). If you have multiple volumes,
then which of /vol/vol0/some_project and /vol/vol1/some_project is the
one pointed at by CIFS share "\\filer\some_project"?
These points are nit-picky on my part, but hey, we like to use the KISS
principle, and multi-volume is just another thing to think about.
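To make the ambiguity concrete, here's a toy sketch (Python; the volume
layout and project names are invented) of what a share-to-path lookup
runs into once the same qtree name exists under two volumes:

    # Hypothetical layout: the same qtree name under two volumes.
    volumes = {
        "/vol/vol0": ["some_project", "other_project"],
        "/vol/vol1": ["some_project"],
    }

    def candidates_for_share(qtree):
        # Every volume path that could back CIFS share \\filer\<qtree>.
        return [vol + "/" + qtree
                for vol, qtrees in volumes.items() if qtree in qtrees]

    print(candidates_for_share("some_project"))
    # ['/vol/vol0/some_project', '/vol/vol1/some_project'] -- ambiguous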
More on this stuff...
On Tue, Sep 19, 2000 at 07:47:17AM -0700, Louis Brune wrote:
> We have a similar situation on our F740. We want to keep the volume
> size small because:
> a) Restores can take forever as the data size gets bigger.
>
> Can you not back up with qtree granularity instead of per volume?
I agree. Back up per qtree. Per-volume restores will take forever,
especially for small sets of data. There are one or two drawbacks to
per-qtree backups, but at least one of those is fixed in DOT 6.0. =)
> b) 10-14 disks is a good compromise on reliability. Having 2 out of
> 14 disks go bad at one time is much rarer than 2 out of 51 disks.
>
> You can beat this by using several raid groups per volume. Of course,
> you need a parity drive for each raid group.
Again, Louis is on track here. DOT 5.0 and above support multiple RAID
groups per volume. Each RAID group can lose a disk (run at n-1) and still
function fine, which achieves the data protection you're looking for.
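A rough back-of-envelope (Python; the per-disk failure probability is
invented purely for scale) on why two failures in 14 disks is so much
rarer than two in 51:

    def p_two_or_more(n, p):
        # P(at least 2 of n disks fail in the same window), assuming
        # independent failures with per-disk probability p.
        p_at_most_one = (1 - p) ** n + n * p * (1 - p) ** (n - 1)
        return 1 - p_at_most_one

    p = 0.001  # made-up per-disk failure probability per window
    print(p_two_or_more(14, p))  # ~9.0e-05
    print(p_two_or_more(51, p))  # ~1.2e-03, roughly 14x higher

With several RAID groups per volume, only two failures inside the *same*
group lose data, so a 51-disk volume carved into ~14-disk groups stays
close to the 14-disk number.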
> c) If you upgrade disks/shelves in the future, you will likely do it a
> volume at a time. We did this with a volume with 180GB and the volcopy
> took 6-7 hours to complete. It is NOT very fast. With 400GB+ in a
> volume, that should be 2-2.5x longer.
How did you run the volcopy? Over the network, or locally inside a
single filer? What kind of head(s)?
We've found volcopy to be extremely fast, especially for upgrading disks
and/or migrating data. In fact, we used volcopy to relocate and replicate
a huge chunk of data when Louis' company and ours split, and it was a
life saver. We were using F760s, but it took only about 1 hr. 20 min. to
volcopy ~180GB. That was all in one head that had both an FC-AL adapter
and dual SCSI adapters.
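Quick arithmetic on the two runs (Python, assuming 1 GB = 1024 MB); it
works out to roughly a 5x difference:

    def mb_per_sec(gigabytes, hours):
        # Average transfer rate in MB/s for a copy of that size/duration.
        return gigabytes * 1024 / (hours * 3600)

    print(round(mb_per_sec(180, 6.5), 1))      # 6-7 hour copy:  ~7.9 MB/s
    print(round(mb_per_sec(180, 80 / 60), 1))  # our 1h20m copy: ~38.4 MB/s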
> Hmm. My data-shuffling tends to be in smaller pieces. Wouldn't it be
> nice to have a qtreecopy? 7 hours for 180 GB sounds like about 7
> MB/second. If this is on 100 Mbit, it's not all that bad.
Yes! They do have ndmpcopy, which will do filer-to-filer data copying;
however, there have been bugs posted to toasters regarding it. Something
that performed filer-to-filer, per-qtree data moves would be fantastic!
-- Jeff
--
----------------------------------------------------------------------------
Jeff Krueger                             E-Mail: jeff@qualcomm.com
Senior Engineer                          Phone:  858-651-6709
NetApp Filers / UNIX Infrastructure      Fax:    858-651-6627
QUALCOMM, Inc. IT Engineering            Web:    www.qualcomm.com