Sorry for the confusion, here is a copy of the rc file.
#Regenerated by registry Fri May 26 17:19:40 PDT 2000
#Auto-generated by setup Tue May 2 19:06:59 GMT 2000
hostname srglfs3
ifconfig e0 `hostname`-e0 mediatype 100tx-fd netmask 255.255.0.0
ifconfig e3a `hostname`-e3a mediatype 100tx-fd netmask 255.255.0.0
route add default 10.0.0.160 1
routed on
options dns.domainname brandes.com
options dns.enable on
options nis.enable off
savecore
exportfs -a
nfs on
options autosupport.enable on
ndmpd on
ndmpd status
----- Original Message -----
From: "Jim Ward" jimw@worksta.com
To: ferdie@san.rr.com
Sent: Friday, July 07, 2000 11:33 AM
Subject: Re: NDMPD
Ferdie,
Have you tried "ndmpd on" in lower case?
Jim
--
EMail: jimw@worksta.com               Tel: +1 (603) 672-8600 x232
WWW:   http://www.worksta.com         Fax: +1 (603) 672-3154
Post:  Workstation Solutions, Inc., Five Overlook Drive, Amherst, NH 03031
* Data Backup and Recovery Solutions for Your Peace of Mind *
Is the "nfs on" entry even required anymore?
Perhaps it's bailing out of the rc file once it reads that line.
Bruce
We have an F740 filer with two 18 GB shelves. Each shelf is dedicated to a volume, and we have reserved a disk on each shelf as a hot spare for that particular volume, so in total we have two hot spares.
Last week one of the disks in the first volume failed, and the failed disk was reconstructed on the spare that was physically present on the second shelf. So we ended up with a volume that crosses over to the other shelf. Our goal was to contain each volume on a shelf of its own.
First, is there a way to contain the volumes along with their hot spares? Secondly, how do I make the first volume give up the disk on the second shelf and reconstruct it on the spare that is physically present on the first shelf?
I hope I made myself clear.
Thanks,
Bhavnesh
In article 3969D7B1.89F42292@dnrc.bell-labs.com, Bhavnesh Makin wrote:
First, is there a way to contain the volumes along with their hot spares? Secondly, how do I make the first volume give up the disk on the second shelf and reconstruct it on the spare that is physically present on the first shelf?
You can't constrain things like that. Filers are pretty good in that they label each disk, so you can juggle disks around and the filer will recognise them correctly. ISTR that NetApp do a good demo of that with a three-disk volume. You should therefore be able to move the disks by:

1. Identify which disks you want to swap and stick a Post-it note on each.
2. Shut down the filer.
3. Swap the disks over.
4. Cross fingers.
5. Power on.
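In console terms, the approach might look something like this. (The prompt and the exact steps are a sketch, not a tested procedure — check your Data ONTAP documentation before doing this on a production filer.)

```
srglfs3> sysconfig -r    # note which disk belongs to which volume and shelf
srglfs3> halt            # clean shutdown so the disks can be moved safely
  (physically swap the labelled disks between the shelves)
ok boot                  # at the firmware prompt; the on-disk RAID labels
                         # let the filer recognise each disk in its new slot
```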
I just got back from the "NetApp 202 Advanced System Admin & Troubleshooting" class. They went over a similar scenario with two volumes, one on each of two controllers. (They said there is a 10 percent performance hit when drives in a volume are on different controllers.) For example, suppose you have two volumes of six drives each, with all of the drives for the first volume on the first controller and all of the drives for the second volume on the second controller. If a drive then failed in the second volume and was rebuilt on a spare on the first controller, there would be a 10 percent performance hit every time you accessed the second volume.
This is what they said to do:
Replace the failed drive in the second volume. Then pull out all of the spares, including the one that was used in the rebuild. This will force the volume to rebuild onto the new drive.
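A hedged sketch of how you might check the state before and after (output formats vary by Data ONTAP release; treat this as illustrative only):

```
filer> vol status -s     # list the spare disks; confirm which spares remain
filer> sysconfig -r      # show the RAID layout: which disks make up each volume
  (replace the failed drive, pull the spares as described above,
   and watch the reconstruction progress with "sysconfig -r")
```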
Bhavnesh Makin wrote:
We have an F740 filer with two 18 GB shelves. Each shelf is dedicated to a volume, and we have reserved a disk on each shelf as a hot spare for that particular volume, so in total we have two hot spares.
Last week one of the disks in the first volume failed, and the failed disk was reconstructed on the spare that was physically present on the second shelf. So we ended up with a volume that crosses over to the other shelf. Our goal was to contain each volume on a shelf of its own.
First, is there a way to contain the volumes along with their hot spares? Secondly, how do I make the first volume give up the disk on the second shelf and reconstruct it on the spare that is physically present on the first shelf?
I hope I made myself clear.
Thanks,
Bhavnesh
Unless I misunderstand some of the bus issues involved, the 10% performance hit would only occur on writes, and since writes are grouped and it is unlikely that you are writing to the drives every second, you'll never notice it.
Reads come off individual disks as needed, so it doesn't matter which controller they are on. (I guess if you needed to read a whole stripe at once there's a potential hit, but I'm not sure that's necessary.) The point is, with two volumes you're using both controllers anyway.
Bruce
This is what the book says:
"- Write performance will suffer when a RAID group spans more than one FC-AL or SCSI adapter.
- Keep all drives assigned to a RAID group on the same adapter whenever possible.
- There is an approximate 10% decrease in write performance when the filer attempts to write to a RAID group spanning two adapters. This is due to inherent limitations in the PCI bus."
Bruce Sterling Woodcock wrote:
Unless I misunderstand some of the bus issues involved, the 10% performance hit would only occur on writes, and since writes are grouped and it is unlikely that you are writing to the drives every second, you'll never notice it.
Reads come off individual disks as needed, so it doesn't matter which controller they are on. (I guess if you needed to read a whole stripe at once there's a potential hit, but I'm not sure that's necessary.) The point is, with two volumes you're using both controllers anyway.
Bruce
Douglas Ritschel Douglas.Ritschel@fnc.fujitsu.com writes:
I just got back from the "NetApp 202 Advanced System Admin & Troubleshooting" class. They went over a similar scenario with two volumes, one on each of two controllers. (They said there is a 10 percent performance hit when drives in a volume are on different controllers.) For example, suppose you have two volumes of six drives each, with all of the drives for the first volume on the first controller and all of the drives for the second volume on the second controller. If a drive then failed in the second volume and was rebuilt on a spare on the first controller, there would be a 10 percent performance hit every time you accessed the second volume.
and Bruce Sterling Woodcock sirbruce@ix.netcom.com elaborates:
|
| Unless I misunderstand some of the bus issues involved, the 10%
| performance hit would only occur on writes, and since writes are grouped
| and it is unlikely that you are writing to the drives every second, you'll
| never notice it.
|
| Reads come off individual disks as needed, so it doesn't matter which
| controller they are on. (I guess if you needed to read a whole stripe at
| once there's a potential hit, but I'm not sure that's necessary.) The point
| is, with two volumes you're using both controllers anyway.
I'm a bit surprised to see the performance penalty, small though it may be, be this way around. I would have assumed that spreading the disc I/O load across controllers as much as possible would be the right thing to do. Can someone explain this seemingly counter-intuitive result in more detail?
Incidentally, the original poster talked only about separate shelves, not separate controllers. If by this he meant two shelves on a single FCAL loop (he didn't say whether his discs were FCAL or SCSI), would the above still apply or not?
Chris Thompson University of Cambridge Computing Service, Email: cet1@ucs.cam.ac.uk New Museums Site, Cambridge CB2 3QG, Phone: +44 1223 334715 United Kingdom.
I'm a bit surprised to see the performance penalty, small though it may be, be this way around. I would have assumed that spreading the disc I/O load across controllers as much as possible would be the right thing to do. Can someone explain this seemingly counter-intuitive result in more detail?
I'm just guessing again here, but since the manual confirms that the penalty only occurs on writes, my guess is this: WAFL writes out a whole stripe at a time, and you pretty much have to wait for a write to complete once you issue it. If your RAID group spans two controllers, then your stripe spans two controllers, and thus you have two bus transfers across the PCI bus and two waits for successful completion, during which time the filer can't do much else.
Note that when they say 10% write penalty, they probably mean as measured by SpecSFS, under heavy load. It doesn't mean that every time you save a file from your Unix or PC client, it'll take 10% longer. The filer will still return right away; it just slows the filer down a little when it flushes the NVRAM log. I suspect for most environments that only see writes every 5-10 seconds, it wouldn't be a very noticeable hit to performance.
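To put that point in numbers: here is a back-of-envelope model. All the figures below (flush interval, flush duration) are assumptions for illustration only, not from any NetApp spec; the only input from the thread is the ~10% penalty itself.

```python
# Back-of-envelope model of the ~10% write penalty's visible effect.
# The flush interval and flush duration are invented for illustration.
flush_interval = 10.0   # seconds between NVRAM log flushes (assumed)
flush_time = 0.5        # seconds per flush, RAID group on one adapter (assumed)
penalty = 0.10          # ~10% write penalty when the group spans two adapters

flush_time_spanning = flush_time * (1 + penalty)
extra_busy_fraction = (flush_time_spanning - flush_time) / flush_interval
print(f"extra busy time per flush interval: {extra_busy_fraction:.3%}")
```

Under those assumptions the filer spends only an extra half a percent of each flush interval busy, which is why a client saving files would never notice it.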
Bruce
Can't you just type "disk swap" and move the disks where you want to? I thought this situation is exactly what the "disk swap" command is tailored for. No?
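For reference, on SCSI-based filers the sequence is roughly the following (the prompt is made up, and you should check the na_disk man page for your Data ONTAP release before relying on this):

```
filer> disk swap      # quiesce SCSI I/O so a disk can be pulled safely
  (remove the disk and reseat it in the desired slot)
filer> disk unswap    # undoes "disk swap" and resumes I/O if you change your mind
```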
If you gotta monkey around with what disks are where [for whatever reason you have] then I thought that was the way to do it...
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Bhavnesh Makin
Sent: Monday, July 10, 2000 10:04 AM
To: toasters@mathworks.com
Subject: volume/shelf containment
We have an F740 filer with two 18 GB shelves. Each shelf is dedicated to a volume, and we have reserved a disk on each shelf as a hot spare for that particular volume, so in total we have two hot spares.
Last week one of the disks in the first volume failed, and the failed disk was reconstructed on the spare that was physically present on the second shelf. So we ended up with a volume that crosses over to the other shelf. Our goal was to contain each volume on a shelf of its own.
First, is there a way to contain the volumes along with their hot spares? Secondly, how do I make the first volume give up the disk on the second shelf and reconstruct it on the spare that is physically present on the first shelf?
I hope I made myself clear.
Thanks,
Bhavnesh