you are 100% correct. the ability to hot-remove shelves that contain no data would be an incredible value gain. we are currently migrating into a MultiStore scenario in which we can do planned migrations of all data from one filer cluster to another and free up the original filer for maintenance tasks, without any extended downtime or change to the clients in our NFS environments.
this is the next best thing while GX merges back into the current 7.x base down the road. we get planned maintenance windows of one hour or less and the ability to run environments unchanged on a different set of filer heads. ONTAP upgrades, hardware swaps, and environment migrations are much less painful and actually doable at this point. we have tons of 72GB/144GB drives to swap out before their support expiry in 2011 :)
good luck to the OP.
-- Daniel Leeds Manager, Storage Operations Edmunds, Inc. 1620 26th Street, Suite 400 South Santa Monica, CA 90404
310-309-4999 desk 310-430-0536 cell
-----Original Message----- From: Jeff Mohler [mailto:speedtoys.racing@gmail.com] Sent: Thu 11/13/2008 3:27 PM To: Leeds, Daniel Cc: Nils Vogels; Ray Van Dolson; toasters@mathworks.com Subject: Re: Removing a shelf.
If you're not so lucky, you can lose visibility to an entire random shelf on the system, forcing the rebuild of X missing drives.
I have seen this happen. After a hard shutdown of all devices to re-map the loops, it all comes back, but you're still stuck rebuilding the drives that went missing from the unsupported pull.
On Thu, Nov 13, 2008 at 3:03 PM, Leeds, Daniel dleeds@edmunds.com wrote:
yes, i neglected to mention we did a clean shutdown shortly after. this was coordinated as a way to fix an urgent need, which it sounded like the original poster had with regard to sending a loaner shelf back.
it did function fine without any issues until an approved downtime window could be scheduled.
in most of our environments we have zero ability for any downtime, scheduled or otherwise, which is really killing us with a current shelf bug that causes drives to randomly go amber. support recommends power cycling the shelf to correct it, plus updated firmware -- but that's a whole other story.
--daniel
-----Original Message----- From: Jeff Mohler [mailto:speedtoys.racing@gmail.com] Sent: Thu 11/13/2008 3:01 PM To: Leeds, Daniel Cc: Nils Vogels; Ray Van Dolson; toasters@mathworks.com Subject: Re: Removing a shelf.
Daniel:
I would check with support. IIRC, hot-removing a shelf is _really_ frowned upon, and it can cause mapping issues that can lead to _requiring_ you to power down all nodes and shelves to correct.
Even in failover mode, shelf mapping is persistent.
https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb40710
-----Original Message----- From: owner-toasters@mathworks.com on behalf of Nils Vogels Sent: Thu 11/13/2008 1:16 PM To: Ray Van Dolson Cc: toasters@mathworks.com Subject: Re: Removing a shelf.
Hey Ray,
On Thu, Nov 13, 2008 at 9:31 PM, Ray Van Dolson rvandolson@esri.com wrote:
An aggregate spans two shelves, one of which is the shelf that needs to be returned. This aggregate contains a FlexVol currently in use. I'd like to preserve the FlexVol but remove the shelf.
If you can afford the downtime, you could offline the aggregate concerned, swap the disks into the shelf that can remain, and online the aggregate again. Should be a few minutes.
Another way, which does take a lot longer, is to use "disk replace" to copy the used disks from their old position to their new ones. No downtime, but the disks get copied one at a time, which can be a few hours.
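For anyone following along, the two approaches above roughly map to the following 7-mode commands. This is a sketch only -- the aggregate name and disk IDs are hypothetical, and you should confirm the exact procedure with support before pulling anything:

```shell
# Option 1: short downtime -- offline the aggregate, physically move
# its member disks into the shelf that stays, then bring it back.
# ("aggr_loaner" is a hypothetical aggregate name.)
aggr offline aggr_loaner     # volumes on this aggregate become unavailable
# ...physically reseat the member disks into open bays on the remaining shelf...
aggr online aggr_loaner      # bring the aggregate back once disks are reseated

# Option 2: no downtime -- copy each disk on the loaner shelf onto a
# spare in the remaining shelf, one at a time. (disk IDs hypothetical)
disk replace start 0b.32 0a.16   # copy contents of 0b.32 onto spare 0a.16
disk replace status              # monitor the copy; repeat per disk
```

Option 2 trades elapsed time (hours per disk set) for zero client impact, which sounds like the better fit if you can't get a window.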
HTH & HAND
-- Simple guidelines to happiness: Work like you don't need the money, Love like your heart has never been broken and Dance like no one can see you.