I guess my reply to Lohit should go to the mailing list too for archiving purposes ...
---------- Forwarded message ----------
From: Filip Sneppe <filip.sneppe@gmail.com>
Date: Aug 21, 2007 10:42 PM
Subject: Re: copying qtree
To: Lohit <lohit.b@gmail.com>
Hi,
On 8/21/07, Lohit <lohit.b@gmail.com> wrote:
> There is a qtree "proj-CFD" which is 200 GB in size, and the containing
> volume is 500 GB. The volume is 90% full. I have been asked to increase
> the space for the qtree to 400 GB. The previous admin created the
> aggregate (aggr1) with a raidsize of 8, and there are two raid groups
> with 8 disks in each raid group.
>
> I have three spares. Adding these to the current aggregate would create
> another raid group of 3 disks, so I would be left with only 1 usable
> data disk (we have raid_dp) and no spares.
The other responses you've received so far pointed out that QSM (qtree SnapMirror) would be the best choice to move the data to a volume on another aggregate.
But I'd like to point out that it actually is possible to add some of those spare disks to your existing aggregate aggr1 with raidsize 8 & raid_dp.
First you need to bump your raidsize for that aggregate using the "aggr options" command.
Next, when adding disks to your aggregate, use the "-g" option to the "aggr add" command to specify the raid group you want to add the disks to. You will need to use the "-f" flag to force this, too (and you shouldn't be doing this if you ever want to go back to ONTAP 6.2).
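For your situation, the sequence would look something like the following. This is a sketch, not something I've run against your config: I'm assuming your two existing raid groups are named rg0 and rg1 (check the actual names with "sysconfig -r" first), and it adds only two of your three spares, one per existing raid group, so you keep one spare:

    filer> aggr options aggr1 raidsize 9
    filer> aggr add aggr1 -f -g rg0 1
    filer> aggr add aggr1 -f -g rg1 1

That grows each raid group from 8 to 9 disks, and since the parity/dparity disks already exist in each group, both new disks become data disks.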
Here's what it looks like on a simulator (I started with a raid group size of 4 for demonstration purposes only):
filer*> sysconfig -r
Aggregate aggrtest (online, raid_dp) (block checksums)
  Plex /aggrtest/plex0 (online, normal, active)
    RAID group /aggrtest/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)  Phys (MB/blks)
      --------- ------  ------------- ---- ---- ----  ---  --------------  --------------
      dparity   v4.32   v4    2   0   FC:B  -   FCAL  N/A  36/74752        42/87168
      parity    v4.18   v4    1   2   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.33   v4    2   1   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.19   v4    1   3   FC:B  -   FCAL  N/A  36/74752        42/87168
    RAID group /aggrtest/plex0/rg1 (normal)
      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)  Phys (MB/blks)
      --------- ------  ------------- ---- ---- ----  ---  --------------  --------------
      dparity   v4.20   v4    1   4   FC:B  -   FCAL  N/A  36/74752        42/87168
      parity    v4.34   v4    2   2   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.21   v4    1   5   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.35   v4    2   3   FC:B  -   FCAL  N/A  36/74752        42/87168
Aggregate aggr0 (online, raid0) (block checksums)
...

filer*> aggr options aggrtest
nosnap=off, raidtype=raid_dp, raidsize=4, ignore_inconsistent=off,
snapmirrored=off, resyncsnaptime=60, fs_size_fixed=off,
snapshot_autodelete=on, lost_write_protect=on
filer*> aggr options aggrtest raidsize 16
filer*> aggr add aggrtest -f -g rg0 1
Addition of 1 disk to the aggregate has been initiated.  The disk needs
to be zeroed before addition to the aggregate.  The process has been
initiated and you will be notified via the system log as disks are added.
filer*> aggr add aggrtest -f -g rg1 1
aggr add: Unable to perform operation, a disk add operation is in progress.
filer*> Tue Aug 21 22:39:59 CEST [raid.disk.zero.done:notice]: Disk /v4.36 Shelf 2 Bay 4 [NETAPP   VD-16MB-FZ-520 0042] S/N [35665418] : disk zeroing complete
Tue Aug 21 22:40:00 CEST [raid.vol.disk.add.done:notice]: Addition of Disk /aggrtest/plex0/rg0/v4.36 Shelf 2 Bay 4 [NETAPP   VD-16MB-FZ-520 0042] S/N [35665418] to aggregate aggrtest has completed successfully
filer*> aggr add aggrtest -f -g rg1 1
Addition of 1 disk to the aggregate has been initiated.  The disk needs
to be zeroed before addition to the aggregate.  The process has been
initiated and you will be notified via the system log as disks are added.
filer*> sysconfig -r
Aggregate aggrtest (online, raid_dp, growing) (block checksums)
  Plex /aggrtest/plex0 (online, normal, active)
    RAID group /aggrtest/plex0/rg0 (normal)

      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)  Phys (MB/blks)
      --------- ------  ------------- ---- ---- ----  ---  --------------  --------------
      dparity   v4.32   v4    2   0   FC:B  -   FCAL  N/A  36/74752        42/87168
      parity    v4.18   v4    1   2   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.33   v4    2   1   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.19   v4    1   3   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.36   v4    2   4   FC:B  -   FCAL  N/A  36/74752        42/87168
    RAID group /aggrtest/plex0/rg1 (normal)
      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)  Phys (MB/blks)
      --------- ------  ------------- ---- ---- ----  ---  --------------  --------------
      dparity   v4.20   v4    1   4   FC:B  -   FCAL  N/A  36/74752        42/87168
      parity    v4.34   v4    2   2   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.21   v4    1   5   FC:B  -   FCAL  N/A  36/74752        42/87168
      data      v4.35   v4    2   3   FC:B  -   FCAL  N/A  36/74752        42/87168

Targeted to traditional volume or aggregate but not yet assigned to a raid group
      RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)  Phys (MB/blks)
      --------- ------  ------------- ---- ---- ----  ---  --------------  --------------
      pending   v4.37   v4    2   5   FC:B  -   FCAL  N/A  36/74752        42/87168 (zeroing, 19% done)

Aggregate aggr0 (online, raid0) (block checksums)
...
Now isn't that cool? :-)
Best regards,
Filip