We have an F760 with two shelves of 72 GB drives. There are two volumes of five drives each. We now face the decision to expand one of the volumes or create a new three-drive volume, leaving one hot spare.
If we create a new volume, we lose yet another large chunk of space for parity. (We also get a small number of spindles, but performance hasn't proven to be a problem in our situation.)
If we expand an existing volume, we get an uncomfortably large volume. The volumes are already 215 GB.
We're leaning toward the latter, because disk usage inexorably increases and purchasing more hardware is not even on the horizon.
How have others dealt with these matters? Do such large volumes entail problems beyond the obviously longer backup and restore times? Are we overlooking any major issues?
--Brian L. Brush Senior Systems Administrator Paradyne Corporation
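For reference, the space cost of the two options can be sketched from the numbers in the post: each existing 5-drive volume (4 data + 1 parity) holds 215 GB, so one drive yields roughly 54 GB of usable space. The sketch assumes disks added to an existing volume join its RAID group, so expansion needs no new parity disk; that assumption, and the 3-drives-plus-spare split for the expansion case, are mine, not stated in the post.

```python
# Space comparison for the two options, using figures from the post:
# each existing 5-drive volume (4 data + 1 parity) holds 215 GB,
# so one 72 GB drive yields roughly 54 GB of usable space.
usable_per_drive = 215 / 4  # GB usable per data disk

# Option A: expand an existing volume with 3 drives (keeping 1 hot
# spare), assuming they join the existing RAID group, so all 3 are data.
expand_gain = 3 * usable_per_drive

# Option B: create a new 3-drive volume (2 data + 1 parity), also
# keeping 1 hot spare.
new_vol_gain = 2 * usable_per_drive

print(f"expand existing volume: +{expand_gain:.0f} GB")
print(f"new 3-drive volume:     +{new_vol_gain:.0f} GB")
print(f"extra parity cost:       {expand_gain - new_vol_gain:.0f} GB")
```

Under these assumptions, the new-volume route gives up about one drive's worth of usable space to the extra parity disk.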
We have had no trouble with large volumes until we got over 1.5 TB; NetApp told us they have no plans to increase that maximum filesystem size.
Also watch out for the new default RAID group size of 8, down from the old default of 14. You might prefer to increase it on some systems rather than lose another disk to parity within a larger volume.
I recommend using qtrees right away to protect one filesystem usage pattern from crowding another.
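To put numbers on the raidsize point: with one parity disk per RAID group (as in the filer's RAID 4 layout), the parity overhead for a given disk count falls directly out of the group size. The 28-disk volume below is a hypothetical illustration, not a figure from the thread.

```python
import math

def parity_disks(total_disks: int, raidsize: int) -> int:
    # One parity disk per RAID group; a group holds at most
    # `raidsize` disks (parity included), so the number of groups
    # determines the parity overhead.
    return math.ceil(total_disks / raidsize)

# Hypothetical 28-disk volume under the old default (14) and new (8):
old = parity_disks(28, 14)
new = parity_disks(28, 8)
print(f"raidsize 14: {old} of 28 disks go to parity")
print(f"raidsize  8: {new} of 28 disks go to parity")
```

At the new default, the same 28 disks cost twice as many parity disks, which is why raising raidsize back up can be attractive on large volumes.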
At 11:24 AM -0400 4/12/01, Brian L. Brush wrote:
Brian,
I have three volumes over 300 GB and one that is 600 GB. The only problem I have is that the 600 GB volume is now fragmented from the way we added disks (add them in groups of AT LEAST 3, preferably 5). Otherwise, backups finish in under 24 hours, so daily fulls are possible if that's your schedule. And that's to one tape drive; if you have several and break the backup out by qtree, it would be done in much less time. Restores are a different animal....
~JK
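The "under 24 hours" figure lines up with the 7-12 MB/s tape rates quoted elsewhere in the thread; a quick check for a 600 GB full backup to a single drive:

```python
# Time for a 600 GB full backup to a single tape drive at the
# 7-12 MB/s rates reported elsewhere in the thread.
size_mb = 600 * 1024  # 600 GB expressed in MB

hours = {rate: size_mb / rate / 3600 for rate in (7, 12)}
for rate, h in hours.items():
    print(f"at {rate:2d} MB/s: {h:.1f} hours")
```

So a single-drive full sits right at the edge of a 24-hour window at the slow end of that range, which is why splitting the stream across drives by qtree helps so much.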
We started out with a self-imposed 400 GB volume limit, and are now increasing it to 600 GB to accommodate our data growth. I find small volumes to be unwieldy, especially if your data grows at an alarming rate like ours; it robs the NetApp of some of its flexibility. Backup tape speeds and capacities are increasing, so larger volumes should become easier to back up and restore. That said, I'm curious about your fragmentation comment. How does it express itself? In performance degradation? For simple budgetary reasons, I almost never get to add 3-5 disks at a time to any of my filesystems.
Moshe
My fragmentation problems are in the backup area. Once my software told the filer to kick off a backup, it was 3-4 hours before it would start writing to tape. Once it started writing, though, it was fast: 7-12 MB/sec on average. Users have never complained about NFS performance. Supposedly this is a symptom of fragmentation; walking the inode table takes substantially longer when it's fragmented. I'll give you another example:
I am using rsync to transfer this one volume to 3 new volumes. The qtree I was transferring was approximately 150 GB in size, nothing special. It took 3 days before the file list finished building and data started being transferred. It is now Monday the 16th, and not even half the data has been transferred; I started this on the 9th.
I will add that at this point I question the number of inodes as well. The volume consists of 22 x 36 GB disks and has maxfiles set to 22,000,000. Given the number of disks, that seems like a reasonable number, so it shouldn't be a problem, but NetApp has no idea why this is happening, and it's killing me on data transfers and backups.
~JK
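The rsync figures above imply a strikingly low effective copy rate. A rough estimate, treating the run as 3 days of file-list building followed by 4 days of copying, and reading "not even half" as roughly half of the 150 GB (both approximations are mine):

```python
# Effective rsync throughput implied by the figures above: about half
# of a 150 GB qtree copied in the ~4 days after the 3-day list build.
transferred_mb = (150 / 2) * 1024   # ~75 GB moved, expressed in MB
copy_seconds = 4 * 86400            # ~4 days of actual copying

rate_mb_s = transferred_mb / copy_seconds
print(f"effective copy rate: ~{rate_mb_s:.2f} MB/s")
```

That is two orders of magnitude below the 7-12 MB/sec the same data sustains once a tape backup actually starts writing, which is what makes the fragmentation explanation plausible.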
--
Jeff Kennedy Unix Administrator AMCC jlkennedy@amcc.com
Our volumes are similarly large, mostly around 300 GB with a few closer to 600 GB. Fortunately, with ONTAP 6.x and NDMP you can now use Direct Access Restore (DAR), so restore times (for individual files, at least) improve dramatically. As far as other issues we have seen with large volumes, we tend to "waste" drives because we keep at least two as hot spares. Of course, the largest drives we are using are 36 GB; on our filers with 18 GB drives, we usually keep 5 drives as spares to allow for failures and/or future growth.
We typically have only two volumes on a given filer: one root (usually two 18 GB drives, one data, one parity) and one vol0 (the rest of the disks, the number of which varies with drive size). If you use qtrees to split vol0 into four even parts, you can avoid many of the headaches of large volumes; NetApp recommends four concurrent backup streams on an F760 (and I would guess the same holds for the F8xx series). If you back up and restore the four qtrees separately, you should see fairly reasonable times, especially with directly attached tape drives and DAR.
Geoff Hardin Dallas Semiconductor geoff.hardin@dalsemi.com