i'm so grateful for the help yesterday! just for fun, i will share the day i had back with you all. i hope i get the chance to return the assistance some day.
thanks to the two best netapp salesmen on earth ;) i spent the morning yesterday printing pdfs and planning data migrations to ontap 7 and flexvols. i am especially grateful for the flexvol sizing calculation - it isn't obvious and i would certainly have guessed wrong.
armed with a clear direction about where my day was going, i marched into the office...and right into a user-impacting outage due to a full volume. up until yesterday i had never logged into any of the filers and knew nothing about them. so in the midst of the firefight, i had several revelations about the configuration. as for the full volume, i found that snapshots were being kept for only one week and used 17gb. but there was 180gb of snap reserve. so i cut the snap reserve in half, quieting the tumult, and explained to the boss that ten weeks of snap reserve is awfully nice to have, and if you're going to set that much space aside, you might want to keep a little more backup data around. :)
about the configurations, dozens of qtrees exist, many duplicated from filer to filer, but some very tiny, as if perhaps there was going to be mirroring at one time, but instead over the years the second qtree got used just a little on the second filer. none of the qtrees is running quotas and they all have the default unix permissions, so the qtree feature set was never really exploited. some of the directories at /vol/vol1/ aren't in qtrees, so consistency suffered over the years as well. however, the computer gods truly love me...the boss already upgraded the filers to ontap 7! each filer has two flexvols, one for system and one for data.
so, to plan the rearchitecting and data migration project, i spent the rest of the day making a spreadsheet describing the existing data directories and filling in nfs ops statistics for all the qtrees, the one real blessing of that feature in this setup. because of the single volume for everything i don't have snap statistics for each tree, but i created columns for that in my spreadsheet and we'll iterate on the configuration as stats come in later. stupidly, i counted the space each data directory is taking with du -sk, so i got the snaps as well and all the numbers are off. i guess today i'm going to be adding up directory sizes with a massive find -prune loop. (or is there a better way?)
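for the curious, here's a minimal sketch of the find -prune approach i'm describing - plain shell, nothing ontap-specific. it totals each top-level data directory while pruning any .snapshot directories so snapshot blocks aren't counted. the function name and the /vol/vol1 example are just placeholders for whatever your mount point is:

```shell
# sum sizes per top-level directory, skipping .snapshot trees.
# sizes_without_snaps is a made-up helper name; pass the nfs mount point.
sizes_without_snaps() {
  top=$1
  for d in "$top"/*/; do
    # -prune stops find from descending into any .snapshot directory;
    # -0r makes xargs skip du entirely when find emits nothing.
    kb=$(find "$d" -name .snapshot -prune -o -type f -print0 \
           | xargs -0r du -k 2>/dev/null \
           | awk '{s += $1} END {print s + 0}')
    printf '%s\t%s KB\n' "$d" "$kb"
  done
}
# e.g. sizes_without_snaps /vol/vol1
```

(this counts live file data only; sparse files and per-block overhead mean the totals are approximate, but they're consistent, which is what the spreadsheet needs.)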
at the end of the day, we met to discuss the spreadsheet, and decided the intuitive thing to do was create a new data architecture that mirrored the corporate org chart. this will facilitate setting up samba file sharing that makes sense. right now filer space is all mounted at /, but in the future each top-level organization will have its own top-level directory on the sun servers for the mount points for the flexvols/qtrees it needs. with samba, right now folders for multiperson efforts are kind of shared all over the place, wherever users happen to be working. going forward, for consistency each team will have a group-shareable folder for all the team members (whether it's needed now or not). and potentially, each major project will have a shareable folder that will span teams, not individuals. but project folders will all belong in a hierarchy owned by a project lead or someone, so the data can be maintained and removed later when it's no longer needed.
i plan to add the flexvol space calculation to the spreadsheet today so it can be used as a tool to redistribute space on the fly during architecture discussions.
thank you all again. :)
...lori
On 11/9/05, Lori Barfield itdirector@gmail.com wrote:
thank you, greg and john, for the recommendations and the pdf links. i was not aware of FlexVols and it looks like this is suddenly going to be a much more challenging exercise. :)
.... i guess today i'm going to be adding up directory sizes with a massive find -prune loop. (or is there a better way?) ......
If your stuff is in qtrees, run a quota report and the amount of data in each qtree is tracked. Not sure if that helps.
stupidly, i counted the space each data directory is taking with du -sk, so i got the snaps as well and all the numbers are off. i guess today i'm going to be adding up directory sizes with a massive find -prune loop. (or is there a better way?)
It's probably easier to run the du within the most recent snapshot. Then there's nowhere else for it to descend.
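A minimal sketch of that trick, assuming the volume is NFS-mounted and snapshots appear under .snapshot (the helper name and example mount point are made up):

```shell
# du inside the newest snapshot; .snapshot isn't visible from in there,
# so du can't wander into snapshot copies and double-count.
du_in_newest_snap() {
  mnt=$1
  snap=$(ls -t "$mnt/.snapshot" | head -1)   # newest snapshot by mtime
  du -sk "$mnt/.snapshot/$snap"/*            # per-directory totals in KB
}
# e.g. du_in_newest_snap /vol/vol1
```

One caveat: the numbers reflect the volume as of that snapshot, not this instant, so it's a documentation tool rather than a monitoring tool.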
On 11/10/05, Jerry juanino@yahoo.com wrote:
.... i guess today i'm going to be adding up directory sizes with a massive find -prune loop. (or is there a better way?)
If your stuff is in qtrees, run a quota report and the amount of data in each qtree is tracked. Not sure if that helps.
fs01> quota report
quota: quotas are off.
:(
...lori
Jerry> .... i guess today i'm going to be adding up directory sizes
Jerry> with a massive find -prune loop. (or is there a better way?)
Jerry> ......
Lori,
What I'd do is set up a quotas file covering all your qtrees, but put in an empty (tracking-only) quota for each. Then you turn on quotas, wait for them to initialize, and then run a quota report. Works very well.
John

John Stoffel - Senior Staff Systems Administrator - System LSI Group
Toshiba America Electronic Components, Inc. - http://www.toshiba.com/taec
john.stoffel@taec.toshiba.com - 508-486-1087
On 11/10/05, John Stoffel john.stoffel@taec.toshiba.com wrote:
Jerry> .... i guess today i'm going to be adding up directory sizes
Jerry> with a massive find -prune loop. (or is there a better way?)
What I'd do is set up a quotas file covering all your qtrees, but put in an empty (tracking-only) quota for each. Then you turn on quotas, wait for them to initialize, and then run a quota report. Works very well.
i looked at the man page and there are no hints about how to add an empty quota. i don't want to enable quotas and suddenly put everyone into an outage because i don't know the right syntax. may i ask toasters for an example command, please? i'll try it first on a less-busy qtree, and if all goes well, will do it everywhere.
btw, i received the private suggestion of running a du from inside a snapshot for counting space used. it was such a good idea i wanted to share it. of course, that's not a proper way to monitor usage on a filer, but it got my doc effort off the ground this morning. thank you, toasters. :)
...lori
Here's how to do the tracking-only tree quotas (from our active quota file). Enabling it causes no problems; we actually turn on certain tracking quotas in the middle of the day from time to time.
#Qtree Quotas
#---------------------------------------------------------------------------------------------
#Target                          Type    Disk    Files   Thold   Sdisk   Sfile   Comment
#---------------------------------------------------------------------------------------------
<------------snip
/vol/perforce/ActiveXContainer   tree    -
/vol/perforce/ASIC               tree    -
/vol/perforce/ATC                tree    -
/vol/perforce/CodeMart           tree    -
/vol/perforce/ComponentWorks     tree    -
/vol/perforce/CVI                tree    -
/vol/perforce/DAQ                tree    -
/vol/perforce/DAQConfig          tree    -
/vol/perforce/DevSuite           tree    -
<------------snip
The output of the quota report will look like the following. Notice that the "limit" fields for all quotas are "-", indicating no limit. However, it does give size and number of files (which we find useful).
hades> quota report
                                           K-Bytes               Files
Type   ID   Volume    Tree                 Used      Limit       Used     Limit    Quota Specifier
-----  ---  --------  -----------------    --------  --------    -------  -------  ---------------
<------------snip
tree   1    perforce  ActiveXContainer       104340  -              5630  -        /vol/perforce/ActiveXContainer
tree   2    perforce  ASIC                 25975900  -             97173  -        /vol/perforce/ASIC
tree   3    perforce  ATC                  17604848  -            109831  -        /vol/perforce/ATC
tree   4    perforce  CodeMart              4980056  -             35261  -        /vol/perforce/CodeMart
tree   5    perforce  ComponentWorks        2085444  -             34330  -        /vol/perforce/ComponentWorks
tree   6    perforce  CVI                   9946108  -             86842  -        /vol/perforce/CVI
tree   7    perforce  DAQ                  18813660  -            223765  -        /vol/perforce/DAQ
tree   8    perforce  DAQConfig             1026752  -             10632  -        /vol/perforce/DAQConfig
tree   9    perforce  DevSuite                97776  -              1524  -        /vol/perforce/DevSuite
<------------snip
Jeff Mery - MCSE, MCP National Instruments
------------------------------------------------------------------------- "Allow me to extol the virtues of the Net Fairy, and of all the fantastic dorks that make the nice packets go from here to there. Amen." TB - Penny Arcade -------------------------------------------------------------------------
Lori Barfield itdirector@gmail.com wrote on 11/10/2005 02:10 PM (Subject: Re: howto document for initial volume architecting?):
Lori Barfield wrote:
stupidly, i counted the space each data directory is taking with du -sk, so i got the snaps as well and all the numbers are off. i guess today i'm going to be adding up directory sizes with a massive find -prune loop. (or is there a better way?)
One trick for doing du's without encountering any .snapshot directories is to do the du within a snapshot.