i'm so grateful for the help yesterday! just for fun, i'll share how
the day went with you all. i hope i get the chance to return the
assistance some day.
thanks to the two best netapp salesmen on earth ;) i spent the morning
yesterday printing pdfs and planning data migrations to ontap 7 and
flexvols. i'm especially grateful for the flexvol sizing calculation --
it isn't obvious and i certainly would have guessed wrong.
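for the archives, here's the gist of the arithmetic as i understand it
(my paraphrase, not the exact formula from the pdfs, so don't trust my
numbers without checking):

    # rough flexvol sizing, as i understand it:
    #   usable data space = volume size * (1 - snap reserve fraction)
    # so to hold, say, 100gb of data with a 20% snap reserve:
    #   volume size = 100gb / (1 - 0.20) = 125gb
    # plus whatever headroom you want for growth.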
armed with a clear direction about where my day was going, i
marched into the office...and right into a user-impacting outage due
to a full volume. up until yesterday i had never logged into any of
the filers and knew nothing about them. so in the midst of the
firefight, i had several revelations about the configuration. as for
the full volume, i found that snapshots were being kept for only
one week and used 17gb. but there was 180gb of snap reserve.
so i cut the snap reserve in half, quieting the tumult, and explained
to the boss that ten weeks of snap reserve is awfully nice to have,
and if you're going to set that much space aside, you might want
to keep a little more backup data around. :)
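for the list archives, the knobs involved were roughly these (quoting
syntax from memory, and vol1 and the 10% are just examples -- please
double-check before pasting):

    filer> snap reserve vol1      # with no percent given, shows the current setting
    filer> df -k /vol/vol1        # compare data usage against the snapshot reserve
    filer> snap reserve vol1 10   # cut the reserve down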
about the configuration: dozens of qtrees exist, many duplicated
from filer to filer, but some very tiny, as if mirroring was planned
at one time and instead, over the years, the second qtree on the
second filer just got used a little. none of the qtrees is running
quotas and they all have the default unix permissions, so the qtree
features were never really exploited. some of the directories at
/vol/vol1/ aren't in qtrees, so consistency suffered over the years as
well. however, the computer gods truly love me...the boss already
upgraded the filers to ontap 7! each filer has two flexvols, one for
system and one for data.
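(one thing i picked up from the docs along the way: it looks like you
can turn on tracking quotas without enforcing any limits, which would
give per-qtree usage numbers that don't include snapshots. something
like this, if i'm reading the man pages right -- vol1 is just my
example name:

    # /etc/quotas -- track usage on every qtree in vol1, no limits enforced
    *    tree@/vol/vol1    -    -

    filer> quota on vol1
    filer> quota report

that would have saved me the du mess below.)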
so, to plan the rearchitecting and data migration project, i spent
the rest of the day making a spreadsheet describing the existing
data directories and filling in nfs ops statistics for all the qtrees --
the one real blessing of the qtree feature in this setup. because of the
single volume for everything i don't have snap statistics for each
tree, but i created columns for that in my spreadsheet and we'll
iterate on the configuration as stats come in later. stupidly, i
counted the space each data directory is taking with du -sk, so
i got the snaps as well and all the numbers are off. i guess today
i'm going to be adding up directory sizes with a massive find -prune
loop. (or is there a better way?)
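here's the sort of loop i have in mind, pruning the .snapshot
directories out of the totals. an untested sketch: the mount point is
made up, it sums apparent file sizes rather than disk blocks, and files
with multiple hard links get counted once per link:

    #!/bin/sh
    # rough per-directory usage in kb, skipping .snapshot trees
    for d in /filer/vol1/*
    do
        kb=`find "$d" -name .snapshot -prune -o -type f -ls |
            awk '{ s += $7 } END { printf "%.0f", s / 1024 }'`
        echo "$kb kb   $d"
    done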
at the end of the day, we met to discuss the spreadsheet, and
decided the intuitive thing to do was create a new data architecture
that mirrored the corporate org chart. this will facilitate setting up
samba file sharing that makes sense. right now filer space is all
mounted at / but in the future each top-level organization will
have its own top-level directory on the sun servers to hold the
mount points for the flexvols/qtrees it needs. with samba, folders
for multiperson efforts are currently shared from all over the place,
wherever users happen to be working. going forward, for consistency each
team will have a group-shareable folder for all the team members
(whether it's needed now or not). and potentially, each major project
will have a shareable folder that will span teams, not individuals.
but project folders will all belong in a hierarchy owned by a project
lead or someone, so the data can be maintained and removed later
when it's no longer needed.
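for the team folders, i'm picturing the usual setgid group directory
with a matching samba share, something like this (all the names are
made up for the example):

    # unix side: team folder that keeps the team's group on new files
    mkdir /eng/shared
    chgrp eng /eng/shared
    chmod 2770 /eng/shared    # setgid bit so new files inherit the group

    # smb.conf stanza for the same folder
    [eng-shared]
        path = /eng/shared
        valid users = @eng
        writable = yes
        create mask = 0660
        directory mask = 2770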
i plan to add the flexvol space calculation to the spreadsheet today
so it can be used as a tool to redistribute space on the fly during
architecture discussions.
thank you all again. :)
...lori
On 11/9/05, Lori Barfield <itdirector(a)gmail.com> wrote:
> thank you, greg and john, for the recommendations and the pdf links.
> i was not aware of FlexVols and it looks like this is suddenly going
> to be a much more challenging exercise. :)