Sorry for the newbie questions; one last question.
When I do a df, for instance:
/vol/vol0/            127105232    748236   26356996    1%  /vol/vol0/
/vol/vol0/.snapshot    31776304      4548   31771756    0%  /vol/vol0/.snapshot
Does vol0's capacity include the snapshot capacity? That is to say, is vol0 really 127105232? Or is vol0 really that plus the reserved snapshot capacity, in this case 127105232 + 31776304?
Second, I understand from the documentation that if the snapshot usage exceeds the specified reserve, it will eat into the volume's capacity. In that case, both the vol0 and .snapshot lines would reflect the excess, right?
jstalbot@mail.com (Julius Talbot) writes:
> Sorry for the newbie questions; one last question.
> When I do a df, for instance:
> /vol/vol0/            127105232    748236   26356996    1%  /vol/vol0/
> /vol/vol0/.snapshot    31776304      4548   31771756    0%  /vol/vol0/.snapshot
> Does vol0's capacity include the snapshot capacity? That is to say, is vol0 really 127105232? Or is vol0 really that plus the reserved snapshot capacity, in this case 127105232 + 31776304?
The latter figure is more "real" than the former. If you change the snapshot reserve with "snap reserve vol0 [some-number]" then it is the sum of the two sizes that will stay constant at 158881536 KB.
That's not to say that it's totally "real", though. It represents the 90% occupancy limit imposed on a raw capacity of 176535040 KB = 172397.5 MB, which itself comes from (I imagine) five "36GB" data discs, each right-sized to 34500 MB with a 20.5 MB reserved low-address area pruned off.
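Spelling that arithmetic out (the disc layout in the last step being, as I say, only a guess):

    5 x (34500 - 20.5) MB = 172397.5 MB = 176535040 KB raw
    176535040 KB x 0.90   = 158881536 KB usable
    158881536 KB          = 127105232 (active) + 31776304 (snap reserve)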
> Second, I understand from the documentation that if the snapshot usage exceeds the specified reserve, it will eat into the volume's capacity. In that case, both the vol0 and .snapshot lines would reflect the excess, right?
The snapshot usage in excess of the reserve is accounted against the regular /vol/vol0, yes. This is so that the usage in that line will reach 100% just when the volume really is "full" - i.e. attempts to allocate more space will fail. Again, you can experiment with altering the reserve to see how this works:
myfiler> snap reserve mail 35
myfiler> df /vol/mail
Filesystem              kbytes       used      avail  capacity  Mounted on
/vol/mail/            30514204    3859904   26654300       13%  /vol/mail/
/vol/mail/.snapshot   16430720   15131316    1299404       92%  /vol/mail/.snapshot
myfiler> snap reserve mail 32
myfiler> df /vol/mail
Filesystem              kbytes       used      avail  capacity  Mounted on
/vol/mail/            31922552    3968848   27953704       12%  /vol/mail/
/vol/mail/.snapshot   15022372   15131316          0      101%  /vol/mail/.snapshot
myfiler> snap reserve mail 35
myfiler> df /vol/mail
Filesystem              kbytes       used      avail  capacity  Mounted on
/vol/mail/            30514204    3859928   26654276       13%  /vol/mail/
/vol/mail/.snapshot   16430720   15131316    1299404       92%  /vol/mail/.snapshot
In the second "df", the 15131316-15022372 = 108944 KB excess is accounted against /vol/mail as well, making the usage 3859904+108944 = 3968848 KB.
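If it helps to see that bookkeeping spelled out, here is a minimal sketch of how I believe df derives those figures (my own rough model, not NetApp's code; the function name and the all-in-KB convention are mine). It reproduces the second df above:

    def df_lines(total_kb, reserve_kb, active_used_kb, snap_used_kb):
        # The reserve is carved out of the volume, so df reports the
        # remainder as the filesystem size.
        fs_size_kb = total_kb - reserve_kb
        # Snapshot usage beyond the reserve is charged to the active
        # filesystem, so its line reaches 100% just as allocations fail.
        excess_kb = max(0, snap_used_kb - reserve_kb)
        fs_used_kb = active_used_kb + excess_kb
        fs_avail_kb = max(0, fs_size_kb - fs_used_kb)
        snap_avail_kb = max(0, reserve_kb - snap_used_kb)
        return ((fs_size_kb, fs_used_kb, fs_avail_kb),
                (reserve_kb, snap_used_kb, snap_avail_kb))

    # Volume total = 31922552 + 15022372 KB, reserve lowered to 15022372 KB:
    print(df_lines(31922552 + 15022372, 15022372, 3859904, 15131316))
    # -> ((31922552, 3968848, 27953704), (15022372, 15131316, 0))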
Chris Thompson                University of Cambridge Computing Service,
Email: cet1@ucs.cam.ac.uk     New Museums Site, Cambridge CB2 3QG,
Phone: +44 1223 334715        United Kingdom.
Think about snapshots this way. Suppose there were no snap reserve at all. Whenever you delete a file that is also in a snapshot, you get no disk space back, because the data blocks occupied by the file cannot be freed until all snapshots that contain the file are also gone. Overwriting a file that is in a snapshot works similarly: the new version of the file consumes new disk space while the old version remains frozen in a snapshot. So overwriting the file consumes space while giving none back.
This isn't the sort of behavior that folks are used to, so NetApp invented the snap reserve. This is just a bookkeeping trick. Whenever you delete a file that is still in a snapshot, the space is subtracted from the snap reserve and added to the filesystem free space. So it just looks like you got some disk space back, but you really didn't. A similar thing happens when you overwrite a file.
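For example (round numbers, purely illustrative, not taken from the df output above): delete a 1 GB file that is still held in a snapshot. No blocks are actually freed, but df moves that 1 GB out of the snap reserve's free space and into the active filesystem's free space, so the volume looks 1 GB emptier even though total disk usage has not changed.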
You are correct that the actual size of a volume is the reported size of the volume plus the size of the snap reserve. The filer will not let you exceed the filesystem size limit. So if the filesystem fills, but the snap reserve still has free space, you can simply decrease the snap reserve to "enlarge" the filesystem.
If you want to guarantee that you will always be able to use all of your space, then set the snap reserve to 0%. If you do this, however, then if the volume fills up, ordinary users may not be able to free up any disk space by deleting files. Getting disk space back then requires deleting snapshots, which only a sysadmin can do.
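For example, to hand the whole reserve back to the active filesystem on the original poster's vol0 (an illustrative session only; the myfiler> prompt is borrowed from the example above):

    myfiler> snap reserve vol0 0
    myfiler> df /vol/vol0

After that, the /vol/vol0 line should report the full 158881536 KB, and any snapshot usage is charged directly against it.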
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support