I have a fas270 with a volume named "san" containing one qtree and one LUN within the qtree.
There is only one aggregate and it contains volumes "san" and "vol0" (root).
The san volume is being synchronously snapmirrored to another fas270. Here are the sizes of everything:
df -A
Aggregate               kbytes       used      avail capacity
aggr0               1253187072 1200347772   52839300      96%
aggr0/.snapshot              0          0          0     ---%
df
Filesystem                kbytes       used      avail capacity  Mounted on
/vol/vol0/              16777216     349732   16427484       2%  /vol/vol0/
/vol/vol0/.snapshot      4194304      64892    4129412       2%  /vol/vol0/.snapshot
/vol/san/             1119669456 1048001824   71667632      94%  /vol/san/
/vol/san/.snapshot      58929968     102176   58827792       0%  /vol/san/.snapshot
lun show
/vol/san/vmail/lun0 900.1g (966503301120) (r/w, online, mapped)
I got an autosupport email saying that the san volume had run out of space, and it was indeed 100% full.
I grew the san volume a little bit and I also reduced the snap reserve to 5%.
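For the record, the commands were along these lines (the +10g is only illustrative; the actual increment was a different small amount):

vol size san +10g
snap reserve san 5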
Now, as I watch the "df" output, the san volume continues to slowly consume space. As I understand it, the LUN is a fixed-length file, so it cannot be growing. I can only conclude that WAFL metadata files in the volume must be growing, perhaps as a result of the LUN being populated with data, or the snapmirror, or both.
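By my arithmetic the LUN file itself accounts for 966503301120 / 1024 = 943850880 kbytes, so roughly 104150944 kbytes (about 99 GB) of the 1048001824 kbytes used in the volume is something other than the LUN. If it would help, I can post the output of the following (as far as I know these show snapshot usage and space reservations, though I may be off on the details):

snap list san
df -r san
vol options san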
I'm concerned that I'll run out of space again in the volume, at which point I am just about out of options for enlarging it, since the aggregate only has about 52839300 kbytes (roughly 50 GB) free.
Does anyone know what's going on here?
While I have been writing this email, the space available in "san" has gone from 71667632 down to 71245556, and it has been dropping slowly and steadily for hours.
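That works out to 71667632 - 71245556 = 422076 kbytes, or roughly 412 MB.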
Will I hit bottom before running out of space again?
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support