I have a FAS270 with a volume named "san" containing one qtree and one LUN within the qtree.
There is only one aggregate, and it contains the volumes "san" and "vol0" (root).
The san volume is being synchronously snapmirrored to another FAS270. Here are the sizes of everything:
df -A
Aggregate               kbytes       used      avail capacity
aggr0               1253187072 1200347772   52839300      96%
aggr0/.snapshot              0          0          0     ---%
df
Filesystem              kbytes       used      avail capacity  Mounted on
/vol/vol0/            16777216     349732   16427484       2%  /vol/vol0/
/vol/vol0/.snapshot    4194304      64892    4129412       2%  /vol/vol0/.snapshot
/vol/san/           1119669456 1048001824   71667632      94%  /vol/san/
/vol/san/.snapshot    58929968     102176   58827792       0%  /vol/san/.snapshot
lun show
/vol/san/vmail/lun0 900.1g (966503301120) (r/w, online, mapped)
I got an autosupport email saying that the san volume had run out of space, and it was indeed 100% full.
I grew the san volume a little bit and I also reduced the snap reserve to 5%.
Now, as I watch the "df" output, the san volume continues to slowly consume space. As I understand it, the LUN is a fixed-length file, so it cannot be growing. I can only conclude that WAFL metadata in the volume must be growing, perhaps because the LUN is being populated with data, or because of the snapmirror, or both.
I'm concerned that I'll run out of space again in the volume, at which point I am just about out of options for enlarging it.
Does anyone know what's going on here?
While I have been writing this email, the space available in "san" has gone from 71667632 down to 71245556, and it has been dropping slowly and steadily for hours.
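A back-of-the-envelope runway estimate from the two "df" samples above (the six-hour observation window is an assumption, since the exact elapsed time isn't stated):

```python
# Rough time-to-full estimate from the two available-space samples above.
# ASSUMPTION: the drop happened over roughly 6 hours (not stated precisely).
start_avail_kb = 71667632
end_avail_kb = 71245556
hours_elapsed = 6.0  # assumed observation window

rate_kb_per_hour = (start_avail_kb - end_avail_kb) / hours_elapsed
hours_until_full = end_avail_kb / rate_kb_per_hour

print(round(rate_kb_per_hour))       # ~70346 KB/h at this assumed rate
print(round(hours_until_full / 24))  # ~42 days of runway if the rate holds
```

At a steady rate the numbers suggest weeks rather than hours of headroom, but the rate itself depends on how the LUN is being written.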
Will the shrinking level off before I run out of space again?
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
Check with "df -r" for reserved space. I am not sure whether synchronous SnapMirror is snapshot-based (the description is a bit vague), but if it is, then by default taking a snapshot of a volume containing a LUN reserves an amount of space equal to the changed data. The more data changes in the LUN, the more reserved space you need. In the worst case you need a 1.8TB volume for your 900GB LUN.
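The worst-case arithmetic behind that 1.8TB figure, as a sketch (the 100% fractional reserve is the assumed default here):

```python
# Worst case with the default fractional reserve of 100%: while a snapshot
# is held, the volume must hold the LUN itself plus an equal amount of
# reserved space so every block in the LUN could be overwritten safely.
lun_bytes = 966503301120          # 900.1g, from "lun show" above
fractional_reserve = 1.0          # assumed default: 100%

worst_case_bytes = lun_bytes * (1 + fractional_reserve)
print(worst_case_bytes)              # 1933006602240
print(round(worst_case_bytes / 2**40, 2))  # ~1.76 TiB, i.e. the ~1.8TB cited
```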
С уважением / With best regards / Mit freundlichen Grüßen
--- Andrey Borzenkov Senior system engineer
-----Original Message----- From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Stephen C. Losen Sent: Wednesday, January 02, 2008 9:53 PM To: toasters@mathworks.com Subject: Incredible shrinking volume
Hang on there. If you have a space-reserved LUN, then as soon as snapshot #1 is taken, ONTAP immediately reserves 100% of the LUN's space at that time. Additional snapshots do not change the LUN space reservation.
-----Original Message----- From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Borzenkov, Andrey Sent: Wednesday, January 02, 2008 3:57 PM To: Stephen C. Losen; toasters@mathworks.com Subject: RE: Incredible shrinking volume
I can't remember exactly when this changed (7.0?), but ONTAP now reserves only 100% of the space actually used within the LUN when the first snapshot is taken. However, as data changes, the space consumed by the snapshot grows, and if the LUN is slowly filling, the reserved space will slowly climb as well.
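A toy model of that behavior — the reserve tracks blocks actually used inside the LUN, not the LUN's full size, so volume free space shrinks even though the LUN file itself never grows (the used-block figure and fill rate below are made up for illustration):

```python
# Toy model: while a snapshot exists, the reserve is assumed equal to the
# blocks used inside the LUN (post-7.0 behavior described above), so
# available space in the volume falls as the LUN slowly fills.
volume_kb = 1119669456            # /vol/san/ size, from df above
lun_used_kb = 500_000_000         # hypothetical used blocks inside the LUN

for _ in range(3):
    reserve_kb = lun_used_kb      # reserve tracks used blocks, not LUN size
    avail_kb = volume_kb - lun_used_kb - reserve_kb
    print(f"used={lun_used_kb} reserve={reserve_kb} avail={avail_kb}")
    lun_used_kb += 10_000_000     # LUN slowly filling with new data
```

This matches the symptom in the original post: a fixed-length LUN, yet steadily shrinking free space in the volume.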
Thanks Oliver Bassett
-----Original Message----- From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Glenn Dekhayser Sent: Thursday, 3 January 2008 10:50 a.m. To: Borzenkov, Andrey; Stephen C. Losen; toasters@mathworks.com Subject: RE: Incredible shrinking volume
The information contained in this email is privileged and confidential and intended for the addressee only. If you are not the intended recipient, you are asked to respect that confidentiality and not disclose, copy or make use of its contents. If received in error you are asked to destroy this email and contact the sender immediately. Your assistance is appreciated.
When LUN space is consumed for the first time, data in the volume grows until all 900 GB of the LUN's space has been written. When you later free up room inside the LUN, the WAFL filesystem cannot know this, so all those 'freed' blocks remain in use in WAFL. However, if it is a Windows filesystem and you use SnapDrive 5.0, you can enable space reclamation: SnapDrive then scans your NTFS filesystem and tells WAFL which blocks can be freed. Then, and only then, does free space in the LUN become free space in the volume.
What Andrey says is correct. When you have only one snapshot in the volume, it is possible that all the data has changed since the snapshot was taken, so the snapshot itself can grow to 900 GB. That is indeed the worst-case scenario.
What I would try is simply disabling the snapshot reserve (set it to 0%). This is the best practice when using LUNs. Snapshots will still be taken, but they will just consume free space in the volume.
Grtz, Tom Uptime Belgium
-----Original Message----- From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Oliver Bassett Sent: Thursday, 3 January 2008 0:16 To: toasters@mathworks.com Subject: RE: Incredible shrinking volume
Thanks! -----Original Message----- From: "De Wit Tom (Consultant)" tom.de.wit@volvo.com To: "Oliver Bassett" Oliver.Bassett@infinity.co.nz, toasters@mathworks.com Date: Thu, 3 Jan 2008 08:15:58 +0100 Subject: RE: Incredible shrinking volume