What's a decent amount of free space in this regard?
80%? 85%? 90%?
Also are we talking about at the volume level, or aggregate level?
Just to clarify :)
Hadrian Baron
Network Engineer
VEGAS.com
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Stephen C. Losen
Sent: Friday, July 14, 2006 5:43 AM
To: margesimpson@hushmail.com
Cc: toasters@mathworks.com
Subject: Re: Defragmentation Question on Filer after few GB deletion.
Hi all: Could the tech gurus respond to this question? I am planning to clean up some space on the filer, say 10 GB.
Since the WAFL FS is intelligent about minimizing defragmentation issues, what happens when files are deleted from my volume? If I delete 10 GB out of 50 GB, will the FS still be fragmented, or does WAFL handle this automatically when files are deleted? Will I have to run defrag tools explicitly?
Thank you for your reply. Marge.
I don't think you will have any performance problems and I don't think that you will need to run any tools to reorganize the disk storage. Freeing disk space does not increase "fragmentation" in WAFL.
The way WAFL is designed, if you have a decent amount of free space then WAFL is able to write data efficiently and hence read it back later efficiently. If you have enough free blocks, then WAFL can localize a disk writing episode to a small number of RAID stripes, which is more efficient than scattering writes over many RAID stripes, because WAFL must update the parity for each RAID stripe that is changed. Note that each block in a RAID stripe is on a different disk, so writing a RAID stripe writes to numerous disks in parallel.
Where you run into trouble is when you let an aggregate or traditional volume get nearly full. This means that free blocks are scarce and so WAFL must make do with whatever free blocks it can find. The free blocks may be scattered over many RAID stripes, which requires many more parity updates. This can also hurt read performance later when you want to read the data back (even though parity is not an issue when reading).
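To make the parity cost concrete, here is a toy Python sketch (not WAFL or ONTAP internals; the block and stripe sizes are invented) illustrating RAID-4-style parity and why the number of parity updates equals the number of distinct stripes a write touches:

from functools import reduce

BLOCK = 4  # toy block size in bytes; WAFL actually uses 4 KB blocks

def parity(data_blocks):
    # RAID-4-style parity: byte-wise XOR of every data block in the stripe
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_blocks))

print(parity([b"\x0f" * BLOCK, b"\xf0" * BLOCK, b"\xff" * BLOCK]).hex())
# -> "00000000" (0x0f ^ 0xf0 ^ 0xff == 0x00 for each byte)

def parity_updates(block_numbers, data_blocks_per_stripe):
    # Each distinct stripe touched by a write costs one parity update
    return len({n // data_blocks_per_stripe for n in block_numbers})

# 14 blocks localized into 2 stripes vs. the same 14 blocks scattered
# one per stripe (what happens when free blocks are scarce):
print(parity_updates(range(14), 7))                   # 2 parity updates
print(parity_updates([n * 7 for n in range(14)], 7))  # 14 parity updates

In the scattered case the same amount of data forces 14 parity updates instead of 2, which is exactly the overhead described above when an aggregate runs nearly full.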
Freeing up disk space should improve your write performance and not hurt your read performance.
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
Without testing I don't know what the right answer is, nor whether there is a point where performance decreases abruptly or it just tails off gradually. My gut feeling would be to try not to fill an aggregate over 90%.
As for where the free space is located, as I understand it all free blocks in an aggregate are "at large" and could be assigned to any flex volume in the aggregate. The "total size" of a volume simply reserves a certain number of free blocks, but not a specific set of free blocks. (You can over commit an aggregate and have flex volumes whose sizes add up to more than the size of the aggregate. In this case there is no guarantee that a volume can be filled to its total size because the aggregate could fill first. I am assuming that the aggregate is not over committed in this discussion.)
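A toy illustration of over-commitment (the volume names and sizes below are invented):

aggregate_size = 1000  # GB of usable space in the aggregate (hypothetical)
flexvol_sizes = {"vol_home": 400, "vol_mail": 400, "vol_web": 300}

promised = sum(flexvol_sizes.values())  # 1100 GB promised
if promised > aggregate_size:
    # Volumes may hit "aggregate full" before reaching their own total size
    print("over committed by", promised - aggregate_size, "GB")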
So when you run "df -A" and look at the free space in the aggregate, that shows you how much space has not been promised to any flex volume. If the aggregate has snapshot reserve space, then any unused snap reserve space is also free space.
If you run "df" to see the free space in a flex volume, then that shows how many free blocks in the aggregate are promised to this volume. And any free space in the volume's snap reserve is also promised to the volume.
Therefore you need to add up the free space from "df -A" and the free space from each flex volume in the aggregate, including free space in the snap reserves, to calculate the total free space in the aggregate. If this is at least 10% of the total size of the aggregate, then you are probably in good shape.
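A minimal sketch of that arithmetic, assuming the numbers have already been read off "df -A" and "df" (all names and sizes here are invented):

aggr_total = 2000      # GB, total size of the aggregate
aggr_free = 120        # unpromised free space reported by "df -A"
aggr_snap_free = 30    # unused space in the aggregate's snap reserve

# Free space promised to each flex volume, including unused snap reserve
vol_free = {"vol_home": 25, "vol_mail": 40, "vol_web": 10}

total_free = aggr_free + aggr_snap_free + sum(vol_free.values())  # 225 GB
print(f"{total_free / aggr_total:.0%} free")  # 11%, above the 10% rule of thumb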
This is kind of slick because one of your flex volumes can run at close to 100% full, but if there is sufficient free space in the aggregate then this does not hurt performance.
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
Please be aware of a few bugs:
- One affected a flexvol at 100% full even though space was still available in the aggregate.
- Another caused a volume not to be guaranteed because metadata overhead was added on top of the volume size (fixed by using the 10% overhead now).
Both only affected flexvols and are not a problem in the 7.0.4 code, if I recall.
The reason these are important is not so much the 'fragmentation' question as how counting blocks can get tricky, especially with non-guaranteed volumes.
Just be aware that you have to think about where the space is: it's not always necessary to have free space in the aggregate so long as the volumes have free space; likewise, if the volumes are full, it shouldn't affect performance so long as the aggregate has free space. But knowing how much free space is actually there is important too.
Glenn