Hi Edward,
It's a case of whether or not it makes sense...
Firstly, since WAFL always writes to new locations (a basic reason for NetApp's outstanding write performance - avoidance of seeks), any defragmentation effort would quickly be wasted. Filers tend to stay in a state of "partially fragmented" as NVRAM write gathering keeps files in large contiguous "chunks".
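To illustrate the write-anywhere idea (a toy model only, nothing to do with WAFL's real allocator or data structures): every overwrite lands on fresh blocks, so whatever layout a defrag pass produced is abandoned the next time the data changes.

# Toy write-anywhere allocator (illustration only, not WAFL).
# Overwrites never update blocks in place; the file's physical layout
# moves every time it is rewritten, so a one-off defrag doesn't stick.

class WriteAnywhereVolume:
    def __init__(self):
        self.next_free = 0      # simplistic "always take the next free blocks" allocator
        self.files = {}         # file name -> list of physical block numbers

    def write(self, name, num_blocks):
        """Write (or overwrite) a file; old blocks are never updated in place."""
        start = self.next_free
        self.next_free += num_blocks
        self.files[name] = list(range(start, start + num_blocks))

vol = WriteAnywhereVolume()
vol.write("db.dat", 4)       # lands on blocks 0-3
vol.write("log.txt", 2)      # lands on blocks 4-5
vol.write("db.dat", 4)       # overwrite: file now maps to blocks 6-9; 0-3 are no longer referenced
print(vol.files["db.dat"])   # [6, 7, 8, 9]

In the real thing it's the NVRAM write gathering that keeps those new locations in long contiguous runs, which is why filers settle at "partially fragmented" rather than getting steadily worse.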
Secondly, since blocks in snapshots are shared with the active filesystem and several versions of a file are sharing blocks, defragmenting becomes an impossibly complex task. Even if you treated the active filesystem as the base for gathering file segments you would need to rearrange the pointers in the snapshot metadata inodes. Defragmentation would be VERY slow.
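To make the pointer problem concrete (again just a sketch, not WAFL's actual metadata layout): once several snapshots reference the same physical block, relocating that block means finding and rewriting a pointer in every one of those frozen metadata trees.

# Sketch of shared block pointers (illustration only, not real inode structures).
# The active filesystem and each snapshot hold their own pointer to the same
# physical block, so relocating it means updating every referencing version.

active = {"report.doc": [42, 43, 44]}   # active filesystem: file -> block list
snap1  = {"report.doc": [42, 43, 44]}   # snapshot taken before any change
snap2  = {"report.doc": [42, 43, 99]}   # later snapshot; block 44 had been rewritten to 99

def references(block, *versions):
    """Count how many file versions point at a given physical block."""
    return sum(blocks.count(block) for table in versions for blocks in table.values())

# Moving block 42 to tidy up the active file means rewriting pointers in the
# active filesystem *and* in both snapshots that still share it.
print(references(42, active, snap1, snap2))   # 3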
Doing what you have done (copying the files off and then back on again) may improve read transfer rates for some large files for a short time. But it does little to nothing for transactional I/O, so for most applications there's no point. It also means you have created new versions of the files that don't share blocks with any of your snapshots, so you chew up a huge amount of extra disk space.
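Back-of-the-envelope on the space cost (hypothetical numbers, just to show the effect): the snapshots keep holding the old blocks while the rewritten file sits on brand-new ones, so the file is effectively stored twice until those snapshots roll off.

# Rough space accounting for "copy off, then copy back on" (hypothetical figures).
file_size_gb = 100

old_blocks_pinned_by_snapshots = file_size_gb   # old copy still referenced by the snapshots
new_blocks_in_active_fs        = file_size_gb   # new copy shares nothing with the snapshots

print(old_blocks_pinned_by_snapshots + new_blocks_in_active_fs)   # 200 GB consumed for a 100 GB file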
Not knocking your intentions; it's just that this discussion has been coming up for years now, and the answer hasn't changed. If it had (or if it really mattered), NetApp would have done something about it long ago.
regards,
Alan McLachlan
Senior Systems Engineer, Storage Management Solutions
ASI Solutions  www.asi.com.au
Ph +61 2 6230 1566  Fax +61 2 6230 5343  Mobile +61 428 655644
e-mail amclachlan@asi.com.au
-----Original Message-----
From: Edward Hibbert [mailto:EH@dataconnection.com]
Sent: Wednesday, 18 December 2002 10:36 PM
To: 'toasters@mathworks.com'
Subject: Defragmentation
I think it's common knowledge that once a NetApp gets above 90% full, performance starts to degrade quite noticeably.
I've been testing an application recently and observed that this is the case on ONTAP 5.2:
* At ~95% full, my performance test came in at ~4 seconds.
* After freeing up space to get to around 60% full, it came in at ~3.5 seconds.
* After copying my data files off and back on again to defragment them, it came in at 2.8 seconds.
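(For the curious: a crude sequential-read timer along the lines below gives the flavour of this kind of measurement. It isn't my actual test, and the mount path, file and block size are placeholders; you would also need to defeat client-side caching, e.g. by remounting between runs, for the numbers to be comparable.)

# Rough sequential-read timer (placeholder path and sizes, not the real test rig).
import time

TEST_FILE = "/mnt/filer/testdata.bin"   # placeholder NFS mount path
BLOCK = 64 * 1024                       # 64 KB reads

def time_sequential_read(path):
    start = time.time()
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass
    return time.time() - start

print("elapsed: %.1fs" % time_sequential_read(TEST_FILE))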
The only things I've been able to find about fragmentation are:
* Statements that WAFL tries hard to avoid fragmentation (which I'm sure it does) but can't always (which I'm sure of too).
* Statements that copying data off and copying it back again is the only way to resolve it.
The latter doesn't seem practical if the data in question is supposed to be available 24x7. Some applications have a way of doing this at the application level, but does anyone know whether there are methods, or future plans, for background, non-intrusive defragmentation at the filesystem level? Or is it just me who thinks this would be a good idea?
Edward Hibbert
Internet Applications Group, Data Connection Ltd
Tel: +44 131 662 1212  Fax: +44 131 662 1345
Email: eh@dataconnection.com
Web: http://www.dataconnection.com