I think it's general knowledge that once a NetApp gets above 90% full, performance starts to degrade quite noticeably.
I've been testing an application recently and observed exactly this on ONTAP 5.2:

* At ~95% full, my performance test measured ~4 seconds.
* After freeing up space to get to around 60% full, it measured ~3.5 seconds.
* After copying my data files off and copying them back to defragment them, it measured ~2.8 seconds.
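For what it's worth, a minimal probe of the kind I've been running can be sketched as a timed sequential read over the mount. The file name, size, and block size below are made up for illustration (a temp file stands in for a file on the NFS-mounted volume under test):

```shell
# Hypothetical sequential-read timing probe. On a real filer, TESTFILE
# would live on the NFS-mounted volume whose layout is being measured.
TESTFILE=$(mktemp)

# Create 1 MB of test data (16 x 64k blocks).
dd if=/dev/zero of="$TESTFILE" bs=64k count=16 2>/dev/null

# The measured operation: a sequential read of the whole file.
time dd if="$TESTFILE" of=/dev/null bs=64k 2>/dev/null

BYTES=$(wc -c < "$TESTFILE")   # sanity check: 16 * 65536 = 1048576
rm -f "$TESTFILE"
```

On a fragmented volume the read ends up scattered across the disks, which is presumably where the extra latency comes from.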
The only things I've been able to find about fragmentation are:

* Statements that WAFL tries hard to avoid fragmentation (which I'm sure it does) but can't always (which I'm sure of too).
* That copying data off and copying it back again is the only way to resolve it.
The latter solution doesn't seem practical when the data in question is supposed to be available 24x7. Some applications have a way of doing this at the application level, but does anyone know whether there are methods, or future plans, for background non-intrusive defragmentation at the filesystem level? Or is it just me who thinks this would be a good idea?
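For clarity, the copy-off/copy-back workaround I mean is just this (paths here are temp directories purely for illustration; on the real filer SRC would be the NFS-mounted application tree, and the application would have to be quiesced for the duration, which is exactly the 24x7 problem):

```shell
# Hypothetical copy-off/copy-back "defragmentation": rewriting the tree
# makes the filesystem allocate the blocks afresh, hopefully contiguously.
SRC=$(mktemp -d)       # stands in for the fragmented data tree
STAGE=$(mktemp -d)     # staging area with enough free space
echo "data" > "$SRC/file1"

cp -Rp "$SRC/." "$STAGE/"    # copy off, preserving times/permissions
rm -rf "${SRC:?}"/*          # free the (fragmented) blocks
cp -Rp "$STAGE/." "$SRC/"    # copy back; blocks are laid out anew
```

The data survives intact, but the window between the remove and the copy back is downtime, hence the wish for something that runs in the background.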
Edward Hibbert
Internet Applications Group
Data Connection Ltd
Tel: +44 131 662 1212
Fax: +44 131 662 1345
Email: eh@dataconnection.com
Web: http://www.dataconnection.com