There’s also the schedulable ‘reallocate’ command that does the same thing.
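If memory serves, scheduling it looks something like the lines below; the schedule string is cron-style (minute, hour, day-of-month, day-of-week), and /vol/vol1 is just a stand-in for your own volume, so check the reallocate man page for your ONTAP release before trusting my syntax:

   filer> reallocate on
   filer> reallocate schedule -s "0 23 * 6" /vol/vol1    (23:00 every Saturday)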

 

It’s very useful, though there is some debate about exactly what happens once snapshots enter the picture.  Generally speaking, if there is adequate free space and the data in question is deemed ‘fragmented’ by the scanner, it will move it.
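You can also see what the scanner thinks before committing to anything.  Again from memory, and with /vol/vol1 standing in for a real volume:

   filer> reallocate measure /vol/vol1     (measure-only scan, no data moved)
   filer> reallocate status -v             (shows how optimized the layout is)
   filer> reallocate start /vol/vol1       (one-off reallocation if the numbers look bad)

If snapshot space growth is the worry, I’ve also heard of a physical reallocation option for start that plays nicer with snapshots, but check whether your release actually has it before relying on it.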

 

Not that it matters much, but adding disks to an existing RG vs. adding a new RG works out about the same:  when data is written within an RG, it is striped across the disks such that each disk gets a fixed number of contiguous blocks (64 or 128, depending on the version of ONTAP and disk type) before the write moves on to the next disk.  Parity is calculated per stripe.  When writing across multiple RGs, WAFL decides how much data to write to each RG (tetris, but not the game) and breaks up the writes per RG in this manner.  I believe this is in 64MB chunks.
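For what it’s worth, you can steer which way the expansion goes when you add the disks.  Roughly, with aggr0/rg0 as made-up names and the syntax from memory, so double-check the aggr man page:

   filer> aggr add aggr0 -g rg0 4       (grow an existing raid group by 4 disks)
   filer> aggr add aggr0 -g new 16      (put the new disks into a brand-new raid group)
   filer> sysconfig -r                  (shows how the raid groups ended up laid out)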

 

Either way, the data is written to whatever location offers the best contiguous free space.

 

Glenn

 


From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Glenn Dekhayser
Sent: Monday, January 22, 2007 3:22 PM
To: toasters@mathworks.com
Subject: RE: Aggregate expansion

 

All;

 

It was my understanding that you can alleviate the pain caused by an uneven expansion of disks, as mentioned below, by performing a ‘wafl scan reallocate’.  Anyone have info to the contrary?
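For reference, the invocation I had in mind is the advanced-privilege one below (from memory, /vol/vol1 is just an example, and advanced mode deserves the usual caution):

   filer> priv set advanced
   filer*> wafl scan reallocate /vol/vol1
   filer*> priv set admin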

 

Glenn

 


From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Fox, Adam
Sent: Monday, January 22, 2007 2:30 PM
To: Suresh Rajagopalan; toasters@mathworks.com
Subject: RE: Aggregate expansion

That depends greatly on the values of N and n, as well as the RAID group size of the aggregate.

Done according to best practices, there will be almost no difference.  Done poorly, it can make all the difference in the world.

 

My personal view (I'm in no position to speak officially for NetApp) on this is to add disks to an aggregate in one of two multiples:

 

1) a whole RG at a time

2) half a RG at a time.

 

This typically allows for a sufficient number of free disks such that you should not expect any noticeable performance difference.  I realize that not all sites can implement this, but let's look at a worst practice:

 

N = n+1

 

Fill up the aggregate, then add 1 disk.  Ouch!  This hurts!  So you've seen what I consider to be the best case for adding space, and you've seen the worst case.  So how close you are to these extremes should give you an idea of what to expect.
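To put rough numbers on it, assuming a raidsize of 16 purely for illustration (syntax from memory, aggr0 is a made-up name):

   filer> aggr options aggr0       (check the raidsize before expanding)
   filer> aggr add aggr0 16        (a whole RG at a time, the comfortable case)
   filer> aggr add aggr0 8         (half a RG at a time, still reasonable)
   filer> aggr add aggr0 1         (the N = n+1 case above; expect it to hurt)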

 

I know this isn't a simple answer, but I believe it to be accurate.

-- Adam Fox
adamfox@netapp.com

 

 


From: Suresh Rajagopalan [mailto:SRajagopalan@williamoneil.com]
Sent: Monday, January 22, 2007 12:46 PM
To: toasters@mathworks.com
Subject: Aggregate expansion

Is there any difference between creating an aggregate on a certain number of disks (say n), and then later expanding the aggregate to N disks, as opposed to creating the initial aggregate on N disks?

 

Suresh