What’s the inode count on each of those volumes?
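df -i on the filer will show inodes used and free per volume, and maxfiles shows the per-volume limit; a quick check would look something like this (the volume name is just a placeholder):

    df -i /vol/vm_vol
    maxfiles vm_vol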
From: owner-
Sent: Monday, October 13, 2008 5:15 AM
To: 
Subject: Snapvault slow on one specific volume?
I have an aggr with two volumes on it. One is a 3.5 TB CIFS/NFS share that is reasonably fast to snapvault; the other is a 1 TB NFS share (ESX VMs) that is exceptionally slow. As in, its initial copy has been running for over a week and still has not finished. NDMP backups of this volume are also quite slow. Does anyone know why it would be so much slower than the other volume using the same spindles? The filer is not under extreme load, although occasionally it’s pretty busy. Here is a “normal” sysstat:
 CPU   Total    Net kB/s     Disk kB/s    Tape kB/s  Cache Cache  CP   CP  Disk
       ops/s    in    out    read  write  read write   age   hit time  ty  util
 12%    1007  2970   8996   15769   9117     0     0     8   93%  49%   :   41%
 18%     920  2792   6510   11715   6924     0     0     8   99%  84%   T   39%
 15%    1276  3580  10469   15942   8041     0     0    10   92%  33%   T   36%
 13%    1487  3416  11347   15632   4907     0     0    11   89%  42%   :   43%
 17%    1417  3180   9890   14000   9444     0     0     9   98%  79%   T   41%
 13%     972  3704   9705   15427   9934     0     0     7   92%  46%   T   51%
 18%    1087  2947  11911   17717   4640     0     0     9   98%  33%   T   47%
 11%    1204  3358  11219   14090   5159     0     0     7   88% 100%   :   50%
 12%    1161  2808   9085   12640   5936     0     0     9   90%  33%   T   44%
 13%     981  4735  11919   16125   7097     0     0     9   92%  45%   :   43%
 15%    1158  5780  12480   17565   8266     0     0    10   92%  88%   T   41%
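For reference, that column layout comes from sysstat’s utilization output; an invocation along these lines reproduces the view (the one-second interval is just an example, not what was necessarily used above):

    sysstat -u 1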
I’m just having difficulty determining why two volumes on the same spindles would be so different in the time it takes to do their initial transfer. Also, the VMs do not seem any slower than those hosted on other aggregates (this one is 3 RGs of 11 disks each, ONTAP 7.2.4 on a 3070A, IBM rebranded).
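For what it’s worth, snapvault status -l on the secondary should show how far the baseline has gotten and the state of each relationship; the destination path below is just a placeholder:

    snapvault status -l /vol/sv_vm_vol/vm_qtree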