FC and iSCSI do mean fat VMDKs, unless you create them manually and specify thin provisioning (not typical).  The Storage VMotion info is good to know – I hope they get that fixed soon.
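For reference, this is roughly the manual route mentioned above – on a VMFS (FC/iSCSI) datastore you have to ask for thin explicitly with vmkfstools. A sketch only; the datastore path, VM name and size below are placeholders:

    # Create a 20 GB thin-provisioned VMDK on a VMFS datastore (paths are examples)
    vmkfstools -c 20G -d thin /vmfs/volumes/fc_datastore1/myvm/myvm.vmdk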
Thanks for the additional info – it’s something for us to watch out for.  We went NFS from the start (and performed P2V and V2V into the NFS-based datastores), but I know that SVMotion has been used, and templates as well.  I’ll try to check our use of templates a bit later today…
Glenn
From: Darren Sykes [mailto:Darren.Sykes@csr.com]
Sent: Tuesday, October 14, 2008 3:00 AM
To: Glenn Walker; Page, Jeremy; toasters@mathworks.com
Subject: RE: Snapvault slow on one specific volume?
Glenn,
 
That's true; by default, all new VMs created on an NFS volume are thin provisioned. I'm not sure if that's the case for templates, though (I thought they were deliberately created fat for performance reasons when deploying them).
 
Also, we migrated from FC and iSCSI LUNs (which is basically a file copy), so most of our VMs are fat anyway. From what I understand, SVMotion also results in a fat (thick) VMDK file, though that's not officially supported on NFS in ESX 3.5.
 
So, in summary, there are a few reasons why you might end up with non-thin-provisioned VMs on NFS and may therefore hit this bug.
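If it helps, one rough way to spot the fat ones on an NFS datastore is to compare a flat VMDK's advertised size against the space it actually occupies – a thin/sparse file consumes far less than it claims, a fat one roughly the full amount. A sketch only; the mount point and VM name are placeholders, run from a host that mounts the NFS export:

    ls -lh /mnt/nfs_datastore1/myvm/myvm-flat.vmdk    # advertised size
    du -sh /mnt/nfs_datastore1/myvm/myvm-flat.vmdk    # space actually consumed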
 
Darren
 
From: Glenn
Walker [mailto:ggwalker@mindspring.com]
Sent: Tue 10/14/2008 03:23
To: Darren Sykes; Page, Jeremy;
toasters@mathworks.com
Subject: RE: Snapvault slow on one
specific volume?
I was under the impression that ESX over NFS used thin-provisioned VMDKs by default (that’s how it is in our environment, and all of the files appear as thin provisioned).  Would this then not be the same bug?  Thin-provisioned VMDKs mean that the portion of the VMDK not yet allocated to the guest is treated as a sparse file, not a file filled with zeros.  (unless someone decided to perform a full format on the ‘disk’, perhaps?)
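As a quick generic illustration of the sparse-file point (plain Unix, nothing ESX-specific, file names are arbitrary): a sparse file advertises a large size but consumes almost no blocks until data is actually written, whereas a zero-filled file consumes every block up front.

    # 1 GB sparse file: large apparent size, almost no blocks allocated
    dd if=/dev/zero of=sparse.img bs=1M seek=1024 count=0
    # 1 GB zero-filled file: every block written and allocated
    dd if=/dev/zero of=zeros.img bs=1M count=1024
    ls -ls sparse.img zeros.img    # first column shows blocks actually used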
 
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Darren Sykes
Sent: Monday, October 13, 2008 3:51 PM
To: Page, Jeremy; toasters@mathworks.com
Subject: RE: Snapvault slow on one specific volume?
 
Jeremy/All,
 
Following on from our conversation offline: 
 
It would seem you (and I) have been suffering from the bug
described here: http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=281669
 
We saw it on template volumes. I'm planning to disable ASIS
on that volume to attempt to speed up access. 
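For anyone following along, the ASIS change described here is just the 7-mode sis commands – something along these lines, with the volume name as a placeholder. Note that, as far as I recall, turning sis off only stops future dedup runs; blocks that are already shared stay shared unless the dedup is undone:

    sis status /vol/vm_templates    # confirm dedup (ASIS) is enabled on the volume
    sis off /vol/vm_templates       # stop scheduled and new dedup runs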
 
Obviously, that solution may be less than useful in your environment, where it's the live data volumes that benefit from ASIS.
 
Darren
 
 
 
 
From: owner-toasters@mathworks.com on behalf of Page, Jeremy
Sent: Mon 10/13/2008 13:14
To: toasters@mathworks.com
Subject: Snapvault slow on one specific volume?
I have an aggr with two volumes on it. One of them is a 3.5 TB CIFS/NFS share that is reasonably fast to snapvault; the other is a 1 TB NFS share (ESX VMs) that is exceptionally slow – as in, its initial copy has been running for over a week and still has not finished. NDMP backups of this volume are also quite slow. Does anyone know why it would be so much slower than the other volume using the same spindles? The filer is not under extreme load, although occasionally it's pretty busy. Here is a "normal" sysstat:
 
CPU   Total     Net kB/s      Disk kB/s     Tape kB/s  Cache  Cache   CP   CP  Disk
      ops/s    in    out    read   write   read write    age    hit  time  ty  util
12%    1007  2970   8996   15769    9117      0     0      8    93%   49%   :   41%
18%     920  2792   6510   11715    6924      0     0      8    99%   84%   T   39%
15%    1276  3580  10469   15942    8041      0     0     10    92%   33%   T   36%
13%    1487  3416  11347   15632    4907      0     0     11    89%   42%   :   43%
17%    1417  3180   9890   14000    9444      0     0      9    98%   79%   T   41%
13%     972  3704   9705   15427    9934      0     0      7    92%   46%   T   51%
18%    1087  2947  11911   17717    4640      0     0      9    98%   33%   T   47%
11%    1204  3358  11219   14090    5159      0     0      7    88%  100%   :   50%
12%    1161  2808   9085   12640    5936      0     0      9    90%   33%   T   44%
13%     981  4735  11919   16125    7097      0     0      9    92%   45%   :   43%
15%    1158  5780  12480   17565    8266      0     0     10    92%   88%   T   41%
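(For what it's worth, something like the extended form below – flag from memory – would also break the ops out per protocol, which might show whether the slow volume's NFS traffic is where the time is going:

    sysstat -x 1    # extended view: per-protocol ops plus CP and disk utilisation per interval
)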
 
I'm just having difficulty determining why two volumes on the same spindles would be so different in the time it takes to do their initial transfer. Also, the VMs do not seem slower than those hosted on other aggregates (this one is 3 RGs of 11 disks each, ONTAP 7.2.4 on a 3070A, IBM-rebranded).
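A couple of commands that might help narrow the comparison between the two volumes (a sketch only – the volume name is a placeholder and the flags are from memory):

    snapvault status -l    # per-relationship detail: lag, state, transfer progress
    df -s /vol/vm_nfs      # dedup (ASIS) savings on the slow volume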
 