I'm guessing that removing dedupe sped up access because of the bug (and/or the usual performance degradation from hitting the same blocks multiple times with dedupe)? With some of the dedupe improvements rumored for 7.3, I'd expect that to improve.
From: Darren Sykes [mailto:Darren.Sykes@csr.com]
Sent: Tuesday, October 14, 2008 9:45 AM
To: Glenn Walker; Page, Jeremy; toasters@mathworks.com
Subject: RE: Snapvault slow on one specific volume?
Storage VMotion - you'd hope (without breaking any NDAs) that they would address that in the next version, and possibly give you the option to specify thin or fat disks explicitly.
Out of interest: I removed dedupe on our templates volume, and a VM provisioning job that took 16 minutes yesterday took less than 5 minutes today.
Darren.
From: Glenn Walker [mailto:ggwalker@mindspring.com]
Sent: 14 October 2008 13:43
To: Darren Sykes; Page, Jeremy; toasters@mathworks.com
Subject: RE: Snapvault slow on one specific volume?
FC and iSCSI do mean fat VMDKs, unless you create them manually and specify thin provisioning (not typical). The Storage VMotion info is good to know; I hope they get that fixed soon.
From: Darren Sykes [mailto:Darren.Sykes@csr.com]
Sent: Tuesday, October 14, 2008 3:00 AM
To: Glenn Walker; Page, Jeremy; toasters@mathworks.com
Subject: RE: Snapvault slow on one specific volume?
Glenn,
From: Glenn Walker [mailto:ggwalker@mindspring.com]
Sent: Tue 10/14/2008 03:23
To: Darren Sykes; Page, Jeremy; toasters@mathworks.com
Subject: RE: Snapvault slow on one specific volume?
I was under the impression that ESX over NFS used thin-provisioned VMDKs by default (that's how it is in our environment, and all of the files appear as thin-provisioned). Would this then not be the same bug? With thin-provisioned VMDKs, the portion of the VMDK not yet allocated to the guest is treated as a sparse file, not a file filled with zeros (unless someone decided to perform a full format on the 'disk', perhaps?).
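To make the sparse-versus-zero-filled distinction concrete, here's a minimal sketch (mine, not anything from ESX or ONTAP, and assuming a POSIX filesystem that supports holes): it builds a "thin" file by leaving a hole and a "fat" file by writing explicit zeros, then compares how much space each actually allocates.

    import os

    SIZE = 100 * 1024 * 1024  # 100 MiB logical size

    # "Thin": seek past EOF and write one byte; the hole allocates no blocks.
    with open("thin.img", "wb") as f:
        f.seek(SIZE - 1)
        f.write(b"\0")

    # "Fat": write explicit zeros; every block gets really allocated.
    with open("fat.img", "wb") as f:
        f.write(b"\0" * SIZE)

    for name in ("thin.img", "fat.img"):
        st = os.stat(name)
        # st_blocks is counted in 512-byte units on POSIX systems
        print(f"{name}: logical {st.st_size} bytes, "
              f"allocated {st.st_blocks * 512} bytes")

Both files report the same logical size, but only the zero-filled one consumes real blocks; a guest-side full format writes zeros everywhere and effectively turns the first case into the second.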
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Darren Sykes
Sent: Monday, October 13, 2008 3:51 PM
To: Page, Jeremy; toasters@mathworks.com
Subject: RE: Snapvault slow on one specific volume?
Jeremy/All,
From: owner-toasters@mathworks.com on behalf of Page, Jeremy
Sent: Mon 10/13/2008 13:14
To: toasters@mathworks.com
Subject: Snapvault slow on one specific volume?
I have an aggr with two volumes on it. One is a 3.5 TB CIFS/NFS share that is reasonably fast to snapvault; the other is a 1 TB NFS share (ESX VMs) that is exceptionally slow, as in it has been running its initial copy for over a week and still has not finished. NDMP backups of this volume are also quite slow. Does anyone know why it would be so much slower than the other volume, using the same spindles? The filer is not under extreme load, although occasionally it's pretty busy. Here is a “normal” sysstat:
 CPU  Total    Net kB/s     Disk kB/s    Tape kB/s  Cache Cache    CP  CP  Disk
      ops/s    in     out   read  write  read write   age   hit  time  ty  util
 12%   1007   2970   8996  15769   9117     0     0     8   93%   49%   :   41%
 18%    920   2792   6510  11715   6924     0     0     8   99%   84%   T   39%
 15%   1276   3580  10469  15942   8041     0     0    10   92%   33%   T   36%
 13%   1487   3416  11347  15632   4907     0     0    11   89%   42%   :   43%
 17%   1417   3180   9890  14000   9444     0     0     9   98%   79%   T   41%
 13%    972   3704   9705  15427   9934     0     0     7   92%   46%   T   51%
 18%   1087   2947  11911  17717   4640     0     0     9   98%   33%   T   47%
 11%   1204   3358  11219  14090   5159     0     0     7   88%  100%   :   50%
 12%   1161   2808   9085  12640   5936     0     0     9   90%   33%   T   44%
 13%    981   4735  11919  16125   7097     0     0     9   92%   45%   :   43%
 15%   1158   5780  12480  17565   8266     0     0    10   92%   88%   T   41%
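For what it's worth, those columns look like the standard sysstat -u layout (CPU, total ops/s, net kB/s in/out, disk kB/s read/write, tape kB/s read/write, cache age, cache hit, CP time, CP type, disk util). A quick throwaway sketch, assuming that layout, for averaging the interesting columns:

    # Rows pasted from the sysstat output above (two shown here for brevity).
    ROWS = """
     12%   1007   2970   8996  15769   9117     0     0     8   93%   49%   :   41%
     18%    920   2792   6510  11715   6924     0     0     8   99%   84%   T   39%
    """.strip().splitlines()

    def parse(row):
        f = row.split()
        pct = lambda s: int(s.rstrip("%"))
        return {
            "cpu": pct(f[0]),         # CPU busy %
            "ops": int(f[1]),         # total ops/s
            "disk_read": int(f[4]),   # disk kB/s read
            "disk_write": int(f[5]),  # disk kB/s write
            "cache_hit": pct(f[9]),   # cache hit %
            "disk_util": pct(f[12]),  # busiest-disk utilization %
        }

    stats = [parse(r) for r in ROWS]
    avg = lambda key: sum(s[key] for s in stats) / len(stats)
    print(f"avg disk util {avg('disk_util'):.0f}%, "
          f"avg cache hit {avg('cache_hit'):.0f}%")

Over the full set, disk utilization sits in the roughly 36-51% range with cache hit mostly in the 90s, which matches Jeremy's observation that the filer isn't under extreme load; the slowness on the one volume doesn't look like simple spindle saturation.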