What would you do? ESX 3.5 running all Windows servers; FCP with all Fibre Channel drives. We have an existing 6030 with 1 aggr and 1 large volume holding ~5 LUNs of 500 GB apiece, running Data ONTAP 7.2.4 (upgrading soon, so no dedupe running yet). We have a new HA 6040 (PAM cards) =). The existing volume on the 6030 is still at 100 percent fractional reserve.
We have SMVI running in our test environment, but not in prod yet. We wanted to snapmirror from the 6030 to the 6040 and vice versa. The 6040 is running 7.3.1.1, so I can only go from the 6030 to the 6040 for now. No VMware on the 6040 yet. Question: I have 3 shelves for each head (6 shelves of 300 GB drives total for the 6040) and 1 big aggr on the 6030: an 11.5 TB aggr with 4.97 TB used in the volume.
I was thinking of having the volumes for the 6040 at 750 GB with 1 LUN each. I am averaging ~20 Windows boxes per datastore. I figured when SMVI runs, it is only 1 snapmirror update, versus running the job with all the LUNs in 1 big aggr and multiple snapmirrors happening. I will set fractional reserve to 50% on the 6040, with thin provisioning at the LUN level, and dedupe.
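For reference, one of those per-datastore volumes would look roughly like this in 7-mode CLI (volume, LUN, aggregate, and filer names below are made up, and exact syntax can vary by DOT release, so check the man pages for your version):

    vol create vm_ds01 aggr0 750g
    vol options vm_ds01 fractional_reserve 50
    lun create -s 500g -t vmware -o noreserve /vol/vm_ds01/vm_ds01.lun
    sis on /vol/vm_ds01

and then one relationship per volume, initialized from the destination, e.g. snapmirror initialize -S filer6030:vm_ds01 filer6040:vm_ds01 — which is what makes each SMVI run a single snapmirror update for that datastore.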
Pro: I think it's more granular, and having to find a location for 6 TB in one piece is a bit difficult.
Con: Not as much dedupe as having all VMs in 1 volume.
Thoughts, experiences, gotchas?
Have you considered using NFS rather than FC?
It would allow you to have larger volumes (as you're not constrained by the same VMs-per-datastore limit imposed by ESX SCSI locking on block-based devices), you'd no longer need fractional reserve, and you'd get better dedupe savings.
If that's not a possibility, you'll have to consider whether 20 VMs per datastore is ideal; when I last used FC, best practice was fewer than that.
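If NFS is an option, the moving parts are small; a rough sketch (filer, volume, and host names here are made up):

    exportfs -p rw=esx01,root=esx01 /vol/vm_nfs01        # on the filer
    esxcfg-nas -a -o filer1 -s /vol/vm_nfs01 vm_nfs01    # on the ESX 3.5 host

The volume can then be grown or shrunk with vol size as needed, and dedupe savings show up directly as free space in the datastore.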
Darren
-----Original Message----- From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of steve klise Sent: 01 June 2009 21:39 To: toasters@mathworks.com Subject: ESX volume design question
He mentioned that he is running SMVI; I am not sure SMVI supports NFS yet. But I highly recommend NFS on NetApp for VMware. Make sure you read the latest TR from NetApp about optimal configuration of VMware on NetApp, and also make sure you upgrade to ESX 3.5 U3 or later.
SMVI supports NFS.
Stetson M. Webster
Professional Services Consultant
NCIE-SAN, NCIE-B&R, SNIA-SCSN-E
NetApp Global Services - Southeast District
919.250.0052 Mobile
Stetson.Webster@netapp.com
Learn how: netapp.com/guarantee
-----Original Message----- From: Jack Lyons [mailto:jack1729@gmail.com] Sent: Tuesday, June 02, 2009 7:08 AM To: Darren Sykes Cc: steve klise; toasters@mathworks.com Subject: Re: ESX volume design question
It does indeed (we use it).
You're probably thinking of Protection Manager, which doesn't support it yet.
-----Original Message----- From: Webster, Stetson [mailto:Stetson.Webster@netapp.com] Sent: 02 June 2009 14:24 To: Jack Lyons; Darren Sykes Cc: steve klise; toasters@mathworks.com Subject: RE: ESX volume design question
On 6/2/09 9:28 AM, Darren Sykes wrote:
It does indeed (we use it).
You're probably thinking of Protection Manager which doesn't yet.
Also, VMware doesn't support SRM on NFS datastores in ESX 3.5. I also hear that this won't be supported under vSphere 4 either (not enough QA cycles to certify). Just another gotcha making NFS a second-class VMware citizen, as far as I recall.
Still, VMware atop NFS is well worth it for the ease of use and diminished complexity ... :)
Cheers.
-- Nick Silkey
I think we may see it come out soon, though, and like you said, even if I can't use SRM I'll deal with it; I'd rather have NFS.
-----Original Message----- From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Nick Silkey Sent: Tuesday, June 02, 2009 11:52 AM To: Darren Sykes Cc: Webster, Stetson; Jack Lyons; steve klise; toasters@mathworks.com Subject: Re: ESX volume design question