Vaughn:
NetApp covers this technology in their VMware Storage Best Practices paper, TR3428.
You're way too modest....you WROTE that TR......
:)
Glenn @ Voyant
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of M. Vaughn Stewart
Sent: Wednesday, July 11, 2007 10:53 PM
To: Scott Lowe
Cc: Davies,Matt; toasters@mathworks.com
Subject: Re: NFS vs. iSCSI for VMware (was "Re: List still active?")
Scott,
Your comment about needing 2x the space for a NetApp LUN is a bit misleading and out of date.
NetApp LUNs do NOT require any additional space, i.e. 50 GB = 50 GB.
If you wanted to take a snapshot of that LUN, you had to ensure that you had additional capacity in case the volume became full, in which case the snapshot function would be disabled until manual intervention deleted old snapshots. In this design NetApp did not differentiate between the value of production data and snapshot data, so a reserve equaling 100% of the LUN size was required, i.e. 50 GB = 50 GB + 50 GB + snapshot space.
This design seemed inefficient when compared to snap reserves with NAS data, but when compared to other SAN vendors and their on-disk backup mechanisms, it was extremely space efficient. I won't go into details here, other than to say that enterprise-class on-disk backups with traditional arrays require 100% overhead for each individual backup.
Today NetApp offers a way to say that production data is more valuable than snapshots. They provide two dynamic space management policies: automatic volume grow and automatic snapshot delete. With these two policies you can eliminate any additional LUN overhead, and should a volume run out of space, the appropriate policy will be enforced to ensure that production data remains online. Now 50 GB with snapshots = 50 GB + snapshot space only!
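From memory, the knobs in question look roughly like this on a 7.2 box (the volume name and sizes below are just placeholders; see the TR for the recommended combination of settings):

  vol options vmvol fractional_reserve 0     # drop the 100% overwrite reserve
  vol autosize vmvol -m 120g -i 10g on       # let the volume grow on demand
  snap autodelete vmvol on                   # prune old snapshots when space runs low
  vol options vmvol try_first volume_grow    # grow the volume before deleting snapshots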
NetApp covers this technology in their VMware Storage Best Practices paper, TR3428.
Vaughn
Scott Lowe wrote:
The #1 complaint I have about using iSCSI relates to how LUNs are handled in the NetApp world:
- You have to (by default) allocate 2x + deltas for LUNs. For a 100GB LUN, you'd need 240GB of space: the 100GB LUN, a matching 100GB overwrite reserve, plus the default 20% Snapshot reserve on that 200GB (by default--I know there are workarounds with ONTAP 7.2 and later). With an NFS mount (which is nothing more than exporting a FlexVol), you only need to account for the 20% Snapshot reserve.
- You can't resize iSCSI LUNs. With NFS on a FlexVol, you can resize to your heart's content because WAFL is controlling the filesystem--not the host.
- It's open, meaning that your VMDKs aren't locked into the proprietary VMFS file system. This could potentially simplify backups and restores.
As Glenn @ Voyant already mentioned, you also gain thin provisioned disks by default and more knowledge/history/experience with NFS than with iSCSI.
Thanks,
Scott Lowe
ePlus Technology, Inc.
slowe@eplus.com
On Jul 11, 2007, at 9:34 AM, Davies,Matt wrote:
Scott,
Any chance you could expand on the advantages of NFS over iSCSI? NFS isn't an area I have any experience of....
Cheers
Matt
-----Original Message-----
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Scott Lowe
Sent: 11 July 2007 14:05
To: Forest Leonard; toasters@mathworks.com
Subject: Re: List still active?
Forest,
When it comes time to configure VMkernel for VMotion, then I'd definitely recommend keeping it separate from the virtual machine network. As it stands right now, you don't even need a VMkernel NIC configured, because it sounds like you are using QLogic iSCSI HBAs and only have a single ESX Server. Since the QLogic cards handle the iSCSI traffic and there is no VMotion, there's no current need for a VMkernel NIC (unless you want to use NFS from the FAS to provide additional storage for VMs--which, by the way, works pretty well and has some nice advantages over iSCSI, IMHO).
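If you do go the NFS route later, getting a datastore mounted from the FAS is pretty painless from the service console; roughly something like this (the port group name, IP, filer name, and export path below are just examples):

  esxcfg-vswitch -A "VMkernel" vSwitch0                         # port group for the VMkernel NIC
  esxcfg-vmknic -a -i 192.168.1.20 -n 255.255.255.0 "VMkernel"  # VMkernel NIC for NFS traffic
  esxcfg-nas -a -o filer1 -s /vol/vm_nfs nfs_datastore1         # mount the FlexVol as a datastore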
Regards,
Scott Lowe
ePlus Technology, Inc.
slowe@eplus.com
On Jul 11, 2007, at 8:33 AM, Forest Leonard wrote:
Interesting stuff. I only have one ESX server, so I haven't gotten to the VMotion configuration yet. That should be later this year.
I just configured an RDM to run a virtual server on. I found an article saying you want to create the type as NTFS if it is an RDM for a Windows host. Not sure if I am going to use this going forward. I actually don't know if you can migrate into an RDM. It doesn't look like it.
I am only using 2 NICs on my VMware server, and a QLogic card for my iSCSI access. I may need to look at bulking up my NIC configuration. It looks like I may gain some performance if I separate my VMkernel from my Virtual Machine network?
Thanks, Forest
-----Original Message-----
From: Davies,Matt [mailto:MDAVIES@generalatlantic.com]
Sent: Wednesday, July 11, 2007 8:18 AM
To: Forest Leonard; ggwalker@mindspring.com; toasters@mathworks.com
Subject: RE: List still active?
Sounds like we are both at the same stage.
We are not using RDMs yet, however when it comes to Exchange we will have to, or I may just stick with using the Microsoft iSCSI initiator from within the VM. Not exactly supported, but I know of other people doing it.
We have migrated 8 machines so far, into just one datastore, and have not seen any performance problems at all, although most of the machines have very low IO requirements.
We are using a script to snapshot and then replicate using SnapMirror, and it works very well. However, our Virtual Centre server is also a VM, which was causing a few problems with the snapshots on the VMware side not being removed, but moving it to a separate datastore seems to have cured the problems; even SQL doesn't seem to have a problem.
The script is the one written by Evan Battle, which is in the newest NetApp docs on VMware. I did have a few problems with ssh to the filer, but we are now using rsh and it seems to be OK.
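In outline it just quiesces each VM with a VMware-level snapshot, takes a snapshot on the filer, kicks off the mirror, and then cleans up; boiled down it is something like the following (the paths, volume, and filer names here are made up, not the exact script):

  vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx createsnapshot backup nightly 1 0  # VMware snapshot (quiesce, no memory)
  rsh filer1 snap create vmware_vol vm_backup                                        # filer-side snapshot
  rsh filer2 snapmirror update vmware_vol_mirror                                     # update the mirror (run on the destination filer)
  vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx removesnapshots                    # commit/remove the VMware snapshot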
I don't know how you have set up your virtual switches on the ESX side, but I got some best practice information out of VMware on that subject.
Each of our ESX IBM 3550 host servers has 6 NICs, connected as follows:
2 NICs for Service Console and VMkernel (VMotion), load balanced using virtual port ID
2 NICs for the Virtual Machine network, load balanced using virtual port ID
2 NICs for iSCSI (Service Console and VMkernel), load balanced using IP hash
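If it helps, the vSwitch and uplink side of that can be scripted from the service console with something like the following (switch and vmnic numbers are just examples; the actual load-balancing policy we set per vSwitch in the VI Client):

  esxcfg-vswitch -a vSwitch1               # vSwitch for the VM network
  esxcfg-vswitch -L vmnic2 vSwitch1        # first uplink
  esxcfg-vswitch -L vmnic3 vSwitch1        # second uplink
  esxcfg-vswitch -A "VM Network" vSwitch1  # port group the guests attach to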
Hope this helps....
-----Original Message-----
From: Forest Leonard [mailto:fleonard@rvigroup.com]
Sent: 11 July 2007 11:06
To: Davies,Matt; ggwalker@mindspring.com; toasters@mathworks.com
Subject: RE: List still active?
Hey Matt... I am actually doing the exact same thing on a FAS 270. I have migrated about 7 servers so far.
Are you using RDMs (raw device mappings) for the virtual machines? I actually just created two 200GB LUNs on the NetApp to use as datastores and have not had any performance issues.
Just wondering what your experience with RDMs is. I added one into a virtual machine; it just lets you map a LUN directly to a virtual machine.
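From what I have read, creating the mapping file by hand from the service console looks roughly like this (the device path and file name are just examples; use whatever your LUN shows up as):

  vmkfstools -r /vmfs/devices/disks/vmhba1:0:1:0 /vmfs/volumes/datastore1/vm1/vm1_rdm.vmdk

You then add that .vmdk to the VM as an existing disk.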
Has anyone out there used the NetApp script to capture a VM snapshot?
Thanks, Forest
From: owner-toasters@mathworks.com on behalf of Davies,Matt
Sent: Wed 7/11/2007 1:00 AM
To: ggwalker@mindspring.com; toasters@mathworks.com
Subject: RE: List still active?
Still working by the looks of things.
Busy in the process of migrating all our physical servers to VMs, stored on an iSCSI LUN on a FAS 270.
For those that want to know, we are using the software iSCSI initiator within ESX and have not had any problems so far.
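For anyone curious, turning it on from the service console was just a couple of commands (names from memory, so double-check them; the target discovery and CHAP settings we did in the VI Client):

  esxcfg-swiscsi -e                  # enable the software iSCSI initiator
  esxcfg-firewall -e swISCSIClient   # open the outbound iSCSI port in the ESX firewall
  esxcfg-swiscsi -q                  # confirm it is enabled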
cheers
matt
-----Original Message-----
From: owner-toasters@mathworks.com
To: toasters@mathworks.com
Sent: Wed Jul 11 01:03:49 2007
Subject: List still active?
I've noticed that I'm still subscribed, but have received no email since July 4th. Everyone didn't trade their NetApp gear for something else while I was out of town, did they?? :)
Wait for the next version; we have a ton of new and/or enhanced info...
Any of you attending VMworld 07?
V