Hi,
I'm trying to characterize the workload on our Netapp Filer running VMware over NFS and iSCSI for benchmarking purposes using IOmeter.
Is it possible to determine the percentage random/sequential distribution of the workload on a Filer, e.g. from a perfstat?
I've looked at vscsiStats but that is only per VM and not for the aggregate workload on the Filer.
-- View this message in context: http://network-appliance-toasters.10978.n7.nabble.com/Random-Sequential-meas... Sent from the Network Appliance - Toasters mailing list archive at Nabble.com.
statit will give it to you *per disk*
Hi Jeremy,
Thanks for the reply. Excuse my ignorance here, but I looked at the disk statistics from statit and I can't see how to determine the random/sequential workload distribution from the counters shown.
From further reading, it appears to me that if you have mixed workloads on VMware datastores, the aggregate workload on the array will be mainly random?
Martin
Statit will get you the AMALGAMATED workload to disks on writes, not individual IOs. There's nothing there.
On reads, sure, I can see that: the un-cached client read IO requests plus readahead. But it's still an amalgamated result, aggregate-wide, since you have no data-locality information to be very precise about it.
I am not sure that's really true. ALL writes are going to be "sequential" if they can be - the filer writes to NVRAM, not to disk directly. So AFAIK the writes are always as sequential as they can be*
Reads are cached like crazy, especially with PAM/FlashCache etc.
*Generally, when someone says they want to know this, they are looking for bottlenecks. I stand by statit as a tool for this, since you are seeing how each disk is working. You might not know exactly what's causing the issue, but it's good data for understanding what is going on at the aggregate level (which is what he is asking about) and the RG level (which is where the performance of an aggregate is going to be determined).
Disk Statistics (per second):
ut% is the percent of time the disk was busy.
xfers is the number of data-transfer commands issued per second (xfers = ureads + writes + cpreads + greads + gwrites).
chain is the average number of 4K blocks per command.
usecs is the average disk round-trip time per 4K block.
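For what it's worth, the chain column can serve as a rough sequentiality hint: long chains mean large (usually sequential-looking) transfers, chains near 1 suggest random 4K IO. A heuristic sketch only, not an official NetApp tool; the disk names, numbers and threshold below are made up:

```python
# Rough sketch: read sequentiality hints out of statit-style disk counters.
# The threshold and sample values are invented; adapt to your own perfstat
# capture, whose exact column layout differs by ONTAP version.

def avg_io_kb(chain):
    """chain = average number of 4K blocks per transfer command,
    so the average IO size is chain * 4 KB."""
    return chain * 4

def sequentiality_hint(chain, threshold_blocks=8):
    """Heuristic only: many 4K blocks per command usually means large
    (often sequential) transfers; a chain near 1 suggests random 4K IO."""
    return "mostly sequential-looking" if chain >= threshold_blocks else "mostly random-looking"

# Made-up sample rows: (disk, xfers/sec, chain)
for disk, xfers, chain in [("0a.16", 220.0, 12.4), ("0a.17", 310.0, 1.3)]:
    print(disk, avg_io_kb(chain), "KB avg IO,", sequentiality_hint(chain))
```

It's only a proxy, for the reasons Jeff gives below: the filer has already coalesced and readahead-expanded the client IO by the time it reaches the disks.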
*Jeremy Page* | Senior Technical Architect | *Gilbarco Veeder-Root, A Danaher Company*
:s/NVRAM/System Memory/g
Fixed it for ya.
:)
--
Gustatus Similis Pullus
Thanks for the info. It looks like I was looking in the wrong place and didn't recall that the Filer will try to make all I/O sequential to the disks (a basic WAFL principle).
From reading around, it looks like any time you mix sequential and random I/O, as in a mixed VMware environment, the aggregate I/O will be random.
What's odd with our environment is that the read/write I/O is 30/70, when most other people's VMware environments have the opposite ratio. I think this may be why our environment is heavy on the Filer's resources.
I've managed to create an appropriate load test with IOmeter to approximate our production VMware workload.
Thanks Martin
"From reading around it looks like any time you mix sequential and random I/O as in a mixed VMware environment the aggregate I/O will be random."
I would not say that.
For writes, it's very much not random at disk: the consistency point removes the randomness of the resulting write IOs going to disk.
For reads, each read is its own IO, generally. There is no way to see which protocol IO became which disk IO to tell precisely, but the mapping is far less hidden/removed for reads than for writes.
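The consistency-point point can be sketched in a few lines. This is a conceptual toy, not ONTAP code; the allocation policy and block numbers are invented purely to show why scattered client writes leave the CP as one sequential run:

```python
# Toy model: WAFL-style write allocation chooses *new* on-disk locations at
# consistency-point time, so the disks see a long ascending run of block
# numbers no matter how random the client's logical offsets were.

import random

def consistency_point(dirty_blocks, next_free_dbn):
    """Assign consecutive disk block numbers (dbns) to all dirty buffers,
    regardless of their logical (client-visible) offsets."""
    layout = []
    for logical in sorted(dirty_blocks):  # on-disk order is the filer's choice
        layout.append((logical, next_free_dbn))
        next_free_dbn += 1
    return layout

random.seed(0)
dirty = random.sample(range(1_000_000), 50)        # 50 scattered client writes
placed = consistency_point(dirty, next_free_dbn=8000)
dbns = [dbn for _, dbn in placed]
print(dbns == list(range(8000, 8050)))             # the disk writes are sequential
```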
No. NVRAM is not in the actual path of a write IO to disk.
Writes go to system RAM and a copy is made to NVRAM; when the allocated system RAM space is half full, or the timer goes off, the writes are made to disk and the NVRAM copy is discarded.
NVRAM is there, but it is not part of the IO path to disk. It sits beside it, unused unless the system crashes before the RAM blocks are fully committed to disk.
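That write path can be sketched as a toy model. Everything here is an assumption for illustration (buffer counts, trigger logic, names are mine); real ONTAP is far more involved:

```python
# Toy model of the point above: the write path is client -> system RAM ->
# disk at the next consistency point. NVRAM only holds a recovery copy,
# beside the path, and it is discarded once the CP completes.

class ToyFiler:
    def __init__(self, ram_limit=4):
        self.ram = []            # dirty buffers awaiting the next CP
        self.nvram_log = []      # recovery copy, *beside* the write path
        self.disk = []
        self.ram_limit = ram_limit

    def write(self, block):
        self.ram.append(block)
        self.nvram_log.append(block)                # logged for crash recovery only
        # the ack to the client happens here, before any disk IO
        if len(self.ram) >= self.ram_limit // 2:    # the "half full" trigger
            self.consistency_point()

    def consistency_point(self):                    # also fired by a timer in reality
        self.disk.extend(self.ram)                  # RAM buffers, not NVRAM, go to disk
        self.ram.clear()
        self.nvram_log.clear()                      # data safe on disk: drop recovery copy

f = ToyFiler()
for b in ["a", "b", "c"]:
    f.write(b)
print(f.disk, f.nvram_log)   # "a" and "b" hit disk at the CP; "c" waits in RAM
```

Note that `disk` is only ever fed from `ram`; deleting the `nvram_log` lines entirely would not change what reaches disk, which is exactly Jeff's point.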
On Mon, Jan 20, 2014 at 2:26 PM, Michael Bergman <michael.bergman@ericsson.com> wrote:
Couldn't resist... (sorry)
Jeff Mohler wrote:
:s/NVRAM/System Memory/g
Fixed it for ya.
Shouldn't that be more like
:s/NVRAM/System Memory plus a copy into NVRAM including mirror across to the HA cluster partners (if any) NVRAM, before returning with 'written OK'/g
:-)
/M
Toasters mailing list Toasters@teaparty.net http://www.teaparty.net/mailman/listinfo/toasters
PS: I think we just agreed. :)
But it's good to make the hard point that NVRAM is not part of a write. It's common to see it described as such, but it's systemically more correct to never mention NVRAM when talking about writes, because it doesn't matter. It's just a protection.
Isn't NVRAM slower than system RAM? So in theory it could cause a bottleneck (this may no longer be true, but you also don't have to wait for a CP on system RAM). Also, because the data is buffered this way, the result should generally be a sequential write (assuming IO size).
"Isn't NVRAM slower than system RAM?" --- The access to/from it, etc.? Sure. Likely. Probably.
"So in theory could cause a bottleneck" --- Not unless your network is feeding the filer data faster than the speed of access to NVRAM can make copies of it.
..which is why, if you wanna go faster, blame the network team. :) :)
On Jan 21, 2014, at 10:07 AM, Jeff Mohler speedtoys.racing@gmail.com wrote:
..which is why if you wanna go faster, blame the network team. :) :)
Unless it is a virtualization performance issue - then you need to blame storage latency.
Just sayin’!
I usually describe NVRAM as a transaction log...
On 1/20/2014 11:33 PM, Jeff Mohler wrote:
PS: I think we just agreed. :)
But its good to make the hard point, that NVRAM is not part of a write. Its common to see it as such, but it's systemically more correct to never mention NVRAM when talking about writes..cuz it doesnt matter. Its just a protection.
On Mon, Jan 20, 2014 at 2:31 PM, Jeff Mohler <speedtoys.racing@gmail.com mailto:speedtoys.racing@gmail.com> wrote:
No..NVRAM is not in the actual path of a write IO to disk. Writes go to system RAM, a copy is made to NVRAM, when the system RAM space allocated is 1/2 full or the timer goes off..then writes are made to disk and the NVRAM copy is discarded. NVRAM is there, but not part of the IO path to disk, it is beside it, and un-used unless the system crashed before RAM blocks were fully committed to disk. On Mon, Jan 20, 2014 at 2:26 PM, Michael Bergman <michael.bergman@ericsson.com <mailto:michael.bergman@ericsson.com>> wrote: Couldn't resist... (sorry) Jeff Mohler wrote: :s/NVRAM/System Memory/g Fixed it for ya. Shouldn't that be more like :s/NVRAM/System Memory plus a copy into NVRAM including mirror across to the HA cluster partners (if any) NVRAM, before returning with 'written OK'/g :-) /M _______________________________________________ Toasters mailing list Toasters@teaparty.net <mailto:Toasters@teaparty.net> http://www.teaparty.net/mailman/listinfo/toasters -- --- Gustatus Similis Pullus
--
Gustatus Similis Pullus
Sebastian Goetze wrote:
I usually describe NVRAM as a transaction log...
Being really really picky now [sorry]... is it..?
The NVRAM holds a meticulously designed, super-high-integrity copy of the transaction log for WAFL, while at the same time limiting the size of that transaction log, which effectively lives in RAM in the controller.
So telling people that the NVRAM actually *is* the transaction log is not strictly correct either, which is why I personally prefer not to say that. Frankly, avoiding mentioning the NVRAM at all (including its size) is most often the best approach. Then, when people say (as happens quite often) things like "ha ha, a NetApp has sooo little write cache, EMC VNX is much better", you take out the big arsenal and teach them... ;-)
It is true that the speed of the Flash-based NVRAM has a nonzero effect on things inside ONTAP when writing to disk. But it's small, very small: tiny, insignificant compared to other factors.
It's also true that the size of the transaction log, as limited by the size of the NVRAM, can affect performance in various ways. Up to a point a bigger transaction log is good, but the bigger the NVRAM, the worse an HA failover situation gets: it slows things down, and that's the trade-off. There's no problem per se in making the NVRAM (i.e. the WAFL log) much bigger; the HW guys at NetApp could easily do that, but the SW teams responsible for the HA cluster won't allow it. That's basically how it plays out.
Cheers, /M
Hi Michael,
just to avoid misunderstandings regarding '*Flash* based NVRAM' (see below) - and to be picky - ... ;-)
Most 'NVRAM' is actually NVMEM: battery-backed DIMMs on the mainboard (22x0, 32x0, not to mention some of the older models). So, AFAIK, negligible speed difference.
The exception is the NVRAM8 (62x0), where we have a PCIe card with DIMMs and destaging to Flash ("persistent write log"). To quote the FAQ:
"The persistent write log feature on the FAS/V6200 destages NVRAM contents to flash memory in the event of a dirty shutdown. Since the NVLOG contents are stored in flash memory, they are protected permanently regardless of how long the power outage lasts. During initial bootup after power is restored the destaged NVLOG will be replayed, enabling the file system contents to include any acknowledged writes since the last consistency checkpoint (CP) to disk."
"In the event of a dirty shutdown, NVRAM8 uses its battery to keep the NVLOG DIMMs refreshed while it destages their contents to flash memory. The operation takes about a minute and afterward the NVRAM8 card turns itself off."
So, in conclusion: no Flash (except in the NVRAM8 destaging case), just DIMMs at regular DIMM speeds.
Sebastian