Here's a question - we had to replace some parts in our F330, and ended up only putting 2 of the 4 SIMMs from the NVRAM card back onto the card. What kind of performance impact does more/less NVRAM have on a filer?
-----------
Jay Orr
Systems Administrator
Fujitsu Nexion Inc.
St. Louis, MO
Hi Jay,
This will affect write performance. In essence, there will only be half as much space in NVRAM to hold write data before flushing to disk, so you will most likely end up flushing to disk more often.
--Lee
---
Lee Razo (lrazo@netapp.com)
Network Appliance Europe
Hoofddorp, Netherlands
"Very difficult it is to do poetry in this world of constant Denny's being open all night" -Don Van Vliet
Yeah, I figured as much, but I was looking for a more quantitative answer. Our CPU isn't that taxed, so in theory would we not notice any performance hit? I mean, regardless of the amount of NVRAM, the info has to be written back - I would just imagine that less NVRAM means more writes. How does this factor into the big picture of "performance of the filer"?
-----------
Jay Orr
Systems Administrator
Fujitsu Nexion Inc.
St. Louis, MO
Ahh, I see - good question. I'm not sure of the exact numbers myself; however, if the load on the filer is already light, it may even be possible that you are not using even half of the original NVRAM size before the 10-second timer expires anyway. In that case I would think the net performance hit would be 0%.
But as far as performance effect as a result of more writes, I should probably leave that one for someone else...
--Lee
---
Lee Razo (lrazo@netapp.com)
Network Appliance Europe
Hoofddorp, Netherlands
"Very difficult it is to do poetry in this world of constant Denny's being open all night" -Don Van Vliet
----- Original Message -----
From: Jay Orr <orrjl@stl.nexen.com>
To: Lee Razo <lrazo@netapp.com>
Cc: toasters@mathworks.com
Sent: Friday, February 18, 2000 9:08 AM
Subject: Re: NVRAM memory

> Yeah, I figured as much, but I was looking for a more quantitative answer. Our CPU isn't that taxed, so in theory would we not notice any performance hit?

Just having extra CPU won't help. A better indication is how often you see the writes in systat being done. If you just see them in bunches every 5-10 seconds, you'll be fine. (Is it still every 10 seconds?) If you see them more often than that, the smaller NVRAM will hurt you.

> I mean, regardless of the amount of NVRAM, the info has to be written back - I would just imagine that less NVRAM means more writes. How does this factor into the big picture of "performance of the filer"?

Suppose I write a 7MB file to your F330. You have 8MB of NVRAM. I start writing. At 4MB you switch half of the NVRAM over and start flushing it to disk. I fill up the rest of your NVRAM, the filer returns success, and my client is ready to go do something else while the filer busily commits everything to disk.

Now, same situation, only you have 4MB. After 2MB you switch, then I fill up the next 2MB, and now I'm stuck waiting for you to finish your disk activity. You do, and I write the next 2MB, and I have to wait again. Instead of being able to write to you at "memory" speeds (basically as fast as I can go and the network will allow), I am stuck waiting on the disks.

The quantitative impact depends on how big the files being written are and how often they are written. If it's just users writing small files and the infrequent large one, at most they'll notice an extra second or two of waiting when saving something. If it's a database or news server getting written to every second, the whole thing will be slowed down considerably. Maybe 20-40%?
Bruce
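Bruce's two-bank accounting above is easy to play with as a toy model. The sketch below only illustrates the argument made in this thread - it is not ONTAP code, and the per-MB send cost s and per-MB flush cost w are invented parameters:

def write_time(file_mb, nvram_mb, s=1.0, w=1.0):
    # Toy model: NVRAM is split into two banks of nvram_mb/2 each.
    # The client fills one bank while the other flushes to disk, so
    # the first two bank-fills never wait; every later fill waits
    # for the previous bank's flush (bank_mb * w) to finish.
    bank_mb = nvram_mb / 2.0
    fills = int(-(-file_mb // bank_mb))        # ceiling division
    waits = max(fills - 2, 0)
    return (file_mb * s, waits * bank_mb * w)  # (send time, wait time)

# Bruce's 7MB example:
print(write_time(7, 8))   # (7.0, 0.0) -> the client never waits on disk
print(write_time(7, 4))   # (7.0, 4.0) -> two 2MB flush waits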
On Fri, 18 Feb 2000, Bruce Sterling Woodcock wrote:
> Just having extra CPU won't help. A better indication is how often you see the writes in systat being done. If you just see them in bunches every 5-10 seconds, you'll be fine. (Is it still every 10 seconds?) If you see them more often than that, the smaller NVRAM will hurt you.

This is not necessarily true. It will hurt you in certain, in my opinion exotic, scenarios.

> Suppose I write a 7MB file to your F330. You have 8MB of NVRAM. I start writing. At 4MB you switch half of the NVRAM over and start flushing it to disk. I fill up the rest of your NVRAM, the filer returns success, and my client is ready to go do something else while the filer busily commits everything to disk.
>
> Now, same situation, only you have 4MB. After 2MB you switch, then I fill up the next 2MB, and now I'm stuck waiting for you to finish your disk activity. You do, and I write the next 2MB, and I have to wait again. Instead of being able to write to you at "memory" speeds (basically as fast as I can go and the network will allow), I am stuck waiting on the disks.

Suppose you write a 10MB file at blazing speed. With 8MB NVRAM, you'll fill up both halves of the NVRAM and wait while 4MB of data is written to disk. With 4MB NVRAM, you'll happily fill up both halves, but then wait only for 2MB of data to be written to disk before you can write more. That's half the waiting time. If the filer is smart, it will cache NFS packets without acknowledging them before they are put in the NVRAM. As you can see, it is a case of using larger spoons at a slower pace or smaller spoons at a faster pace. The large spoons take more time to empty.

> The quantitative impact depends on how big the files being written are and how often they are written. If it's just users writing small files and the infrequent large one, at most they'll notice an extra second or two of waiting when saving something. If it's a database or news server getting written to every second, the whole thing will be slowed down considerably. Maybe 20-40%?

Again, please see above before you accept this theory. I'm not sure that it is all that clear cut. I would expect the amount of NVRAM to have a significantly smaller impact on continuous "contiguous" writes than what is advertised above. However, just like the previous commentator, I don't have real data to substantiate this.
Tom
----- Original Message -----
From: tkaczma@gryf.net
Cc: toasters@mathworks.com
Sent: Thursday, February 24, 2000 3:14 AM
Subject: Re: NVRAM memory

> This is not necessarily true. It will hurt you in certain, in my opinion exotic, scenarios.

You are, alas, incorrect.

> Suppose you write a 10MB file at blazing speed.

No, please address *my* example, which was designed to illustrate the point. The fact that you can invent a scenario where the effect is reduced is irrelevant.

> With 8MB NVRAM, you'll fill up both halves of the NVRAM and wait while 4MB of data is written to disk. With 4MB NVRAM, you'll happily fill up both halves, but then wait only for 2MB of data to be written to disk before you can write more. That's half the waiting time.

Yes, but with the 8MB NVRAM, you wait once. The key is the client doesn't have to wait for the final disk write, just the final write to NVRAM. So for a 10MB file:

8MB NVRAM:
  Send 4MB (4s)
  Send 4MB (4s)
  Wait for 4MB write (4w)
  Send 2MB (2s)
  Total time = 10s + 4w

4MB NVRAM:
  Send 2MB (2s)
  Send 2MB (2s)
  Wait for 2MB write (2w)
  Send 2MB (2s)
  Wait for 2MB write (2w)
  Send 2MB (2s)
  Wait for 2MB write (2w)
  Send 2MB (2s)
  Total time = 10s + 6w
Bruce
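Plugging Bruce's numbers into the toy write_time sketch from earlier in the thread reproduces these totals (s and w are still invented unit costs):

# 10MB file:
print(write_time(10, 8))   # (10.0, 4.0) -> 10s + 4w
print(write_time(10, 4))   # (10.0, 6.0) -> 10s + 6w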
I like your example - let's bump it up to 100MB.

8MB NVRAM:
  Send 4MB (4s)
  Send 4MB (4s)
  Wait for 4MB write (4w)
  Send 4MB (4s)
  Wait for 4MB write (4w)
  ...
  Send 4MB (4s)
  Total time = 100s + 92w

4MB NVRAM:
  Send 2MB (2s)
  Send 2MB (2s)
  Wait for 2MB write (2w)
  Send 2MB (2s)
  ...
  Send 2MB (2s)
  Total time = 100s + 96w

This is certainly not the 40% penalty you advertised. In fact, as the size of the file increases, the difference remains constant, i.e. 4w. In addition, if we talk about larger capacities of NVRAM and no pre-NVRAM caching, NFS requests may be dropped, causing the client to retransmit.

I agree with you that this is not the whole picture and that there is overhead we have not considered here, like interleaving, or writing small files/rewriting the same block, but simply saying that larger NVRAMs will necessarily significantly improve performance is a fallacy.

In the past I approached NetApp with a question of whether they ever considered breaking up the NVRAM into smaller pieces, creating more of a circular buffer. This would be the optimal solution if the overhead of doing so were not significant. NetApp dismissed this idea, perhaps rightly so, perhaps not, claiming that it would make the code too complex. I'm NOT against larger NVRAMs/write caches, I'm for more granular NVRAMs.
Tom
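In the toy write_time model this generalizes: the total waiting is (ceil(F / (V/2)) - 2) flush-waits of (V/2)*w each, so for files much larger than the NVRAM, halving V adds only the constant (V/2)*w gap Tom computes - 4w here - no matter how big the file gets. A quick check with the hypothetical write_time sketch from earlier:

for size in (10, 100, 1000):
    _, wait8 = write_time(size, 8)
    _, wait4 = write_time(size, 4)
    print(size, wait4 - wait8)   # prints 2, then 4, then 4: the gap settles at 4w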
----- Original Message -----
From: tkaczma@gryf.net
Cc: toasters@mathworks.com
Sent: Thursday, February 24, 2000 1:36 PM
Subject: Re: NVRAM memory

> This is certainly not the 40% penalty you advertised.

Firstly, I never advertised a 40% penalty in this particular case. I advertised a 20-40% penalty in another, far more general, case. As you admit, there's a lot more overhead involved than mentioned here; I was just trying to illustrate it simply for someone who had a simple question. For the full skinny, we'd need one of the coders or architects to post here, without telling us too much that is confidential. :)

> I'm NOT against larger NVRAMs/write caches, I'm for more granular NVRAMs.

Well, you can be for it and that's fine. But larger NVRAM will help in a write-intensive environment in the general case, and that is supported by actual experimental data. Perhaps in theory it should not be so, and perhaps Netapp does something stupid in their coding that makes it that way, but it's still a fact.
Bruce
> Yeah, I figured as much, but I was looking for a more quantitative answer. Our CPU isn't that taxed, so in theory would we not notice any performance hit? I mean, regardless of the amount of NVRAM, the info has to be written back - I would just imagine that less NVRAM means more writes. How does this factor into the big picture of "performance of the filer"?

It depends. When a filer boots, it partitions the available NVRAM into two equal chunks. (If it is a member of a cluster, "available" NVRAM is half of the physical NVRAM, with the other half belonging to the cluster partner.) Data and metadata writes are logged to the first chunk until one of two events triggers the establishment of a new consistency point on disk, which consists of writing out all of the logged writes while sending new writes to the other NVRAM chunk. The trigger events are:

(1) the current chunk becomes full
(2) ten seconds have elapsed since the last CP (longer for NetCache)

If your write rate is sufficiently low that the timer will trigger the start of a new CP when less than half of the NVRAM chunk is used, then taking out half of the NVRAM should have no performance impact at all. (Someone else mentioned that in an F330, NVRAM will be interleaved if you have 8 MB but not if you have 2 MB -- and may or may not be with 4 MB. That's true, and will impact write performance, but if you have relatively few writes and the CPU doesn't have anything better to do, the impact probably won't be significant.)
--
Karl Swartz   Network Appliance Engineering
Work: kls@netapp.com   http://www.netapp.com/
Home: kls@chicago.com  http://www.chicago.com/~kls/
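Karl's two-chunk, two-trigger description maps onto a small sketch like the one below. The class and names are invented for illustration; this models the mechanism as described above, not actual ONTAP internals:

import time

CP_TIMER = 10.0   # seconds; longer for NetCache, per Karl

class NvramLog:
    def __init__(self, available_bytes):
        # A cluster member would pass in half the physical NVRAM.
        self.chunk_size = available_bytes // 2   # two equal chunks
        self.logged = 0                          # bytes in the active chunk
        self.last_cp = time.monotonic()

    def log_write(self, nbytes):
        full = self.logged + nbytes > self.chunk_size             # trigger (1)
        timed_out = time.monotonic() - self.last_cp >= CP_TIMER   # trigger (2)
        if full or timed_out:
            self.start_cp()
        self.logged += nbytes

    def start_cp(self):
        # Write the logged data out as a new consistency point and
        # switch new writes to the other chunk.
        self.logged = 0
        self.last_cp = time.monotonic()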
Karl,
Why haven't you given the option to upgrade to bigger than 32MB NVRAMs? It seems like a logical upgrade path for heavy write-based installations.

How can I check the utilization of the NVRAM (i.e., whether my CPs are triggered more by "full NVRAM" or by "10 sec.")?
Eyal.
----- Original Message -----
From: Eyal Traitel <eyal.traitel@motorola.com>
To: Karl Swartz <kls@netapp.com>
Cc: Jay Orr <orrjl@stl.nexen.com>; lrazo@netapp.com; toasters@mathworks.com
Sent: Saturday, February 19, 2000 2:28 AM
Subject: Re: NVRAM memory

> Karl,

I'm not Karl, but I'm awake, so... :)

> Why haven't you given the option to upgrade to bigger than 32MB NVRAMs? It seems like a logical upgrade path for heavy write-based installations.

This I don't know, other than the fact that you have to have the battery power to support it, the loading requirements for a PCI slot, and so on.

I would also be inclined to believe that heavy write-based installations that would benefit from MORE than 32MB are fairly rare. Few filers are sitting there getting 16MB of writes every second.

> How can I check the utilization of the NVRAM (i.e., whether my CPs are triggered more by "full NVRAM" or by "10 sec.")?

Run systat 1 on the console and watch the disk writes. Whenever the NVRAM gets flushed, the disk writes value will go from 0 to whatever. If you only see a couple of lines of writes every 10 seconds, then your NVRAM usually isn't getting half-full; if it is writing more often than every 5 seconds, you're pretty write-intensive. If it's almost constant, you'd benefit from more NVRAM.

Of course, this is just a one-time look. You'd want to monitor your filer at multiple times during heavy load periods. If you want some more general statistics of your filer's NVRAM load over time, you can go into rc_toggle_basic mode and type wafl_susp -w. In the output you'll see values for cp_from_timer (how many times NVRAM was flushed to disk at the 10-second mark), cp_from_log_full (how many times it was flushed because the NVRAM filled up), and the all-important cp_from_cp (how many times NVRAM filled up, it tried to flush to disk, and another flush was already in progress and had not yet completed). Flush == cp == consistency point.

However, don't ask me to interpret all the stuff in wafl_susp -w, and don't blame me if you type the wrong thing in rc_toggle_basic mode and accidentally wipe out your filesystem. Don't fool around in this mode; it can render your filer inoperable. Get in, type wafl_susp -w, and get out (type rc_toggle_basic again).
Bruce
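If you capture the wafl_susp -w output to a file, the cp_from_* counters are easy to pull out programmatically. A small helper sketch, assuming only the "name = value" line format quoted later in this thread:

import re

def parse_cp_counters(text):
    # Pull 'cp_from_* = N' counters out of captured wafl_susp -w output.
    return {name: int(val) for name, val in
            re.findall(r"(cp_from_\w+)\s*=\s*(\d+)", text)}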
Thanks.
I can already see this on one of the more heavily used ones:

cp_from_timer = 288539
cp_from_snapshot = 210065
cp_from_low_water = 0
cp_from_high_water = 2
cp_from_log_full = 138834
cp_from_timer_nvlog = 10
cp_from_cp = 6006

Can someone from NetApp say whether an upgrade would be useful for such a machine?
Eyal.
----- Original Message -----
From: Eyal Traitel <eyal.traitel@motorola.com>
To: Bruce Sterling Woodcock <sirbruce@ix.netcom.com>
Cc: Eyal Traitel (r55789) <eyal.traitel@motorola.com>; Karl Swartz <kls@netapp.com>; Jay Orr <orrjl@stl.nexen.com>; lrazo@netapp.com; toasters@mathworks.com; beepy@netapp.com
Sent: Saturday, February 19, 2000 8:52 AM
Subject: Re: NVRAM memory

To me cp_from_log_full seems high.

I did some digging in my old notes and found that one engineer recommended cp_from_log_full should be only 10% of cp_from_timer. I'm not sure what cp_from_snapshot is, but even if you add that to cp_from_timer, your log is still filling up over 25% of the time. cp_from_cp is not too bad, though, so maybe it is not hurting you that much.

Again, I'd check systat 1 during heavy loads, and if it looks okay, your NVRAM is being heavily utilized but not overloaded.
Bruce
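Bruce's "over 25%" figure can be reproduced from Eyal's counters with the hypothetical parse_cp_counters helper from earlier in the thread:

counters = parse_cp_counters("""
    cp_from_timer = 288539
    cp_from_snapshot = 210065
    cp_from_log_full = 138834
    cp_from_cp = 6006
""")
timer_ish = counters["cp_from_timer"] + counters["cp_from_snapshot"]
print(counters["cp_from_log_full"] / timer_ish)              # ~0.28 -> "over 25%"
print(counters["cp_from_cp"] / counters["cp_from_log_full"]) # ~0.04 -> not too bad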
Bruce Sterling Woodcock wrote:
> If you want some more general statistics of your filer's NVRAM load over time, you can go into rc_toggle_basic mode and type wafl_susp -w. [...]
cp_from_timer = 30001
cp_from_snapshot = 21586
cp_from_low_water = 0
cp_from_high_water = 0
cp_from_log_full = 1198160
cp_from_timer_nvlog = 19
cp_from_cp = 3312

ratbert> uptime
10:22am up 67 days, 9:35 195691476 NFS ops, 0 CIFS ops, 110 HTTP ops
This is an F760 in a cluster. Our filers have 32MB NVRAM each. Will NetApp offer more than 32MB per filer?
Systat shows nearly constant write activity.
----- Original Message -----
From: Michael S. Keller <mkeller@mail.wcg.net>
To: toasters@mathworks.com
Sent: Monday, February 21, 2000 8:24 AM
Subject: Re: NVRAM memory

> cp_from_timer = 30001
> cp_from_snapshot = 21586
> cp_from_low_water = 0
> cp_from_high_water = 0
> cp_from_log_full = 1198160

Yow!

> cp_from_timer_nvlog = 19
> cp_from_cp = 3312

Not too bad. Your NVRAM is full but not slowing you down all the time.

With so many ops I wonder if any of the counters "rolled over".

> This is an F760 in a cluster. Our filers have 32MB NVRAM each. Will NetApp offer more than 32MB per filer?
>
> Systat shows nearly constant write activity.
Netapp will no doubt offer more in the future, but they might not offer more NVRAM in older models, so 32MB may remain the max for your F760.
I think your filer is clearly overloaded with writes. Part of the problem is you set it up in a cluster, so you're really only getting half of the NVRAM. I think your best solution is to just move some of the traffic off to other filers, IF you actually find the filer to be slow. I would not add any additional traffic to it.
Bruce
On Mon, 21 Feb 2000, Bruce Sterling Woodcock wrote:
> I think your filer is clearly overloaded with writes. Part of the problem is you set it up in a cluster, so you're really only getting half of the NVRAM. I think your best solution is to just move some of the traffic off to other filers, IF you actually find the filer to be slow. I would not add any additional traffic to it.

I would check how much the disks are working and probably add more traffic until I came close to saturating the disk interface. :) Remember that NVRAM/caching gives you an advantage over standard disks, so the NFS delay becomes negligible or perhaps even nonexistent. If the netapp caches NFS requests before putting them in the NVRAM log, then you should be able to saturate the box to the point that you are using 100% CPU or 100% disk bandwidth, whichever comes first.
Tom
----- Original Message -----
From: tkaczma@gryf.net
Cc: toasters@mathworks.com
Sent: Thursday, February 24, 2000 3:36 AM
Subject: Re: NVRAM memory

> I would check how much the disks are working and probably add more traffic until I came close to saturating the disk interface. :) Remember that NVRAM/caching gives you an advantage over standard disks, so the NFS delay becomes negligible or perhaps even nonexistent.
Yes, but if that NVRAM is always full, you'll lose that advantage. Besides, I did not suggest he move to local disk... I suggested he move to another filer. Sometimes two filers at 50% CPU are faster than one filer at 100% CPU.
Bruce
On Thu, 24 Feb 2000, Bruce Sterling Woodcock wrote:
> Yes, but if that NVRAM is always full, you'll lose that advantage. Besides, I did not suggest he move to local disk... I suggested he move to another filer. Sometimes two filers at 50% CPU are faster than one filer at 100% CPU.

I agree with you, Bruce; I was merely suggesting that perhaps some people want NetApps for more than their performance. I, for one, love their FS. It saved my butt more than once. Our performance is dictated more by the network than by the filer itself, and we treat NetApps here as NFS servers rather than just intelligent disks with great caching and corruption protection.
Tom
----- Original Message -----
From: tkaczma@gryf.net
Cc: toasters@mathworks.com
Sent: Thursday, February 24, 2000 1:40 PM
Subject: Re: NVRAM memory

> I agree with you, Bruce; I was merely suggesting that perhaps some people want NetApps for more than their performance. I, for one, love their FS. It saved my butt more than once. Our performance is dictated more by the network than by the filer itself, and we treat NetApps here as NFS servers rather than just intelligent disks with great caching and corruption protection.
I agree with this. The Netapp filers have so many advantages, sometimes it is too easy to focus on just one and forget the others! Sometimes I will be describing them to a potential customer and completely forget to mention the Snapshots, because I'm so used to them.
Bruce
On Fri, 18 Feb 2000, Karl Swartz wrote:
> Data and metadata writes are logged to the first chunk until one of two events triggers the establishment of a new consistency point on disk, which consists of writing out all of the logged writes while sending new writes to the other NVRAM chunk.

Is it really both data and metadata that gets written to NVRAM? What gets put into the RAM cache, then, just disk reads? Where do the WAFL logs get written?

I was just examining one of my F740s, which is often running 4-5,000 ops/s with 0 or 1 minute cache ages, but the NVRAM stats don't seem as bad as that dude with the F760:

cp_from_timer = 153482
cp_from_snapshot = 94030
cp_from_low_water = 0
cp_from_high_water = 0
cp_from_log_full = 50913
cp_from_timer_nvlog = 0
cp_from_cp = 347

~20% are "log full" writes, higher than Bruce's rule-of-thumb 10%, but the nasty one people are mentioning, "cp_from_cp", is pretty small.
Today is a medium load day:
 CPU   NFS  CIFS  HTTP   Net kB/s    Disk kB/s    Tape kB/s  Cache
                         in    out   read  write  read write   age
 82%  1601  3400     0   941  5579   2195      0     0     0    1
 85%  1350  3779     0  1198  4413   2343      0     0     0    1
 87%  1641  2984     0  1183  4892   3094    377     0     0    1
 94%  1275  3082     0   805  5252   2995   3041     0     0    1
 92%  1958  3191     0  1141  6778   3735      0     0     0    1
 89%  1196  3273     0  2236  4657   3617      0     0     0    1
 91%   978  3482     0  2105  6173   2707      0     0     0    1
 90%  1405  3746     0  1249  5568   2004      0     0     0    1
 94%   802  3730     0   971  3304   3325   1160     0     0    1
 97%  1081  3274     0   930  6537   3479   4832     0     0    1
 89%  1210  3642     0  2411  5467   2417      0     0     0    1
 89%  1324  3333     0  1577  4586   1934      0     0     0    1
 91%  1260  3346     0  2342  3744   2036      0     0     0    1
 89%  1632  2997     0  1120  4948   1662      0     0     0    1
 83%  1358  2681     0  1206  5477   1988      0     0     0    1
 82%   694  2300     0   602  1921   4930   5164     0     0    1
 94%  1179  3836     0  3925  5718   2449    874     0     0    1
 79%  1023  2750     0  1829  6107   3249      0     0     0    1
Having more options regarding NVRAM and RAM cache configurations in filers would be great, not this current "one-size-fits-all" sh^H^Hstuff.
Until next time...
The Mathworks, Inc.                        508-647-7000 x7792
3 Apple Hill Drive, Natick, MA 01760-2098  508-647-7001 FAX
tmerrill@mathworks.com                     http://www.mathworks.com
---
----- Original Message -----
From: Todd C. Merrill <tmerrill@mathworks.com>
To: toasters@mathworks.com
Sent: Tuesday, February 22, 2000 7:56 AM
Subject: Re: NVRAM memory

> Is it really both data and metadata that gets written to NVRAM? What gets put into the RAM cache, then, just disk reads? Where do the WAFL logs get written?

Actually, while the RAM is primarily used for caching reads and anything else the OS needs, writes are actually logged to both RAM and NVRAM. The NVRAM is just used for stable storage in case of emergency (a crash).

> ~20% are "log full" writes, higher than Bruce's rule-of-thumb 10%, but the nasty one people are mentioning, "cp_from_cp", is pretty small.
I agree. Also, from your systat, you're going 5 or more seconds between writes, so you aren't really that write-loaded.
> Having more options regarding NVRAM and RAM cache configurations in filers would be great, not this current "one-size-fits-all" sh^H^Hstuff.

I think you clearly need more RAM, but if you're already maxed, you basically need to accept that you need another filer. The filers are fairly highly tuned so that the CPU is appropriate for the RAM and NVRAM, and it is a relatively rare environment that maxes out one without coming close to maxing out the others.
Bruce
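Bruce's "5 or more seconds between writes" reading of a systat can be automated: with systat 1, rows are roughly one second apart, so the gaps between nonzero disk-write samples approximate the CP interval. A sketch against a few of Todd's rows above (the disk-write column is the 8th field in his layout):

rows = """\
82% 1601 3400 0 941 5579 2195 0 0 0 1
87% 1641 2984 0 1183 4892 3094 377 0 0 1
94% 1275 3082 0 805 5252 2995 3041 0 0 1""".splitlines()

# Row indices (~seconds) at which a CP flush was visibly in progress.
flush_seconds = [i for i, row in enumerate(rows)
                 if int(row.split()[7]) > 0]
print(flush_seconds)   # [1, 2] for this excerpt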
On Tue, 22 Feb 2000, Bruce Sterling Woodcock wrote:
> I think you clearly need more RAM, but if you're already maxed, you basically need to accept that you need another filer.

My installation needs ops, and lots of them; we use very little disk (~125 GB per filer). Our best value for ops is not upgrading to an F760, but buying another F740, which we have done. We probably could have squeezed some more performance out of our first F740 if it had more RAM, as you seem to agree. If we could spend the additional few thousand to bump up the RAM to a full 1 GB, that would be worth it. But NetApp's rigid solution is: if you need/want more RAM, buy the F760. An F760 gives ~50% more ops performance for ~100% more dollars (with NFS *and* CIFS protocols). We got 100% more ops by spending 100% more dollars...on another F740.

NetApp's price restructuring last year benefited most people, who have high storage requirements, but killed us, who have high ops performance requirements. Hence my plea for NetApp to be more flexible with "abnormal" configurations such as ours.
Until next time...
The Mathworks, Inc.                        508-647-7000 x7792
3 Apple Hill Drive, Natick, MA 01760-2098  508-647-7001 FAX
tmerrill@mathworks.com                     http://www.mathworks.com
---
----- Original Message -----
From: Jay Orr <orrjl@stl.nexen.com>
To: toasters@mathworks.com
Sent: Friday, February 18, 2000 8:47 AM
Subject: NVRAM memory

> Here's a question - we had to replace some parts in our F330, and ended up only putting 2 of the 4 SIMMs from the NVRAM card back onto the card. What kind of performance impact does more/less NVRAM have on a filer?

Basically, it will affect your write performance. Less NVRAM means it can cache fewer writes, thus having to write to disk more often. It could also mean that disk allocation will be a little less efficient, and possibly increase the chance of disk failure over the long run. (But this last part is purely speculative and nothing to worry about.) It also has some minor performance impact in other situations, but otherwise it is negligible.
If you don't really notice the difference on your filer, chances are it is not often very heavily loaded and doesn't experience a lot of heavy write traffic.
Bruce
On Fri, 18 Feb 2000, Bruce Sterling Woodcock wrote:
> Basically, it will affect your write performance. Less NVRAM means it can cache fewer writes, thus having to write to disk more often.

This is not necessarily true. It will affect the speed of some bursty writes, where there might not be enough space in the NVRAM to log the complete write stream. If the writes are continuous, no loss of performance should be noticed, i.e. the bottleneck will be the speed of writing to disk. In fact, since the number of transactions in the NVRAM will be smaller, the banks will clear faster, allowing more data to be logged more quickly. As I understand from NetApp documentation, and please correct me if I'm wrong, at no point is the NVRAM used as a cache during normal operation. It serves only as a log that is written to, but not read unless a reboot occurs. Although the amount of NVRAM does affect the performance of the write cache, especially during frequent rewrites of the same block, I'm not sure how many applications/NFS stacks actually put out such a mix of ops.

> It could also mean that disk allocation will be a little less efficient, and possibly increase the chance of disk failure over the long run.

Perhaps. This would be especially true in the above-mentioned scenario of continually rewriting the same block.

> If you don't really notice the difference on your filer, chances are it is not often very heavily loaded and doesn't experience a lot of heavy write traffic.
Or it is heavily loaded all the time, but the same blocks aren't very frequently overwritten, i.e. the write cache serves very little.
Tom
----- Original Message -----
From: tkaczma@gryf.net
Cc: toasters@mathworks.com
Sent: Thursday, February 24, 2000 2:52 AM
Subject: Re: NVRAM memory

> This is not necessarily true. It will affect the speed of some bursty writes, where there might not be enough space in the NVRAM to log the complete write stream. If the writes are continuous, no loss of performance should be noticed, i.e. the bottleneck will be the speed of writing to disk. In fact, since the number of transactions in the NVRAM will be smaller, the banks will clear faster, allowing more data to be logged more quickly.

Yes, it is necessarily true. The amount of time you have to wait depends on how much you can cache; the disk may be slower, but the more NVRAM I have, the longer it takes to fill up, and the less time I have to wait until the disk write completes (if I have to wait at all). Also, once you start filling up, your writes won't be "continuous", because the client will start backing off when the filer stops responding.

> As I understand from NetApp documentation, and please correct me if I'm wrong, at no point is the NVRAM used as a cache during normal operation. It serves only as a log that is written to, but not read unless a reboot occurs. Although the amount of NVRAM does affect the performance of the write cache, especially during frequent rewrites of the same block, I'm not sure how many applications/NFS stacks actually put out such a mix of ops.

This is true but irrelevant. The NVRAM itself is not always directly utilized, but the size of the NVRAM dictates the size of the DRAM write cache, so the result is the same. Look, if you don't believe me, feel free to take out half the NVRAM in your filer, write a 100MB file, and see if it takes more or less time.

> Or it is heavily loaded all the time, but the same blocks aren't very frequently overwritten, i.e. the write cache serves very little.
Your conclusion is spurious; rewriting the same block isn't the issue.
Bruce
On Thu, 24 Feb 2000, Bruce Sterling Woodcock wrote:
> Yes, it is necessarily true. The amount of time you have to wait depends on how much you can cache; the disk may be slower, but the more NVRAM I have, the longer it takes to fill up, and the less time I have to wait until the disk write completes (if I have to wait at all).

Unless you are going CP to CP, when you're waiting for disk. This is exactly the scenario I painted. Think about it: at 100% load, i.e. CP to CP, the NVRAM will not be empty for long. If the NVRAM is large, it will take more time to flush the cached data to disk before you have any more space in the NVRAM to put new stuff in. If the NVRAM is smaller, the waits will be shorter.

> Also, once you start filling up, your writes won't be "continuous", because the client will start backing off when the filer stops responding.

And with a larger NVRAM you'll have to wait longer for it to become available. The performance will be choppier. At 100% utilization, smaller NVRAM may actually smooth out the performance. With adequate pre-NVRAM caching, no requests have to be lost.

> The NVRAM itself is not always directly utilized, but the size of the NVRAM dictates the size of the DRAM write cache, so the result is the same.

I mentioned this someplace, perhaps in a later message.

> Look, if you don't believe me, feel free to take out half the NVRAM in your filer, write a 100MB file, and see if it takes more or less time.

Perhaps 100MB is not large enough for newer filers, which have substantially larger write caches/NVRAM.

> Your conclusion is spurious; rewriting the same block isn't the issue.

It is certainly an issue: if you rewrite the same block over and over, you'll be overwriting a small area of write cache, thus leading to small writes, but at the same time doing tons of CP to CP, because, as I understand it, the NVRAM records the transaction, not the outcome.
Tom
----- Original Message -----
From: tkaczma@gryf.net
Cc: toasters@mathworks.com
Sent: Thursday, February 24, 2000 1:14 PM
Subject: Re: NVRAM memory

> Unless you are going CP to CP, when you're waiting for disk. This is exactly the scenario I painted. Think about it: at 100% load, i.e. CP to CP, the NVRAM will not be empty for long. If the NVRAM is large, it will take more time to flush the cached data to disk before you have any more space in the NVRAM to put new stuff in. If the NVRAM is smaller, the waits will be shorter.

And the amount of data written is also smaller, so it evens out. Except, of course, for the overhead of each wait state, the not-quite-context-switch for the Netapp, the backing off of the client, and the final client write (which does not have to wait on disk).

> And with a larger NVRAM you'll have to wait longer for it to become available. The performance will be choppier. At 100% utilization, smaller NVRAM may actually smooth out the performance.
I could believe this, but the final time will still be less, even if it is choppier. And since you're talking about one huge write, users won't notice the choppiness.
> With adequate pre-NVRAM caching, no requests have to be lost.
Like I said before, it's not a matter of requests being lost. If the filer stops responding, the client will back off sending requests. This is neither here nor there; I'm just pointing out you won't get that continuous throughput.
> I mentioned this someplace, perhaps in a later message.
Then you should have realized not to mention it here, because it's irrelevant. (Claiming the size of NVRAM is irrelevant since it's not directly utilized is missing the point.)
> Perhaps 100MB is not large enough for newer filers, which have substantially larger write caches/NVRAM.
With 32MB NVRAM in the current generation (not the next), it should be large enough. If you prefer, make it a 500MB file.
> It is certainly an issue,
But it's not THE issue, which is whether or not general write performance is going to be faster with more NVRAM. It is, and not just in the "rewrite the same block" case as you were suggesting.
Bruce
On Thu, 24 Feb 2000, Bruce Sterling Woodcock wrote:
> But it's not THE issue, which is whether or not general write performance is going to be faster with more NVRAM. It is, and not just in the "rewrite the same block" case as you were suggesting.

That is just the other extreme. Normally you would have behavior someplace in the middle. All I was trying to say is that the performance will not change as drastically as you described. It will change, but the extent to which one is affected depends on one's mix of ops. I haven't gotten around to carefully studying our NetApp's performance, because our network did not perform consistently enough to isolate the NetApp factor. In fact, I think the network actually limited the NetApp's performance.
Tom
Jay Orr writes:
> Here's a question - we had to replace some parts in our F330, and ended up only putting 2 of the 4 SIMMs from the NVRAM card back onto the card. What kind of performance impact does more/less NVRAM have on a filer?

There's the obvious issue (as previously discussed) of performance dropping due to having less NVRAM to buffer up requests.

Also (if I recall correctly), NVRAM is slower than normal RAM, so memory interleaving is used to speed up NVRAM writes. I'm not sure if the NVRAM supports 2-way interleaving, or if interleaving only kicks in when you have 4 SIMMs installed (giving 4-way interleaving).

Maybe one of the NetApp engineers can comment further; this is working from hazy recollections from when I was advised that upgrading our F330s from 2MB to 8MB of NVRAM would give a significant performance improvement, due both to the lack of capacity of the 2MB NVRAM and to the lack of interleaving.
Luke.
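Luke's interleaving point can be illustrated with a toy model: with round-robin bank assignment, a slow bank's recovery time is hidden as long as enough other banks can accept writes in the meantime. All timings below are invented, and this only sketches the general idea, not the F330's actual memory controller:

def burst_time(n_writes, n_banks, bus_cycle=1.0, bank_busy=4.0):
    # Time to issue n_writes when each bank needs bank_busy time to
    # absorb a write, but the bus can issue one write per bus_cycle.
    t = 0.0
    bank_free = [0.0] * n_banks
    for i in range(n_writes):
        b = i % n_banks                      # round-robin over the SIMMs
        t = max(t + bus_cycle, bank_free[b]) # wait for the bus and the bank
        bank_free[b] = t + bank_busy
    return t

print(burst_time(1000, 1))   # ~4000: gated by the slow NVRAM parts
print(burst_time(1000, 4))   # ~1000: 4-way interleaving hides the latency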