We tried the FlexShare route and it had an impact, but only a slight one.  I suspect that's because other requests are still delayed once this group has filled NVRAM: everything has to flush to disk before the filer can service the comparatively minuscule "other stuff", and once that brief interruption has passed the process repeats.  But that's only a suspicion.
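
(For anyone chasing the same symptom: assuming 7-mode, one way to test that suspicion is the consistency-point type column in sysstat; a run of back-to-back CPs means NVRAM is filling faster than it can flush.)

    filer> sysstat -x 1
    # Watch the "CP ty" column: 'B' marks a back-to-back consistency
    # point (the next CP starts before the previous flush finished),
    # 'b' a deferred back-to-back CP.  Long runs of B/b while this
    # group is active would support the NVRAM-flush theory.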

In any case, we're fast coming to the conclusion that this particular group just needs to be isolated; they do not play well with others.  So we'll review the tool's characteristics and determine whether a high-end filer is needed or a mid-range one is OK once they're isolated.

As for a BOGF (Bolt-On-Go-Faster(TM)), Avere is a front-runner for write/read acceleration, which is also a possibility for us with this particular group.

Thanks.

Jeff Kennedy

Qualcomm, Incorporated

QCT Engineering Compute

858-651-6592

From: Jan-Pieter Cornet [mailto:johnpc@xs4all.net]
Sent: Thursday, November 25, 2010 2:59 PM
To: Kennedy, Jeffrey
Cc: NDMP List (toasters@mathworks.com)
Subject: Re: How to design for iops?

On 2010 Nov 23, at 21:06, Kennedy, Jeffrey wrote:

Let me first define iops in this case.

Metadata operations with a larger percentage of writes than reads.

I have a group that will regularly drive 60-70k metadata operations per second, more than half of which are updates.  10Gb and PAM cards will help with reads, but the writes are the killer.

Today they are on shared 6070s with both PAM and 10Gb.  I'm using PA (Performance Advisor) to see the "other ops", which is where I got the 60-70k number.  It doesn't break out reads vs. writes for metadata, but based on 'sysstat' I got the over-half-write figure.
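
(If the load is NFS, nfsstat on the filer should give the per-operation breakdown that PA doesn't; exact counters vary by ONTAP release, so treat this as a sketch:)

    filer> nfsstat -z     # zero the counters at the start of a busy window
    ... wait a few minutes under load ...
    filer> nfsstat        # per-op NFSv3 counts: getattr/lookup/access are
                          # metadata reads; setattr/create/remove/rename
                          # are the metadata writes that hit NVRAM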

When they ramp up, everyone else on that filer feels it.  Is there something that can be done to improve this other than isolating them?

You could give them a lower priority using the 'priority' command (available in Data ONTAP 7.2+, known as 'FlexShare'). That way, other operations would take priority over these writes.
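
Roughly, assuming 7-mode syntax (vol_heavy is just a placeholder for the volume this group writes to):

    filer> priority on                          # enable FlexShare
    filer> priority set volume vol_heavy level=low
    filer> priority show volume vol_heavy       # verify the setting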

That said, make sure you have the basics right: the aggregates where the writes go should have enough space (at most 80% used, preferably less) and enough spindles. Add more disks if your disk utilisation is high or CPs take too long, and move everything else off that aggregate.
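
Quick ways to check those basics from the CLI (statit needs advanced privilege; 'aggr_heavy' is a placeholder name):

    filer> df -A aggr_heavy        # aggregate usage; aim for <80% used
    filer> priv set advanced
    filer*> statit -b              # begin per-disk statistics collection
    ... run for a minute under load ...
    filer*> statit -e              # end and print; look at ut% per disk --
                                   # consistently high values mean you
                                   # need more spindles
    filer*> priv set admin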

Writes mainly put a load on the NVRAM card. A somewhat debatable performance boost would be to remove the clustering and interconnect, making your entire NVRAM card available for writes instead of reserving half of it for the cluster partner. (Of course, going from a cluster to a stand-alone system will seriously impact your uptime and fault tolerance; make sure you know what you're doing.)
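
If anyone does go down that road, the rough shape in 7-mode is below, but the exact un-clustering procedure varies by release, and simply disabling takeover is not by itself enough to reclaim the mirrored NVRAM half; check the docs first:

    filer> cf status               # confirm current cluster/takeover state
    filer> cf disable              # disable takeover
    filer> license delete cluster  # removing the cluster license (followed
                                   # by a reboot) is what frees the partner
                                   # half of NVRAM for local writes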

Doing a headswap to a system with more NVRAM would also help (though it's not exactly low-cost or easy to implement). I'm not aware of hardware add-ons that boost write performance, but do ask your NetApp sales rep; there might be something available.

-- 

Jan-Pieter Cornet <johnpc@xs4all.net>

Systems Administration, XS4ALL Internet bv

Internet: www.xs4all.nl