There is no 'shelf load': just the number of disks in use for the workload, irrespective of where they are.

Shelves just hold disks off the floor and provide power.   :)



On Thu, Dec 27, 2012 at 9:20 PM, Bradley, Shane <shane.bradley@nz.fujitsu.com> wrote:

Hey

 

Any reason why you chose to do it that way?

 

I can’t really see what benefit you’re getting from splitting shelf loads between controllers.

 

Cheers

Shane Bradley
Senior Technical Consultant

Fujitsu New Zealand Limited

Level 12, Fujitsu Tower, 141 The Terrace, Wellington, New Zealand 6011
T +64 4 890 9605 M +64 21 229 1563 F +64 4 495 0730
shane.bradley@nz.fujitsu.com
nz.fujitsu.com


From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of tmac
Sent: Friday, 21 December 2012 2:38 p.m.
To: Ray Van Dolson
Cc: toasters@teaparty.net
Subject: Re: Aggregate Best Practices

 

Something I like to do to eke out maximum performance is this:

 

Between two heads, say I have 4 shelves.

First and foremost, I try to make as many loops (FC) / stacks (SAS) as possible.

Second, each head will own half the disks in each shelf.

So, from maintenance mode on a new filer, it's something along the lines of:

 

> Remove all disk ownership from all heads.

> Take ownership of half the disks on head A.
(The logic is smart enough now that it will almost always grab
roughly the same number of disks from each shelf.)

> Take ownership of the other half of the disks on head B.
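
For reference, on a 7-Mode box those steps look roughly like the following
at the maintenance-mode prompt. This is only a sketch: exact syntax varies
by ONTAP release, and the disk counts are made up for a 48-disk example.

  *> disk remove_ownership all   # run on each head; older releases may want one disk at a time
  *> disk show -n                # confirm all disks now show as unowned
  *> disk assign -n 24           # on head A: claim 24 unowned disks, spread across shelves
  *> disk assign -n 24           # then on head B: claim the remaining 24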

 

The only downside I see to this approach is that when a disk fails,
the admin must run a "disk assign" command to tell the replacement
which filer it belongs to, since no filer owns all the disks on any
given shelf, which defeats shelf-based auto-assign.
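
When the replacement disk shows up unowned, the manual assignment is a
one-liner; the disk name and owner below are placeholders:

  fas1> disk assign 0a.17 -o fas1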

 

So far, this method has been very good to me.

 

 

 

--tmac

 

Tim McCarthy

Principal Consultant

 

NCDA ID: XK7R3GEKC1QQ2LVD (Clustered ONTAP; expires 08 November 2014)
RHCE5 805007643429572 (expires with release of RHEL7)
NCSIE ID: C14QPHE21FR4YWD4 (Clustered ONTAP; expires 08 November 2014)



On Thu, Dec 20, 2012 at 8:28 PM, Ray Van Dolson <rvandolson@esri.com> wrote:

We're revisiting how we set up our aggregates and I want to see how
others out there do it.  Specifically, what strategies do you use for
ensuring certain key applications or environments get the performance
they need in a shared environment.

Typically we'll create large aggregates based on a homogeneous disk
type: 15K SAS disks in one aggregate, SATA in another.  In cases where
there's only a single type of disk, we'd have 60 15K disks in one
aggregate and 60 in the other (assigned to each controller respectively).
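
For concreteness, that layout maps to commands along these lines in
7-Mode (aggregate names invented; the -T disk-type selector may not
exist on older releases):

  fas1> aggr create aggr_sas -T SAS 60     # 60 x 15K SAS disks on one controller
  fas2> aggr create aggr_sata -T SATA 60   # 60 x SATA disks on the other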

The idea here is that more spindles give us the most performance.
However, some applications/workloads are more important than others,
and some can be "bullies" impacting the important stuff.  Ideally we'd
try to keep our OLTP random workloads on one filer and heavy
sequential workloads on another (maybe dedicated).

We've also been discussing creating multiple, smaller aggregates that
we then assign to specific workloads, guaranteeing those spindles for
those workloads.  Lower possible maximum performance, but better
protection against "bullies"[1].

I also know ONTAP has some I/O QoS options.  I'm less inclined to go
that direction, however.
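
The QoS option here would presumably be FlexShare (the 7-Mode
"priority" command). A minimal sketch, with invented volume names:

  filer> priority on
  filer> priority set volume vol_oltp level=VeryHigh
  filer> priority set volume vol_bulk level=Low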

Our workloads tend to be ESX VMs using the filers as NFS datastores.

We have the usual budgetary / purchasing-cycle constraints, so we're
trying to minimize pain for as long as possible until we can add resources.

How do folks out there handle this?

Thanks,
Ray

[1] The controller is obviously still shared.

 


_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters




--
---
Gustatus Similis Pullus