We're revisiting how we set up our aggregates and I want to see how others out there do it. Specifically, what strategies do you use for ensuring certain key applications or environments get the performance they need in a shared environment?
Typically we'll create large aggregates based on a homogeneous disk type: 15K SAS disks in one aggregate, SATA in another. In some cases when it's only a single type of disk, we'd have 60 15K disks in one aggregate and 60 in the other (assigned to each controller respectively).
The idea here is that more spindles give us more performance. However, some applications/workloads are more important than others, and some can be "bullies" impacting the important stuff. Ideally we'd try to keep our OLTP random workloads on one filer and heavy sequential workloads on another (maybe dedicated).
We've also been discussing creating multiple, smaller aggregates that we then assign to specific workloads, guaranteeing those spindles for those workloads. Lower possible maximum performance, but better protection against "bullies"[1].
I also know ONTAP has some I/O QoS options. I'm less inclined to go that direction however.
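(In 7-Mode the knob for this is FlexShare; a rough sketch, with made-up volume names:

    filer> priority on
    filer> priority set volume vol_oltp level=VeryHigh
    filer> priority set volume vol_scratch level=Low
    filer> priority show volume

It only takes effect when the controller is actually under contention.)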
Our workloads tend to be ESX VMs using the filers as NFS datastores.
We have the usual budgetary / purchasing-cycle constraints, so we're trying to minimize pain for as long as possible until we can add resources.
How do folks out there handle this?
Thanks, Ray
[1] Controller obviously still is shared.
Something I like to do to squeak out max performance is this:
Between two heads, say I have 4 shelves. First and foremost, I try to make as many loops (FC) / stacks (SAS) as possible. Second, each head will own half the disks in each shelf. So, from maintenance mode on a new filer, something along the lines of:
remove all disk ownership from all heads; take ownership of half the disks on head A
(the logic is smart enough now that it will almost always grab roughly the same number of disks from each shelf)
take ownership of the other half of the disks on head B
The only downside I see to this approach is that when a disk fails, the admin must run a "disk assign" command to tell it which filer it belongs to, since no filer owns all the disks on any given shelf, which defeats auto-assign based on ownership.
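A minimal sketch of that sequence in 7-Mode style (disk names, counts, and exact options are examples only and vary by release):

    *> disk show -n                        # list disks that have no owner
    *> disk remove_ownership <disk_name>   # repeat per disk to clear existing ownership
    *> disk assign -n 48                   # on head A: claim half the disks; the pick spreads across shelves
    *> disk assign -n 48                   # then on head B: claim the remaining unowned disks

and after a failure the replacement has to be claimed by hand, e.g. "disk assign 0a.23 -o headA" (example disk and owner names).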
So far, this method has been very good to me.
--tmac
Tim McCarthy, Principal Consultant
Hey
Any reason why you choose to do it that way?
I can't really see what benefit you're getting from splitting shelf loads between controllers.
Cheers
Shane Bradley, Senior Technical Consultant
Fujitsu New Zealand Limited
Level 12, Fujitsu Tower, 141 The Terrace, Wellington, New Zealand 6011
T +64 4 890 9605  M +64 21 229 1563  F +64 4 495 0730
shane.bradley@nz.fujitsu.com  http://nz.fujitsu.com
There is no "shelf load", just the number of disks in use for the workload, irrespective of where they are.
Shelves just hold disks off the floor and provide power. :)
Depending on workload, you can max out a single path (I have 4-Gbit DS14s and 3-Gbit DS4243s). Spreading the disks across more loops gives you the potential for more bandwidth on the bigger heads (like my FAS6280s).
I routinely push 4-5 gigabytes per second out of my clustered ONTAP 4-node system.
--tmac
Hi Tim,
Can you please provide some graphs or statistics on that? How are you measuring that? And you understand that the DS4243s (i.e. SAS) aren't simply 3 Gbit, right? I'll pull some TR numbers for you so you can read up later.
Regards,
Andrew
Further to Andrew's reply, SAS shelves have 4 channels at 3 Gb/s each, and the newer shelves run 6 Gb/s per channel. That's up to 24 Gb/s of bandwidth available on each path!
Of course you would dual-path to each controller, so that's a lot of bandwidth available! Really, the SAS connectivity should never be the bottleneck (with spinning disk).
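Back-of-the-envelope (the specific shelf models for "the newer shelves" are my assumption):

    DS4243 (3 Gb/s SAS):           4 lanes x 3 Gb/s = 12 Gb/s per wide port, roughly 1.2 GB/s after 8b/10b encoding
    DS2246/DS4246 (6 Gb/s SAS):    4 lanes x 6 Gb/s = 24 Gb/s per wide port, roughly 2.4 GB/s after 8b/10b encoding

and that's per port, before you count the second path into the stack.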
-Jonathon
Yeah, unfortunately no one ever told or showed me about the four channels. I had suspicions it was a lot more than 3 Gb; now I am aware and know.
I cannot provide any graphs due to the environment that I work in. Sorry.
--tmac
Yep.
The idea is to get as many disks on as many shelves on as many loops/stacks as possible.
If I have four loops/stacks with one shelf per loop/stack and each head "owned" two shelves, then each head would only utilize 4 paths (two paths to each of the two loops/stacks it owns). If instead I give roughly half of each shelf's disks to each head, each shelf on its own loop/stack, now I have 8 paths to my disks instead of four, potentially doubling my bandwidth to the disks.
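Counting it out for those four stacks (assuming multipath HA cabling, i.e. two paths from each head to any stack holding its disks):

    each head owns two whole shelves:      2 stacks in use x 2 paths = 4 paths per head
    each head owns half of every shelf:    4 stacks in use x 2 paths = 8 paths per head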
Does that make sense?
--tmac