Hi All,
I'm looking at deploying a cDOT cluster based on the "FAS8040 HA" from scratch (to serve both NAS and SAN environments). The current plan is to start with 2 HA pairs, i.e. 4 nodes, each node with:
3x DS4246 (24 * 4TB SATA) and 5x DS2246 (24 * 1.2TB SAS)
and then continue to grow with another HA pair, i.e. 2 more nodes. I'd also like to plan for a 4th HA pair for the purposes of (possible) HW upgrades.
For each HA pair I plan to allocate two racks (standard 42U racks).
I wonder if anyone has had a similar deployment and can share their rack layout design. The "FAS8040 HA" is a single-chassis, two-controller appliance, so it has to sit in one rack. How should I distribute the shelves, given that I plan to add both SAS and SATA shelves later on? Also, what would be the best place to put the 10Gb cluster interconnect switches?
Cheers, Vladimir
There are limitations on how many nodes you can have in a cluster, depending on what you do with it. From memory, serving SAN limits you to 8 nodes, whereas doing NAS only means you can go to 24 nodes. It’s worth thinking about: if the deployment is successful, you may end up wanting to grow beyond those limitations, which could be mildly painful :) It’s also possible that this has changed; I’m not too familiar with the 8000 series kit, and it may have greater expansion possibilities than the 6000 series.
Our cluster switches are located in the central comms rooms, and we patch back via the core, so that location is not an issue, and it means we can put new kit anywhere, and just patch it into the cluster via the switches.
There may be operational reasons why you couldn’t do this, or possibly ecumenical arguments, but it works well for us :)
Hope it all goes well,
Mark Flint mf1@sanger.ac.uk
On 2 Jul 2014, at 11:26, Momonth momonth@gmail.com wrote:
Just a quick note: Mark is correct, the 8040 (just like the rest of the 8000 series) is limited to 8 nodes if you're using SAN and 24 nodes if you're using only NAS protocols. (The 2xxx and previous-generation models don't all scale to 24 nodes for NAS.)
But on another note: Did you mean SAS + SATA for every *node*? Why not use (in an HA pair) one node for SAS and one node for SATA? You would get (slightly) better performance compared to using 'mixed nodes' and wouldn't lose flexibility as far as I can see. And then you could 'fill up' one rack with SAS and the other with SATA... (not that you couldn't do that anyway, but the cabling and sorting out which shelf belongs to which node might be easier)
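For what it's worth, a type-per-node split maps cleanly onto disk ownership in cDOT. A rough sketch in the clustershell, assuming hypothetical node names, stack.shelf.bay disk naming with stack 1 carrying the SAS shelves and stack 2 the SATA shelves, and wildcard syntax from memory (double-check against your ONTAP release):

```
::> storage disk assign -disk 1.*.* -owner cl01-sas-node
::> storage disk assign -disk 2.*.* -owner cl01-sata-node
::> storage disk show -owner cl01-sas-node
```

The last command is just to verify that ownership landed where you expected before you start building aggregates.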
Just my 2c
Sebastian
On 7/2/2014 12:40 PM, Mark Flint wrote:
For future growth, I would look at something like this:
Place the head in the middle of Rack 1 and leave 6U empty (use rack filler panels to block it off).
Place disks above and below the head and the empty space. You could number the shelves in some order that means something and attach them to both heads.
I have done/seen this different ways, and no one way is better than another. E.g. node 1 might own all odd shelves and node 2 all even shelves, or node 1 might own stack 10 and node 2 might own stack 20.
Lots of flexibility to do whatever you like.
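Whichever convention you pick, it comes down to which disks you assign to which owner. A rough clustershell sketch of the stack-based variant, assuming hypothetical stack IDs 10 and 20, node names node1/node2, and wildcard syntax from memory (verify on your system):

```
::> storage disk assign -disk 10.*.* -owner node1
::> storage disk assign -disk 20.*.* -owner node2
::> storage disk show -fields disk,owner
```

The odd/even-shelf scheme is the same idea with finer-grained patterns per shelf instead of per stack.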
--tmac
*Tim McCarthy* *Principal Consultant*
Clustered ONTAP NCDA ID: XK7R3GEKC1QQ2LVD (expires 08 November 2014)
RHCE6 110-107-141 (current until Aug 02, 2016): https://www.redhat.com/wapps/training/certification/verify.html?certNumber=110-107-141&isSearch=False&verify=Verify
Clustered ONTAP NCSIE ID: C14QPHE21FR4YWD4 (expires 08 November 2014)
On Wed, Jul 2, 2014 at 5:33 PM, Sebastian Goetze spgoetze@gmail.com wrote:
Hi Tim,
Thanks for your input, it gave me some inspiration. I came up with something like this:
Text in bold is what I'm going to rack / cable in the beginning. Rack_0{4,5} are reserved for the 3rd HA pair.
I haven't decided how to distribute the shelves across controllers, but it looks like rack-based affinity is OK-ish, and each controller is going to get cabled with 10x DS2246 + 5x DS4246 (or a 6/4 split).
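As a quick sanity check on rack space: a DS4246 is 4U and a DS2246 is 2U, and (if I have the figure right) the FAS8040 HA chassis is 6U. A rough tally per HA pair, leaving out tmac's suggested 6U reserved gap:

```python
# Rack-unit tally for one FAS8040 HA pair (shelf counts are per pair).
CHASSIS_U = 6   # FAS8040 HA: both controllers share one chassis (assumed 6U)
DS4246_U = 4    # 4U per DS4246 shelf
DS2246_U = 2    # 2U per DS2246 shelf

def pair_units(ds4246: int, ds2246: int) -> int:
    """Total rack units for one HA pair: chassis plus shelves."""
    return CHASSIS_U + ds4246 * DS4246_U + ds2246 * DS2246_U

# Initial build: each node gets 3x DS4246 + 5x DS2246, so 6 + 10 per pair.
initial = pair_units(6, 10)
# Full build: each controller cabled with 10x DS2246 + 5x DS4246.
full = pair_units(10, 20)

print(initial, full)        # 50 86
print(full <= 2 * 42)       # False: 86U does not fit in two 42U racks
```

If the 6U chassis figure is right, the fully built-out pair overshoots two 42U racks by 2U before any reserved growth gap, which might be worth double-checking before locking in the rack allocation.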
Cheers, Vladimir
On Wed, Jul 2, 2014 at 11:45 PM, tmac tmacmd@gmail.com wrote:
For future growth, I would look at something like this:
Place the Head in the middle of Rack 1. Leave 6U empty (use Rack Filler covers to block).
Place disks aboave and below the head and empty space. You could number the shelves in some order that means something and attach to both heads.
I have done/seen different ways, and no one way is any better. I.E. node 1 might own all odd shelves, node 2 might own all even shelves, or node 1 might own stack 10 and node 2 might own stack 20.
Lots of flexibility to do whatever you like.
--tmac
Hi,
On Wed, Jul 2, 2014 at 11:33 PM, Sebastian Goetze spgoetze@gmail.com wrote:
Just a quick note: Mark is correct, the 8040 (just like the rest of the 8000 series) is limited to 8 nodes if you're using SAN and 24 nodes if you're using only NAS protocols. (The 2xxx and previous generation models don't all scale to 24 nodes NAS)
Yes, I'm aware of the 8-node limit in SAN environments, and it's OK for my environment.
But on another note: Did you mean SAS + SATA for every *node*?
Yes, correct. I'm going to build two separate domains (separate SAS HBA ports): one for SAS and one for SATA.
Why not use (in a HA pair) one node for SAS and one node for SATA?
My experience (with the workloads I have) is that this approach leads to unbalanced controllers in terms of CPU usage, i.e. a controller that handles SAS traffic becomes ~50% CPU busy way faster than a controller with a SATA-only workload on it.