Scott> no symlinks, automounter sub-mount maps. I have 600+ qtrees in
Scott> 3 sites with NFS caches in between, in a single name space
Scott> using a gory mesh of automount maps and automounter variables
Scott> defined on clients.
We've done that, but when a single project outgrows a volume, we start needing to shuffle data... it's a pain.
If I could have more volumes, I'd use them over qtrees, but then when an Aggr fills I need to move volumes. It's a pain.
That's why I like the idea of the Acopia product. I wish NetApp would recognize the need and come out with their own storage virtualization product to put in front of backend NetApps. Would be really nice.
'Course, I'm a mostly NFS-only shop.
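For list archaeologists, the sort of variable-driven map Scott is describing looks roughly like this; the map, variable, and host names here are invented for illustration:

    # hypothetical indirect map, e.g. /etc/auto.projects
    # each client defines SITE (automount -DSITE=bos, or in its autofs
    # config); "*" is the wildcard key and "&" substitutes it back in
    *    ${SITE}-filer:/vol/projects/&

    # so on a client at the "bos" site, /projects/widget mounts
    #     bos-filer:/vol/projects/widget

The real mesh, with NFS caches between the sites, obviously has more moving parts.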
Scott> this is worth doing if the data set is A-sis friendly.
It probably is, actually, but it's hard to know.
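One way to take the guesswork out, if you can spare the space: copy a representative chunk of the data to a scratch volume and let A-sis loose on it. These are the 7G commands as I remember them (volume and aggregate names made up; double-check against your ONTAP release, and you need the a_sis license):

    vol create asistest aggr0 500g    # scratch volume for the sample
    sis on /vol/asistest              # enable A-sis on the volume
    sis start -s /vol/asistest        # -s scans existing blocks too
    sis status /vol/asistest          # poll until the run goes idle
    df -s /vol/asistest               # reports blocks saved and %saved

If %saved comes back in the single digits, it probably isn't worth the scan overhead.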
Scott> Future OnTap versions are going to make A-sis an aggregate
Scott> behavior, not a volume one, which removes the volume size
Scott> limit.
Scott> and reinforces the silly 16T aggregate limit ;-)
Yeah, that's another silly limit, especially with today's RAID sets and RAID-DP. They should just let it scale and scale and scale.
Scott> really a problem with 1 TB disks; 16 drives, one RAID-DP group
Scott> per aggregate. stinky performance.
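To put rough numbers on Scott's point (raw marketing capacities, ignoring right-sizing, spares, and the WAFL reserve):

    16 TB aggregate limit / 1 TB drives  =  16 drives, tops
    one 16-disk RAID-DP group            =  2 parity + 14 data disks
                                         => ~14 TB usable on 14 spindles

Fourteen data spindles behind an entire aggregate is not many heads to spread the I/O over.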
I personally *like* one big volume with bunches of qtrees. What I'd really like is qtrees at multiple levels, or raising the number of volumes supported in an aggregate. That would help.
And speeding up SnapVault. And a pony... :]
John
I really wish a product like Acopia could work with live Oracle data over NFS as well. We would purchase this in a heartbeat.
--
Daniel Leeds
Manager, Storage Operations
Edmunds, Inc.
1620 26th Street, Suite 400 South
Santa Monica, CA 90404
310-309-4999 desk
310-430-0536 cell
Aren't you basically describing ONTAP GX (on the fly volume moves, 1000 volumes per pair of filers, up to 24 filers using a single namespace which can be used to stitch volumes together etc)?
"Darren" == Darren Sykes Darren.Sykes@csr.com writes:
Darren> Aren't you basically describing ONTAP GX (on the fly volume
Darren> moves, 1000 volumes per pair of filers, up to 24 filers using
Darren> a single namespace which can be used to stitch volumes
Darren> together etc)?
But GX doesn't support SnapVault at all, and we (maybe?) need it for our DR work across our WAN. Maybe if we consolidated down to just a couple of sites and mirrored them across the country, GX would work. I doubt it though.
John
That's pretty much what we do. However, the GX training material does suggest that a new replication engine is being developed that will allow interoperability with SnapVault and 7G systems in the future...