Hi all,
IHAC who is searching for a product that can provide HSM/ILM functionality for a massive NAS environment.
This product can be a hardware appliance, software, or a combination of both that provides a totally transparent solution for our users (i.e. regardless of where the data physically lives, users can keep using their old logical paths to access the same data) with tiered storage.
So far I have only identified two products: a hardware approach (Acopia) and a software approach (CommVault). Both have their pros and cons, but the biggest challenges for both are that they cannot manage snapshots and do not go down to the qtree level.
Thanks in advance.
Regards, Babar Haq
Babar> IHAC who is searching for a product that can provide HSM/ILM
Babar> functionality for a massive NAS environment.
It's not an easy solution space at all, especially once you start thinking about backups and restores. We've been down this route with a couple of vendors/solutions and found limitations in all of them.
In our case, we're purely interested in NFS clients and servers; CIFS and iSCSI are a minor part of our operation.
Babar> This product can be a hardware appliance, software, or a
Babar> combination of both that provides a totally transparent
Babar> solution for our users (i.e. regardless of where the data
Babar> physically lives, users can keep using their old logical
Babar> paths to access the same data) with tiered storage.
These requirements are going to be *tough* to reconcile. Having a single mount point on client systems, whose data can be spread across multiple backend systems and dynamically moved around, is not simple to accomplish.
Babar> So far I have only identified two products: a hardware approach
Babar> (Acopia) and a software approach (CommVault). Both have their
Babar> pros and cons, but the biggest challenges for both are that they
Babar> cannot manage snapshots and do not go down to the qtree level.
We've used both. Acopia is a *neat* idea, but the problem with it is backups.
You don't want to back up through the Acopia, since it's a big bottleneck, so you back up the filers directly. But then you need to manage and track *where* your files are really stored, and that becomes a nightmare to deal with.
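To make the tracking problem concrete, here is a minimal, hypothetical sketch (nothing Acopia ships; the filer names, mount points, and the /users prefix are all invented) of the kind of index you end up maintaining yourself: a map from the logical path users see to the backend filer that actually holds the file, so restores can be pointed at the right box.

    #!/usr/bin/env python
    # Hypothetical sketch: build a logical-path -> backend-filer index
    # so direct-to-filer backups can be matched back to the virtualized
    # namespace. Mount points and filer names are invented.
    import csv
    import os

    # Where each backend filer's export is mounted on the backup host.
    BACKENDS = {
        "filer1": "/backup-mnt/filer1/vol/users",
        "filer2": "/backup-mnt/filer2/vol/users",
    }

    def build_index(index_file="file_locations.csv"):
        with open(index_file, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["logical_path", "backend", "size_bytes"])
            for backend, mount in BACKENDS.items():
                for root, _dirs, files in os.walk(mount):
                    for name in files:
                        full = os.path.join(root, name)
                        # The logical path is what clients see through the
                        # virtualization layer; here we fake it by swapping
                        # the per-filer mount prefix for the shared one.
                        logical = "/users" + full[len(mount):]
                        writer.writerow(
                            [logical, backend, os.path.getsize(full)])

    if __name__ == "__main__":
        build_index()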
We also ran into some problems with Acopia and filesystems with large numbers of files (on the order of 10+ million), but those bugs were fixed relatively quickly, and we haven't had major problems since then.
The other issue is .snapshot/ directories, since those are so convenient for users to access to do their own restores. We ended up exporting snapshots to another mount path users could access and giving them directions on how to reach snapshots via the alternative path. Not ideal, and yet another thing to manage manually.
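What this amounts to in practice is a symlink farm: publish each volume's .snapshot directory under a separate, documented tree. A rough, hypothetical sketch (all paths invented; it assumes the snapshot directories are visible on the host's NFS mounts, which depends on the volume's nosnapdir setting):

    #!/usr/bin/env python
    # Hypothetical sketch: expose each volume's .snapshot directory
    # under an alternate tree (/snapshots/<vol>) so users have a stable
    # path for self-service restores. All paths are invented.
    import os

    VOLUME_MOUNTS = {
        "home": "/mnt/home",
        "proj": "/mnt/proj",
    }
    PUBLISH_ROOT = "/snapshots"

    os.makedirs(PUBLISH_ROOT, exist_ok=True)
    for vol, mount in VOLUME_MOUNTS.items():
        snapdir = os.path.join(mount, ".snapshot")
        link = os.path.join(PUBLISH_ROOT, vol)
        if os.path.isdir(snapdir) and not os.path.islink(link):
            # e.g. /snapshots/home -> /mnt/home/.snapshot
            os.symlink(snapdir, link)
            print("published", link, "->", snapdir)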
We've now also migrated to CommVault as our backup software, partly because Legato was expensive to bring current, especially with NDMP licensing, etc. We have also been intrigued by CommVault's integrated HSM features.
Note that CV requires CIFS licenses and a dedicated Windows box (the MediaAgent), which handles all the scanning of the filesystem(s) for files to migrate from one tier to another. So if you're an NFS shop, you'll find that you now need CIFS licenses from NetApp as well, which can be a hidden gotcha if you're not careful.
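The scan itself is conceptually simple: walk the filesystem and flag files whose age and size cross policy thresholds. A rough, hypothetical sketch of that idea (this is not CommVault's actual policy engine; the scan root and both thresholds are invented):

    #!/usr/bin/env python
    # Hypothetical sketch of an HSM candidate scan: pick files that
    # haven't been accessed in N days and exceed a size threshold.
    # The scan root and both thresholds are invented.
    import os
    import time

    SCAN_ROOT = "/mnt/users"        # mount of the volume being scanned
    MIN_AGE_DAYS = 90               # untouched for 90+ days
    MIN_SIZE = 10 * 1024 ** 2       # and at least 10 MB

    def migration_candidates(root=SCAN_ROOT):
        cutoff = time.time() - MIN_AGE_DAYS * 86400
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                if st.st_atime < cutoff and st.st_size >= MIN_SIZE:
                    yield path, st.st_size

    if __name__ == "__main__":
        for path, size in migration_candidates():
            print("%12d  %s" % (size, path))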
In our preliminary testing, the HSM aspect has worked pretty well. We can stage files to disk/tape, they get recalled automatically and life is good. We can even do a backup of the stub file, move it to another filer/volume, and have access just work. We're still in the initial deployment phase, but we're planning on rolling this out to all our sites.
This is all using CV 7.0; 8.0 is now out, and we might upgrade to it before we roll out HSM, though that can be tricky.
For us, the integration of HSM and backups is the *key* thing. Having a single mount point for user data that doesn't change isn't as important in the grand scheme of things. So the Acopia handles the transparent migration of data between backend storage nicely, but impacts backups and .snapshot access. Using CV, we get integrated backups, HSM, and regular .snapshots, but no transparent shuffling of data.
To me, the big issue I want to see addressed is the size of NetApp aggregates. 16TB aggregates are *stupid*, especially since they have RAID groups.
Some way to span aggregates with volumes, or to move volumes live between aggregates, would be a godsend, but just bumping the limit to 32TB would be a win too.
Hope this helps.

John

--
John Stoffel - Senior Staff Systems Administrator - System LSI Group
Toshiba America Electronic Components, Inc. - http://www.toshiba.com/taec
john.stoffel@taec.toshiba.com - 508-486-1087
Agreed. The 16TB aggregate limit severely constrains us as well. For several large image and video workloads we have to create multiple aggregates and shares, which is silly.
--
Daniel Leeds
Manager, Storage Operations
Edmunds.com
310.309.4999
dleeds@edmunds.com

On Tuesday, June 02, 2009, James Beal (james_@catbus.co.uk) wrote:
> To me, the big issue I want to see addressed is the size of NetApp
> aggregates. 16TB aggregates are *stupid*, especially since they have
> RAID groups.
This is a serious issue for us; one group in particular will not consider any filesystem smaller than 50TB.
Jeremy Page replied:

I thought they were planning to move to 100TB volumes in the very near future (7.4?)
On 6/2/09 1:34 PM, Page, Jeremy wrote:
> I thought they were planning to move to 100TB volumes in the very near future (7.4?)
DOT 8.0 is supposedly bringing WAFL improvements to allow for 100TB aggrs/vols (rumored to be dropping this summer).
I also hear that jumping from any 7.x variant to 8.0 will be a disruptive upgrade.
Cheers.
--
Nick Silkey
I would really, really, really like to see a utility in DOT 8.x where one aggregate can "assimilate" another aggregate. In other words I want to combine aggrA and aggrB into a new aggrA that contains all of the raid groups (and volumes) of the two aggregates, with aggrB disappearing.
I don't see why this would be difficult to do. Just move the raid groups (and hence the volumes) from aggrB to aggrA. I understand that there would need to be restrictions (all RGs having the same RAID type, no mixing of FC and SATA disks, etc.), anything to avoid massive data copies. It would even be fine if this had to be done from a maintenance boot.
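Purely to illustrate, here is a sketch of the precondition check such an "assimilate" utility might run. Nothing like this exists in Data ONTAP; the Aggregate and RaidGroup structures are invented just to encode the restrictions above (uniform RAID type, no mixed disk types):

    #!/usr/bin/env python
    # Hypothetical sketch of the safety check a fictional
    # "aggr assimilate" command might perform before adopting
    # aggrB's raid groups into aggrA. Structures are invented.
    from dataclasses import dataclass

    @dataclass
    class RaidGroup:
        raid_type: str   # e.g. "raid_dp"
        disk_type: str   # e.g. "SATA" or "FC"
        disks: int

    @dataclass
    class Aggregate:
        name: str
        raid_groups: list

    def can_assimilate(dst, src):
        """True if src's raid groups could be adopted by dst without
        copying data: uniform RAID type and disk type throughout."""
        groups = dst.raid_groups + src.raid_groups
        same_raid = len({rg.raid_type for rg in groups}) == 1
        same_disk = len({rg.disk_type for rg in groups}) == 1
        return same_raid and same_disk

    aggrA = Aggregate("aggrA", [RaidGroup("raid_dp", "SATA", 16)])
    aggrB = Aggregate("aggrB", [RaidGroup("raid_dp", "SATA", 16)])
    print(can_assimilate(aggrA, aggrB))   # True: merge would be allowed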
Right now I have 11 identical 10T aggregates on my snapmirror destination filer (each consists of one raid-dp RG of 16 1TB SATA disks; I would have built slightly smaller RGs, but the 16T limit made that too wasteful). It's a pain to figure out where to snapmirror a new source volume, since everyone seems to want at least 5T volumes now. I occasionally need to reshuffle to make room. I would LOVE to be able to combine all these "little" 10T aggrs into one or two bigger aggrs.
I also hope they set the limit way bigger than 100T, which seems like just kicking the can down the road to me.
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
Greetings,

It seems someone is hearing more rumors about DOT 8.0 than I am. I assume the feature you are speaking of will be very similar to the "striped volume" feature on GX, and not a true 100T volume.
Don't get your hopes up too high. The bottleneck ends up being the MDV, essentially another volume that keeps the metadata for the striped volumes. I also don't believe GX has the ability to snapmirror (or volume mirror) a striped volume.
Did you hear this information from a sales person?
Regards, Douglas Siggins
Stephen C. Losen replied:

> Did you hear this information from a sales person?
No, I'm just responding to the earlier toasters thread.
I'm not talking about striped volumes -- just regular aggregates, volumes, and files, all of which are limited to 16T.
The 16T limit comes from using a 32-bit integer to store block ID numbers. In WAFL a block is 4K, so 4K * 2^32 = 16T.
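The arithmetic is easy to sanity-check, and extending the same formula to 64-bit block IDs shows why the ID width would stop being the constraint:

    # Addressable capacity = block size * number of addressable block IDs.
    BLOCK = 4096                    # WAFL block size, 4 KiB
    TIB = 2 ** 40
    print(BLOCK * 2 ** 32 / TIB)    # 16.0 -> the 16 TiB limit
    print(BLOCK * 2 ** 64 / TIB)    # ~6.9e10 TiB with 64-bit block IDs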
I think an engineer once told me that they now store block ID numbers in 64-bit integers, but have not raised the limit because they need to redesign any algorithms that do not scale gracefully. Since DOT 8.x is a major upgrade from 7.x (where aggregates, flex vols, flex clones, etc., were first introduced), I am hoping that the 16T limit will be increased significantly.
Overall I am very pleased with NetApp, but this 16T limit has got to be costing them business, since several of their competitors (Isilon, BlueArc) allow single volumes of over a petabyte.
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
I'd expect you're right; true single 100TB aggregates sound unlikely.
However, don't write off striped volumes due to MDV bottlenecks. That issue is likely to go away in the future.
As for snapmirroring a striped volume, you'd imagine NetApp would be working on fixing that in an ONTAP 8 release too.
Darren