I took a look at StoreWiz. It sure looks promising. How transparent is it, though? Do the NFS and Windows clients know that the StoreWiz appliance exists (i.e., is it a proxy that clients have to be reconfigured to use), or does it do some sort of transparent packet switching? How does this interact with your snapshots? Frequently changing files may take up more space, since they are captured in snapshots.
Does this work well for random-access applications? I will need to test this thing like hell; it's either a promotion for cutting storage costs, or I lose my job if we corrupt data ;-)
Thanks for the pointers guys.
On 2/26/07, Skottie Miller Scott.Miller@dreamworks.com wrote:
Wilbur Castro wrote:
Hi toasters,
We have a couple of hundred TB of heterogeneous storage (some from NetApp), and our storage grows close to 60% year over year. We are looking at alternatives for managing this data growth. Compression is one of the techniques we are considering for our nearline and (possibly) primary storage. Our applications cannot be changed to do their own compression, so it boils down to doing it in the storage layer or through an external device. Also, we'd like compression to happen transparently, with no performance impact. Deduplication technology from storage vendors would help, but it is not a heterogeneous solution.
I am not aware of any compression technology from netapp. Are you folks aware of any solutions? Would love to hear your experience with those or other alternative ways you deal with the storage growth problem while managing costs.
Look at StoreWiz. http://www.storewiz.com
In-line, transparent boxes that sit between your filers and their switch ports.
I'm evaluating them now, and so far, so good.
They have a software app that emulates their hardware, so you can get a sense of the compression ratio you would see on your data set.
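StoreWiz's estimator app is their own tool, but you can get a rough first-order estimate yourself with plain zlib before scheduling an eval. This sketch is mine, not StoreWiz's tooling, and a generic zlib pass will not match their appliance's algorithm exactly; it just tells you whether your data is compressible at all:

```python
import zlib

def estimate_compression_ratio(paths, chunk_size=1 << 20):
    """Compress each file in 1 MiB chunks with zlib and return the
    overall compressed/original size ratio for the sample (lower is
    better; ~1.0 means the data is effectively incompressible)."""
    original = compressed = 0
    for path in paths:
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                original += len(chunk)
                compressed += len(zlib.compress(chunk))
    return compressed / original if original else 1.0
```

Running it over a representative sample of your nearline data (rather than the whole 100 TB) should be enough to decide whether a compression appliance is worth testing.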
Also look at Acopia (http://www.acopia.com); they make an "NFS switch" that lets you do transparent tiering and data migration without impact to the applications. Consider a deployment where the "hot" few TB of data live on the fastest/costliest filers and the rest sits on lower-cost second-tier storage. Put the Acopia logically between the filers and the apps, and you get a lot of options, and the apps don't even know.
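Acopia's policy engine is proprietary, but the kind of placement decision it automates can be illustrated with a simple age-based classifier. This is a hypothetical sketch of my own (the `classify_by_age` name and the 30-day threshold are illustrative choices, not Acopia behavior): files touched recently stay on the fast tier, everything else is a candidate for migration to second-tier storage.

```python
import os
import time

def classify_by_age(root, hot_days=30):
    """Walk a directory tree and split files into 'hot' (modified
    within hot_days) and 'cold' lists; cold files would be candidates
    for migration to cheaper second-tier storage."""
    cutoff = time.time() - hot_days * 86400
    hot, cold = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            (hot if os.path.getmtime(path) >= cutoff else cold).append(path)
    return hot, cold
```

The point of a device like Acopia is that this classification and the resulting data movement happen behind a stable NFS namespace, so clients never see paths change.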
We have one in place for some long-term exposure testing, and are considering more for some future roll-outs.
-skottie
Thx, Wilbur
--
Scott Miller skottie@anim.DreamWorks.com