I started testing sis on one of our 3040s last weekend. Sorry if
there's more extensive information on NOW, but the things I've found so
far were fairly rudimentary. Some impressions and questions:
I found out quickly that my 5T test volume was too big. I looked up the
limits because I had no idea there were volume size limits: 3T for a
3040, hmm. I have an existing volume that holds 2.5T, so that is pushing
it, and while I would like to split it up, it won't happen overnight.
I'm not desperate to use sis on it, although I am almost done copying it
and so far have realized 26% savings from dedupe in testing. I wonder
why the max volume size scales with the system model; I haven't thought
of a good reason for this yet, since you could easily have lots of 3T
volumes.
I was also wondering: if sis shrinks, say, 2.8T down to 2.0T, is the
2.0T what counts against the sis volume size limit? Even so, would I run
into trouble doing a full volume restore where the full data set is over
3.0T but would have shrunk below it? Once I write over 3T into a volume,
it sounds like I cannot run sis on it at all. I'm not too worried about
having to do a full restore of a volume from tape, but I don't want to
limit my future options if the short-term payoff isn't worth it. I'm
pretty sure I'll use it on all or most of my other volumes, since they
are much smaller.
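I don't know which way the limit is counted, but if it did apply to
post-dedupe physical usage, the back-of-the-envelope arithmetic is easy
(a sketch only; the 26% is just the savings I saw on my test copy, and
whether the limit is checked post-dedupe is exactly what I'm asking):

```shell
# If dedupe saves fraction s, physical usage = logical * (1 - s),
# so a 3T physical limit would admit 3 / (1 - s) TB of logical data.
# (Hypothetical: assumes the limit applies after dedupe -- it may
# well be the raw volume size instead.)
limit=3.0      # TB, the 3040 sis volume limit
savings=0.26   # fraction saved, from my test copy
awk -v l="$limit" -v s="$savings" \
    'BEGIN { printf "%.2f TB logical would fit\n", l / (1 - s) }'
# -> 4.05 TB logical would fit
```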
I noticed during my data copy that the snapshot reserve was blowing out
big time. Almost any savings from sis seemed to go straight into
snapshots: 100, 200, 400% full, and it only started trimming itself
back after a while. Since this was a non-production copy I didn't care,
and I deleted those snapshots this morning, but I thought it was rather
odd, and I couldn't see what the snapshots had to preserve, since if the
data was unmodified at the filesystem level, the snapshot should contain
the same data. While not a problem for an initial copy, I would expect
the same thing to happen when sis runs in production, and although it
wouldn't be as large and would eventually flush out, why am I expending
space to store duplicate copies of data that I asked it to deduplicate?
:) Maybe it's just that WAFL identifies them as "changed blocks" and
insists on storing them in the snapshot.
A 1-gig file of zeros still takes several tens of megabytes after sis.
Hmm. :) And roughly twice as much for a second copy of it.
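For anyone who wants to repeat the zeros test, this is roughly how I
made the file (the filename is made up; write it onto a sis-enabled
volume and run sis there to see what physical usage is left over):

```shell
# Write 1 GiB of zeros. The logical size stays 1 GiB regardless of
# dedupe; what sis should collapse is the physical block usage.
dd if=/dev/zero of=zeros.bin bs=1M count=1024
wc -c zeros.bin   # 1073741824 bytes logical
```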