I just want to point out a few things based on my experience and feedback from other people.
Pure Storage - block only. You do not get any bells and whistles.
XIO – everyone who owns one has something bad to say about it.
Tintri – NFS only, each node is a separate unit, VM-aware. It has a place in certain markets.
SolidFire – amazing potential. You can have disk failures or a complete node failure and stay up and running, but again, block only and no bells and whistles.
NetApp AFF – all protocols supported, the entire SnapManager suite (Exchange, SQL and SharePoint are absolutely outstanding), and with their 4:1 guarantee they will give you more shelves if they don't hit that mark. They also give you controller upgrades after 3 years.
In my experience the "bake-offs" have been between XIO, Pure Storage and NetApp on critical EMR systems like Epic and McKesson. NetApp's AFF has always been the one that came out on top as far as performance. EMC has been the one willing to give their product away just to keep their footprint.
With all that being said, you need to figure out what fits your needs. I have customers that want to boot UCS blades from SAN, run NFS datastores, and migrate their Windows file servers to native CIFS shares on their storage system. The only storage company that can do all of that is NetApp (I won't bring up Isilon because its block size is terrible with small files).
So, if you are only worried about the 4:1 efficiencies, who cares? They will give you more disks and you still get the Cadillac.
From: toasters-bounces@teaparty.net [mailto:toasters-bounces@teaparty.net] On Behalf Of tmac
Sent: Friday, September 23, 2016 10:34 AM
To: jordan slingerland
Cc: toasters@teaparty.net
Subject: Re: AFF and 4:1 guaranteed efficiency
I vaguely remember running a test on an ONTAP 9 simulator.
What I found was that if the destination aggregate had compaction enabled, then a SnapMirror into that aggregate would also have compacted data.
This DOES NOT work with XDP (SnapVault and version-flexible SnapMirror).
--tmac
Tim McCarthy, Principal Consultant
Proud Member of the #NetAppATeam https://twitter.com/NetAppATeam
I Blog at https://tmacsrack.wordpress.com/ TMACsRack
NetApp - In partnership with Alpine Testing Solutions https://www.certmetrics.com/netapp/public/badge.aspx?t=c&d=2012-11-05&i=35&ci=NETAPP00041276
NetApp Certified Data Administrator, ONTAP https://www.certmetrics.com/netapp/public/badge.aspx?t=c&d=2012-11-08&i=36&ci=NETAPP00041276
NetApp Certified Implementation Engineer - SAN Specialist, ONTAP https://www.certmetrics.com/netapp/public/badge.aspx?t=c&d=2015-10-13&i=38&ci=NETAPP00041276
NetApp Certified Storage Installation Engineer, ONTAP https://www.certmetrics.com/netapp/public/badge.aspx?t=c&d=2015-10-15&i=11&ci=NETAPP00041276
NetApp Certified Implementation Engineer - Data Protection Specialist
NetApp Candidate ID: NETAPP00041276
FlexPod Design: Oct 2015 - Jan 2018, S0N62WE1BMVEYF3M
FlexPod Implementation: Oct 2015 - Jan 2018, JH3QJT4KLEQ41HPH
RHCE6 https://www.redhat.com/wapps/training/certification/verify.html?certNumber=110-107-141&isSearch=False&verify=Verify 110-107-141
On Fri, Sep 23, 2016 at 11:25 AM, jordan slingerland jordan.slingerland@gmail.com wrote:
My understanding is that both are block-based operations, so it makes sense to me that the blocks would be put down on disk unchanged. Perhaps it has something to do with the fact that you may want to reverse the SnapMirror back to a system that presumably does not have compaction enabled or does not support the feature, so the blocks are left uncompacted. If the vol move is within the same AFF controller, the lack of backward compatibility is not a concern. Just a thought; I don't know.
On Fri, Sep 23, 2016 at 10:57 AM, Francis Kim fkim@berkcom.com wrote:
Strange how compaction appears to work with vol move but not with SnapMirror.
On Sep 23, 2016, at 4:39 AM, Steiner, Jeffrey Jeffrey.Steiner@netapp.com wrote:
A vol-move operation will cause compaction to happen. I know it's not ideal, but at least it's internal. Obviously a scanner is preferred, but functionally it would be a lot like a vol move.
Likewise if you use FLI to import a LUN that will trigger all the efficiency features during the import.
Sent from my mobile phone.
On 23 Sep 2016, at 02:10, Jeffrey Mohler jmohler@yahoo-inc.com wrote:
Compaction in our testing can be good, really good: an additional 20% in many of our test data sets on top of everything else (per SSET diagnosis).
However, there are limitations on how you get data into AFF to get compaction; i.e., you can't SnapMirror it in.
It must transfer, as far as we're told today, via a host/file-based migration. Be thinking about this when you consider an AFF migration: it must be done outside of ONTAP.
We have pushed to get this fixed via a scanner or similar that reads and re-lays out the file-based structure.
_________________________________
Jeff Mohler jmohler@yahoo-inc.com
Tech Yahoo, Storage Architect, Principal
(831)454-6712 YPAC Gold Member
Twitter: @PrincipalYahoo CorpIM: Hipchat & Iris
On Friday, September 23, 2016 1:53 AM, Mike Gossett cmgossett@gmail.com wrote:
My sales guy basically said the guarantee works as follows: if they don't hit the 4:1 target, they buy you whatever amount of shelf/SSD is required to make up the shortfall.
The magic has to do with what they call "compaction": writes smaller than 4KB (down to 512B each) are "compacted" into a single 4K block. This apparently adds to the standard dedupe and compression they use.
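To make the idea concrete, here's a toy sketch of that packing concept: multiple sub-4KB logical writes placed into one 4KB physical block via first-fit. This is only an illustration of the space-saving mechanism, not ONTAP's actual compaction algorithm or on-disk layout.

```python
# Toy model of inline data compaction: pack sub-4KB writes into 4KB blocks.
# Illustrative only -- NOT how ONTAP actually lays out data.

BLOCK_SIZE = 4096

def pack_writes(write_sizes):
    """First-fit pack a list of write sizes (in bytes) into 4KB physical
    blocks. Returns a list of blocks, each a list of the sizes it holds."""
    blocks = []  # each entry is a list of sizes summing to <= BLOCK_SIZE
    for size in write_sizes:
        size = min(size, BLOCK_SIZE)  # a >=4KB write fills its own block
        for block in blocks:
            if sum(block) + size <= BLOCK_SIZE:
                block.append(size)  # fits alongside earlier small writes
                break
        else:
            blocks.append([size])   # no room anywhere: allocate a new block
    return blocks

if __name__ == "__main__":
    # Eight small writes (e.g. 512B-2KB records). Without compaction each
    # would consume a full 4KB block: 8 x 4KB = 32KB of physical space.
    writes = [512, 512, 1024, 2048, 512, 1024, 2048, 512]
    blocks = pack_writes(writes)
    print(f"{len(writes)} writes packed into {len(blocks)} 4KB blocks")
```

In this toy run the eight writes total exactly 8KB, so first-fit lands them in two physical blocks instead of eight, which is the kind of gain compaction targets for small-I/O workloads.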
On Thu, Sep 22, 2016 at 1:32 PM, jordan slingerland jordan.slingerland@gmail.com wrote:
Sales guys are promising me 4:1 dedupe ratios on an AFF with ONTAP 9. They say they guarantee it. I have specifically asked what the stipulations are, e.g. a VDI environment with linked clones. The sales guy tells me there are none, and even specifically says they can dedupe compressed video or audio (MP3) at 4:1.
I have A LOT of trouble believing that. So, is it 4:1 dedupe guaranteed or what?
Any comments welcome.
--Jordan
_______________________________________________
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters