On Fri, 12 Nov 1999, Alan R. White wrote:
The claims of simplicity, reliability, minimal downtime, file-serving performance (compared to NT) and snapshots are what attracted me to look at the boxes - I haven't seen any horror stories in the archives - is this too good to be true? The FC-AL stuff recently looked a bit dodgy.
I think there are definite issues surrounding the QA of LRCs (Loop Resiliency Circuits) which, well, proved to be less than resilient to failures.
How many folks actually cluster their filers? Claims of 99.997% uptime without clustering sound, once again, too good.
All but one of our FC-AL filers are clustered AFAIK.
Is the clustering simple primary-failover or can we do n-way clusters with load sharing etc?
Yes and no, respectively. Since the two filers in the cluster each serve their own filesystems, each head carries its own share of the load, so neither one is just sitting there. OTOH, when you do fail over you will be putting their combined load on one head, so it pays to keep each head's normal load well under what a single head can carry.
Is the cluster setup really a one-command 15 minute job?
Well, several setup commands, but 15 minutes sounds a bit long.
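For what it's worth, on the Data ONTAP releases I've used the failover pair is driven by a handful of "cf" commands once the cluster license and interconnect are in place. A rough sketch (command names from memory, so check against your release):

    filer1> cf enable      # turn on takeover between the two heads
    filer1> cf status      # confirm the partner is visible and enabled
    filer1> cf takeover    # manually pull the partner's volumes onto this head
    filer1> cf giveback    # hand them back once the partner is healthy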
User restore on their NT clients by mapping the snapshots looks a good idea. Is it usable in the real world? It would save us heaps of hassle with classic 'ask IT to do it' restores.
Well, you have to educate the users. This, I think, is our biggest problem with snapshots. People who have quotas think the snapshots count against their quota (they don't).
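To CIFS clients the snapshots appear as a "~snapshot" directory at the root of the share, so a user can drag a file back without a call to IT. Assuming a share called "home" and a spreadsheet deleted since the last hourly snapshot (all names hypothetical):

    \\filer1\home\~snapshot\hourly.0\reports\budget.xls
        -> copy back to \\filer1\home\reports\budget.xls

The directory is normally hidden from listings (there's a cifs.show_snapshot option to expose it, if memory serves), which is one more thing the users have to be taught.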
Any good rule of thumb sizing advice for the amount of space to reserve for snapshots?
This depends on the volatility of your filesystems. For home directories with 100MB quotas and snapshots every 4 hours, the default 20% is more than enough.
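Changing the reserve is one command per volume, and "df" on the filer breaks the snapshot area out as its own line, so you can watch real consumption for a few weeks before trimming it. A sketch, assuming a volume named "home":

    filer1> snap reserve home 20   # hold back 20% of the volume for snapshots
    filer1> df /vol/home           # the /vol/home/.snapshot line shows usage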
Similarly, for automated snapshot schemes, does anyone do multiple snapshots intra-day and maybe keep one for a longer period, e.g. keep a midnight snapshot for x days?
This depends on the purpose of the filesystem, but yes.
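The built-in scheduler covers exactly the scheme Alan describes: "snap sched" takes counts of weekly, nightly and hourly snapshots to retain, with nightlies taken at midnight. A sketch that keeps a week of midnight snapshots plus intra-day ones (volume name hypothetical):

    filer1> snap sched home 0 7 6@8,12,16,20
    # 0 weekly snapshots; 7 nightly (midnight) snapshots, i.e. a week's worth;
    # 6 hourly snapshots, taken at 08:00, 12:00, 16:00 and 20:00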
Is SnapMirror up to the job of keeping an almost real-time remote replica, i.e. snap every minute if the network's up to it? Are there any operational issues around this stuff?
Uggghhh, I don't know about real-time; we do it every hour. This seems to be sufficient for our needs at this time.
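For reference, the schedule lives in /etc/snapmirror.conf on the destination filer, with cron-style minute/hour/day-of-month/day-of-week fields; our hourly setup looks roughly like this (filer and volume names hypothetical, and note the destination volume must be restricted and snapmirror enabled first):

    # /etc/snapmirror.conf on filer2 (the destination)
    # source      destination         args  min hour dom dow
    filer1:home   filer2:home_mirror   -     0   *    *   *
    # transfers at minute 0 of every hour; a minute-level schedule is
    # possible in principle, but hourly is all we have pushed it to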
Is anyone prepared to comment, privately or otherwise, on any recent comparisons they've done with Celerra and Auspex?
Hmmmm, I would say Auspex and NetApp are equally troublesome. I tend to favor NetApp for their cleaner design. I haven't played with Celerra.
I understand the cost thing with EMC but loads of people seem to buy them still. This is not intended as flame bait for all the NetApp advocates.
From what I hear, we've also had our share of problems with EMC.
Any advice on what we should really include in our eval to really test the box out?
If you can invest the people and time to put production-level load, in an environment as close to production as you can get, on all of these solutions, I would do so to determine which one is best for your application in your environment. If you can't invest the time, flip a coin; I think you'll be just as happy with any one of them.
...or indeed comments in general that would be useful for us.
Many UNIX bigots will tend to favor Auspex or EMC for UNIX environments because of their UNIX interface. I think that if you remember what kind of interface a file server has, you're spending too much time with it. The promise of a dedicated NFS server is best expressed by a line from those very annoying infomercials: you should "set it and forget it." If that isn't true, dedicated file servers are only as good as conventional servers.
Tom