Hey Tom -
If you propagate over an IP network on the front end of the head, then yes, there is a performance penalty. If you have the disk shelves handle the propagation, then you can do some interesting things which do not create a performance penalty. But then you're talking about putting intelligence down into the disk shelves themselves.
/Christian Adams
-----Original Message-----
From: tkaczma@gryf.net [SMTP:tkaczma@gryf.net]
Sent: Monday, March 22, 1999 4:47 PM
To: toasters@mathworks.com
Subject: RE: NetApp/Auspex killer?
On Mon, 22 Mar 1999, Mohler, Jeff wrote:
Your DR solution should PUSH data out constantly, not request it remotely every 60 seconds, then if a snapcopy fails for that period, it gets thrown away on the remote side... two lost minutes of data.
That's true for your application, but I don't want to mirror my disks across the country even as often as every 60 seconds. Once every couple of minutes or hours is perfect for me. If you truly mirror your disks, the latency associated with data propagation will kill performance.
Tom
On Tue, 23 Mar 1999, Adams, Christian wrote:
If you propagate over an IP network on the front end of the head, then yes, there is a performance penalty.
That was my understanding of the functionality that was described.
If you have the disk shelves handle the propagation, then you can do some interesting things which do not create a performance penalty.
Not quite; it really makes no difference where the data is propagated, whether the client issues two writes or the server issues two writes to disk. Of course, if you delay propagation anywhere, then you can exploit caching mechanisms for frequently changed data. But if you want the client to be assured that both copies of the data are intact and independent, you have to take a performance hit, i.e. you make sure the data is safe on both systems before you ack the RPC. Either way I look at it, you have to send the signal in one direction (hopefully several miles away) and get an acknowledgement back (well, we could discuss the potential incompleteness of quantum mechanics here, but I'll spare myself). Light travels only about 186 miles per millisecond in vacuum (please correct me if I'm wrong), substantially slower in fiber, and it must make a trip equal to twice the distance. This doesn't even account for the time it takes a device to decode, store, and generate an acknowledgement. (Alright, I'm beginning to sound a bit academic here.) ;)
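To put a rough number on it (the 100 miles and the fiber speed here are numbers I'm picking out of the air, roughly two thirds of c):

    round trip = 2 x 100 miles / ~124 miles/ms ~= 1.6 ms

That's pure wire time added to every synchronous write, before the remote box has decoded a single byte.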
I think the point I was originally trying to make is that NACs will meet certain data synchronisation needs; for others you should use mirroring. However, asking NACs to do mirroring is a separate issue from asking them to provide periodic incremental duplication, or more precisely synchronisation. Yes, it would be nice if NACs supported mirroring as well. That would make them extremely high end boxes.
But then you're talking about putting intelligence down into the disk shelves themselves.
That's a bit of where SANs come in. I think that in the future drives will become more intelligent, giving us more flexibility. To some degree I think that NAS will be compressed into SAN. If the industry comes up with a much denser and faster storage device - e.g. there was talk of using crystal lattices several years ago - then there is no reason not to make the drives' front ends more intelligent. Why wouldn't you want to make a several-terabyte cube a sovereign and atomic entity?
On a side note, it could really be either the controller, the shelves, or the drives themselves making the mirror.
Tom
Coming down from his dreamworld. ;)
P.S. Meanwhile, can we have a choice of port-selection algorithms for trunking? Least-used port would be very nice, but I would really settle for round robin. I really need this. I'll have a couple of big bang trunked boxes connected to the same switch as the NACs. MAC hashing will not work very well in this scenario. Please put this into 5.4!!!
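(To make that concrete, with a four-port trunk and numbers I'm making up:

    MAC hashing:  port = hash(src MAC, dst MAC) mod 4   -- one MAC pair between two big boxes, so every frame lands on the same port
    round robin:  port = frame_count++ mod 4            -- each port carries a quarter of the frames

With only a couple of MACs in play, no hash can spread the load.)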
1999-03-24-06:16:55 Tom:
[...] Yes, it would be nice if NACs supported mirroring as well. That would make them extremely high end boxes.
NACs supporting mirroring, would that be something like Auspex's ServerGuard? I've run that, and though we had a pretty long shakedown time, it eventually stabilized and worked wonderfully well. It's a terrific piece of work, but it pretty clearly reveals that doing network-distributed file server mirroring without a vicious performance hit is _way_hard_.
From the beginning, they were hoping that their solution would scale well to mirroring over a WAN, but last I heard that was still a hope for the future and not a delivered reality.
We used, and loved, ServerGuard within the office, but for off-site replication we settled on rsync. Cool SW, that. For a good time, use a find...|cpio -pdl run to build a hard-link tree at the target of replication, then rsync to update that hard-link tree (see the sketch below). Rsync breaks the hard links to the files that change, so you end up with a really efficient delta-snapshot view that you can use for easy user-accessible backups. Just tell your users to e.g.
cd /backup/1998/06/12/their/home/directory
They like that. It appeals. Oh, and back to your original comment: yupper, if ServerGuard is any example, that would indeed make them extremely high-end boxes :-).
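For the curious, one nightly cycle looks roughly like this; the host name and paths here are invented for the example:

    # Hard-link yesterday's snapshot into today's tree (no data copied).
    cd /backup/1999/03/23
    find . -print | cpio -pdl /backup/1999/03/24

    # Update today's tree from the live server. Rsync writes a changed
    # file to a new inode, which breaks the hard link, so yesterday's
    # tree keeps the old contents.
    rsync -a --delete fileserver:/export/home/ /backup/1999/03/24/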
-Bennett