Hi July,
We use ARCServe IT for NT with our Filer (F230). We create a snapshot, share the snapshot, and back up directly over the network.
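In outline, the procedure is something like this (volume, share, and snapshot names are invented for illustration, and this assumes the ~snapshot directory is visible over CIFS):

filer> snap create vol0 nightly
filer> cifs shares -add backup /vol/vol0

C:\> rem On the backup host, point ARCServe at the snapshot directory:
C:\> dir \\filer\backup\~snapshot\nightly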
Major problem 1: it's really slow, especially with lots of small files like we have. Major problem 2: we can't do incrementals (presumably because the full backup can't clear the archive bit in the snapshot, which is read-only, and a new snapshot is created for each backup anyway).
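You can see why the archive bit is the sticking point with a one-liner (path invented for illustration):

C:\> attrib -A \\filer\backup\~snapshot\nightly\somefile.doc

The attrib fails because the snapshot is read-only, so a full backup can never clear the bit, and the next "incremental" sees every file as changed.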
There's a doc on using ARCServe on the NetApp site; see http://www.netapp.com/tech_library/3052.html
I don't know how to use an intermediary host with ARCServe. NT 4.0 doesn't allow you to share a drive mapped from another host. Neither does the ARCServe NT Client list the mapped drive in its list of drives to back up. Am I missing a configuration item?
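For the record, this is the sort of thing I tried on the intermediary host (drive letter and share names invented):

C:\> net use Z: \\filer\backup
C:\> net share stage=Z:\

The second command fails; NT 4.0 refuses to re-export a redirected drive, and the ARCServe drive list only shows local volumes.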
We run with oplocks disabled, as we found that this improved the speed, but we are still only seeing 90MB/min on the backup host (a PIII 450MHz with a DLT7000). I'm seeing backup speeds of between 150MB/min and 200MB/min from NT servers using the ARCServe client SW.
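For comparison, in round numbers (the DLT7000 spec is from memory, so treat it as approximate):

90 MB/min  = 1.5 MB/s   (filer via the network share)
200 MB/min = 3.3 MB/s   (NT servers via the ARCServe client)
DLT7000 native rate: ~5 MB/s = ~300 MB/min

So even the NT clients are below the drive's native streaming rate, and the filer backup runs at well under a third of it.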
I intend to have a hard look at this in early March, and if I can't solve it using ARCServe then I intend to look at other backup solutions; QuickRestore (http://www.worksta.com/) looks interesting. I would be interested in any feedback from QuickRestore users.
My gut feeling is that ARCServe is fine for a few NT servers, but once you have > 50GB of data it's time to play with the big boys (which means shelling out for the SW). Doing restores from ARCServe tapes is a doddle; hopefully I won't have to give that ease of use up. I'd love to be able to put my hand on my heart, when I go looking for the cash for an F740 in a few months, and say that I'll be able to back it up, but at the moment with ARCServe that is not the case.
Regards, GB
-----Original Message-----
From: July at Zerowait [SMTP:july@zerowait.com]
Sent: Wednesday, January 19, 2000 3:13 PM
To: toasters@mathworks.com
Subject: ArcServe and Filers
Hi -
Mostly by accident, I just noticed that I can essentially max out my 720 over 100bT on writes. I did this a few times with mostly the same results. This is a production environment, but it was an idle period. Notes on the sysstat output below: I first copied the file (the vmcore.tar listed below) over to the Sun box, and then copied it from there back to the 720. Reads were a bit high, but within the realm of acceptable; writes were... intense. It is pretty clear in the output where the read stopped and the write started, modulo the averaging.
Is this normal for a 720? I don't foresee 5 saturated 100bT connections in the near future, but... this does make me wonder about scaling.
Thanks for any thoughts.
-j
Random info: all ports manually forced to 100bT/FD, connected to a Cisco 2940 (all ports forced), connected to a Sun E450 (ditto).
Sysstat log:
homer> sysstat
 CPU   NFS  CIFS  HTTP   Net kB/s   Disk kB/s   Tape kB/s  Cache
                          in   out  read write  read write   age
  9%   204     0     0    41  1234  1160    24     0     0    39
 25%   670     0     0   135  5061  4738    45     0     0    22
 25%   677     0     0   132  5359  4088    25     0     0     1
 23%   569     0     0   116  4333  4065    38     0     0     0
 27%   668     0     0   133  5053  4706    19     0     0     0
 22%   532     0     0   105  4011  3758    29     0     0     0
 30%   533     0     0  1160  2912  2847  1419     0     0     0
 75%   833     0     0  6473   230   637  8216     0     0     0
 94%  1040     0     0  8356   217   474 11003     0     0     0
 98%  1084     0     0  8686   238   490 11014     0     0     0
 17%   186     0     0  1066    61   177  1713     0     0     0
  5%    84     0     0    16   141   156    47     0     0     0
  5%    59     0     0    10    90    76    36     0     0     0
homer>
ls on the file copied:
# ls -l vmcore.tar
-rw-------   1 root     other    366733312 Jan 19 23:48 vmcore.tar
#

--
Jamie Lawrence
"Self-knowledge is always bad news." -- John Barth
__________________________________________________________
Director of Information Technology
Third Age Media
415.267.4657
jal@thirdage.com
www.thirdage.com
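A quick sanity check on those numbers, for anyone curious (my arithmetic, not measured):

366733312 bytes ~= 350 MB
peak "Net in" during the write phase: 8686 kB/s ~= 8.5 MB/s ~= 70 Mbit/s

which is in the neighborhood of what a single 100bT/FD link will actually deliver, so "maxed out" seems fair.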
----- Original Message -----
From: Jamie Lawrence jal@thirdage.com
To: toasters@mathworks.com
Sent: Thursday, January 20, 2000 12:02 AM
Subject: Maxing out a 720 over one 100bT nic?
Hi -
Mostly by accident, I just noticed that I can essentially max out my 720 over 100bT on writes.
[...]
Is this normal for a 720?
Looks normal to me.
I don't foresee 5 saturated 100bT connections in the near future, but... this does make me wonder about scaling.
Well, unless you think you're going to have 5 clients all writing 20+ MB files at the exact same time regularly, and they can't stand to wait a few seconds for those writes to complete, I don't see the problem.
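To put numbers on "a few seconds" (using the ~8.5 MB/s per-link rate from the sysstat earlier in the thread; rough figures only):

20 MB / 8.5 MB/s ~= 2.4 seconds per client

Five such clients at once would offer the filer ~42 MB/s aggregate; if the filer can't sustain that, the writes just take a few seconds longer.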
Bruce
++ 20/01/00 02:38 -0800 - Bruce Sterling Woodcock:
----- Original Message -----
From: Jamie Lawrence jal@thirdage.com
To: toasters@mathworks.com
Sent: Thursday, January 20, 2000 12:02 AM
Subject: Maxing out a 720 over one 100bT nic?
Hi -
Mostly by accident, I just noticed that I can essentially max out my 720 over 100bT on writes.
[...]
Is this normal for a 720?
Looks normal to me.
OK. I didn't expect quite that level of activity. After thinking about it some, though, it does make sense.
I don't foresee 5 saturated 100bT connections in the near future, but... this does make me wonder about scaling.
Well, unless you think you're going to have 5 clients all writing 20+ MB files at the exact same time regularly, and they can't stand to wait a few seconds for those writes to complete, I don't see the problem.
I'm probably going to stay concerned until proven wrong, but the point is response time. I think I'm OK; I just didn't expect a client to monopolize the entire bandwidth.
By and large, "I withdraw the question."
I should sleep more.
-j
I'm no expert on the filer, but I would expect that, since the filer caches writes in NVRAM and then writes the data to disk in chunks to minimize the impact on the parity disk, this behavior is not out of the ordinary.
There will always be bottlenecks. The real question is, are they ones you can live with?
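For what it's worth, the sysstat earlier in the thread squares with that (my reading of it, anyway):

peak Net in:     ~8.7 MB/s
peak Disk write: ~11.0 MB/s

Disk writes running ~25-30% above the network intake is plausibly the parity and metadata overhead of committing those cached writes in full stripes.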
Jamie Lawrence wrote:
[...]
--
Matthew Lee Stier *                   Fujitsu Network Communications
Unix Systems Administrator |          Two Blue Hill Plaza
Ph: 914-731-2097 Fx: 914-731-2011 |   Sixth Floor
Matthew.Stier@fnc.fujitsu.com *       Pearl River, NY 10965
On Thu, 20 Jan 2000, Jamie Lawrence wrote:
 CPU   NFS  CIFS  HTTP   Net kB/s   Disk kB/s   Tape kB/s  Cache
                          in   out  read write  read write   age
 23%   569     0     0   116  4333  4065    38     0     0     0
 27%   668     0     0   133  5053  4706    19     0     0     0
 22%   532     0     0   105  4011  3758    29     0     0     0
 30%   533     0     0  1160  2912  2847  1419     0     0     0
 75%   833     0     0  6473   230   637  8216     0     0     0
 94%  1040     0     0  8356   217   474 11003     0     0     0
 98%  1084     0     0  8686   238   490 11014     0     0     0
This may not help much, since the number of NFS ops isn't terribly high for a 720, but it looks like you are using 8K NFS blocks. Try an NFSv3, UDP, 32K blocksize mount and see if that makes any difference. Your NFS ops count will definitely drop, but as I said, it may not drop your CPU usage much on writes.
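On a Solaris client (like the E450 here) that would look something like the following; the export path and mount point are placeholders:

# mount -F nfs -o vers=3,proto=udp,rsize=32768,wsize=32768 homer:/vol/vol0 /mnt/homer

Or put the equivalent entry in /etc/vfstab if you want it to stick across reboots.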