The 14-hour window is from filer to tape - a single LTO-1 drive via NDMP, which works out to about 24 MB/s. I think that's pretty good?!?
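(Back-of-the-envelope check on that figure, assuming roughly 1.2 TB of actual data on the 90%-full 1.31 TB volume: 1.2 TB is about 1,258,000 MB, 14 hours is 50,400 seconds, and 1,258,000 / 50,400 is roughly 25 MB/s - consistent with the ~24 MB/s estimate, and close to LTO-1's streaming rate with compression (native is about 15 MB/s), so the single drive is probably the bottleneck rather than the filer.)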
We are not using DFS currently. The big question is how DAVE and/or OS X's native SMB client handle DFS - I haven't looked at it yet.
Jack
-----Original Message-----
From: Skottie Miller [mailto:skottie@anim.dreamworks.com]
Sent: Wednesday, May 12, 2004 12:51 PM
To: Jack Lyons
Cc: 'toasters@mathworks.com'
Subject: Re: QTree Size limits
Jack Lyons wrote:
I have a 1.31 TB volume that is 90% full. There are two solutions available to me. One is to try to reduce the amount of space used on the volume (but I am meeting resistance from users). The other is to add space. I am trying to get approval for another TB of disk space, but I don't think the best solution is to add it to the existing qtree. My backup window for this volume is currently 14-15 hours and would only get bigger if I add space, and that is not acceptable. I know I can add another qtree / CIFS share, but I was hoping I could do it in such a way that I would still have another qtree yet make it available to the user via a single CIFS share.
seems you may want to investigate backup system changes; a 14-hour window for a 1.3 TB volume is terrible. what backup product(s) are you using?
For reference, we churn 800 GB - 1.2 TB per night out of 40 TB online, and the first-phase backup window (filer to staging pools) is 4 - 6 hours long. Then the data moves from the staging pools to tape, outside the backup window. We use Tivoli Storage Manager on three Linux backup servers, doing file-at-a-time differential backups over NFS.
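(To put that in perspective, taking those figures as roughly 800 GB to 1.2 TB: 1.2 TB in 4 hours is about 87 MB/s aggregate, or around 30 MB/s per backup server, and even the slow end, 800 GB in 6 hours, is about 38 MB/s aggregate. Spreading the reads across several data movers and staging to disk first is what keeps the window short compared with a single ~24 MB/s NDMP-to-LTO-1 stream.)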
I was hoping I could add another volume, probably /vol/vol2 with a qtree called /vol/vol2/active clients, and somehow make it appear to the user as a subdirectory under \\server\creative.
Do you use DFS to mount shares? My Windows guys think DFS supports nesting shares as you describe.
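As a rough sketch of what that nesting might look like (the share and link names here are made up, and the dfscmd syntax is from memory, so verify it before relying on it): on the filer you would expose the new qtree as its own share, e.g.

    qtree create /vol/vol2/active_clients
    cifs shares -add active_clients /vol/vol2/active_clients

and then, assuming \\server\creative is already set up as a DFS root (created through the DFS snap-in), map a link under it that points at the new share, so users browsing \\server\creative see it as a subfolder:

    dfscmd /map \\server\creative\active_clients \\filer\active_clients

Whether the client follows that referral transparently is exactly the open question for DAVE and the OS X SMB client.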
-skottie