We are backing up a 270c nightly with NetBackup via NDMP to a dual-drive LTO3 machine (only one drive is used to back up the filer). There are 2.25 TB on one head and 1.1 TB on the other. A full backup takes about 10 hours to write to tape, which I don't think is too bad (larger volumes hit 31200 KB/sec as reported by NetBackup), but the differentials take 5 hours to run, even though the delta on our data is very small.
I suspect this is more of a NetBackup issue than a filer-based one, but I wanted to throw it out there in case there was something I should look at on the host side. This is over a normal gigE switched network connection with the default network settings on the filer. We are not using virtual interfaces; both heads have a single 1-gig connection to the LAN. Any suggestions or inventive flames are welcome.
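For perspective, a back-of-the-envelope sketch (the 100 GB delta below is an assumed figure, not from the post): even a generous nightly delta moved at the reported tape rate accounts for well under an hour, so the bulk of the 5-hour differential window must be going to something other than data movement.

```python
# Back-of-envelope: how much of the 5-hour differential is data movement?
# The 100 GB delta is an ASSUMPTION for illustration; the post only says
# the delta is "very small".
reported_kb_per_s = 31_200          # NetBackup's reported rate for large volumes
assumed_delta_gb = 100              # hypothetical nightly change

transfer_seconds = assumed_delta_gb * 1024 * 1024 / reported_kb_per_s
transfer_hours = transfer_seconds / 3600
print(f"moving {assumed_delta_gb} GB at {reported_kb_per_s} KB/s "
      f"takes ~{transfer_hours:.1f} h of the 5 h window")
```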
This message (including any attachments) contains confidential and/or proprietary information intended only for the addressee. Any unauthorized disclosure, copying, distribution or reliance on the contents of this information is strictly prohibited and may constitute a violation of law. If you are not the intended recipient, please notify the sender immediately by responding to this e-mail, and delete the message from your system. If you have any questions about this e-mail please notify the sender immediately.
Jeremy:
I suspect the delay is in the scan phase of the NDMP backup, which determines which files need to be sent to NetBackup for the differential. If you compare the throughput on the gigE link during the differential versus the full, I think you'll confirm that the actual movement of data from the filers to NetBackup is not the problem. If you have a large number of files in the volumes, that would certainly point to this being the cause.
I don't think this is a NetBackup issue; it's NDMP.
Glenn from Voyant
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Page, Jeremy
Sent: Tuesday, June 19, 2007 1:19 PM
To: toasters@mathworks.com
Subject: NDMP tuning on a 270c
Have you tried backing up using standard methods (over NFS)?
On 6/19/07, Page, Jeremy jeremy.page@gilbarco.com wrote:
The Miserable File History problem....
I actually have a level 0 that takes over 8 hours to pass file history info which means the dump does not start until that point.
Just for kicks, you could modify your policy to have the first line be "set hist = n" (without quotes). This kills the file history: you will be unable to restore individual files (directories only), and you will not have DAR, which means you will have to read the ENTIRE backup tape or tape set. Instead of looking at pure size, run "df -h" and "df -i" to see how much data and how many files you are backing up. You can always divide the space used by the files used to get the average file size.
For that matter, you could scan the filesystem with the filestats command, or scan from an NFS client to see which directories are huge (I have a number of directories that are 50 MB in size; this KILLS a backup).
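The average-file-size arithmetic above can be sketched like this (the byte and inode counts are made-up example values; on the filer you would read them from `df -h` for space and `df -i` for files used):

```python
# Estimate average file size from df output, as suggested above.
# These figures are EXAMPLE values, not from the original post.
used_bytes = 2.25 * 1024**4         # e.g. 2.25 TB used on one head (df -h)
files_used = 30_000_000             # e.g. 30 million inodes in use (df -i)

avg_file_size_kb = used_bytes / files_used / 1024
print(f"~{avg_file_size_kb:.0f} KB average file size")
```

A small average file size implies a very large file count, which is exactly what makes the NDMP file-history phase expensive.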
On 6/19/07, Page, Jeremy jeremy.page@gilbarco.com wrote:
I guess my gripe is: why does NetBackup lock a drive while it's doing the scan? I understand it can't do the backup until it knows *what* it's going to back up; the confusion on my part is why that would lock a tape.
Also, for some reason (I'm not the backup guy, so I am probably missing something) we need all our tapes to do a single-file restore anyway; it seems like you are saying that's not normal.
Jeremy M. Page, MCSE, CNA, CCNA
email: Jeremy.Page@gilbarco.com - phone: 336.547.5399 - fax: 336.547.5163 - cell: 336.601.7274
On Wed, Jun 20, 2007 at 05:26:14AM -0700, Page, Jeremy wrote:
I guess my gripe is: why does NetBackup lock a drive while it's doing the scan? I understand it can't do the backup until it knows *what* it's going to back up; the confusion on my part is why that would lock a tape.
Good question. I wouldn't mind a change of behaviour on that one myself.
Also, for some reason (I'm not the backup guy, so I am probably missing something) we need all our tapes to do a single-file restore anyway; it seems like you are saying that's not normal.
If you have DAR enabled (which tends to be the default nowadays) and you choose individual files to restore, NetBackup will position the tape intelligently to pull those files off. If, however, you choose a directory, it will linearly scan through your tape set (you'll get the same behaviour with DAR disabled). This is for NetBackup 5.1; I haven't tried our NB 6.0 server yet to see if it's changed.
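As a toy illustration of why DAR matters (all the numbers below are invented, not from this thread): with DAR the drive positions directly to each selected file's recorded offset, while a directory restore, or any restore with DAR disabled, reads the tape set linearly.

```python
# Toy model of restore cost with and without DAR (all numbers invented).
tape_set_gb = 2250                  # size of the full backup on tape
read_mb_per_s = 80                  # LTO3-class sequential read rate
files_to_restore = 5
seek_seconds_per_file = 90          # assumed average locate/position time

# Without DAR (or when restoring a directory): linear scan of the whole set.
linear_hours = tape_set_gb * 1024 / read_mb_per_s / 3600

# With DAR and individually selected files: position directly to each file.
dar_minutes = files_to_restore * seek_seconds_per_file / 60

print(f"linear scan: ~{linear_hours:.1f} h; DAR restore: ~{dar_minutes:.1f} min")
```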