Be careful with snapmirror. When you do a volume snapmirror from small disks to larger disks, you can run into fragmentation problems. The free space will not end up in the same place on the new disks (the transfer fills the new disks sequentially with the old data, so the first disks of your new volume will be completely full), which makes it harder to write new data evenly. This means that when your source and destination volumes are almost the same size, there is a high chance that after migration only one disk of your raid group (the last one) is used for new data, while all your old data sits on the first disks. This can cause performance issues. Of course, the problem is bigger the fuller the source volume is.
We use QTREE snapmirror instead. We aren't sure that it is better, but we hope it is. Your source data must be in qtrees, of course.
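For reference, the two approaches look roughly like this on the Data ONTAP console (a sketch only; the filer, volume, and qtree names are invented, and you should check the snapmirror man page for the exact syntax on your release):

```
# Volume snapmirror: destination volume must exist and be restricted
dstfiler> vol restrict newvol
dstfiler> snapmirror initialize -S srcfiler:oldvol dstfiler:newvol
# ...incremental updates until cutover, then:
dstfiler> snapmirror break newvol

# Qtree snapmirror: destination qtree must NOT already exist
dstfiler> snapmirror initialize -S srcfiler:/vol/oldvol/qt1 dstfiler:/vol/newvol/qt1
```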
Best regards,
Reinoud UZ Leuven
Belgium
-----Original Message----- From: Joseph Bishop [mailto:jbishop@jpl.nasa.gov] Sent: Friday, 1 August 2003 8:08 To: Ngoh, Clarence Cc: toasters@mathworks.com Subject: Re: NDMPcopy and fibre disk shelve file copy.
Clarence,
Are the disks within the same box? If not, then snapmirror is a good way to go. You might also be able to snapmirror within the same box. Performance really depends on how many disks you have in the volume(s) and what kind of head (700, 800, 900) you have.
If you are able to snapmirror, then the downtime can be on the order of minutes. It has become my favorite method for moving data. I am not positive about the source and destination being on the same box, though.
As rough numbers, you could get a 60 MByte/sec transfer rate on 820s. If you are on 960s, expect on the order of 120 MBytes/sec. But it does depend on how many drives are in the raid group and volumes.
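At those rates, back-of-the-envelope arithmetic for the ~3 TB mentioned below works out as follows (a quick sketch; the MB/sec figures are Joe's rough estimates, not guarantees):

```python
def transfer_hours(data_tb, rate_mb_per_s):
    """Estimate wall-clock hours to move data_tb terabytes at rate_mb_per_s MB/s."""
    data_mb = data_tb * 1024 * 1024       # TB -> MB, binary units
    return data_mb / rate_mb_per_s / 3600  # seconds -> hours

print(round(transfer_hours(3, 60), 1))   # ~14.6 hours at 60 MB/s (820-class)
print(round(transfer_hours(3, 120), 1))  # ~7.3 hours at 120 MB/s (960-class)
```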
Joe
Ngoh, Clarence wrote:
Hello good toaster folks
We are planning to move a huge lot of data (> 3.0 TB) from our 32 GB drives to 144 GB drives. I am gathering some preliminary information about the quickest path to move the data across. The first option is to use ndmpcopy, and the second is to copy it from shelf to shelf over fibre. Since this is a preliminary investigation, can someone please point me to relevant resources that may be helpful? I am after transfer-rate statistics for the two approaches, and perhaps other materials that others have found useful.
I have googled and looked through past toasters archives, and most of what I found is either outdated or lacks relevant information.
Thanks.
Clarence.
************** IMPORTANT MESSAGE ************** This e-mail message is intended only for the addressee(s) and contains
information which may be confidential. If you are not the intended recipient please advise the sender by return email, do not use or disclose the contents, and delete the message and any attachments from your system. Unless specifically indicated, this email does not constitute formal advice or commitment by the sender or the Commonwealth Bank of Australia (ABN 48 123 123 124) or its subsidiaries.
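The ndmpcopy route asked about above looks roughly like this on the filer console (a sketch only; the filer names, paths, and credentials are invented, and the authentication flags may vary by Data ONTAP release):

```
# Copy a volume's contents between filers over the network via NDMP
srcfiler> ndmpcopy -sa root:password -da root:password \
    srcfiler:/vol/oldvol dstfiler:/vol/newvol
```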
snapmirror won't copy data by filling up one disk, then another, then another, leaving the last disk empty. New data won't be written only to the last disk.
Data is copied and written to all disks in parallel, so all disks end up with the same free space.
The only impact on the data (and it is a good one) is that fragmentation is reduced: reading a file can be faster because the blocks of that file are contiguous.
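As a toy model of the point above (this illustrates simple round-robin striping, not how WAFL actually allocates blocks):

```python
def stripe_blocks(n_blocks, n_disks, disk_capacity):
    """Write n_blocks round-robin across n_disks; return free blocks per disk."""
    used = [0] * n_disks
    for i in range(n_blocks):
        used[i % n_disks] += 1  # parallel/striped writes spread evenly
    return [disk_capacity - u for u in used]

# 1000 blocks striped over 7 disks of 200 blocks each:
print(stripe_blocks(1000, 7, 200))  # free space differs by at most 1 block per disk
```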
Reinoud Reynders wrote:
Be careful with snapmirror. When you do a volume snapmirror from small disks to larger disks, you can run into fragmentation problems. The free space will not end up in the same place on the new disks (the transfer fills the new disks sequentially with the old data, so the first disks of your new volume will be completely full), which makes it harder to write new data evenly. This means that when your source and destination volumes are almost the same size, there is a high chance that after migration only one disk of your raid group (the last one) is used for new data, while all your old data sits on the first disks. This can cause performance issues. Of course, the problem is bigger the fuller the source volume is.
We use QTREE snapmirror instead. We aren't sure that it is better, but we hope it is. Your source data must be in qtrees, of course.