Hi, we are running snapmirror here to move a 350GB volume from our F630 to our new F760. Since we set it up last week, I have noticed a 'bug' in the snapmirror procedure:
We set up the 2 volumes with the same amount of raw space. However, the source volume had a snap reserve of 5%, so we changed the target volume's snap reserve to 5% before taking it offline. We started the snapmirror, and the volume filled up to 103%. I checked, and the snap reserve had been reset to 20%, thus reducing the free space on the target volume and preventing the mirror from completing. I made the volume writable and tried to reset the snap reserve to 5%, but the same thing happened. I only succeeded in completing the mirror by adding a disk to the target volume, thus allowing enough space even with a snap reserve of 20%. This sounds like a bug - why should the snap reserve be reset on the target volume? Has anyone heard of this before?
Thanks,
Moshe
We had a bug that always put the default snap reserve value (20%) on snapmirror destination volumes. SnapMirror doesn't look at the snap reserve value to determine if the mirror can be done. All it did in the past was make the df values look strange. The mirror would complete no matter what snap reserve was set to.
The fix was made (a couple of releases ago) to copy the snap reserve value from the source volume to the destination volume.
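To illustrate the cosmetic effect (the filer names, volume names, and numbers below are invented, not taken from your setup): with identical raw space, a 20% reserve leaves only 80% of the volume for the active file system, so df on the destination can report over 100% even though the data still fits in the volume.

    source> df /vol/volsrc
    Filesystem               kbytes       used      avail  capacity
    /vol/volsrc/          332500000  290000000   42500000       87%
    /vol/volsrc/.snapshot  17500000    2000000   15500000       11%

    dest> df /vol/voldst
    Filesystem               kbytes       used      avail  capacity
    /vol/voldst/          280000000  290000000          0      103%
    /vol/voldst/.snapshot  70000000    2000000   68000000        3%

Both volumes hold the same data; the destination only looks over-full because its "kbytes" figure for the active file system is the total minus a 20% reserve instead of minus 5%.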
I would like to hear which release of ONTAP you are running, and details of how you came to believe that this is what caused the mirror not to complete.
I have seen this issue many times and I have never seen it cause the mirror to stop making progress.
Mike Federwisch Network Appliance Inc.
Y'know, I didn't get any error message that the mirror failed, but the df showed the volume at 103% and the used space wasn't equal to the source volume, so I assumed that the mirror had failed due to lack of space. Now you are saying that it might have been only cosmetic - that just the df information was wrong, but the data was copied in full. Maybe, but how am I to know? We are at 5.3.6R2 by the way.
Moshe
--
Moshe Linzer, Unix Systems Manager, National Semiconductor, Israel
Phone: 972-9-970-2247 | Fax: 972-9-970-2001 | Email: moshel@nsc.com
"On the Internet, nobody knows you're a moron." - Network Magazine
On Thu, 30 Nov 2000, Moshe Linzer wrote:
Y'know, I didn't get any error message that the mirror failed, but the df showed the volume at 103% and the used space wasn't equal to the source volume, so I assumed that the mirror had failed due to lack of space. Now you are saying that it might have been only cosmetic - that just the df information was wrong, but the data was copied in full. Maybe, but how am I to know? We are at 5.3.6R2 by the way.
If the snap reserve values are different for the two volumes, then that implies that your df output is going to be screwy. SnapMirror will announce when a transfer cannot complete due to lack of space. The output will be in the form of a system error log:
snapmirror: destination volume too small
To verify that data was copied, you can use snap list to ensure that the latest snapshot on the destination matches up with a SnapMirror-created snapshot on the source side. Or you can mount the destination volume and peek at the files there.
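A rough sketch of that check, assuming the destination volume is exported for NFS (the volume names and the admin-host mount point are made up):

    source> snap list volsrc
    dest>   snap list voldst
        (the newest SnapMirror-created snapshot on the source should appear
         on the destination as well, with the same name and date)

    admin$ mount dest:/vol/voldst /mnt/voldst
    admin$ du -sk /mnt/voldst
        (spot-check sizes and a few files against the source volume)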
As reported in my previous email, this is a bug that was fixed in 6.0. Once again, here is the public report for 18560:
TITLE: Snapreserve value not propagated to snapmirror destination
DESCRIPTION: Snapreserve value not propagated to snapmirror destination. This will cause df numbers to look quite different between the source and destination.
WORKAROUND: Don't worry about it. It actually has no effect on whether SnapMirror will function. It is more of a cosmetic thing.
Hope that helps!
-- Shane
------- It's always a good idea to bypass NVRAM.
I would agree with the previous mail. Snap list on the destination volume will show the snapshots that have made it to the destination. The snapshot name has a numeric suffix, and that number is bumped up on each update. So take a look at the snap list on the source and the dates on those snapshots. The snap list on the destination should show the same snapshots, and the date on the newest one is the time of the last update.
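For example, output along these lines (hypothetical names; treat the exact snapshot-name format as illustrative - it generally includes the destination filer and volume plus the sequence number described above):

    dest> snap list voldst
      %/used       %/total  date          name
    ----------  ----------  ------------  --------------------------
      0% ( 0%)    0% ( 0%)  Nov 30 01:10  dest(0016791234)_voldst.4
      1% ( 1%)    1% ( 1%)  Nov 29 01:10  dest(0016791234)_voldst.3

Here the ".4" snapshot dated Nov 30 would be the last completed update, and the same snapshot name should show up in snap list on the source.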
Mike Federwisch
On Wed, 29 Nov 2000, Moshe Linzer wrote:
We set up the 2 volumes with the same amount of raw space.
What was the disk configuration of the volumes? Available space is calculated in different manners for volumes with different disk geometries.
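One way to compare is to look at the usable space each volume actually ends up with rather than the raw disk count, for example (volume names illustrative):

    source> sysconfig -r
    source> df volsrc
    dest>   sysconfig -r
    dest>   df voldst

sysconfig -r shows how many disks in each RAID group are data versus parity, and the "kbytes" column from df is the figure that actually has to match up; two volumes built from the same number of disks can still differ there.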
However, the source volume had a snap reserve of 5%, so we changed the target volume's snap reserve to 5% before taking it offline. We started the snapmirror, and the volume filled up to 103%. I checked, and the snap reserve had been reset to 20%, thus reducing the free space on the target volume and preventing the mirror from completing.
If the SnapMirror initial transfer succeeded, then it should just work from then on, unless you grow the source volume, or delete a SnapMirror snapshot on the source, or something similar to that.
Did you get some sort of error message that signaled that the mirror was prevented from completing due to a lack of space?
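If you want to check after the fact (the admin-host mount point below is just for illustration), the transfer state and any space errors should be visible on the destination:

    dest> snapmirror status
        (shows each destination volume, its state, and whether a
         transfer is running or idle)

    admin$ grep -i snapmirror /mnt/dest_root/etc/messages
        (look for lines such as "snapmirror: destination volume too small")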
I made the volume writable, and tried to reset the snap reserve to 5%, but the same thing happened. I only succeeded in completing the mirror by adding a disk to the target volume, thus allowing enough space, even with a snap reserve of 20%. This sounds like a bug - why should snap reserve be reset on the target volume? Has anyone heard of this before?
The snap reserve "bug" just makes things look weird; it has no practical effect on whether SnapMirror will work or not. This is documented as public burt #18560:
TITLE: Snapreserve value not propagated to snapmirror destination
DESCRIPTION: Snapreserve value not propagated to snapmirror destination. This will cause df numbers to look quite different between the source and destination.
WORKAROUND: Don't worry about it. It actually has no effect on whether SnapMirror will function. It is more of a cosmetic thing.
FIX: Very small fix that just leaves snap reserve percent untouched.
The fix was made for release 6.0. I do not believe it has been made for any of the 5.X releases.
-- Shane
------- It's always a good idea to bypass NVRAM.