We had a bug that always set the default snap reserve value (20%) on SnapMirror destination volumes. SnapMirror doesn't look at the snap reserve value to determine whether the mirror can be done. In the past, all this bug did was make the df values look strange; the mirror would complete no matter what snap reserve was set to.
The fix was made (a couple of releases ago) to copy the snap reserve value from the source volume to the destination volume.
I would like to hear which release of ONTAP you are running, and the details of how you came to believe that this is what caused the mirror not to complete.
I have seen this issue many times and I have never seen it cause the mirror to stop making progress.
Mike Federwisch
Network Appliance, Inc.
Hi, we are running SnapMirror here to move a 350GB volume from our F630 to our new F760. Since we set it up last week, I have noticed a 'bug' in the SnapMirror procedure:
We set up the two volumes with the same amount of raw space. However, the source volume had a snap reserve of 5%, so we changed the target volume's snap reserve to 5% before taking it offline. We started the SnapMirror, and the volume filled up to 103%. I checked, and the snap reserve had been reset to 20%, reducing the free space on the target volume and preventing the mirror from completing. I made the volume writable and tried to reset the snap reserve to 5%, but the same thing happened. I only succeeded in completing the mirror by adding a disk to the target volume, which allowed enough space even with a snap reserve of 20%. This sounds like a bug: why should snap reserve be reset on the target volume? Has anyone heard of this before?
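For anyone following along, here is a quick back-of-the-envelope sketch of why the reset matters. The raw size below is hypothetical (the post doesn't state it), but the arithmetic is the general rule: snap reserve carves a percentage out of the raw volume, so the space visible to the active file system shrinks as the reserve grows.

```python
def usable_space(raw_gb, snap_reserve_pct):
    """Space left for the active file system after snap reserve is carved out."""
    return raw_gb * (1 - snap_reserve_pct / 100)

raw = 400  # hypothetical raw volume size in GB, not from the original post
data = 350  # size of the data set being mirrored

print(usable_space(raw, 5))            # 380.0 GB -- 350 GB fits
print(usable_space(raw, 20))           # 320.0 GB -- 350 GB no longer fits
print(data / usable_space(raw, 20))    # ~1.09, i.e. df would show >100% full
```

With these assumed numbers, a 350GB data set fits comfortably at a 5% reserve but overflows at 20%, which matches the "filled up to 103%" symptom described above.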
Thanks,
Moshe