Hello Toasters,
We've unfortunately had to reduce the frequency of our volume snapmirror updates in order to give our destination aggregate time to deswizzle. We would much prefer hourly volume snapmirror updates, but it turns out our source volumes are large enough, and/or have enough snapshots, that the deswizzle process never completes on the destination aggregate. Our volume snapmirror destination aggregate is a single tray of SATA. Prior to reducing the frequency of snapmirror updates, the SATA aggregate was running at 90-100% disk utilization 24x7 with little to no client IO hitting the filer. Needless to say, serving data from that aggregate was VERY SLOW despite the light IO (<300 IOPS) required by the clients whose primary data lives on the SATA aggregate.
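For anyone who wants to see the same behavior on their own system, this is roughly what we've been watching on the destination (7-mode, advanced privilege; "dst_vol" is just a placeholder for our mirror volume):

    priv set advanced
    wafl scan status dst_vol    (shows the "volume deswizzling" scan still running long after the update)
    sysstat -x 1                (disk utilization column pegged at 90-100% while the scan runs)
    priv set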
We've done what we can to reduce the impact of deswizzling, namely cutting down on snapshots and reducing the volume size. I understand that reducing volume size doesn't reduce the maxfiles setting, which I believe ultimately impacts the amount of deswizzling necessary on the destination. I'm still digging into other options we can try, but reducing the frequency of snapmirror updates seems to have the most impact.
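In case it's relevant, the quick way we've been checking the inode side of this (again, the volume name is just a placeholder):

    df -i /vol/dst_vol    (inodes used vs. total on the mirror)
    maxfiles dst_vol      (reports the current maxfiles value for the volume)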
How does one plan for the IOPS or disk utilization resulting from the deswizzle process? If I recall correctly, during our planning sessions with NetApp, our NetApp SE never touched on the IOPS or number of spindles required to handle deswizzling while serving data from the same aggregate. In fact, I think our aggregates were sized purely on the IO generated by active clients (not active clients + deswizzle).
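To make the question concrete, here's the back-of-envelope math I keep coming back to (the disk count and per-disk figure are only assumptions for illustration): a 14-disk SATA shelf with RAID-DP and a spare leaves roughly 11 data spindles, and at the usual planning figure of ~75 random IOPS per 7.2K SATA drive that's only ~800 IOPS for the whole aggregate. Our clients need fewer than 300 of those, which looks comfortable on paper, yet the deswizzle scanners apparently consume everything that's left and then some.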
Thanks, Phil