Mate, you have a ~26TB aggregate with ~1.8TB spare. Increase the size of the volume by, say, 200g and immediately start the volume move. The sooner you do this the better off you will be; you're close to the limits already, but you still have some room to move. Leave it any longer and all that space may get eaten up.
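
The two commands would look roughly like this (volume, vserver and aggregate names taken from later in the thread; adjust the size to suit — the idea is to give the vol move's SnapMirror snapshot room to be created):

    Cluster::> volume size -vserver svm1 -volume bigvol -new-size +200g
    Cluster::> volume move start -vserver svm1 -volume bigvol -destination-aggregate cluster_03_sata01_8t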

Go go go do it now.

Regards,

Andrew

On 5/11/19, 8:53 am, "John Stoffel" <john@stoffel.org> (via toasters-bounces@teaparty.net) wrote:


   Scott> And the underlying aggregate is full too. I guess we relied a
   Scott> bit too much on thin provisioning and now need to get some
   Scott> volumes off the full aggregate.

   This is what scares me about Thin provisioning, because when it does
   go south, you're in a world of hurt until you can reduce the load
   somehow.

   Do you have any snapshots on the other volumes you can delete to make
   space?  Or maybe you have to bite the bullet and add more disks to
   that Aggregate?
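
   Roughly (list first, then delete — the snapshot name here is just a made-up example):

   Cluster::> volume snapshot show -vserver svm1 -volume * -fields size
   Cluster::> volume snapshot delete -vserver svm1 -volume somevol -snapshot nightly.0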


   Scott> Cluster::> aggr show-space -aggregate-name cluster_03_sata02_8t

   Scott>       Aggregate : cluster_03_sata02_8t

   Scott>       Feature                                          Used      Used%
   Scott>       --------------------------------           ----------     ------
   Scott>       Volume Footprints                             25.37TB       100%
   Scott>       Aggregate Metadata                            809.8MB         0%
   Scott>       Snapshot Reserve                                   0B         0%
   Scott>       Total Used                                    25.37TB       100%

   Scott>       Total Physical Used                           24.23TB        96%

   Scott>     

Andrew Werchowiecki
Technical Team Lead
Andrew.Werchowiecki@xpanse.com.au
www.xpanse.com.au
Mobile: +61 422 702 339
Office: +61 8 9322 6767
Fax: +61 8 9322 6077
85 Havelock St
West Perth
WA 6005
Australia

On Nov 4, 2019, at 12:58 PM, Parisi, Justin <Justin.Parisi@netapp.com> wrote:

   Vol move takes a snapshot when doing the move; that's how the transfer happens via snapmirror.

   How full is the volume?
   ----------------------------------------------------------------------------------------------
   Scott>     From: Scott Classen <sclassen@lbl.gov>
   Scott>     Sent: Monday, November 4, 2019 3:55 PM
   Scott>     To: Parisi, Justin <Justin.Parisi@netapp.com>
   Scott>     Cc: Toasters <toasters@teaparty.net>
   Scott>     Subject: Re: Volume move.... Working?

   Scott>     NetApp Security WARNING: This is an external email. Do not click links or open attachments
   Scott>     unless you recognize the sender and know the content is safe.

   Scott>     This volume has snapshots disabled so I’m not sure what the error is referring to.

   Scott>     So it looks like the job is running… maybe I just need to be patient? I just haven’t done any
   Scott>     volume moves (other than when moving to a new filer) so am not sure what to expect in terms of
   Scott>     speediness.

   Scott>     Cluster::> job show -id 11160
   Scott>                                 Owning
   Scott>     Job ID Name                 Vserver    Node           State
   Scott>     ------ -------------------- ---------- -------------- ----------
   Scott>     11160  Volume Move          Cluster    node-03     Running
   Scott>            Description: Move "bigvol" in Vserver "svm1" to aggregate "cluster_03_sata01_8t"

   Scott>     S

   >> On Nov 4, 2019, at 12:45 PM, Parisi, Justin <Justin.Parisi@netapp.com> wrote:
   >>
   >> If it couldn't create a snapshot, it's probably not doing a vol move.
   >>
   >> What does "event log show" say? Job show?
   >>
   >> -----Original Message-----
   >> From: toasters-bounces@teaparty.net <toasters-bounces@teaparty.net> On Behalf Of Scott Classen
   >> Sent: Monday, November 4, 2019 3:40 PM
   >> To: Toasters <toasters@teaparty.net>
   >> Subject: Volume move.... Working?
   >>
   >>
   >>
   >>
   >>
   >> Hello toasters,
   >>
   >> I’m attempting to move a 3TB vol to a new aggregate. The destination aggregate has 100TB available space. It’s been preparing to transfer for over 3 hours…. Is this normal?
   >>
   >>
   >>
   >> Cluster::> volume move show -vserver svm1 -volume bigvol
   >>
   >> Vserver Name: svm1
   >> Volume Name: bigvol
   >> Actual Completion Time: -
   >> Bytes Remaining: -
   >> Destination Aggregate: cluster_03_sata01_8t
   >> Detailed Status: Volume move job preparing transfer
   >> Error: Creating Snapshot copy with owner tag: Not enough space for Snapshot tags
   >> Estimated Time of Completion: -
   >> Managing Node: node-03
   >> Percentage Complete: -
   >> Move Phase: replicating
   >> Estimated Remaining Duration: -
   >> Replication Throughput: -
   >> Duration of Move: 03:21:53
   >> Source Aggregate: cluster_03_sata02_8t
   >> Start Time of Move: Mon Nov 04 09:10:19 2019
   >> Move State: warning
   >> Is Source Volume Encrypted: false
   >> Encryption Key ID of Source Volume:
   >> Is Destination Volume Encrypted: false Encryption Key ID of Destination Volume:
   >>
   >>
   >>
   >> Should I be concerned about the "Error: Creating Snapshot copy with owner tag: Not enough space for Snapshot tags"?
   >>
   >> Is there any way to see if this vol move job is hung or actually doing something?
   >>
   >> Thanks,
   >> Scott
   >>
   >>
   >>
   >>
   >>
   >>
   >> _______________________________________________
   >> Toasters mailing list
   >> Toasters@teaparty.net
   >> http://www.teaparty.net/mailman/listinfo/toasters

