Hi Everyone,

 

 

We have a system built around several IBM N-Series N3220 (FAS2240-2) filers. 

 

When I run a SnapVault update of our biggest CIFS share, it runs for 9-10 hours.

The users are running a program that is deleting lots of documents out of this share.

A week ago the share had over 32M inodes.  Today it has 23M inodes.

Just yesterday they deleted 1M inodes.
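
(For reference, the inode count is easy to watch with something like the command below from an admin host; filer and volume names are the same ones shown in the detail section.)

  ssh -q yyyyyyyya -n df -i /vol/v_fnce22p_cifs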

 

During the entire SV run, the disks on the prod filer head are running at 96-100% utilization.

This is clearly hammering the disks and they are a big bottleneck.
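
For reference, the utilization numbers come from sysstat; something like the line below from an admin host grabs 30 one-second samples (a few sample lines are in the detail section).

  ssh -q yyyyyyyya -n sysstat -x -c 30 1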

 

During normal activity, SV on this share usually runs in a few minutes to an hour.

 

My question:  Does this sound reasonable?

              Is a 9-10hr SV run that pounds the disks at >96% util

              what I should expect for a high delete rate in the share?

 

Thanks

 

Rick

 

 

==============

= detail info

==============

 

All heads are running Data ONTAP Release 8.1.2P1 7-Mode.

 

The main Prod filer is a dual-head (HA) system with one head dedicated to FCP/LUNs,

while the other head is dedicated to CIFS shares.

  Prod-Head-A   16x3TB   drives, RAID-DP   All CIFS shares

  Prod-Head-B   22x600GB drives, RAID-DP   Oracle DB FCP/LUNs

 

The SV secondary is a single head that is dedicated as a Snapvault secondary.

  SV-Head-A  18x3TB drives, RAID-DP   SV Secondary for both Prod heads above

 

df and df -i for the share in question.  A week ago it had 32M inodes.

  df

    Filesystem              kbytes       used      avail capacity  Mounted on

    /vol/v_fnce22p_cifs/ 3208544256 3143864380   64679876      98%  /vol/v_fnce22p_cifs/

    /vol/v_fnce22p_cifs/.snapshot          0  264078944          0     ---%  /vol/v_fnce22p_cifs/.snapshot

  df -i

    Filesystem               iused      ifree  %iused  Mounted on

    /vol/v_fnce22p_cifs/   23009838   19147066     55%  /vol/v_fnce22p_cifs/
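
If the snapshot usage above matters, I can post the snapshot list for this volume too; something like:

  ssh -q yyyyyyyya -n snap list v_fnce22p_cifs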

 

The application folks are running programs that are deleting documents out of this share.

The result of this activity is that SnapVault runs on this share are taking 9-10 hours.

The transfer duration is about 7 hours, but the entire command runs for over 9 hours.

 

Here is the snapvault status from one of these backup runs (transfer duration just over 7 hours, total command runtime over 9 hours).

 

  ==> cmd =  ssh -q  xxxxxxxxa -n snapvault status -l /vol/v_fnce22p_cifs/q_fnce22p_cifs

  Snapvault is ON.

  Source:                 yyyyyyyya:/vol/v_fnce22p_cifs/q_fnce22p_cifs

  Destination:            xxxxxxxxa:/vol/v_fnce22p_cifs/q_fnce22p_cifs

  Status:                 Idle

  Progress:               -

  State:                  Snapvaulted

  Lag:                    10:40:15

  Mirror Timestamp:       Tue Aug  5 06:42:03 EDT 2014

  Base Snapshot:          xxxxxxxxa(1896301030)_v_fnce22p_cifs-base.0

  Current Transfer Type:  -

  Current Transfer Error: -

  Contents:               Replica

  Last Transfer Type:     Update

  Last Transfer Size:     542620 KB

  Last Transfer Duration: 07:11:58

  Last Transfer From:     yyyyyyyya:/vol/v_fnce22p_cifs/q_fnce22p_cifs
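
Note that the last transfer only moved about 530 MB (542620 KB).  If the rate of change on the source volume would help, something like snap delta on the prod CIFS head should show it:

  ssh -q yyyyyyyya -n snap delta v_fnce22p_cifs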

 

During this SV run, the disks on the prod head are >95% busy for the duration of the command.

Here are a few sysstat lines captured during the SV run.

 

CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s

                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out

23%      0   1113      0    1131     482   1600   25612   1674       0      0     1     90%   64%  Ts   98%       8     10      0       0      1       0      0

24%      0   1167      0    1185    1052   4197   16627   2151       0      0    58s    92%   61%  12s  100%       8     10      0       0      1       0      0

28%      0   1177      0    1197    1012   2645   28593   2186       0      0    53s    90%   70%  Ts   96%      10     10      0       0      1       0      0

28%      0   1109      0    1127     890   1478   33394   2336       0      0     5s    90%   89%  14    97%       8     10      0       0      1       0      0
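
If per-disk numbers would help, something like the following at the filer console (priv advanced) during the next SV window should show which disks and ops are busiest:

  priv set advanced
  statit -b           (start per-disk stats collection)
  ... let a chunk of the SV transfer run ...
  statit -e           (dump the per-disk utilization breakdown)
  wafl scan status    (check for block-reclaim scanners kicked off by the deletes)
  priv set admin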


