I recently attempted to expand a volume on one of our 2240s, but it doesn't seem to have had any effect on the free space.
From the random reading I've done, it seems that pushing volumes and aggregates past 90% space utilization is not good in any case. Unfortunately the aggregate that this volume lives on is currently at 96% used (150GB free), and the volume that I'm trying to grow is at 95% used (124GB free).
I attempted to grow the volume by 100G with the 'vol size +100g $volume_name' command. It took about 30s to come back to the prompt, but it did report that the volume had been resized to 100GB larger than what df was showing. However, neither df nor df -A is currently showing any change in size.
Am I out of luck here? Is there any way to see a "queue" of things the controller is waiting to run or attempting to run? Feel free to school me on the badness of pushing anything close to 100% usage. This controller is sitting between 70-95% CPU util as well. FWIW, this specific controller is used for VM storage, and I probably have some misalignment going on.
1. Was this *ever* a snapmirror destination? If it was, there is a "vol options" command to set fs_size_fixed to off (snapmirror turns it on by default).
2. On the netapp, what does "df -h vol-name" show? Are you oversubscribed on snapshots?
3. Are you checking df from a client or the netapp itself? If a client, are you on a qtree with a quota? If so, you will need to modify the quotas file and then resize the quotas for that volume.
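For what it's worth, the "vol options" output mentioned in step 1 is a single comma-separated list of key=value pairs, so the relevant settings can be flagged with a tiny script. A minimal sketch in Python (the sample string mirrors the output quoted later in the thread; the parser itself is an assumption, not NetApp tooling):

```python
def parse_vol_options(output: str) -> dict:
    """Split ONTAP 7-mode 'vol options' output into a key -> value dict.

    The output is a comma-separated list of key=value pairs, e.g.
    "nosnap=off, fs_size_fixed=off, guarantee=volume(disabled)".
    """
    opts = {}
    for pair in output.split(","):
        key, sep, value = pair.strip().partition("=")
        if sep:
            opts[key] = value
    return opts

# Sample values matching the output quoted later in this thread.
sample = "nosnap=off, fs_size_fixed=off, guarantee=volume(disabled), fractional_reserve=100"
opts = parse_vol_options(sample)

# The two settings that most often explain a resize "not taking":
if opts.get("fs_size_fixed") == "on":
    print("fs_size_fixed is on (snapmirror leftover) -- set it to off")
if "(disabled)" in opts.get("guarantee", ""):
    print("space guarantee is disabled -- the aggregate is oversubscribed")
```

Run against the sample above, only the guarantee warning fires, which turns out to be exactly the problem found later in the thread.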
--tmac
*Tim McCarthy* *Principal Consultant*
NCDA ID: XK7R3GEKC1QQ2LVD (Clustered ONTAP; expires 08 November 2014)
RHCE6 110-107-141 (current until Aug 02, 2016): https://www.redhat.com/wapps/training/certification/verify.html?certNumber=110-107-141&isSearch=False&verify=Verify
NCSIE ID: C14QPHE21FR4YWD4 (Clustered ONTAP; expires 08 November 2014)
What about "vol size storage" and "rdfile /etc/quotas"?
--tmac
On Fri, Sep 27, 2013 at 12:01 PM, Phil Gardner phil.gardnerjr@gmail.com wrote:
Hmm we don't use snapmirror with this array. Here is the output of the vol options command:
vol options storage
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off, ignore_inconsistent=off, snapmirrored=off, create_ucode=off, convert_ucode=off, maxdirsize=45875, schedsnapname=ordinal, fs_size_fixed=off, guarantee=volume(disabled), svo_enable=off, svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off, no_i2p=off, fractional_reserve=100, extent=off, try_first=volume_grow, read_realloc=off, snapshot_clone_dependency=off, dlog_hole_reserve=off, nbu_archival_snap=off
Checking df from the netapp itself, snapshot space looks ok:
df -h storage
Filesystem               total    used    avail   capacity  Mounted on
/vol/storage/            2745GB   2433GB  123GB   95%       /vol/storage/
/vol/storage/.snapshot    144GB    102GB   41GB   71%       /vol/storage/.snapshot
Since you mentioned high CPU: have you recently freed up space on the aggregate, and is block reclamation running?
I've seen cases where, after a large delete, the free space doesn't show up on the aggregate right away. I'm not sure what happens if you resize a volume that needs free space the aggregate hasn't actually freed yet.
You can see the block reclamation in "priv set advanced" and "wafl scan status".
On earlier versions of ONTAP, block reclamation could thrash the CPU; on later versions it is throttled, but the downside is that the filer doesn't free blocks as quickly after deletes while its CPU is high.
I also noticed that your vol options output shows:
guarantee=volume(disabled)
Volume guarantees are disabled because you don't have enough space in the aggregate to guarantee the space allocated to the volume.
From the NetApp docs:
"Note: Space guarantees are honored only for online volumes. If you take a volume offline, any committed but unused space for that volume becomes available for other volumes in that aggregate. When you bring that volume back online, if there is not sufficient available space in the aggregate to fulfill its space guarantees, you must use the force (-f) option, and the volume’s space guarantees are disabled. When a volume's space guarantee is disabled, the word (disabled) appears next to its space guarantees in the output of the vol status command."
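In other words, a volume guarantee is a reservation against the aggregate's free space; once the committed total exceeds what the aggregate can back, the guarantee drops to (disabled). A toy model of that bookkeeping (sizes in GB; the numbers and volume names are invented, not the poster's actual layout):

```python
# Toy model of aggregate space commitment; all figures are illustrative.
aggregate_size = 4000
volumes = {"storage": 2890, "vm_luns": 1300}  # hypothetical volume sizes

committed = sum(volumes.values())
unreserved = aggregate_size - committed

if unreserved < 0:
    # The situation described above: the aggregate can no longer back
    # every volume's reservation, so guarantees end up disabled.
    print(f"oversubscribed by {-unreserved}GB; guarantees at risk")
else:
    print(f"{unreserved}GB of aggregate space still unreserved")
```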
On 09/27/2013 12:18 PM, Martin wrote (message quoted above).
I hadn't deleted anything else from the aggr; I only tried to grow the volume with the space available.
Here is the output from wafl scan status for the aggregate and volume:
*> wafl scan status -A aggr0
Aggregate aggr0:
Scan id   Type of scan                   progress
      5   active bitmap rearrangement    fbn 2690 of 29504 w/ max_chain_len 19

*> wafl scan status -V storage
Volume storage:
Scan id   Type of scan                   progress
      2   active bitmap rearrangement    fbn 23823 of 25786 w/ max_chain_len 3
Both are actively increasing. I wonder if the space will show up after it finishes the rearrangement?
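The scan progress can be read as a percentage from the fbn counters in that output (simple arithmetic, not an ONTAP feature):

```python
# fbn counters copied from the 'wafl scan status' output above.
def scan_percent(done: int, total: int) -> float:
    """Fraction of file block numbers the scanner has covered so far."""
    return 100.0 * done / total

print(f"aggr0 scan:   {scan_percent(2690, 29504):.1f}% done")
print(f"storage scan: {scan_percent(23823, 25786):.1f}% done")
```

So the volume-level scan is nearly finished while the aggregate-level one has barely started.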
I also noticed that your vol options output shows:
guarantee=volume(disabled)
Volume guarantees are disabled because you don't have enough space in the aggregate to guarantee the space allocated to the volume.
Could this be from me running the vol size command twice?
Volume space guarantees are disabled.
Check your other volumes to see if any others are disabled, and add up the sizes of all your volumes to make sure the total is less than the size of the aggregate.
`aggr show_space -h` is helpful to see it all at once.
If the total size of the volumes is larger than the aggregate, you will need to shrink one or more volumes until guarantees are re-enabled. If the total size of all the volumes is less than the size of the aggregate, only the volume with guarantees disabled needs to be reduced in size until its guarantee is re-enabled; then its size can be increased by whatever space is available.
Volume guarantees also get disabled at reboot when the total of all the volumes is close to or exceeds the size of the aggregate. I always check volume guarantees after a reboot, since guarantees can end up disabled even when the volume total doesn't exceed the aggregate size.
John
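John's arithmetic above can be sketched as a quick back-of-the-envelope check (hypothetical sizes; `aggr show_space -h` gives the real numbers):

```python
# Back-of-the-envelope check for the shrink advice; sizes in GB, invented.
aggregate_size = 4000
volume_sizes = [2890, 1400]   # hypothetical; read real values from aggr show_space

committed = sum(volume_sizes)
if committed > aggregate_size:
    # Total exceeds the aggregate: shrink until the total fits again.
    print(f"shrink volumes by at least {committed - aggregate_size}GB")
else:
    # Total fits: only the volume with disabled guarantees needs a
    # temporary shrink/regrow to get its guarantee re-enabled.
    print("total fits; shrink and regrow just the disabled volume")
```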
Nice, thanks for the explanation, and this solved my problem.
I shrank another volume in the same aggr by 200GB, and the free space in the problem volume finally showed up.
I'm not sure how the total size of the volumes was ever allowed to exceed the total of the aggr (or why the netapp even lets it get that close), but now I know this is something to watch out for.
Thanks for the help tracking this down everyone. Cheers.
-Phil