Is it possible that the order in which the quotas are defined in the quota file is causing this different behavior?
-----Original Message-----
From: Adam McDougall [mailto:mcdouga9@egr.msu.edu]
Sent: Monday, March 21, 2005 11:18 AM
To: Charles Bartels
Cc: toasters@mathworks.com
Subject: Re: df size reporting differences in ONTAP 7.0
On Wed, Mar 16, 2005 at 10:51:54AM -0800, Charles Bartels wrote:
I may be confused but it sounds like you are describing the old behavior, which 6.5.3 exhibits.
Yes. But certain configurations of 7.0 do *not* have that problem. I have user quotas under a qtree in a traditional volume that show the "correct" answer when the user runs "df", meaning the user sees their partition sized to their quota.
All in all, I was alerted today that a Bug ID 154530 was created and will be "fixed" in a future release.
I took a look at that bug on the NOW site and it was disappointingly vague. It has a title, and nothing else.
-C.
It is publicly available now:
Bug ID: 154530
Title: 'df' reports size based on all quotas (including user/group) over NFS in a qtree
Bug Severity: 3 - Serious inconvenience
Bug Status: Not Fixed
Product: Data ONTAP
Bug Type: WAFL
Description:
In releases prior to 7.0, running the 'df' command on an NFS mount of an exported qtree shows the export size as the smaller of the volume size or the qtree quota, if one exists. In 7.0, this behavior was inadvertently changed to return the size as the minimum of the volume size or any applicable quota, including user and group quotas for the user running 'df' in addition to any qtree quota.
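The difference the bug describes can be illustrated with a small sketch (this is not ONTAP code, and the sizes are hypothetical; it just models the two size computations the description gives):

```python
# Sketch (not ONTAP code) of the 'df' size reported for an NFS-mounted
# qtree, per the bug description above. Sizes are hypothetical GB values.

def df_size_pre_70(volume_size, qtree_quota=None):
    # Pre-7.0: the smaller of the volume size or the qtree quota, if one exists.
    return min(volume_size, qtree_quota) if qtree_quota is not None else volume_size

def df_size_70(volume_size, qtree_quota=None, user_quota=None, group_quota=None):
    # 7.0: the minimum of the volume size and ANY applicable quota,
    # including user/group quotas for the user running 'df'.
    limits = [volume_size, qtree_quota, user_quota, group_quota]
    return min(l for l in limits if l is not None)

# A user with a 10 GB user quota, in a 500 GB qtree, on a 2000 GB volume:
print(df_size_pre_70(2000, qtree_quota=500))             # prints 500
print(df_size_70(2000, qtree_quota=500, user_quota=10))  # prints 10
```

This is why a user under a small user quota suddenly sees their "partition" shrink to that quota after the 7.0 upgrade.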
We have a NetApp that we are retiring and it is no longer under maintenance. A disk failed over the weekend and the one spare disk was used. I would like to disable the feature that shuts down the NetApp after a period of time if another disk should fail. The command is "options raid.timeout <value>", with value being the number of hours the NetApp will continue to run in degraded mode. What is the valid range for <value>? For now I have set it to 672 hours (a month) and it seems to have accepted that, but I am wondering what the valid range is and whether it is possible to disable the automatic shutdown entirely.
Thanks,
-- Mike
The highest value you can set it to is 4,294,967,295. That's a lot of hours. ;-)
--paul
On Mon, 21 Mar 2005 11:51:28 -0800, Mike Mueller Michael.D.Mueller@jpl.nasa.gov wrote:
Paul Galjan wrote:
Highest value you can modify it to is 4,294,967,295. That's a lot of hours. ;-)
--paul
I don't know... it's only 0.49 Gy. Not long by astronomical standards...
;-)
Matt Phelps wrote:
(sorry... that should read "0.49 My"... damn math ;-)
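For the record, the arithmetic behind the correction checks out; a quick sketch (4,294,967,295 is 2**32 - 1, the largest unsigned 32-bit value):

```python
# Quick check of the joke's arithmetic: the maximum raid.timeout value
# (2**32 - 1 hours) expressed in years.
max_hours = 4_294_967_295           # 2**32 - 1
hours_per_year = 24 * 365.25        # using a Julian year
years = max_hours / hours_per_year
print(f"{years:,.0f} years")        # roughly 490,000 years, i.e. ~0.49 My
```

So 0.49 My (megayears), not 0.49 Gy (gigayears).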