Hi,
I have an aggregate with 5 RAID groups of 16 drives each; the total aggregate size is over 8 TB.
On 10/23/07, Andrew Siegel abs@blueskystudios.com wrote:
You don't say how many spindles you have, but if I had to guess, I would say that your write activity is pretty high, and WAFL is having a bit of trouble finding places to put the data. Increasing the number of spindles in the active aggregate(s) would help.
No More Linux! wrote:
We see this a lot on one of our 3050s; the disks are 144 GB 10k FC-AL, and we are very concerned:
 CPU  Total    Net kB/s     Disk kB/s    Tape kB/s  Cache Cache    CP  CP  Disk
      ops/s     in    out    read  write read write   age   hit  time  ty  util
 33%   1633  14262   1967   2934   5948    0     0     5   99%   33%   :  100%
 40%   2036  13033   1799  12622   5279    0     0     5   99%   27%   D  100%
 58%   3800  16660   2534  13262  24247    0     0     5   99%  100%   :  100%
 40%   3271  17156   2268  10978  18927    0     0     5  100%  100%   :  100%
 30%   1217   9699   1802   7390  24154    0     0     5   99%  100%   :  100%
 22%   2318    571   2020   8697  21873    0     0     5   93%  100%   :  100%
 20%   1428    706   2014   7663  21820    0     0     5   94%  100%   :  100%
 63%   2728  57180   3438   7852  19920    0     0     5  100%  100%   :  100%
 76%   2787  30640  17043  44909  38679    0     0     5   99%   77%   D  100%
 72%   1995  20426  20208  41305  35765    0     0     5   98%  100%   :  100%
 68%   2049  19219  22977  43481  36810    0     0     5   98%  100%   :  100%
 66%   2234  23097  20839  39816  38096    0     0     5   98%  100%   :  100%
 63%   2352  23897  27837  35960  16489    0     0     5   99%   55%   :  100%
 54%   2478  23635  30499  36404      0    0     0     5   99%    0%   -  100%
 64%   1644  20728   7715  33592  34764    0     0     5   99%   78%   D  100%
 46%   2901  15620   2120  15486  35537    0     0     5   99%  100%   :  100%
 42%   3209  13807   1929  18263  34560    0     0     5   99%  100%   :  100%
 39%   1118  15173   2115  17007  34335    0     0     5   99%  100%   :  100%
 37%   1111  15072   1830   3434  10154    0     0     5   99%   43%   :  100%
 24%   1215  14888   1678   1331      0    0     0     5   99%    0%   -  100%
What could be causing this, and how can we alleviate it?
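[Editor's aside: the two columns worth watching in output like the above are CP time (consistency-point activity) and Disk util. A minimal Python sketch, not part of the original thread, showing how those columns can be pulled out of a few rows copied verbatim from the sample:]

```python
# Parse a few rows of `sysstat` output (copied from the sample above)
# and summarize the two columns that signal write pressure:
# CP time (column 11) and Disk util (column 13).
rows = """\
33% 1633 14262 1967 2934 5948 0 0 5 99% 33% : 100%
58% 3800 16660 2534 13262 24247 0 0 5 99% 100% : 100%
76% 2787 30640 17043 44909 38679 0 0 5 99% 77% D 100%
54% 2478 23635 30499 36404 0 0 0 5 99% 0% - 100%
""".splitlines()

def parse(line):
    """Split one sysstat row into the fields of interest."""
    f = line.split()
    return {
        "cpu": int(f[0].rstrip("%")),
        "disk_write_kbs": int(f[5]),
        "cp_time": int(f[10].rstrip("%")),
        "disk_util": int(f[12].rstrip("%")),
    }

stats = [parse(r) for r in rows]
# Count intervals where the busiest disk was saturated.
pegged = sum(1 for s in stats if s["disk_util"] == 100)
print(f"{pegged} of {len(stats)} sampled intervals at 100% disk util")
```

Every row in the posted sample shows 100% in the Disk util column, and CP time is frequently at 100% as well, which is consistent with the reply above: writes are arriving faster than WAFL can lay them down on the available spindles.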