Thanks for all the responses. This is to meet an IOPS objective. I've run reallocate scans on the LUNs before, but it isn't clear whether that has done anything. Does a reallocate scan force the data to spread out over more spindles?
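One way to check whether those scans actually accomplished anything is a measure-only pass; a rough sketch, assuming 7.x reallocate syntax (the LUN path is just a placeholder):

    reallocate measure /vol/dbvol/lun0   # measure-only scan; reports a layout optimization rating when it completes
    reallocate status -v                 # shows the state and last result of each scan

A higher optimization number means a more fragmented layout, so if the rating barely changes after a reallocate start, the scan isn't buying much.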
Suresh
-----Original Message-----
From: Blake Golliher [mailto:thelastman@gmail.com]
Sent: Monday, January 22, 2007 12:06 PM
To: Suresh Rajagopalan
Cc: Borders, Rich; toasters@mathworks.com
Subject: Re: Aggregate expansion
That sounds fair. The thing you'll need to keep in mind is that all your data was written to the 14 disks you first created the aggregate with. After you add the other set of disks, the data you write will likely be put there, since that's where the most free contiguous space is.
So if you are adding spindles for the space you need, and not to meet an IOPS objective, you should be OK. If you are adding more disks in an attempt to give more disk IOPS resources to the already existing data set, then it's a bit harder. If you have enough free space, you can use reallocate to re-spread the existing data across all the spindles, as in the sketch below.
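For example, once the new disks are in place, something along these lines should do it on 7.x (the aggregate and volume names are placeholders):

    aggr add aggr0 42                 # grow the aggregate from 14 to 56 disks
    reallocate start -f /vol/dbvol    # full reallocation: rewrites existing blocks across all the spindles
    reallocate status -v              # watch progress

The -f (full) pass is what moves blocks that are already laid out optimally on the old disks; note it can temporarily grow snapshot usage, because the rewritten blocks diverge from the copies locked in snapshots.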
Hope that helps, -Blake
On 1/22/07, Suresh Rajagopalan SRajagopalan@williamoneil.com wrote:
I mean with existing data on the aggregate.
To clarify, say I first create an aggregate with 14 disks (and default raid size). Then this aggregate is populated with data. After a period of time the aggregate is expanded by adding disks, say to 56 disks.
The question is, is there a difference (in performance, efficiency) between the aggregate as described above and an aggregate that was originally created from all 56 disks?
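For what it's worth, after the expansion the resulting RAID-group layout can be inspected and compared with a freshly built aggregate (aggregate name assumed):

    aggr status -r aggr0     # list each RAID group and the disks in it

With the default raidsize, the grown aggregate should end up with essentially the same RAID groups as one created from 56 disks up front; the difference discussed above is in where the existing data blocks physically sit, not in the RAID-group layout itself.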
Suresh
-----Original Message-----
From: Borders, Rich [mailto:Rich.Borders@netapp.com]
Sent: Monday, January 22, 2007 11:25 AM
To: Suresh Rajagopalan; toasters@mathworks.com
Subject: RE: Aggregate expansion
Yes... You can make hot disks happen. Do you mean without adding any data?
Richard D Borders
CPR Escalations Engineer
RTP, North Carolina USA - Network Appliance, Inc.
Email: rborders@netapp.com
Phone: (919) 476-5236  Cell: (919) 606-5099  Fax: (919) 476-5608
-----Original Message-----
From: Suresh Rajagopalan [mailto:SRajagopalan@williamoneil.com]
Sent: Monday, January 22, 2007 12:46 PM
To: toasters@mathworks.com
Subject: Aggregate expansion
Is there any difference between creating an aggregate on a certain number of disks (say n), and then later expanding the aggregate to N disks, as opposed to creating the initial aggregate on N disks?
Suresh
Usually you can tell by monitoring the disk xfers column in statit. Reallocate is designed to move data around to get better utilization of your disk resources. I've used it in cases where I inherited volumes or aggregates with a poor disk layout. It takes time, and you need free space, but otherwise it does the trick.
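A sketch of the kind of check that's meant (statit lives in advanced privilege, so use it with care; the sampling interval is arbitrary):

    priv set advanced
    statit -b                 # begin collecting per-disk statistics
    # ...wait a minute or two under a typical workload...
    statit -e                 # end and print; look at the xfers and ut% columns per disk
    priv set admin

If the original 14 data disks show far more xfers than the newly added ones, the existing data set is still concentrated on the old spindles.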
-Blake
PS: you can control the speed with wafl scan speed #, where # can be set anywhere from 1 to 99,999.
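That knob also lives in advanced privilege; roughly (the value shown is arbitrary):

    priv set advanced
    wafl scan speed           # with no argument, shows the current setting; note it down
    wafl scan speed 2000      # raise the scanner speed while the reallocate runs
    priv set admin

Setting it back to the original value once the scan finishes keeps the scanner from competing with the regular workload.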
This may be more of a Linux issue than a NetApp one... anyway, I'm trying to get Kerberized NFSv3 going between an 810 cluster running 7.0.5 and Fedora Core 5 and 6 clients, following the writeup in NetApp's TR-3481. FC6 works, but the rpc.gssd daemon dies on startup under FC5 with a segmentation fault, and the traceback doesn't seem to shed any light. Any ideas on what I've missed, or how to get something useful out of the core dump?
Script started on Mon Jan 29 13:23:46 2007
angora$ cd /
angora$ uname -r
2.6.18-1.2257.fc5smp
angora$ rpm -qf /usr/sbin/rpc.gssd
nfs-utils-1.0.8-4.fc5
angora$ sudo /usr/kerberos/sbin/ktutil
ktutil:  read_kt /etc/krb5.keytab
ktutil:  l -e
slot KVNO Principal
---- ---- ---------------------------------------------------------------------
   1    3 nfs/angora.cs.arizona.edu@CS.ARIZONA.EDU (DES cbc mode with CRC-32)
ktutil:
angora$
angora$ sudo sh -c "ulimit -c unlimited;/usr/sbin/rpc.gssd -f -vvv"
Using keytab file '/etc/krb5.keytab'
Processing keytab entry for principal 'nfs/angora.cs.arizona.edu@CS.ARIZONA.EDU'
We will use this entry (nfs/angora.cs.arizona.edu@CS.ARIZONA.EDU)
sh: line 1:  9479 Segmentation fault      (core dumped) /usr/sbin/rpc.gssd -f -vvv
angora$ sudo gdb /usr/lib/debug/usr/sbin/rpc.gssd.debug /core.9479
GNU gdb Red Hat Linux (6.3.0.0-1.134.fc5rh)
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i386-redhat-linux-gnu"...Using host libthread_db library "/lib/libthread_db.so.1".
warning: core file may not match specified executable file.
Failed to read a valid object file image from memory.
Core was generated by `/usr/sbin/rpc.gssd -f -vvv'.
Program terminated with signal 11, Segmentation fault.
#0  0x00428d58 in ?? ()
(gdb) bt
#0  0x00428d58 in ?? ()
#1  0x00000004 in ?? ()
#2  0x08019c78 in ?? ()
#3  0x0042cdc9 in ?? ()
#4  0x08019c00 in ?? ()
#5  0xbfb4f27c in ?? ()
#6  0x00c34918 in ?? ()
#7  0x00471e00 in ?? ()
#8  0x08019c2c in ?? ()
#9  0x00000018 in ?? ()
#10 0x00000001 in ?? ()
#11 0xbfb4f25c in ?? ()
#12 0xbfb4f258 in ?? ()
#13 0xbfb4f240 in ?? ()
#14 0x08019c60 in ?? ()
#15 0xbfb4f204 in ?? ()
#16 0xbfb4f1f8 in ?? ()
#17 0xbfb4f228 in ?? ()
#18 0x08019c08 in ?? ()
#19 0xbfb4f178 in ?? ()
#20 0x08019c38 in ?? ()
#21 0xbfb4f300 in ?? ()
#22 0x007abc0e in ?? ()
---Type <return> to continue, or q <return> to quit---
#23 0xbfb4f2f0 in ?? ()
#24 0x007abb91 in ?? ()
#25 0x080169a0 in ?? ()
#26 0x00000028 in ?? ()
#27 0xbfb4f178 in ?? ()
#28 0x00000000 in ?? ()
(gdb)
angora$
angora$ exit
exit
Script done on Mon Jan 29 13:25:13 2007
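One thing that might get a readable backtrace out of that core is to point gdb at the real binary rather than at the detached .debug file, and let it pick up the separate debuginfo itself (a sketch; the core file name is taken from the transcript above):

    angora$ sudo gdb /usr/sbin/rpc.gssd /core.9479
    (gdb) set debug-file-directory /usr/lib/debug
    (gdb) bt

The "core file may not match specified executable file" warning suggests gdb was handed the stripped debuginfo stub instead of the executable itself, which would explain why every frame comes back as ?? ().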
With some help from the Linux NFS mailing list, this does look like a definite client-side problem; changing
hosts: files nis dns
to
hosts: files dns
in /etc/nsswitch.conf gets around the rpc.gssd segfault.
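For anyone hitting the same thing, a rough sketch of applying and verifying the workaround on an FC5 client (the filer name, export path, and mount point are placeholders):

    # edit /etc/nsswitch.conf so the hosts line reads:  hosts: files dns
    service rpcgssd restart
    mount -t nfs -o sec=krb5 filer1:/vol/vol1 /mnt/test

With sec=krb5 on the mount, rpc.gssd has to be running and able to find the nfs/<hostname> key in /etc/krb5.keytab, which is what the ktutil listing earlier in the thread was confirming.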