Please forgive all these questions to the list -- new toy syndrome.
We have deployed a FAS960c pair in a University-wide email system with about 30,000 accounts.
We have three large Sun Solaris servers running CommuniGate Pro, and they connect to the filer pair over an FC SAN.
We have mapped 2 LUNs to each server, one from each filer, and striped them with Veritas. This has nicely balanced the load across both filers. We built a Veritas vxfs filesystem out of the two striped LUNs on each mail server and copied the inboxes over. Each mail server holds 1/3 of the inboxes and operates independently of the other two.
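In case it helps to picture why the striping balances the load so evenly, here is a rough Python sketch of the address mapping on the striped volume. The 64 KB stripe unit and the locate() helper are illustrative assumptions, not measurements; the real unit size depends on how the Veritas volume was built.

STRIPE_UNIT = 64 * 1024   # bytes per stripe unit (an assumption, not measured)
NUM_COLUMNS = 2           # two LUNs, one from each filer

def locate(volume_offset: int) -> tuple[int, int]:
    """Map a byte offset on the striped volume to (LUN index, offset within that LUN)."""
    unit_index = volume_offset // STRIPE_UNIT
    lun = unit_index % NUM_COLUMNS    # even units land on one filer, odd units on the other
    row = unit_index // NUM_COLUMNS   # how many full units precede it on that LUN
    return lun, row * STRIPE_UNIT + volume_offset % STRIPE_UNIT

if __name__ == "__main__":
    for off in (0, 64 * 1024, 128 * 1024, 200 * 1024):
        print(off, locate(off))

Because consecutive stripe units alternate between the two LUNs, a busy filesystem ends up sending roughly half its I/O to each filer.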
It went into production today and the netapps appear to be more than up to the task.
We are using mbox format, where an email folder is one sequential file. Consequently our email servers read a LOT more KB/sec than they write, by a factor of more than ten to one. This afternoon I noticed that our filers were each reading about 100 MB/sec to 120 MB/sec from the disks. But I also noticed that this was 25% to 30% more than they were sending out over the FC SAN to the mail servers.
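For anyone not living with mbox, here is a rough Python sketch (stdlib mailbox module, hypothetical path and helper names) of why the access pattern is so read-heavy: almost anything you do to a folder means scanning the whole file, while a delivery is just a small append at the end.

import mailbox

INBOX = "/var/mail/example-user"   # hypothetical path, not one of ours

def count_messages(path: str) -> int:
    """Enumerating a folder forces a front-to-back read of the entire mbox file."""
    box = mailbox.mbox(path)
    try:
        return len(box)            # building the table of contents reads every line
    finally:
        box.close()

def deliver(path: str, raw_message: bytes) -> None:
    """Delivery is just an append -- one small sequential write at the end of the file."""
    with open(path, "ab") as f:    # a real MTA also locks the file and writes a "From " separator
        f.write(raw_message)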
Each filer has 6 GB of RAM. Reading disk data at a rate of 100 MB/sec should completely turn over the RAM cache in about a minute, and sure enough, the cache age was typically 1 minute or less.
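The back-of-the-envelope arithmetic, as a tiny Python snippet (this treats all 6 GB as read cache, which overstates it a bit, so the real turnover is even faster):

ram_mb = 6 * 1024              # 6 GB of filer RAM, expressed in MB
read_rate_mb_per_s = 100       # low end of the observed disk read rate

turnover_s = ram_mb / read_rate_mb_per_s
print(f"cache turnover roughly every {turnover_s:.0f} seconds")   # about 61 seconds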
I figured that we were reading 25% to 30% more from the disks than we were sending over the FC SAN because of read-ahead. So I set "minra on" on the volume. Immediately we started reading slightly less from the disks than we were sending over the FC SAN. Disk utilization also dropped from around 22% to around 15%. The cache hit percentage dropped a little, from 97% to 96%, but then went back up. (How can cache hits be so high with the RAM cache being turned over every minute?)
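To put rough numbers on the read-ahead overhead -- these are illustrative figures consistent with the 25% to 30% gap above, not exact counters off the filers:

def wasted_fraction(disk_read_mb_s: float, sent_mb_s: float) -> float:
    """Fraction of data read from disk that never went out over the FC SAN."""
    return max(0.0, (disk_read_mb_s - sent_mb_s) / disk_read_mb_s)

# before minra: disks reading roughly a quarter more than the FC side was carrying
print(wasted_fraction(disk_read_mb_s=120.0, sent_mb_s=95.0))   # about 0.21
# after minra: disk reads dip slightly below the FC rate, the cache makes up the rest
print(wasted_fraction(disk_read_mb_s=90.0, sent_mb_s=95.0))    # 0.0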
So it appears that in this situation minra=on is a winner -- or is there something I have missed?
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support