If you have different caching requirements for different volumes, you are better off using FlexShare than changing the mode globally. In that case the settings are per volume and remain in effect during takeover. See TR-3832 for a detailed description.
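If memory serves, the per-volume policy is set through the FlexShare priority commands, roughly along these lines (a sketch from the 7-mode CLI, with placeholder volume names; check TR-3832 for the exact syntax and supported values):

priority on
priority set volume metadata_heavy_vol cache=keep
priority set volume bulk_data_vol cache=reuse

Here cache=keep favors retaining that volume's blocks in the external cache, while cache=reuse deprioritizes them.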
________________________________________
From: toasters-bounces@teaparty.net [toasters-bounces@teaparty.net] On Behalf Of Robert McDermott [rmcdermo@fhcrc.org]
Sent: Tuesday, February 07, 2012 03:47
To: toasters@teaparty.net
Subject: Flash Cache questions: symmetrical cache mode required? Metadata mode only with a 512GB Flash Cache?
Hello,
We have a V3170 cluster with a 512GB Flash Cache module installed in each controller. Each module is currently configured to cache normal data (the default setting):
flexscale.enable on
flexscale.lopri_blocks off
flexscale.normal_data_blocks on
We have a vfiler running on one controller that does very little normal IO but generates a very heavy metadata load due to poor application design. This vfiler performs poorly and its function is critical. We are thinking about switching that controller to metadata-only caching mode (flexscale.lopri_blocks off, flexscale.normal_data_blocks off) to improve its performance, but we have a couple of questions:
The Flash Cache best practices guide has the following verbiage about enabling metadata-only mode:
"Because of the much larger size of Flash Cache, this mode is more applicable to PAM I, the original 16GB DRAM–based Performance Acceleration Module, than Flash Cache."
Does that mean this setting doesn't apply to (isn't recommended or supported for) Flash Cache, but only to PAM I? Is using metadata-only mode a bad idea with a large Flash Cache module? If so, why?
The best practices guide also recommends a symmetrical number and size of modules between the controllers in a cluster, but it doesn't say anything about symmetrical cache mode settings. Is it OK to have one controller's Flash Cache set to cache normal data, but the other's set to metadata only? During a failover the cache of the failed controller is essentially lost (it doesn't follow the workload to the surviving controller), so it doesn't seem like a mode mismatch would matter as long as the cluster didn't stay in a failed-over state for a long period of time. What are your thoughts on this?
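For reference, the change itself would presumably just be toggling the one option on that controller, something like:

options flexscale.normal_data_blocks off

with flexscale.enable left on and flexscale.lopri_blocks already off (I'm assuming the standard options command syntax here). We'd plan to compare hit rates before and after with stats show -p flexscale-access, assuming that counter preset still applies to Flash Cache.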
Thanks in advance,
-Robert