IMHO, that means that with the much smaller cards, going ONE way or the other was kinda the way to go, but with the much larger cards there should be plenty of room for both.
That'd be a LOT of metadata (disk) blocks to cache on top of main system hashed (processed) data.
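Before flipping modes, it might be worth watching what the cache is actually doing on that head; if I remember right (this is 7-Mode and I'm going from memory, so double-check the preset name on your ONTAP version), the built-in flexscale counters will show hit rate and cache usage over time:

stats show -p flexscale-access

That should give you a feel for how much of the 512GB is actually earning its keep under the current normal-data setting.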
On Mon, Feb 6, 2012 at 3:47 PM, Robert McDermott rmcdermo@fhcrc.org wrote:
Hello,
We have a V3170 cluster with a 512GB Flash Cache module installed in each controller. Each module is currently configured to cache normal data (the default settings):
flexscale.enable on
flexscale.lopri_blocks off
flexscale.normal_data_blocks on
We have a vfiler running on one controller that does very little normal IO but has a very heavy metadata load due to poor application design. This vfiler has poor performance and its function is critical. We are thinking about switching to metadata-only caching mode (flexscale.lopri_blocks off, flexscale.normal_data_blocks off; the exact option changes we have in mind are sketched at the end of this message) to improve its performance, but we have a couple of questions:
The Flash Cache best practices guide has the following verbiage about enabling metadata-only mode:
"Because of the much larger size of Flash Cache, this mode is more applicable to PAM I, the original 16GB DRAM–based Performance Acceleration Module, than Flash Cache."
Does that mean that this setting doesn't apply to (i.e., isn't recommended or supported for) Flash Cache, but is for PAM I? Is using metadata-only mode a bad idea with a large Flash Cache module? If so, why?
The best practices guide also indicates that it's recommended to have a symmetrical number and size of modules between controllers in a cluster, but it doesn't say anything about symmetrical cache mode settings. Is it OK to have one controller's Flash Cache set to cache normal data, but the other's set to metadata only? During a failover, the cache of the failed controller is essentially lost (it doesn't follow it to the surviving controller), so it doesn't seem like it would matter as long as the cluster didn't stay in that failover state for a long period of time. What are your thoughts on this?
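For reference, the change we're contemplating is just the following on the affected controller (assuming the usual 7-Mode option commands apply to the V3170; please correct me if the syntax differs):

options flexscale.enable on
options flexscale.normal_data_blocks off
options flexscale.lopri_blocks off

i.e., leave the cache enabled but restrict it to metadata.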
Thanks in advance,
-Robert
Toasters mailing list
Toasters@teaparty.net
http://www.teaparty.net/mailman/listinfo/toasters