Et al.
I am not trying to vendor bash. I just want people to understand that life is not all clean and rosy with StorageX/VFM. Yes, it is a MUCH better management interface for DFS than what comes with Windows 2000 or 2003. Today. Yes, it has some functionality that doesn't otherwise exist with DFS as a standalone product, and you may very well find that functionality necessary or useful.
It brings functionality into a DFS environment that you can only get in the DFS 2.0 beta. Today. But it also has a few aspirations beyond its capabilities and it has some fairly significant architectural weaknesses which lead to more problems should you choose to implement them. This isn't vendor bashing. It is architecture bashing.
I would also like to state up front here that I am not necessarily a Microsoft technology lover. I just happen to be forced to work closely with it on a daily basis. If you really don't care about CIFS and DFS it might just be better to move along. Hopefully only a few more messages will be needed on this thread.
I have no choice but to give specifics and ask for additional detail to support the responses that have been given. I doubt anyone will want to see response generalities and marketing fluff like "look at the awards this product has won". See below.
-----Original Message----- From: Louis Elias Sent: Wednesday, May 04, 2005 6:55 PM
Guys,
I have no intention of vendor slamming or trying to sell anything. However, once again, for the sake of posterity (and accurate technical knowledge) I am compelled to set the record straight. This unsolicited slam on VFM, DFS, or whoever the intended target was is very inaccurate. I will only respond to the inaccuracies. Please see my responses below.
1.==================================================================== [Original] VFM or NuView StorageX (they are the exact same product) is an enhanced management GUI for Microsoft DFS with some additional hooks for automating failover in a NetApp(and other?) environment. [Response] Actually, StorageX is an integrated set of applications that enables enterprises to build unified, global namespaces across CIFS and NFS protocols and enables administrators to perform policy-based heterogeneous data management activities without interrupting end user access to data. These policies automate data and storage services, such as: heterogeneous network data management, data migration and consolidation, business continuity, storage optimization, data lifecycle management, remote site data management, and data classification and reporting. Yes, VFM/StorageX supports Microsoft DFS, as well as NFS. It is used to manage data on platforms such as Windows, NetApp, Linux, Unix, Netware and much more from a single, easy-to-use GUI.
NuView has a white paper on their Global Namespace advantage available on their website: http://www.nuview.com/Resources/public_downloads/PDF/gnsadv.pdf Nowhere in it do they mention DFS, NIS, NIS+, LDAP or automount as dependencies. It gives the impression that StorageX is, completely and wholly, the total solution for a multi-protocol global namespace. In fact, under "Standards based platform" they state specifically, "unlike many Global Namespace solutions, StorageX does not require the introduction of a new protocol on the network". This white paper leads to several interesting questions.
In "How DFS Works", DFS is specifically defined as a protocol layered on top of and using CIFS for communication, but it is a separate protocol and it must be enabled for it to be active and useable. Obviously, StorageX doesn't require adding a new protocol to build their global namespace, correct?
Network Ports Used by DFS

DFS uses the following network ports:
- NetBIOS Name Service: UDP 137 / TCP 137 (domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets)
- NetBIOS Datagram Service: UDP 138 (domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets)
- NetBIOS Session Service: TCP 139 (domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets)
- LDAP Server: UDP 389 / TCP 389 (domain controllers)
- Remote Procedure Call (RPC) endpoint mapper: TCP 135 (domain controllers)
- Server Message Block (SMB): UDP 445 / TCP 445 (domain controllers; root servers that are not domain controllers; servers acting as link targets; client computers acting as link targets)
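As an aside, if DFS referrals behave oddly it is worth confirming that these ports are actually reachable from a client before blaming DFS or any tool layered on top of it. A minimal illustration using Microsoft's PortQry utility; the server name ROOTSRV1 is hypothetical, and you should verify the switches with portqry -? on your copy:

  portqry -n ROOTSRV1 -p tcp -e 445
  portqry -n ROOTSRV1 -p tcp -e 389
  portqry -n ROOTSRV1 -p udp -e 137

A plain "telnet ROOTSRV1 445" works as a crude check for the TCP ports if PortQry isn't installed.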
On to the specific questions.
Can StorageX/VFM build a CIFS global name space without DFS? If so, please describe how this is accomplished.
Please describe how StorageX/VFM builds and maintains an NFS global namespace in an NFS-only NetApp or Celerra environment without requiring the addition of NIS, NIS+ or LDAP protocols or the automount service.
Are there any referenceable, enterprise-class customers using your product to build an NFS global namespace? If so, please name one.
How does your data movement methodology handle open and locked files?
How much down time is needed from your last point in time copy or snapshot until all users are successfully using only the remote location?
Does StorageX/VFM have the ability to tell you all of the clients that are accessing the current location and the ability to ensure that they have all been moved to use the new location? If so, how is this accomplished?
How does StorageX/VFM handle SID/ACL translation when performing CIFS data copies across multiple Resource or Active Directory domains?
Are all creation, modification and last accessed date and time stamps retained while moving data using StorageX/VFM?
2.==================================================================== [Original] If there is anything Microsoft hates more than anything else, it is telling a customer that they need to purchase a third party product to manage a Microsoft service.
This means that either A) Microsoft is going to purchase technology so that they can give it to you for free or B) they are going to develop technology so that they can give it to you for free. Either way, a commercial third party product for managing DFS has a relatively short shelf life with a fairly steep price tag if you have a significant amount of storage.
[Response] What backup software do you use and why aren't you using the built-in Windows backup software? Using VFM/StorageX reaps many benefits for organizations, including those that are currently running Microsoft DFS. The truth is that VFM/StorageX is not limited to managing only DFS, nor does it come with a steep price tag, despite the author's assertions.
We have many advocates at Microsoft that like us because we make DFS scalable and offer an alternative to FRS (which caused their field much pain). Microsoft likes anyone who promotes their technologies.
Your response doesn't address the question and I will let those who are truly interested determine for themselves whether VFM/StorageX has a steep price tag. See the links below. Pricing varies widely based on platform and the amount of storage.
http://www.computerworld.com/printthis/2004/0,4814,94408,00.html "$4000 per NAS system" - StorageX 5.5
http://www.enterprisestorageforum.com/sans/news/article.php/1581541 "Pricing for StorageX 3.0 starts at $2,000 per managed server..." StorageX 3.0
http://www.storagepipeline.com/showArticle.jhtml?articleId=20300129&pgno=2 "StorageX costs $2,000 per node. Pricing for File Lifecycle Manager varies with the type of Network Appliance Filer on which it's deployed."
Immix posts the GSA schedule on a public web site - see the link below. Click NetApp on that site and search for "vfm". There's no pricing on FLM; that comes for free with VFM for the US Government. http://www.immixtechnology.com/contracts/gsaschedule.cfm
I will let the following two lines speak for themselves:
132-33 SW-980C-VFM   VFM SW for - FAS 980c  $84,997.74
132-33 SW-980C-VFM-C VFM SW for - FAS 980c  $84,997.74
I spoke with one customer who was told that he had to license FLM per terabyte of storage at a cost of $7,500 per terabyte per year. He expected his storage to grow by 5TB per year with 20TB in place. I believe that works out to a $150K base (20TB x $7,500) plus roughly another $37.5K (5TB x $7,500) added on top every year as the storage grows. OUCH!
Speaking of backup, how much do customers normally have to spend to upgrade and/or migrate their backup environment to support a stub-based File Lifecycle Manager solution? In the "Advantages of the StorageX Global Namespace" white paper, under the section "Not a Proprietary File System", it specifically states "StorageX does not require any changes to network operations, such as snapshot and backup processes". Is FLM a part of StorageX's integrated suite of applications or not? Just curious.
I talked to one customer who had to purchase an entirely new backup infrastructure with all new backup software and licenses for all new clients and had to still maintain their complete old backup infrastructure in case they needed to retrieve old data for compliance. They had to re-train all of their backup administrators, certify the software for all of their operating systems and create a whole new set of processes to manage the environment. But that was just one customer, right?
3.==================================================================== [Original] NuView knows this and has started bundling additional "modules" with their StorageX/VFM product to make it a more attractive solution.
[Response] We never set out to be a DFS management tool. VFM/StorageX is an integrated suite of applications; these are not bundled modules, rather they are included as part of the product as a whole. There are instances of customers who initially purchased VFM/StorageX for a migration project and later found that they didn't have to purchase additional software for archiving, or to create global namespaces for uninterrupted user access to their data, since it was already in StorageX. The reality is that hundreds of customers love the integrated applications and continue to leverage them, while managing and moving multiple terabytes of data.
StorageX/VFM has historically been a standalone product, and each add-on, such as FLM, has had additional cost associated with it. If this model has changed, you have only confirmed what I said above.
From the NuView website: http://nuview.com/products/storagex.asp At the core of StorageX is the Global Namespace, a logical representation of file system and storage devices which creates a unified view of data distributed across heterogeneous storage platforms.
The answers to the questions above about how StorageX/VFM creates and manages a CIFS or NFS global namespace will determine whether or not my statement holds.
There is also a specific whitepaper on DFS management using StorageX: http://nuview.com/registration/resources/whitepapers/checkpdf.asp?p=dfs
FLM is listed as a separate product on the NuView website as is UNC update.
Ask any enterprise class StorageX/VFM customer what happens when you do a search or a virus scan on an FLM controlled/managed directory structure. Ask them how it impacted their online storage tier. Ask them how much data got pulled back from the secondary tier. Ask them about the configurable search results limits StorageX/VFM has implemented to try to resolve this. Ask what happens when a user doesn't find their search results within the configured limit?
4.==================================================================== [Original] All I can say is that you need to take a close look at where it has been installed and ask to talk to customers that have implemented the add-on modules to see what level of success they have seen.
[Response] There are numerous case studies and customers who freely talk about the immediate and long-term benefits that their organizations have gained since implementing VFM/StorageX. Another aspect that one needs to look at is how long a vendor has been providing these solutions, as well as what the industry says about them. Finally, one only needs to look at the recent industry awards that VFM/StorageX has received to see that VFM/StorageX is a cost-effective, scalable solution that can manage the unstructured CIFS and NFS data of medium and enterprise customers.
Ask to speak to a customer who hasn't renewed their support agreement. Ask to speak to 5 enterprise, globally distributed customers who have fully implemented all or at least a majority of the "integrated suite of applications". All I am saying is ask to talk to someone actually using more than just the DFS administration GUI.
5a.====================================================================
[Original] If you are considering a 3rd party tool to configure and manage your DFS environment you should understand a couple of things.
- You MUST be running MS Server 2003 for your DFS root.
Win2000 is missing some key functionality and stability enhancements, which will create a less than stable and less than scalable DFS environment. Even VFM/StorageX requires a Windows 2003 server. Their GUI is just that, a "better" GUI for administering DFS. It still requires MS Windows DFS.
[Response] The author would do well to familiarize himself with chapter 17 of the 2000 resource kit before making assertions about DFS.
Most of our 300+ customers are running DFS on 2000. 2003 lets you host more than one root on a single box and improves site referrals. I don't know how it is more stable. DFS is stable. It is part of the CIFS protocol.
I didn't ever say that Windows 2000 didn't support DFS. I said, speaking to large, globally distributed, high performance, enterprise class customers (The Toaster user community), that if you are going to install DFS today, you MUST implement Windows 2003 for it. If you do not, do not say I didn't warn you.
Yes, Windows 2000 Server (not Pro, which cannot host a DFS root at all) DFS is more than capable of handling the small workgroup, limited location, small client count environments. It works beautifully in my lab of 15 systems. Win2000 DFS is only capable of hosting one stand-alone or domain-based namespace per server. Server 2003 Enterprise or Datacenter Edition can host multiple stand-alone namespaces and multiple domain-based namespaces per server.
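To make the namespace-per-server difference concrete: on Server 2003 Enterprise you can stand up additional stand-alone roots on the same box from the command line with Dfsutil.exe (from the Support Tools). This is a rough sketch only; the server and share names below are hypothetical, so confirm the exact switches with dfsutil /? on your build:

  dfsutil /AddStdRoot /Server:FS1 /Share:Root2
  dfsutil /AddStdRoot /Server:FS1 /Share:Root3

The same sequence against a Windows 2000 server will fail on the second root, since Win2000 hosts only one namespace per server.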
Root servers and domain controllers running Windows 2000 Server retrieve the target's site information from the DFS metadata if the target was created by using Windows 2000 Server. However, if the target was created by using Windows Server 2003, no site information is stored for that target in the DFS metadata. As a result, Windows 2000 Server cannot determine the site of such a target. If this occurs, a referral from a Windows 2000 Server could lead the client computer to a random target, possibly outside of the client's site.
Root servers running Windows 2000 Server or Windows Server 2003 that are not domain controllers cannot determine a DFS client computer's site when the RestrictAnonymous registry entry is set to 2 on domain controllers running Windows 2000 Server. (This registry entry is located at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa.) As a result, these DFS root servers sort targets in link referrals and root referrals randomly, regardless of the namespace type (stand-alone or domain-based), target selection method, or client operating system.
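If you suspect this is the cause of random referrals, the value is easy to check on each Windows 2000 domain controller. A minimal sketch using reg.exe, run locally on each DC (RestrictAnonymous is the value the excerpt above refers to):

  reg query HKLM\SYSTEM\CurrentControlSet\Control\Lsa /v RestrictAnonymous

A returned data value of 0x2 is the setting that triggers the random target sorting described above.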
Same-site target selection works differently in Windows 2000 Server and Windows Server 2003. In Windows 2000 Server, if this setting is enabled on a root, the setting applies to all links but not the root itself. If a client attempts to access a namespace but no root targets exist in the client's site, the out-of-site root targets are returned in the root referral. Like Windows Server 2003, DFS in Windows 2000 Server does not return link referrals if no link targets exist in the client computer's site.
To maintain a consistent namespace across root servers, DFS depends on the domain controller acting as the primary domain controller (PDC) emulator master to be the gatekeeper for all updates to the namespace. Having all root servers running Windows Server 2003 poll the PDC emulator master after the namespace changes ensures that root servers can obtain the updated DFS metadata relatively quickly without needing to wait for Active Directory replication to replicate the DFS object to all domain controllers.
When root servers running Windows Server 2003 receive the change notification message, they poll the PDC emulator master to obtain an updated version of the DFS metadata. (Root servers running Windows 2000 Server ignore change notification messages and poll the PDC emulator master every hour.) If a Windows Server 2003 root server does not receive the change notification message, for example, if the root server is temporarily offline, it will poll the PDC emulator master at the next polling interval.
See "How DFS works" http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/T echRef/87b2da50-f5d4-471d-a103-6efde69580cd.mspx
See the DFS FAQ: http://www.microsoft.com/windowsserver2003/techinfo/overview/dfsfaq.mspx
See the following specific questions: How do target selection and site discovery differ in Windows 2000 Server and Windows Server 2003? How do I enable least expensive target selection (site-costing) in DFS? What can cause clients to be referred to unexpected targets? What are the DFS size limits and recommendations for Windows Server 2003? (Compare it to Win2000)
And specifically: How can I work around the DFS size limits? Migrate root servers running Windows 2000 Server to Windows Server 2003. Root servers running Windows Server 2003 do not add site information to the DFS Active Directory object. As a result, if all root servers run Windows Server 2003, DFS can store more root and link information to the DFS Active Directory object before reaching the recommended 5-MB limit. After you migrate your Windows 2000 root servers to Windows Server 2003, you can remove the static site information from the DFS Active Directory object by using the /PurgeWin2kStaticSiteTable parameter in Dfsutil.exe.
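For reference, that cleanup step looks roughly like the following with the Server 2003 version of Dfsutil.exe. The namespace name here is hypothetical and the switch spelling comes straight from the FAQ text above, but check dfsutil /? before running it against a production root:

  dfsutil /Root:\\example.com\Public /PurgeWin2kStaticSiteTable

Only do this after every root server for that namespace has been migrated to Windows Server 2003, as the FAQ describes.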
You might also want to read the question: What tools should I use to manage DFS when I have root servers that run Windows 2000 Server and Windows Server 2003? And What are the issues to consider when I use multiple servers running Windows 2000 Server and Windows Server 2003 to host a domain-based DFS root?
I would also STRONGLY suggest you read the question: How do I ensure the availability of a DFS namespace?
For stand-alone DFS namespaces, you ensure the availability of a stand-alone DFS root by creating it on the cluster storage of a clustered file server by using the Cluster Administrator snap-in.
For domain-based DFS namespaces, you ensure the availability of domain-based DFS roots by creating multiple root targets on nonclustered file servers or on the local storage of the nodes of server clusters. (Domain-based DFS roots cannot be created on cluster storage.) All root targets must belong to the same domain. To create root targets, use the Distributed File System snap-in or the Dfsutil.exe command-line tool.
To ensure the availability of domain-based DFS roots, you must have at least two domain controllers and two root targets within the domain that is hosting the root. If you have only one domain controller and it becomes unavailable, the namespace is inaccessible. Similarly, if you have only a single root target, and the server hosting the root target is unavailable, the namespace is also unavailable.
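To put the "at least two root targets" requirement in concrete terms, the second root target for a domain-based root can be added either from the Distributed File System snap-in or with Dfsutil.exe. A rough sketch with hypothetical server, share, and domain names; as always, verify the switches with dfsutil /? for your build:

  rem create the domain-based (fault-tolerant) root on the first server
  dfsutil /AddFtRoot /Server:FS1 /Share:Public
  rem add a second root target on another server in the same domain
  dfsutil /AddFtRoot /Server:FS2 /Share:Public

Remember that neither root target can live on cluster storage, and you still need at least two domain controllers in the domain for the namespace to stay available.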
5b.====================================================================
[Original] 2) Your DFS root will either need an Active Directory domain controller (or controllers) or a dedicated DFS root server (cluster), depending upon the size and complexity of your environment and the level of authority you want to assign to your DFS administrators.
[Response] DFS roots are stored in the Active Directory (domain-based roots) or in the registry of a server (stand-alone roots). This does not mean the roots are on DCs.
The benefit is that VFM/SX is software and therefore you can leverage resources you already own, rather than purchase additional hardware or an appliance. The other benefit is that you can adjust the configuration and storage of your resource as you see fit and without affecting your CIFS/NFS management. VFM/StorageX supports and can leverage Active Directory, but it isn't required.
Again, see the FAQ referenced above, specifically the question: What are the factors to consider when hosting DFS roots on domain controllers? A. When deciding whether to host a DFS root on a domain controller, consider the following factors:
- Only members of the Domain Admins group can manage a DFS namespace hosted on a domain controller.
- If you plan to use a domain controller to host a DFS root, the server hardware must be sized to handle the additional load. As described in the previous question, root servers that host large or multiple namespaces require additional memory.
And this question specifically agrees with my statement: What permissions are required to manage a namespace? How can I delegate authority to manage a DFS namespace?
I would paste the table here but I won't in order to save space.
You may also wish to look at the questions: What DFS structures are stored locally on root servers? And What are the hardware requirements for root servers?
5c.====================================================================
[Original] 3) If you want to delegate the administration of DFS to less than domain administrators, you cannot install your DFS root on your AD domain controllers. You will be implementing distributed domain DFS. This means you need one or more clusters of DFS root servers to ensure the availability of your DFS resources.
[Response] It is not best practice to load a DFS root on a DC, as field experience has shown. You can grant a user rights to the DFS configuration bucket in AD without making that user an admin. But most of our customers are domain admins, so that seldom comes up anyway.
It is common to use one domain admin-like account to run the VFM service under and then delegate authorities through VFM. (pretty cool as MS did not provide this granularity.)
I agree it is not best practice to load it on a DC. Which means the only way to ensure global namespace availability is distributed domain-based DFS. You can delegate administrative DFS authority per leaf to an enterprise admin account in Server 2003 with the December patches. Again, this is another reason why, if you want to implement DFS, you should do so with Server 2003, so you don't need to pay for a 3rd party tool. Also, again see the FAQ and the permissions table referenced above under the question: What permissions are required to manage a namespace? How can I delegate authority to manage a DFS namespace?
You might also want to look at Windows Rights Management Services with Server 2003 SP1: http://www.microsoft.com/windowsserver2003/technologies/rightsmgmt/default.mspx
5d.====================================================================
[Original] 4) If you install DFS on your domain controllers, you must create a domain administrator account to administer DFS.
[Response] Not a true statement although it does make things easier. We have customers not using domain admin accounts to manage DFS. See the above response.
Absolutely a true statement. Saying it is not factual directly contradicts Microsoft. Saying it may be possible with your tool does not make my statement false. Again, see the FAQ above, specifically reference the question below, and look at the last line in the table: What permissions are required to manage a namespace? How can I delegate authority to manage a DFS namespace?
I'll paste it here for your reference: Task: Performing any of the tasks in this table on a domain controller
Permissions or Group Membership Required: Membership in the Domain Admins group
5e.====================================================================
[Original] 5) DFS R2 is currently in beta. Patches were released in December and again in 2005 (March?) which have significantly improved the Microsoft tools for configuring and managing DFS. If you are seriously looking at DFS, contact Microsoft and ask them to come in and show you what they currently have available for free before you spend any significant time or $$ on a tool you will have to pay for.
[Response] Since R2 is still in beta we can't discuss specifics, but we agree that customers should evaluate R2, and they'll find that VFM/StorageX fully supports and enhances R2. It's important to recognize which environments R2 supports; those that aren't supported under R2, or those that desire additional value-add, can leverage VFM/StorageX. Finally, benefits have been realized by customers who initially chose DFS and realized that they could immediately and seamlessly manage their roots after installing VFM/StorageX.
I will again agree that StorageX/VFM is a better management interface for DFS today. I have never said that it was not. All I have said is that before you consider spending money on a third party DFS management solution, you need to consider that its shelf life will be limited, again for one of two reasons. Either Microsoft will buy it, so they can give it to you for free, or they will develop it, so they can give it to you for free. The added benefit of the third party tool, whatever it is, then needs to stand on whatever functionality it has left.
R2 beta information: http://www.betaone.net/forum/thread-16344.html
R2 technology chat: http://www.microsoft.com/technet/community/chats/trans/windowsnet/ec_050203.mspx
Microsoft Storage Technologies Home http://www.microsoft.com/windowsserver2003/technologies/storage/default.mspx
DFS Home http://www.microsoft.com/windowsserver2003/technologies/storage/dfs/default.mspx
DFS Technical Reference http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/TechRef/7cb7e9f7-2090-4c88-8d14-270c749fddb5.mspx
Server 2003 DFS demo: http://www.microsoft.com/windowsserver2003/docs/dfs.swf
Windows Server 2003 R2 from the product Road Map http://www.microsoft.com/windowsserver2003/evaluation/overview/roadmap.mspx#EEAA
Server 2003 Feature Packs available today: http://www.microsoft.com/windowsserver2003/evaluation/overview/roadmap.mspx#EBAA
DFS Tools and Settings - Microsoft TechNet http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/TechRef/87b2da50-f5d4-471d-a103-6efde69580cd.mspx
A MUST read for anyone looking to implement DFS: How DFS Works: http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/TechRef/87b2da50-f5d4-471d-a103-6efde69580cd.mspx
Specifically see "Overview of Clients and Roles".
6.====================================================================
[Original] If what you are really looking for is a tool to transparently monitor, manage and optimize your storage environment, both CIFS and NFS and support multiple vendors without having to install software on your filers or clients, without having to install Windows servers and 3rd party software to manage your heterogeneous environment, there are significantly more robust solutions which integrate into industry standard global name spaces and which ELIMINATE downtime, not just reduce it.
[Response] A final thought regarding product viability is further evidenced by the recent industry awards that StorageX/VFM has received in the past couple of months, including:
- Network Magazine - 2005 Innovation Award for the "Most Influential" solution in the storage product category (April 2005)
- Redmond Magazine - Redmond Most Valuable Product award (May 2005)
- TechTarget's Storage Magazine and SearchStorage.com - 2004 "Product of the Year" Silver Medal in the category of storage management (December 2004)
Sorry I had to do this, Louis
Awards are great. Congratulations. If you are interested in StorageX/VFM for anything more than the current capabilities for DFS management, ask to speak to an enterprise class customer who has globally implemented the suite of products. Specifically, ask to speak to an enterprise class, globally distributed NetApp or EMC customer who has implemented the suite of applications.
YMMV. I look forward to your detailed responses.
Eric.