As I understand it, once the block-level transfer of the blocks that changed between the last transfer and the time the new snapshot was taken on the source is complete, NetApp makes the just-transferred snapshot the active file system on the target and deletes the previous base snapshot from the source. At that point, the target is synchronized with the source as of the last snapshot.
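The snapshot-to-snapshot incremental transfer described above can be sketched as a toy model (plain Python dicts standing in for block maps; the function names are invented for illustration and none of this is ONTAP code):

```python
# Toy model of snapshot-based incremental block transfer.
# A "snapshot" is an immutable copy of the volume's block map.

def take_snapshot(volume):
    """Freeze the current block map (block number -> data)."""
    return dict(volume)

def changed_blocks(base_snap, new_snap):
    """Blocks added or modified between the two snapshots."""
    return {bn: data for bn, data in new_snap.items()
            if base_snap.get(bn) != data}

def incremental_transfer(base_snap, new_snap, target):
    """Apply only the changed blocks to the target volume."""
    delta = changed_blocks(base_snap, new_snap)
    target.update(delta)
    return len(delta)              # number of blocks sent over the wire

# Example: after the first full transfer, only block 2 changes.
source = {0: "a", 1: "b", 2: "c"}
base = take_snapshot(source)       # snapshot used for the last transfer
target = dict(base)                # target is in sync with 'base'

source[2] = "c2"                   # a client writes to the active filesystem
new = take_snapshot(source)        # snapshot for the next update

sent = incremental_transfer(base, new, target)
# Only one block crosses the wire, and target now matches 'new'.
```

The point of the sketch is that the transfer operates on frozen block maps, which is exactly why the question of in-flight client locks on the target is interesting.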
When the snapshot on the target volume becomes the active file system, what happens to files residing on data blocks that changed, if those files happen to be locked by clients?
Does the question make sense?
If it does not make sense, let me try to ask it differently. Unlike typical NFS, when a CIFS client reads a file, the file is actually locked until the read operation completes; in other words, other processes cannot update the file while it is being read. What happens on a NetApp if a CIFS client is reading a file from the snapmirrored volume and the SnapMirror transfer completes *while* the file is being read?
thanks again, Mark
-----Original Message----- From: Eric Kimminau [mailto:ekimminau@rainfinity.com] Sent: Wednesday, May 11, 2005 12:40 PM To: Umerov, Mark Cc: toasters@mathworks.com Subject: RE: Snapmirror and locked files
Hi!
I know that SnapMirror is a block-level solution which, to my knowledge, has no real awareness of the files using those blocks. Therefore I don't believe it has any knowledge of file-related locks. Then again, I could be completely wrong. I would be very interested in hearing if you learn anything other than what I have below.
From the iSCSI best practices paper: http://www.netapp.com/tech_library/3250.html
3.9. File Systems That Are Not Capable of Snapshots Applications that are implemented on top of file systems that are not capable of Snapshots, such as NTFS, represent the simplest scenario from a backup and recovery perspective. When backing up these applications, the applications must first be quiesced, or taken offline, in order to avoid open files or files changing during the backup operation. Then, file system caches must be committed before the backup operation commences. The application remains quiesced, or offline, until the backup is completed, at which point normal application operation can resume.
This can result in a significant period of unavailability for the application. Some applications have a built-in hot backup mode, allowing a backup to occur while the application operates at reduced efficiency and, often, with limited capabilities. This type of mode is typical of messaging and database applications such as Microsoft Exchange and Oracle®. It will result in higher overall application availability than not using hot backup mode. However, it can potentially still result in a long interval of reduced efficiency and limited performance.
One final alternative is to use an open file manager from a backup software vendor. These applications are designed to handle backup operations on files that are still locked by an application. They work well for simple applications, such as home directories and shared documents. However, they should be avoided with complex applications, such as messaging applications or databases.
=================================================================
This of course assumes that SnapMirror does not use the SecureShare methodology described in this paper, which handles multi-protocol opportunistic locks; I don't know whether SnapMirror interacts with it in any way:
http://66.102.7.104/search?q=cache:QtaPZkw0OCwJ:www.nluug.nl/events/vj99/papers/pawlowski.ps+snapmirror+locked+files&hl=en&client=firefox-a
Data integrity
A thorny problem in the multiprotocol file access space is managing shared access to data in the face of restrictions imposed by client locking. The technology in Network Appliance filers that accomplishes this is called SecureShare, and is described in [Borr98]. SecureShare enables UNIX and Windows based applications to concurrently access and update shared files, with the integrity and cache coherency of the shared data being protected by system-enforced locking and file-open semantics. The locking semantics of Windows NT and UNIX differ significantly. Heuristics control simultaneous access to a file by both UNIX and Windows NT. Simultaneous access in a multiprotocol environment might compromise the data integrity of a file (when locked against unwanted modification by other clients) if not properly managed. UNIX clients have rather lax standards of locking compared to the more restrictive lock semantics of Windows NT.
SecureShare also implements a multiprotocol extrapolation of the Windows networking performance optimization known as "opportunistic locks" (oplocks). Oplocks allow a Windows NT client to aggressively cache data and lock state in the absence of sharing. A Windows NT server will extend an oplock to a Windows NT client when no other client is sharing a file. When another client attempts access to a file on which an oplock is held, a Windows NT server will "break the oplock", forcing all modified data and lock state back to the server, and the server will then negotiate file access between clients engaged in data sharing. The SecureShare implementation of oplocks provides Windows-based applications the performance benefits of aggressive client-side caching and the assurance that the same "oplock break" protocol occurs in the event a UNIX-based application attempts to access an oplocked file.
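The oplock-break protocol quoted above can be illustrated with a toy state machine (purely illustrative; the class and method names are invented, and this is not how SecureShare or any SMB server is actually implemented):

```python
class OplockServer:
    """Toy model of opportunistic locks: one client may hold an
    exclusive oplock while no one else shares the file; a second
    opener triggers a "break", which flushes the holder's cached
    writes back to the server before access is negotiated."""

    def __init__(self):
        self.oplock_holder = None   # client currently holding the oplock
        self.server_data = ""       # authoritative copy of the file

    def open_file(self, client):
        if self.oplock_holder is None:
            # No sharing: grant an exclusive oplock so the client
            # may aggressively cache data and lock state.
            self.oplock_holder = client
            return "oplock granted"
        if self.oplock_holder is not client:
            # Sharing detected: break the oplock, forcing modified
            # data and lock state back to the server.
            self.server_data = self.oplock_holder.flush_cache()
            self.oplock_holder = None
            return "oplock broken"
        return "already held"

class Client:
    def __init__(self, name):
        self.name = name
        self.cache = ""

    def flush_cache(self):
        return self.cache

srv = OplockServer()
nt_client, unix_client = Client("nt"), Client("unix")

grant = srv.open_file(nt_client)          # "oplock granted"
nt_client.cache = "locally cached writes" # cached on the client only
# A second client (e.g. a UNIX process) opens the same file:
result = srv.open_file(unix_client)       # "oplock broken"
```

As the paper notes, the value of SecureShare is that this same break sequence fires even when the second opener is a UNIX client, not just another Windows NT client.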
Eric Kimminau Email: ekimminau@rainfinity.com Senior Sales Engineer Office: 248.766.9921 Rainfinity Fax: 248.393.8037 www.rainfinity.com
________________________________
From: owner-toasters@mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Umerov, Mark Sent: Wednesday, May 11, 2005 2:58 PM To: toasters@mathworks.com Subject: Snapmirror and locked files
greetings,
Does Netapp allow locking of files on snapmirrored (read-only) volumes? If so, how does Snapmirror behave if there are locked files on the target? Please advise if you can...
thanks much in advance, Mark
As I understand it, file locking only pertains to files in the active filesystem. File locks do not exist in snapshots. When you run snapmirror, the source volume is snapshotted first and the snapshot is the input to snapmirror. CIFS clients can continue to read, write, and lock files in the active filesystem without interfering with snapmirror, because snapmirror uses a snapshot instead of the active filesystem.
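Steve's point, that locks live on the active filesystem while SnapMirror reads from an immutable snapshot, can be shown with the same kind of toy model (again, plain dicts and sets as stand-ins, not ONTAP internals):

```python
# Toy model: locks apply to the active filesystem; a snapshot is an
# immutable view, so a transfer reading it never contends with locks.

active_fs = {"file.txt": "v1"}
locks = set()                      # files locked by CIFS clients

snapshot = dict(active_fs)         # SnapMirror's input: a frozen view

locks.add("file.txt")              # a client locks the file...
active_fs["file.txt"] = "v2"       # ...and keeps writing to it

# The transfer reads only the snapshot, which is unaffected by both
# the lock and the subsequent write:
transferred = dict(snapshot)
```

This models the source side of the question; whether the target side honors client locks when a transferred snapshot is promoted to the active filesystem is exactly the part that remains open in this thread.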
Steve Losen scl@virginia.edu phone: 434-924-0640
University of Virginia ITC Unix Support
Great question!
I don't know the answer. I am at the limits of my knowledge regarding SnapMirror and file locks.
I will guess again and say that, because SnapMirror is block level, it has no knowledge of the files. It will happily replace the blocks mid-read, mid-write, whatever, regardless of locks, unless, as I said before, SnapMirror somehow communicates with SecureShare.
Please don't take my guess as fact. We really need someone more intimately familiar with SnapMirror to answer the question for both of us.
Thanks!
Eric.