Thanks for your responses, Marty & Glenn.
This KB looks like it should tackle this issue. Windows can't
synchronize files into the cache for a share that essentially doesn't
exist -- the share is only accessible when authenticated as that
particular user. But I'm not sure it's practical to stamp primary SIDs on
the desktops ... the users are relatively mobile.
I'm far from a strong Windows admin (Unix background) so this is a bit
of learning for me.
I may suggest creating a top-level share, homedirs$, and referencing each
user's directory as \\filer\homedirs$\%username% instead of
\\filer\%username% (using the CIFS.HOMEDIR functionality). That way the
share always exists ... and Windows can blindly synchronize stuff that
doesn't need to be. :-D
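For anyone following along, that change might look something like the below on a Data ONTAP 7 filer. The volume path and share name here are just examples, so check the cifs shares man page for your release:

```
# Expose one static parent share above all the home directories
filer> cifs shares -add homedirs$ /vol/vol1/homedirs -comment "User home dirs"
```

Clients (or the AD home-folder attribute) would then map \\filer\homedirs$\%username%, which resolves for any authenticated user because the parent share itself is always visible.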
C
Christopher Mende
Systems Engineer
Infinity Solutions Ltd
P O Box 3323, Auckland
Ph: +64 9 921 8039, Mob: +64 21 227 7590
Fax: +64 9 309 4142
www.infinitysolutions.co.nz
> -----Original Message-----
> From: Marty Wise [mailto:Marty.Wise@jlab.org]
> Sent: Thursday, 10 August 2006 11:10 p.m.
> To: Christopher Mende; toasters(a)mathworks.com
> Subject: RE: CIFS homedir offline file synchronization
>
> Chris,
>
> I am not familiar with the error masking behavior you describe, but I have
> done some tinkering with this issue from another perspective.
>
> Apparently, Windows uses a global offline file cache on a system. A
> synchronization event will trigger a synchronization of files for all
> users by default. This is made even more annoying by various files which
> cannot be synchronized (MS Access files among others) that generate
> errors during the synchronization attempt. In our specific situation, it
> is users who only occasionally log in to a system (mostly admins and
> support folks) that get synchronized needlessly during sync events
> triggered by the real user of the system. A bit of digging revealed that
> Windows provides controls that alleviate much of this problem for us.
>
> There are registry entries (that can be set via group policy) that allow
> you to specify a list of "Primary" users of a system for offline files
> (other entries allow you to ignore specific file extensions during
> synchronization). Offline Files uses the list of primary users to attempt
> synchronization of files only for those users. In our case, this avoids
> synchronizing files for our admins, etc. The configuration is described
> in MS KnowledgeBase article ID # 811660.
> (http://support.microsoft.com/?kbid=811660)
>
> I'm not sure if this is useful in your situation.
>
> Regards,
>
> Marty Wise
> Computer Center, Windows Systems Team
> Thomas Jefferson National Accelerator Facility
> 12000 Jefferson Ave.
> Newport News, VA, 23606
>
>
> ________________________________________
> From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
> On Behalf Of Christopher Mende
> Sent: Thursday, August 10, 2006 12:49 AM
> To: toasters(a)mathworks.com
> Subject: CIFS homedir offline file synchronization
>
> Hi All,
>
> Anyone run into issues with offline file synchronization for homedirs?
>
> Have an issue where user1 logs in and out, and sync works just fine.
>
> User 2 logs in and out & syncs, plus it tries to sync user1 again, which
> fails.
>
> According to M$, if you were using one of their servers, the share
> permissions on the user share would be set to Full Control, and Windows
> would actually mask the failure.
>
> Seems this isn't happening when using the filer - any ideas short of
> scrapping the auto-homedir feature? How does one change the share
> permissions on these auto-gen'd shares which aren't accessible via
> traditional methods? If that's really the fix.
>
> Christopher Mende
>
> The information contained in this email is privileged and confidential
> and intended for the addressee only. If you are not the intended
> recipient, you are asked to respect that confidentiality and not
> disclose, copy or make use of its contents. If received in error you are
> asked to destroy this email and contact the sender immediately. Your
> assistance is appreciated.
Currently, when a Windows user exceeds their home directory disk quota, a
Windows message is sent that says, "Error Copying File or Folder: There is
not enough free disk space". That is starting to cause many users to call our
Help Desk telling us we are out of space when in fact we have all sorts of
space; we just don't want them using it (I believe a 10GB home directory limit
is more than fair). Before we quota the other 14,000 users it would be nice if
we could customize that message (a net send message?) to say something like, "You
have exceeded your 10GB quota, please delete your MP3s and wedding pictures to
free up space". If we could do this for the soft_disk quota/threshold that
would be helpful as well. We have DFM, but I think that can only email and
can't send a message to a person's Windows ID... and of course I'm not going
to link 16,000 email addresses with their Windows IDs.
On a related topic that was recently touched on in toasters... what is the
difference between a threshold and a soft_disk quota? Neither sends a net send
Windows message, neither restricts the write, and both put a message in the
messages file regardless of which is higher than the other. The description in
the NetApp manual is below, and it sounds like two ways of saying the exact
same thing.
threshold (optional) is the disk space usage point at which warnings of
approaching quota limits are issued.
soft_disk (optional) is a soft quota space limit that, if exceeded, issues
warnings rather than rejecting space requests.
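For what it's worth, an /etc/quotas line exercising both fields might look like the below. The volume path and limits are examples, so check the quotas file format documented for your ONTAP version:

```
#Quota target   type             disk   files  thold  sdisk  sfile
*               user@/vol/home   10G    -      9G     9G     -
```

One difference I've seen mentioned is that a soft quota also logs when usage drops back below the limit, while a threshold only logs on the way up, but that is worth verifying against your release's documentation before relying on it.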
Thanks,
Jeff
We use offline folders with synchronization and have no issues - it
sounds like it may be related to permissions on the actual user share
itself - can you elaborate as to the configuration there?
Also - what exactly does MS mean by 'masking' the failure? That it
still fails but they don't inform you?
The other issue that I can think of: if user1 and user2 were test
accounts, and user2 was actually synchronizing to user1 at some point in
the past, there is a cache folder on the local client that probably
needs to be cleaned out. This folder is responsible for retaining the
files/cache data that refers to the location of the data on the network.
I've seen this become corrupted before (when switching the server where
the home dir is located, for example) such that the offline folder
synchronization process still attempts to look in the old location as
well as the new. Just a hunch.
http://support.microsoft.com/?kbid=230738
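If it does turn out to be a stale cache, KB 230738's re-initialization boils down to one registry value plus a reboot. A sketch follows; note this discards ALL locally cached copies, so any pending offline changes should be synchronized first:

```
REM Re-initialize the Offline Files (CSC) cache per KB 230738.
REM WARNING: destroys every locally cached copy on this machine.
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\NetCache" ^
    /v FormatDatabase /t REG_DWORD /d 1 /f
REM The cache is rebuilt on the next reboot.
```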
Glenn
________________________________
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Christopher Mende
Sent: Thursday, August 10, 2006 12:49 AM
To: toasters(a)mathworks.com
Subject: CIFS homedir offline file synchronization
Hi All,
Anyone run into issues with offline file synchronization for homedirs?
Have an issue where user1 logs in and out, and sync works just fine.
User 2 logs in and out & syncs, plus it tries to sync user1 again, which
fails.
According to M$, if you were using one of their servers, the share
permissions on the user share would be set to Full Control, and Windows
would actually mask the failure.
Seems this isn't happening when using the filer - any ideas short of
scrapping the auto-homedir feature? How does one change the share
permissions on these auto-gen'd shares which aren't accessible via
traditional methods? If that's really the fix.
Christopher Mende
Hi! I have several NetApp filers with close to 25TB of user data stored on
them (best guess is some 50 million files, but we aren't sure, as our backup
software only backs up the current data set, not the snapshots). We are
looking to consolidate the environment given the latest storage capacities,
and thought I would ask the community here what tools they used to understand
which data needed to be migrated, which data needed to be destroyed, and what
the data costs per user and department. We have the costing for the storage
down to a per-MB/GB fee.
Any thoughts? We have tried some freeware and shareware programs, but they
require weeks, if not months, of personnel time to make sense of the data
they collect. We are looking for a packaged solution.
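Not a packaged solution, but as a stopgap while evaluating one, even a small script walking the tree over an NFS mount can produce a rough per-directory usage and cost report. A sketch only; the mount point, the per-GB rate, and the CSV layout are all assumptions:

```python
#!/usr/bin/env python
# Rough per-top-level-directory usage/cost report over a mounted filer tree.
import csv
import os
import sys

COST_PER_GB = 0.50  # hypothetical $/GB figure; substitute your real rate


def dir_usage(root):
    """Return {top_level_subdir: total_bytes} for everything under root."""
    usage = {}
    for name in os.listdir(root):
        top = os.path.join(root, name)
        if not os.path.isdir(top):
            continue
        total = 0
        for dirpath, _dirnames, filenames in os.walk(top):
            for fname in filenames:
                try:
                    total += os.lstat(os.path.join(dirpath, fname)).st_size
                except OSError:
                    pass  # skip files that vanish mid-scan
        usage[name] = total
    return usage


def report(root, out=sys.stdout):
    """Write a CSV of directory, bytes, GB, and estimated cost."""
    writer = csv.writer(out)
    writer.writerow(["directory", "bytes", "gb", "cost"])
    for name, nbytes in sorted(dir_usage(root).items()):
        gb = nbytes / (1024.0 ** 3)
        writer.writerow([name, nbytes, "%.2f" % gb, "%.2f" % (gb * COST_PER_GB)])


if __name__ == "__main__" and len(sys.argv) > 1:
    report(sys.argv[1])
```

Run against home directory roots (one subdirectory per user), this gives a per-user number in minutes rather than weeks, though it obviously says nothing about snapshot space.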
Thank you for your help.
You are indeed my hero! The answer was so obvious I couldn't see it.
When I turned this off on my volume, my backup throughput went from
1.5GB/5min to 17GB/5min.
-----Original Message-----
From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com]
On Behalf Of Brian Pascal
Sent: Wednesday, August 02, 2006 6:45 AM
To: Greg Wilson
Cc: toasters(a)mathworks.com
Subject: Re: Snapmirror and backup weirdness
Check the vol options for "minra" on that volume. Ideally it should be
"off" for backup purposes. You need to change the vol options of the
source volume to affect the destination, since it is volume-level
mirrored.
Btw, the "minra" vol option has a direct impact on the volume's I/O
performance.
Brian.
From: Greg Wilson <gwilson@connect.com.au>
Sent by: owner-toasters(a)mathworks.com
To: toasters(a)mathworks.com
cc:
Subject: Snapmirror and backup weirdness
Date: 08/02/2006 12:05 PM
Hello
I'm having some really weird problems with backing up our snapmirrors and
wondering if anyone has any ideas.
Here is the first setup:
We have a 3050 filer (filerB) with 16TB split up in 2 aggrs.
filerB is used as a snapmirror destination that our other filers snapmirror
all their data onto.
Problem:
on filerA we have 3 separate flexvols with 1 qtree in each flexvol.
Each qtree has around 160GB and 4 million files in it.
eg.
/vol/vola/vola
/vol/volb/volb
/vol/volc/volc
on filerB I created a 600GB flexvol called filerA to snapmirror into.
We were using qtree snapmirrors onto filerB, so the results looked like
this:
qtree: This command is deprecated; using qtree status.
Volume Tree Style Oplocks Status
-------- -------- ----- -------- ---------
vol0 unix enabled normal
filerA unix enabled normal
filerA vola unix enabled snapmirrored
filerA volb unix enabled snapmirrored
filerA volc unix enabled snapmirrored
filerB> df
/vol/filerA/ 838860800 488597596 350263204 58% /vol/filerA/
/vol/filerA/.snapshot 209715200 14934800 194780400 7%
/vol/filerA/.snapshot
filerB> df -i
Filesystem iused ifree %iused Mounted on
/vol/filerA/ 17202153 14674536 54% /vol/filerA/
filerB> snapmirror status
Source Destination State Lag
Status
filerA:/vol/vola/vola filerB:/vol/filerA/vola Snapmirrored 152:02:24
Idle
When we backup /vol/filerA we are getting around 20-26meg a second to
tape.
After a few weeks we see that the snapmirrors are taking a very long time
(2-3 hours) to sync, as each time it snapmirrors it's doing a file copy,
i.e. it works out which inodes have changed and then copies the data over.
So we decided to cut over to volume-based snapmirrors as it's heaps faster.
Now on filerB I created 3 restricted volumes, and we are snapmirroring each
source volume to its restricted volume.
This is very, very fast, as it's block-level, and it syncs in under 15
minutes.
It now looks like the following on filerB:
/vol/filerAvola/ 251658240 153210208 98448032 61%
/vol/filerAvola/
/vol/filerAvola/.snapshot 62914560 14631824 48282736 23%
/vol/filerAvola/.snapshot
/vol/filerAvolb/ 251658240 154023020 97635220 61%
/vol/filerAvolb/
/vol/filerAvolb/.snapshot 62914560 14196416 48718144 23%
/vol/filerAvolb/.snapshot
/vol/filerAvolc/ 251658240 154578208 97080032 61%
/vol/filerAvolc/
/vol/filerAvolc/.snapshot 62914560 14897572 48016988 24%
/vol/filerAvolc/.snapshot
a vol status looks like the following
filerAvola online raid_dp, flex snapmirrored=on,
create_ucode=on,
snapmirrored maxdirsize=20971,
read-only fs_size_fixed=on,
guarantee=volume(disabled)
we have 3 of these snapmirrored volumes..
filerB:vola filerA:vola Snapmirrored 04:20:26 Idle
filerB:volb filerA:volb Snapmirrored 04:20:26 Idle
filerB:volc filerA:volc Snapmirrored 04:20:26 Idle
Now the problem: when we go to back these up to tape we are only getting
3-7 meg a second.
How come when we backup the large volume (/vol/filerA/) we get 26 meg a sec,
but when we backup the smaller restricted volumes (/vol/filerAvola/) we are
only getting 3-7?
We are using NetBackup NDMP to a local tape drive.
We have another restricted volume on filerB which is 140GB of Oracle data,
and that gets 30-50 meg a second to tape.
--
Greg Wilson Senior System Administrator
greg.wilson(a)aapt.com.au
This mail has been scanned by InterScan-MSS/kbsl
This Mail Has Been Scanned For Virus By Scanmail For Lotus Notes
Enterprise/Kbsl
This depends - I'd recommend viewing sysstat -x 1 during the backup.
If no other activity is going on (which, in this particular case, can be
measured by looking at the net out columns) and disk reads are much higher
than tape writes, minra may help. If disk reads are approximately equal to
tape writes, you'll likely see no improvement.
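For reference, the commands involved would be along these lines on the filer consoles (the volume name is an example; remember Brian's point that the option belongs on the source volume):

```
filerA> vol options vola minra off   # disable minimal read-ahead on the source
filerB> sysstat -x 1                 # compare disk reads vs. tape writes during the dump
```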
It should be noted that in the later 7.x versions of ONTAP, the read-ahead
algorithm is greatly improved - unsure what version you are running, but
it's probably pretty recent given the minimum ONTAP version that the 3000
series supports.
(BTW - QSM does indeed lay out data in a much more friendly way - it's
file-level logical replication based on blocks changed per file.)
Glenn