SnapVault is not EOL. The next version of SMVI will include SnapVault
support. In the interim, Matt Robinson of NetApp has written SV SMVI
(SnapVault for SnapManager for Virtual Infrastructure). I've tried it in
our development lab and it seems to work really well. Take a look at it at
http://communities.netapp.com/docs/DOC-1868;jsessionid=A4775C7EF68BD0E628BE…
The S550 is being discontinued and therefore SnapVault primary for it is
EOL. However, SnapVault for the FAS series is not EOL.
On Feb 9, 2009 11:40am, "Klise, Steve" <klises(a)pamf.org> wrote:
> I think snapvault is EOL but is supported through 7.3. For some reason,
> the backup guys don't want to do it, not sure about the specifics, but
> our VAR (who is great btw) has been trying to push us in that direction.
>
> IF it is EOL, that is a nail in that coffin.
> From: owner-toasters(a)mathworks.com [mailto:owner-toasters@mathworks.com] On Behalf Of Bill Holland
> Sent: Sunday, February 08, 2009 4:00 AM
> To: Klise, Steve; toasters(a)mathworks.com
> Subject: Re: Avamar VS SMVI
> Have you looked at Snapvault from NetApp?
> ----- Original Message -----
> From: steve klise
> To: toasters(a)mathworks.com
> Sent: Saturday, February 07, 2009 2:16 PM
> Subject: Avamar VS SMVI
> We are starting an eval of SMVI and Avamar for backup/recovery of VMs.
> We are planning on having SnapMirrors to another filer for backup (NDMP)
> and/or DR purposes, but are doing the SnapMirror to a local filer for now.
> Avamar has some good selling points, but it seems to be a waste to buy yet
> another storage platform. We are heavily invested in NetApp now with a
> VTL (1400) and NetBackup. Does anyone have any feedback? It seems to make
> sense to do SMVI, but with Avamar I get longer retention. I was only going
> to keep a week's worth of SMVI snapshots. With Avamar, it seems indefinite.
> The downside of the Avamar is you seem to need two for going to tape.
> Avamar can do the single-file restores, which SMVI 2.0 is supposed to do.
> Anyhow, any feedback is appreciated.
> View this message in context: Avamar VS SMVI
> Sent from the Network Appliance - Toasters mailing list archive at Nabble.com.
I have not opened a case with NetApp yet but probably will if no one has
any good ideas; I just like to pick people's brains before going
official. Thanks for any input.
A few months ago we moved a file share off a Windows server onto our
FAS3040 NetApp running 7.2.4 and shared it out via CIFS. It contains
software install files and scripts and, depending on scheduled jobs, it
can get hit pretty hard and push out approximately 1 Gbit/sec, which
has been drastically affecting the service times for our other shares on
that filer; it's mainly the response-sensitive NFS shares we care about,
such as mail and web files, that are affected the most. It doesn't
really seem to be a disk bottleneck because the disk read/sec in sysstat
is usually only half of what the filer is pushing out to the network, so
I assume it's reading some data from cache. The CIFS software install
share can either get hit by 1-60+ CIFS clients where each client reads
files on and off for hours at a time, or sometimes we have hundreds of
clients hitting the share at once for a smaller set of files (such as to
update one software package across a large set of PCs). I've been able
to reproduce the slowdown with just 4 CIFS clients on gigabit
downloading a large file from the share. Sometimes it only causes a
modest slowdown in the NFS response time, but sometimes email messages
being moved between folders will stall for 8 seconds or much more, which
is pretty much unacceptable. I don't think it's a bottleneck in my core
network, because I've done tests where the slow NFS client is on the same
switch as the filer, which is connected via two gig links using LACP.
Also, in the normal situation where the slowdown is encountered, mail
(NFS) traffic is flowing through a different gig uplink than the hungry
CIFS clients.
Goal: reduce the impact of greedy clients (primarily known ones, but
hopefully unexpected ones too) on the response time of the rest of the
filer's clients. I don't care if the CIFS software share must accept
slower data rates, and I'd rather not run away from the problem but
instead learn what I can do to prevent my filer from being held hostage
by greedy clients. I do have another 3040 I could move the share to, but
that filer also has volumes that would be affected negatively in the
same way, and I'd rather not concede defeat and go back to hosting the
share on a dedicated Windows server. I can try different code versions
in a test environment if I need to, but I'd like to think this kind of
situation has come up already and a solution is at hand.
I've played around with na_priority trying to set the mail and website
volumes to high or veryhigh priority and the software share to low or
verylow but that isn't making a measurable impact. I'm not really sure
what to tweak or check next.
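For reference, the FlexShare commands I've been experimenting with look roughly like this (a sketch from memory; the volume names are examples and the exact level keywords should be checked against the priority man page):

```
priority on
priority set volume mailvol level=VeryHigh
priority set volume swinstall level=VeryLow
priority show volume -v
```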
Here is an example from sysstat when I am simulating the slowdown
condition with 4 CIFS clients on gigabit fetching the same file.
 CPU    NFS  CIFS  HTTP     Net kB/s      Disk kB/s    Tape kB/s  Cache
                            in      out    read  write  read write   age
  6%   2058   167     0    751    1543    2196      0     0     0    11
  6%   2590   164     0    699    2238    2904     32     0     0    11
 10%   2183   223     0   1241    4471    5072  17872     0     0    11
 11%   3299   799     0   1577   22194    4935   1183     0     0    11
 22%   3298  3072     0   3005  107869    9128     24     0     0    11
 18%   2532  1986     0   2270   87651    2078      0     0     0    11
 18%   2198  2200     0   1696  105941    8032      8     0     0    11
 16%   3597  1650     0   1890   84691    3528     24     0     0    11
 23%   4946  2216     0   2604  112741   14664      0     0     0    11
 22%   4075  2041     0   2324  100380   21568      0     0     0    11
 21%   3272  2246     0   2862  115380    4688     24     0     0    11
 21%   4117  2092     0   2686  109165    3864      8     0     0    11
 26%   4188  2136     0   3436  115081   21900      0     0     0    11
......(skip)
 30%   7487  1773     0   4261   93385   10156   3328     0     0     6
 25%   4566  1900     0   3339   96655   13764   9808     0     0     7
 24%   2965  2202     0   2477  111493   11772   5475     0     0     8
 23%   5256  1986     0   3093  102409   10508     24     0     0     8
 19%   2979  2068     0   1810  102282    9926      0     0     0     8
 20%   3164  2323     0   2301  111209    1560      8     0     0     8
 23%   7082  2165     0   2322  103816    2292     24     0     0     8
 22%  11780  1158     0   2763   55501    1760      0     0     0     8
 20%  12032   675     0   3820   36504    2452      0     0     0     8
 23%  16269  1122     0   3914   54034    4460     24     0     0     6
 18%   8991  1030     0   2739   48400    4568      8     0     0     6
 10%   3903   237     0   1346    4494    3828      0     0     0     6
 11%   3912   219     0   1623    4301    3808   6508     0     0     6
  8%   2402   224     0    868    2027    2744   8712     0     0     6
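As a sanity check on the cache-vs-disk reasoning above, a quick script can estimate what fraction of the outbound traffic was served without disk reads. This is a sketch: the column positions assume the 11-column sysstat layout shown, and the sample rows are a subset of the ones above.

```python
# Estimate the fraction of data served from cache rather than disk,
# using the "Net kB/s out" and "Disk kB/s read" columns of sysstat.
# Column indices assume the 11-column layout shown above (an assumption,
# not an official parser for sysstat output).

rows = """
22% 3298 3072 0 3005 107869 9128 24 0 0 11
18% 2532 1986 0 2270 87651 2078 0 0 0 11
18% 2198 2200 0 1696 105941 8032 8 0 0 11
""".split("\n")

net_out = disk_read = 0
for line in rows:
    fields = line.split()
    if len(fields) != 11:
        continue  # skip blank lines and any header fragments
    net_out += int(fields[5])    # Net kB/s out
    disk_read += int(fields[6])  # Disk kB/s read

cache_fraction = 1 - disk_read / net_out
print(f"~{cache_fraction:.0%} of outbound data appears cache-served")
# -> ~94% of outbound data appears cache-served
```

On these rows the disk reads account for only a few percent of what the filer pushes out, which is consistent with the poster's observation that the working set is largely in cache.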
Hi
Thanks to everyone for the support in 2008 and best wishes to all for 2009!
Not sure if this is the correct place to post this; maybe someone knows of
a better forum for NetApp VTLs.
We have a client that is looking into a VTL solution; however, they are
also looking into SyncSort, and I was wondering if anyone out there has
worked with this combination and what your experience has been like. Pros
and cons are welcome.
I know the combination is supported, but I would like some feedback and to
avoid a gotcha when doing the deployment, should it be needed.
Thanks,
Ifstat
--
View this message in context: http://www.nabble.com/VTL-and-SyncSort-tp21910525p21910525.html
Sent from the Network Appliance - Toasters mailing list archive at Nabble.com.
We are starting an eval of SMVI and Avamar for backup/recovery of VMs. We
are planning on having SnapMirrors to another filer for backup (NDMP)
and/or DR purposes, but are doing the SnapMirror to a local filer for now.
Avamar has some good selling points, but it seems to be a waste to buy yet
another storage platform. We are heavily invested in NetApp now with a VTL
(1400) and NetBackup.
Does anyone have any feedback? It seems to make sense to do SMVI, but with
Avamar I get longer retention. I was only going to keep a week's worth of
SMVI snapshots. With Avamar, it seems indefinite. The downside of the
Avamar is you seem to need two for going to tape.
Avamar can do the single-file restores, which SMVI 2.0 is supposed to do.
Anyhow, any feedback is appreciated.
--
View this message in context: http://www.nabble.com/Avamar-VS-SMVI-tp21891851p21891851.html
Sent from the Network Appliance - Toasters mailing list archive at Nabble.com.
Hello all...
Recently I ran into a strange problem. I have a FAS3040 in an
active-active configuration. Everything was running fine until one
morning the NVRAM charging stopped. The filer gave warning messages and
then suddenly halted, and its partner took over.
Now the problem is that I don't want the cluster to take over, since
NetApp has promised replacements within 5 days. When I disable failover,
after some time the filer automatically halts/shuts down.
How can I disable this automatic shutdown? And has anybody else had this
NVRAM charging failure? Any reasons for it?
Thanx
Ed Wilts wrote:
> On Thu, Feb 5, 2009 at 11:53 PM, Nicholas Bernstein
> <nick(a)nicholasbernstein.com <mailto:nick@nicholasbernstein.com>> wrote:
> :: snip::
>
> There are some pretty serious bugs with netgroups in 7.3. It's a lot
> better with 7.3.1 but still not 100% right yet. We had a high
> priority case open with NetApp on this because we couldn't even start
> to get a new filer into production on 7.3 because of the multiple issues.
>
> It was another guy in my group that worked the specific issue but I'll
> dig up his notes when I get to the office (he doesn't work Fridays).
>
> Is upgrading to 7.3.1 an option for you?
Yeah, this is in a lab environment, so 7.3.1 is a definite option. Thank
you for confirming this, I appreciate it. I had started to think I was
losing it a little bit. :)
-Nick
Hi,
This article:
https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb24492
talks about troubleshooting LUN alignment issues. It mentions some
statistics one can get from
the stats command. Does anyone know how to interpret these, e.g.:
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.0:5%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.1:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.2:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.3:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.4:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.5:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.6:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.7:81%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.0:2%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.1:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.2:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.3:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.4:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.5:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.6:0%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.7:70%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_partial_blocks:13%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_partial_blocks:24%
How does one interpret the various *_align_histo.* counters ?
Regards,
Filip
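One way to read these counters (my understanding, so treat it as an assumption rather than the KB article's wording): bucket N of read/write_align_histo is the percentage of ops whose start offset within a 4 KB WAFL block falls on 512-byte sector N, so bucket 0 is aligned I/O and everything else is misaligned. In the output above, 81% of reads and 70% of writes land in bucket 7, which would mean this LUN is badly misaligned. A small sketch applying that interpretation to a subset of the lines above:

```python
# Parse "stats" LUN alignment counters and flag misalignment.
# Assumed interpretation: bucket N of read/write_align_histo is the %
# of ops starting at 512-byte sector N within a 4 KB block; bucket 0
# means aligned.

import re

stats = """
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.0:5%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:read_align_histo.7:81%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.0:2%
lun:/vol/esx_lun_data1/luns/esx_lun1.lun-P3NsiJNHOSbN:write_align_histo.7:70%
""".strip().splitlines()

histo = {}  # (op, bucket) -> percent
for line in stats:
    m = re.search(r"(read|write)_align_histo\.(\d+):(\d+)%", line)
    if m:
        histo[(m.group(1), int(m.group(2)))] = int(m.group(3))

for op in ("read", "write"):
    aligned = histo.get((op, 0), 0)
    misaligned = sum(p for (o, b), p in histo.items() if o == op and b != 0)
    verdict = "ALIGNED" if aligned >= misaligned else "MISALIGNED"
    print(f"{op}: {aligned}% in bucket 0, {misaligned}% elsewhere -> {verdict}")
```

The read/write_partial_blocks counters (13% and 24% here) would be the ops that aren't even a multiple of 4 KB in size, which no alignment fix inside the guest can help with.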
Hi all,
I'm doing a very basic local /etc/netgroups to give out access via the
/etc/exports file. (eg rw=trustedhosts (also @trustedhosts was tried)).
If I give out access via the hostname/IP, everything works fine. Add
netgroups and it stops working. This is the first time I've seen issues
with a local netgroups file, and it's on ONTAP 7.3. Has anyone else
tried using netgroups on 7.3? Is anyone aware of any issues?
netgroups:
----------------------------
trustedhosts (adminhost,,)
untrusted_hosts (,,)
all_hosts trusted_hosts untrusted_hosts
----------------------------
exports:
----------------------------
/vol/vol2/netgroupsq rw=trustedhosts,rw=trustedhosts
Thanks in advance,
Nick
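For comparison, the form I would have expected that export line to take on 7.x (a sketch, not a verified fix; the '@' prefix marks a netgroup in the -sec style syntax, and the duplicated rw= clause is collapsed) is:

```
/vol/vol2/netgroupsq  -sec=sys,rw=@trustedhosts
```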
(I'm a little out of my depth here, so there may be some bits needing
lateral interpretation...)
We already use bonded/trunked ethernet from the NetApp to a switch.
That's fine.
We are planning a network upgrade, which gives us a chance to do this
bonding/trunking to a logically linked pair of switches (Cisco 3750E) that
operate as a single virtual switch, using "MEC". (The theory is that if a
switch in the MEC-pair fails, the bond/trunk link continues working
through the other switch, albeit with degraded throughput.)
Does anyone know whether MEC does/doesn't (will/won't) work? Is there
anything MEC-specific that needs to be done in NetApp? Issues? (Other
questions I should be asking? Etc.)
Pointers to existing information would probably be fine.
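For what it's worth, if the 3750E pair really does present itself as one logical LACP partner, I'd expect nothing MEC-specific on the filer side; the vif would be the usual dynamic multimode (LACP) one. A sketch, with interface names, vif name, and addressing as examples only:

```
vif create lacp mecvif -b ip e0a e0b
ifconfig mecvif 192.168.1.10 netmask 255.255.255.0
```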
Thanks in advance.
--
: David Lee I.T. Service :
: Senior Systems Programmer Computer Centre :
: UNIX Team Leader Durham University :
: South Road :
: http://www.dur.ac.uk/t.d.lee/ Durham DH1 3LE :
: Phone: +44 191 334 2752 U.K. :
[View Less]
Hi Everyone.
I am having a quota issue on qtrees that are in flexvols. I've been
noticing one particular qtree growing over the past several weeks. It
grows until it hits the quota, users scream, I increase the quota and so
on... It got to the point where I just didn't believe that much space
was being consumed. I started doing du commands all over the place, but
could not find where that qtree was growing. A du at the root of that
qtree did not show it growing either.
Here's some output from quota reports for a particular qtree:
Yesterday morning:
tree 45 group ideas 535695812 576716800 432819
Yesterday at 5pm:
tree 45 group ideas 550047436 576716800 432915
A du of that qtree actually went down between those times.
So I did a quota off/on and let it re-initialize.
This morning:
tree 45 group ideas 414589436 576716800 432904
I know that area did not fall by over 100GB overnight.
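For scale, the usage column in those reports is in KB, so the drop reported after the quota re-initialization works out to roughly 129 GB:

```python
# Quota report usage values are in KB; quantify the reported overnight drop.
before_kb = 550047436  # yesterday at 5pm
after_kb = 414589436   # this morning, after quota off/on
drop_gb = (before_kb - after_kb) / (1024 * 1024)  # KB -> GB
print(f"reported usage dropped by {drop_gb:.1f} GB")  # -> 129.2 GB
```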
And a few hours later:
tree 45 group ideas 418067772 576716800 433545
The quota report usage shows it has grown by several GB. The du of the
qtree is about flat.
I thought maybe the snapshot space is being counted against the quotas.
I didn't think that was the way it was supposed to work. I maintain 8
hourly, 5 nightly, and 3 weekly snapshots of this flexvol. So yesterday
I deleted the weekly.2 early, waited a while, and then ran the quota
report. It went up.
I don't know where to go from here. I opened a case with NetApp. They
were going to look at it in the lab.
Does anyone have any suggestions ?
FAS3040. OnTap 7.2.6.1
Last month after all the talk about this release and quotas and panics,
I opened a case with NetApp. They had me change setflag
wafl_enable_allocation_size from a 1 to a 0.
Thanks,
Paul