I don't know that error, but you can see the constituents by:
vol show -vserver vserver_name -is-constituent true
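To see how full each constituent actually is, something like this should work (field names from memory, so double-check against your ONTAP version):

    vol show -vserver vserver_name -is-constituent true -fields size,available,percent-used

If one constituent is much fuller than the others, that would line up with the error below.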
-jeff
From: Toasters <toasters-bounces(a)teaparty.net> on behalf of "Rue, Randy" <randyrue(a)gmail.com>
Date: Wednesday, May 26, 2021 at 11:28 AM
To: Toasters <toasters(a)teaparty.net>
Subject: WAFL error on a flexgroup?
Hello All,
Seeing this error repeatedly in the logs:
[wafl.vol.fsp.full:error]: volume
scharp_systems__0002@vserver:51aa02de-9c6e-11eb-9f37-d039ea257af4:
insufficient space in FSP wafl_remote_reserve to satisfy a request of 0
holes and 12 overwrites.
An online search is giving me nothing meaningful, not even NetApp
specific results.
Note that this is a constituent volume for a larger flexgroup (which are
new to us).
Is the constituent volume full? If so, is there some reason data isn't
balanced automatically across the constituents of a flexgroup?
Hope to hear from you,
Randy in Seattle
_______________________________________________
Toasters mailing list
Toasters(a)teaparty.net<mailto:Toasters@teaparty.net>
https://www.teaparty.net/mailman/listinfo/toasters
Toasters,
I am setting up a new A400 that came with 3.5 disk shelves (3 shelves with 24 disks and one shelf with 12 disks), for a total of 84 disks. Setting the partitioning bit aside (I think I have that right..), I configured raid group sizes of 20, two groups each into two aggregates, one on each controller. This uses 2 x 2 x 20 or 80 disks and leaves 2 spare disks on each controller, which are actually 4 partitions per side spread out on the SSDs.
There will be an issue if we order SSDs to fill the half shelf. There would be no way to add these disks to the existing aggregates and not have really lopsided RG sizes. But if I expanded each of these raid groups by 2 or 3 disks and added the new disks manually, it would work out great.
Is this the best practice for adding disks? Wouldn't that scheme leave data "stranded" on the existing disks? Does this matter even with SSDs, in light of the per-disk performance?
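If you do go the expand-each-RG route, the sequence would presumably look something like this (aggregate and raid group names are made up; verify the maxraidsize limit for your platform and disk type first):

    storage aggregate modify -aggregate aggr1_n1 -maxraidsize 23
    storage aggregate add-disks -aggregate aggr1_n1 -raidgroup rg0 -diskcount 3
    storage aggregate add-disks -aggregate aggr1_n1 -raidgroup rg1 -diskcount 3

Raising maxraidsize first keeps ONTAP from starting a new raid group when the disks are added.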
Thanks,
Fred
We have a 2-node FAS2520 (no switches).
How do we migrate SVMs to a new cluster? We want to preserve CIFS server names and IP addresses.
Last time we used 7MTT to do the migrations. Is there a similar tool for cluster-mode migrations?
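One option worth looking at is SnapMirror SVM DR with identity preserved, which carries the CIFS server name and LIF addresses over to the new cluster. Roughly, assuming the clusters are already peered (SVM names here are invented):

    dst::> vserver create -vserver svm1_dst -subtype dp-destination
    dst::> snapmirror create -source-path svm1: -destination-path svm1_dst: -identity-preserve true
    dst::> snapmirror initialize -destination-path svm1_dst:
    ... final update, stop the source SVM, then ...
    dst::> snapmirror break -destination-path svm1_dst:
    dst::> vserver start -vserver svm1_dst

This is a sketch from memory; check the SVM DR docs for your ONTAP version, since identity-preserve has some CIFS-specific caveats.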
br,
sk
Hi all,
We're in the middle of hell, where our 4-node FAS8060 cluster was
shutdown cleanly for a move, but only one pair made it onto the truck
to the new DC. Luckily I have all the volumes snapmirrored between
the two pairs of nodes and their aggregates.
But now I need to bring up the pair that made the trip, figure out
which mirrors are source and which are destination on this pair, and
then break the destination ones so I can promote them to read-write.
This is not something I've practiced, and I wonder: if I have
volume foo, mounted on /foo, and its snapmirror is volume foo_sm,
when I do the break, will it automatically mount to /foo? I guess
I'll find out later tonight, and I can just unmount and remount.
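From what I remember, the break itself doesn't touch junctions; you'd remount the destination by hand, roughly like this (vserver name invented, volume names from your example):

    snapmirror show -fields source-path,destination-path,state
    snapmirror break -destination-path vs1:foo_sm
    volume mount -vserver vs1 -volume foo_sm -junction-path /foo

The snapmirror show first should also answer the which-side-is-which question before you break anything.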
I think this is all good with just a simple 'snapmirror break ...' but
then when we get the chance to rejoin the other two nodes into the
cluster down the line, I would asusme I just have to (maybe) wipe the
old nodes and rejoin them one at a time. Mostly because by that point
I can't have the original source volumes come up and cause us to lose
all the writes that have happened on the now writeable destination
volumes.
And of course there's the matter of getting epsilon back up and
working on the two node cluster when I reboot it. Along with all the
LIFs, etc. Not going to be a fun time. Not at all...
And of course we're out of support with Netapp. Sigh...
And who knows if the pair that came down won't lose some disks and end
up losing one or more aggregates as well. Stressful times for sure.
So I'm just venting here, but any suggestions or tricks would be
helpful.
And of course I'm not sure if the cluster switches made it down here
yet.
Never put your DC on the second floor if there isn't a second freight
elevator. Or elevator in general. Sigh...
John
Greetings,
Is there a way to change the SID of a local NetApp user account?
We have some instances where we archive old data from a primary cluster to
a secondary cluster. We use local accounts instead of domain in many
cases. We create the new local account on the secondary NetApp, but we now
have a user account that shows as a SID and not as the actual user name.
If we use SnapMirror, it tramples the top level permissions and we have to
add the new local account permissions for the tree.
It is obviously an inconvenience and not a deal breaker.
Thanks,
Jeff
--
** Please note my email address has changed to jeff.cleverley(a)broadcom.com
Jeff Cleverley
Factory Systems Engineer
4380 Ziegler Road
Building 1, Dock 1
Fort Collins, Colorado 80525
970-288-4611
One thing that I have always thought about with SVM root vol protection is,
if it is an operational recommendation, why aren’t they automatically
created in a System Manager workflow when a NAS SVM is created? Things to
make you say, Hmmmmm.
From: Scott M Gelb via Toasters <toasters(a)teaparty.net>
<toasters(a)teaparty.net>
Reply: Scott M Gelb <scottgelb(a)yahoo.com> <scottgelb(a)yahoo.com>
Date: April 28, 2021 at 16:46:47
To: Parisi, Justin <justin.parisi(a)netapp.com> <justin.parisi(a)netapp.com>, John
Stoffel <john(a)stoffel.org> <john(a)stoffel.org>
Cc: toasters(a)teaparty.net <toasters(a)teaparty.net> <toasters(a)teaparty.net>
Subject:
I have used both DP and LS over the years and am back to using LS more
often for reasons Justin wrote, and also for an NAE/NVE workaround where DP
make-vsroot had some hoops to jump through to re-create the mirrors after a
failover test. LS mirror promote and recreate after had no issues with
NAE/NVE in my testing. In all the years doing this, I've never had to
recover svm root, but I still implement the mirrors to follow best
practices for NAS. I don't create mirrors on all nodes; I use 1-2 copies
depending on cluster size.
An interesting test in mirror activation is that the mirror picks up the
existing SVM junctions regardless of the state of SVM root mirror. For
example:
1) An SVM has 4 junction paths
2) SVM root mirror LS or DP to protect SVM root
3) unmount 3 of the junction paths leaving 1 junction path
4) failover to the root mirror (promote LS or break/make-vsroot DP)
5) SVM root running on the failed over volume has the 1 junction path, not
the 4 that existed at the time of the mirror... there was no real failure,
and the procedure with the SVM running keeps the current state. If a real
disaster, I would expect recovery to what was in the mirror, but have never
had to recover svm root.
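For reference, the LS flavor of the setup and failover above is roughly this (vserver and volume names invented):

    snapmirror create -source-path vs1:rootvol -destination-path vs1:rootvol_ls1 -type LS
    snapmirror initialize-ls-set -source-path vs1:rootvol
    ... then, on a root volume failure ...
    snapmirror promote -destination-path vs1:rootvol_ls1

After a promote you'd re-create the LS set against the new root, which is where the NAE/NVE hoops were much less painful than the DP make-vsroot path in my testing.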
An RFE on my wish list is to have the SVM root virtualized in the RDB, then
we don't need to manage, replicate or ever move SVM root. I know this isn't
an easy task and would use mroot/vol0, and cause more cluster traffic, but
still would welcome seeing a change to do this if feasible. Not a show
stopper or requirement, nor high priority.
On Wednesday, April 28, 2021, 11:24:12 AM PDT, John Stoffel <
john(a)stoffel.org> wrote:
Justin> Another pretty major difference between LS and DP methods;
Justin> DP method requires manual intervention when a failover/restore is
needed.
This is fine in my case, because I'm really trying to protect against
a shipping failure, though it's tempting to do more to protect against
root volume failures as well. Though I've honestly never had one, nor
had a netapp fail so badly in 22+ years of using them that I lost data
from hardware failures.
Closest I came was on a F740 (I think) using the DEC StorageWorks
canisters and shelves. I had a two disk failure in an aggregate. One
disk you could hear scraping the heads on the platter, the other was
a controller board failure. Since I had nothing to lose, I took the
good controller board off the head crash drive and put it onto the
other disk. System came up and found the data and started
rebuilding. Whew! Raid-DP is a good thing today for sure.
Justin> LS Mirrors are running in parallel and incoming reads/access
Justin> requests (other than NFSv4) hit the LS mirrors rather than the
Justin> source volume, so if one fails, you don’t have to do anything
Justin> right away; you’d just need to resolve the issue at some
Justin> point, but no interruption to service.
That's a decent reason to use them.
Justin> LS mirrors can also have a schedule to run to avoid needing to
Justin> update them regularly. And, if you need to write to the SVM
Justin> root for some reason, you’d need to access the .admin path in
Justin> the vsroot; LS mirrors are readonly (like DP mirrors).
The default for 9.3 seems to be 1 hour, but I bumped it to every 5
minutes, because I have Netbackup backups which use snapshots and 'vol
clone ...' to mount Oracle volumes for backups. I had to hack my
backuppolicy.sh script to put in a 'sleep 305' to make it work
properly.
Trying to make it work generically with 'snapmirror update-ls-set
<vserver>:<source>' wasn't working for some reason, so the quick hack
of a sleep got me working.
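For the schedule piece, attaching a 5-minute schedule to the LS mirrors directly might avoid the sleep hack entirely; something like (schedule and names assumed, check whether a 5min schedule already exists on your cluster):

    job schedule interval create -name 5min -minutes 5
    snapmirror modify -destination-path vs1:rootvol_ls1 -schedule 5min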
But I am thinking of dropping the LS mirrors and just going with DP
mirrors of all my rootvols instead, just because of this issue.
But let's do a survey, how many people on here are using LS mirrors of
your rootvols on your clusters? I certainly wasn't across multiple
clusters.
John