Greetings fellow toaster admins
I hope someone can shed some light on the IP distribution function for lacp ifgrps.
We have a four-port, multimode LACP ifgrp, a0a, using interfaces e2a-e2d. We observe SnapMirror traffic egressing port e2c.
The XOR of the last two bits of the source and destination IPs is either 0x0 or 0x3, so I am expecting traffic to egress either the first or fourth port in the ifgrp.
Assuming e2a is port 0, e2b is port 1, etc., I would expect traffic to egress either e2a (0x0) or e2d (0x3).
What am I missing? I can’t find any details about the actual hashing function for IP distribution or port member indexing in an ifgrp to confirm my assumptions.
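To make the assumption concrete, here is the hash I'm imagining, sketched in shell. To be clear, this is only my guess at the algorithm (NetApp doesn't appear to document it), and the octet values are invented for illustration:

```shell
# Sketch of the hash I'm assuming (NOT NetApp's documented algorithm):
# XOR the low two bits of the source and destination last octets, then
# index into the ifgrp members.
src_last=12                     # example source IP last octet (made up)
dst_last=27                     # example destination IP last octet (made up)
ports=(e2a e2b e2c e2d)         # member index 0..3
idx=$(( ((src_last & 3) ^ (dst_last & 3)) % 4 ))
echo "assumed egress member: ${ports[idx]}"   # e2d for these octets
```

Under this assumption, my source/destination pairs can only ever land on index 0 or 3, which is why seeing traffic on e2c (index 2) confuses me.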
Both clusters are a single A250 HA pair running 9.15.1.
I’m typing this with my thumbs so apologies for any typos or insufficient detail.
Best wishes
Stephen
Hey folks,
I got a bunch of A220s here which I want to update to the latest SP firmware:
CLUSTER::*> system service-processor show
IP Firmware
Node Type Status Configured Version IP Address
------------- ---- ----------- ------------ --------- -------------------------
Node1 BMC online true 11.9P1 127.0.0.1
Node2 BMC online true 11.9P1 127.0.0.2
2 entries were displayed.
But every time I try to install the update from the ONTAP prompt, I get the following message:
CLUSTER::*> system service-processor image update -node * -package SP_FW_308-04195_11.11.tar.gz
Note: Firmware update will need to reboot the SP on completion. If your console
connection is through the SP, it will be disconnected
Do you want to proceed with the firmware update ? {y|n}: y
Error: command failed on node "Node1": Unable to schedule Service
Processor update.
The only KB article I found for this issue is this one:
https://kb.netapp.com/on-prem/ontap/OHW/OHW-KBs/BMC_upgrade_fails_with_Erro…
I can verify that the file is not corrupted (compared the MD5 sums) and I tried to reboot the SP before the update, but all to no avail.
The only way I can get the BMC firmware update installed is by running "bmc update -f" directly in the BMC shell, but that's harder to automate.
Has anyone ever stumbled upon this and has an idea?
SP auto updates are enabled, but they're not starting either; I waited a day for them to do their job.
Thanks,
Alexander Griesser
Head of Systems Operations
ANEXIA Internetdienstleistungs GmbH
E-Mail: AGriesser(a)anexia.com
Web: https://www.anexia.com
Registered office Klagenfurt: Feldkirchnerstraße 140, 9020 Klagenfurt
Managing Director: Alexander Windbichler
Commercial register: FN 289918a | Place of jurisdiction: Klagenfurt | VAT number: AT U63216601
Hi everyone. I was wondering what is being done to back up S3 buckets
hosted on NetApp. Can anyone share what you are doing, aside from the
normal NetApp snapshot and mirroring things?
Thanks !
This mailing list, like a number of others, is hosted on my colocated box,
using mailman2. The box is running CentOS 7, which goes end-of-life
fairly soon, and I plan to rebuild it running Debian 12 some time in the
next month or two (more solid notice will be given nearer the time!).
There have been no posts to this list since Oct 2023, so although we still
have a goodly number of subscribers, I wonder whether, after 26 years, the
list has run its course. I have enough other lists that I'll still have
to put mailman3 on the new box, but migrating a large list with a big
archive from mailman2 to mailman3 is still quite an undertaking. If it
turned out that it was pointless, it would be a large enough job that not
having to do it would be a noticeable time-saver.
Let me clarify that I'm not looking to offload the list, or to move it
again. If it's still useful, I'll do the legwork and migrate it; if it's
not, then let's just turn it off.
So I seek guidance. Let's not have the discussion on-list, but if you
have any thoughts about whether this list is still useful, please let me
know privately. Both positive and negative thoughts are valuable, so if
you have any strong feelings either way, I'd like to hear them.
Thanks!
--
Tom Yates - https://www.teaparty.net
On 2023-10-17 14:54, Florian Schmid via Toasters wrote:
> Hi Johan,
> thank you very much for your help.
>
> No, we don't have the disks yet for which the flash-pool should be used.
> Not all SSDs will be used for flash-pool, only some for cache and the rest
> for fast SSD storage.
So you're thinking to have several different "physical tiers" with different
characteristics (performance, inherent latency) for different workloads --
in the same HA-pair? Several different Aggrs with differing performance and
behaviour in the same node, FAS8300? (it's a fairly powerful machine so it
can do this adequately in many smaller workload cases).
Or do you mean in different 8300 nodes in an X-node cluster (what's X?)
This idea is much harder to make successful than you probably think. It
requires you to know very much about your workloads, your applications, what
they do so that you can place the correct data in the right place and you
have to have the ability to do this over time as data volumes grow. Assuming
they do... It's very hard indeed to automate, so you need people who can
babysit this continuously and move data around. Yes, that's mostly
non-disruptive, but it's still quite a lot of work.
It also pretty much assumes for it to be successful in the longer run that
your applications do not change their workload patterns and/or pressure more
than very slowly.
Is this the case?
All in all, FabricPool is much, much more automatic. It just does the job
itself, pretty much without fuss once you've tuned it a bit w.r.t. cool-down
period(s) and things. It "just works". You do need an S3 target system, but
as has already been pointed out it can be ONTAP with NL-SAS drives, if you
already have a bunch of these lying about you can repurpose those and
instead use new Cx00 (or Ax00) nodes in the "front end".
The challenge with FabricPool is the network: the connection between the
front end and the S3 back end needs to be very good and solid. You have to
understand it fully and know every detail of how it's built, so you know you can
trust its capacity and latency; traffic can be quite bursty.
I'm not very positive about your idea here, I'm afraid:
"Not all SSDs will be used for flash-pool, only some for cache and the rest
for fast SSD storage."
it's just my (long) experience of this that it's not very productive in
reality and it costs a lot of operations (manual work, skilled personnel).
It also tends to give you various problems when you need to do HW LCM
(upgrade your controllers and disk back ends). It inevitably leads to
stranded capacity in more than one dimension as time passes.
/M
--
Sr Human ;-) Alt: r.m.bergman(a)gmail-DEL_THIS-.com
--
"Qui vicit non est victor nisi victus fatetur." - Ennius
Hi Florian,
If you have the NL-SAS aggregate in place already I’d recommend having a look at using AWA to figure out what the optimal cache size would be:
https://docs.netapp.com/us-en/ontap/disks-aggregates/determine-flash-pool-c…
That would give you an idea of how much Flash Pool capacity you’d really benefit from.
Regarding 3.8TB vs 7.6 TB – if the 7.6 TB drives are not listed as supported in the HWU, it’s most likely not supported.
There’s also a note in the HWU for the 3.8TB drives under “System Cache Limits” – “Max number of Flash Pool Data SSDs: 20” (this is excluding RAID parity & hot spares) – if I’m interpreting that note correctly, there’d be no point in adding more than one shelf of 3.8TB drives.
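As a quick back-of-envelope sketch of why (assuming a 24-bay shelf, RAID4 parity, and two hot spares; the exact parity/spare split is my assumption, not from the HWU):

```shell
# 24-bay shelf, minus spares and RAID4 parity, against the HWU's
# "Max number of Flash Pool Data SSDs: 20" note (split is assumed).
bays=24; spares=2; parity=2
data=$(( bays - spares - parity ))
echo "data SSDs from one shelf: $data"       # 20 -> already at the cap
echo "raw cache: $(( data * 38 / 10 )) TB"   # 20 x 3.8 TB = 76 TB
```

So a single shelf of 3.8 TB drives already lands right at the data-SSD cap.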
Regards, Johan
On 2023-10-17, 09:57, "Florian Schmid" <fschmid(a)ubimet.com> wrote:
Hi Sebastian and Michael,
thank you very much for your help.
We haven't bought the SSD shelves yet.
We don't know if we should take 2 shelves with 3.8 TB or 1 shelf with 7.6 TB.
I tend toward one shelf with 7.6 TB...
We are using flash-pool already with Raid-4, exactly for saving space.
My problem now is that I'm concerned about the HWU statement that no SSD over 3.8 TB is listed for use in flash-pool.
This makes me think that flash-pool is not supported for SSDs of 7.6 TB or greater.
I haven't found anything else about it on the NetApp site, only in the HWU under system cache limits for the FAS 8300.
The same limits apply to the 8700 and 9000 series as well; no difference here.
Really strange.
Best regards,
Florian
----- Original Message -----
From: "Sebastian Goetze" <spgoetze(a)gmail.com>
To: "toasters" <toasters(a)teaparty.net>
Sent: Monday, 16 October 2023 15:35:36
Subject: Re: Question about flash pool maximum SSD size and local tiering
Hi Florian,
On 16.10.2023 15:01, Fenn, Michael wrote:
> FlashPool SSDs (unlike FlashCache) are attached to aggregates as normal RAID groups, so you can use as many 3.8 TB drives in RAID-DP as you like to hit the maximum FlashPool capacity.
Consider using RAID4 for the SSD RaidGroups... More Cache, and for the
Write-cached blocks, not really less safety. (SSDs are more reliable,
blocks are usually written fairly soon to HDD anyway)
Regarding sizes: any size that's supported for the HW should be fine
(3.8/15.3TB). Within the Cache-RG, you should use the same size disks,
however...
>
> Note that FlashPool and FabricPool use the same underlying tiering metadata structures, so you can only have one or the other enabled on any given aggregate.
>
> Thanks,
> Michael
>
FlashPool SSDs (unlike FlashCache) are attached to aggregates as normal RAID groups, so you can use as many 3.8 TB drives in RAID-DP as you like to hit the maximum FlashPool capacity.
Note that FlashPool and FabricPool use the same underlying tiering metadata structures, so you can only have one or the other enabled on any given aggregate.
Thanks,
Michael
On 10/16/23, 8:03 AM, "Toasters on behalf of Florian Schmid via Toasters" <toasters-bounces(a)teaparty.net on behalf of toasters(a)teaparty.net> wrote:
Hi Alexander,
this is a very good tip! Thank you very much.
I will have a look at this.
Best regards,
Florian
----- Original Message -----
From: "Alexander Griesser" <AGriesser(a)anexia.com>
To: "Florian Schmid" <fschmid(a)ubimet.com>, "toasters" <toasters(a)teaparty.net>
Sent: Monday, 16 October 2023 12:07:53
Subject: RE: Question about flash pool maximum SSD size and local tiering
Hi Florian,
I cannot answer the question with the SSD sizes, I'm not sure if this is really a hard requirement or if the slices just may not be bigger than 3.8TB (in that case, you could probably manually partition the SSDs), maybe someone else has more insights into this.
As for your second question: you can spin up ONTAP's integrated S3 server on your old boxes and use them as FabricPool targets:
https://www.netapp.com/media/17219-tr4814.pdf
Best,
Alexander Griesser
Hi,
I have checked the NetApp HWU for a FAS 8300 and its system cache limits.
Ok, so far, max flash-pool is 72 TB, which is way more than I want to use, but I haven't seen usable SSDs greater than 3.8 TB.
Is it really true that I can't use a 7.6 TB or 15.3 TB SSD for flash-pool?
It would be nice if someone with a deeper understanding of this could give me some clarification.
May I ask a second question?
Is flash-pool still the way to go for speeding up NL-SAS aggregates?
I had a look at fabric-pool tiering, but it seems that this only works with S3 storage, which we don't have.
We have plenty of NL-SAS storage and also of SSDs, and it would be great to have tiering between them or at least use them for caching.
Best regards,
Florian
Folks, something tells me this is going to be a "D'Oh!" moment, but I'm baffled.
I had a simple requirement to support an ongoing project that requires the creation of a snapshot, and a CIFS share referencing that snapshot, over a series of days. The snapshot and share naming conventions are a simple YYYYmmdd-<string> and YYYYmmdd-<string>$ respectively.
The intention is to create both snapshot/share a little after midnight each day until the project ends and they are no longer needed.
Simple enough, right?
Well, when I run these commands via SSH from a prompt on a Linux box, everything works just fine:
[root@foo01 ~]# /bin/ssh admin@foo 'snapshot create -vserver foo3 -volume Foo04 -snapshot 20231003-foo -expiry-time 11/02/2023 00:00:00'
Last login time: 10/2/2023 12:26:31
[root@foo01 ~]# /bin/ssh admin@foo 'snapshot show -vserver foo3 -volume Foo04 -snapshot 20231003-foo '
Last login time: 10/2/2023 12:46:12
Vserver: foo3
Volume: Foo04
Snapshot: 20231003-foo
Creation Time: Mon Oct 02 12:46:13 2023
Snapshot Busy: false
List of Owners: -
Snapshot Size: 14.76MB
Percentage of Total Blocks: 0%
Percentage of Used Blocks: 0%
Comment: -
7-Mode Snapshot: false
Label for SnapMirror Operations: -
Snapshot State: -
Constituent Snapshot: false
Expiry Time: 11/2/2023 00:00:00
SnapLock Expiry Time: -
However, when the same commands run from a script (sh or bash, makes no difference in this case), I get the errors below. It actually doesn't matter what ONTAP command I send; they all give the same "Vserver name: Invalid." error. (The /bin/ssh lines are just echo statements in the script, so I can see that it's building and sending the right command line.)
/bin/ssh admin@foo 'snapshot create -vserver foo3 -volume Foo04 -snapshot 20231003-foo -expiry-time 11/02/2023 00:00:00'
Last login time: 10/2/2023 12:46:39
Error: Vserver name: Invalid. The Vserver name must begin with a letter or an
underscore. Maximum supported length: 41 if Vserver is type
"sync-source", 47 otherwise.
/bin/ssh admin@foo 'cifs share create -vserver foo3 -path /Foo04/.snapshot/20231003-foo -share-name 20231003-foo$ -share-properties oplocks,browsable,changenotify,show-previous-versions -symlink-properties symlinks -offline-files manual -vscan-fileop-profile standard -max-connections-per-share 4294967295 -comment pst migrations ok to delete after 11/02/2023 00:00:00'
Last login time: 10/2/2023 12:51:11
Error: Vserver name: Invalid. The Vserver name must begin with a letter or an
underscore. Maximum supported length: 41 if Vserver is type
"sync-source", 47 otherwise.
The script is so simple it's embarrassing. Define a few variables and optargs, build 3 simple SSH commands to send to the toaster, and run them. Everything up until these commands reach the toaster is working as expected. It just breaks when it gets there.
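For the record, here are the two classic causes I know of for this exact works-interactively-but-fails-in-a-script symptom (a sketch only; I haven't confirmed either is my culprit, and the variable values are invented):

```shell
# Cause 1 (hypothetical): DOS line endings. A script saved with CRLF
# leaves a carriage return glued to the last token on each line, so
# the filer sees a vserver named "foo3<CR>" and rejects the name.
vserver=$'foo3\r'                 # what a CRLF-saved script produces
clean=${vserver%$'\r'}            # stripping the \r restores the name
echo "raw=${#vserver} clean=${#clean}"    # raw=5 clean=4

# Cause 2 (hypothetical): quotes stored inside a string variable are
# data, not syntax, after expansion, so the remote side receives the
# literal quote characters as part of the argument.
cmd="snapshot show -vserver 'foo3'"
set -- $cmd                       # word-split like an unquoted $cmd
echo "vserver arg as sent: $4"    # 'foo3' with the quotes included
```

Running `cat -A` over the script, or piping the built command through `od -c`, makes any hidden \r visible immediately.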
--
Kevin Davis
Systems Administration | Sr. Storage Engineer | Information Services
UMass Memorial Health | 100 Front St. Fl 1, Worcester, MA 01608 | mailto:kevin.davis@umassmemorial.org