On 2020-11-06 17:12, Alexander Griesser wrote:
I can limit the number of volumes already, just can't limit the size...
I can, however, automate the creation of an SVM, so currently my only good option seems to be to provide one SVM per customer volume, limit the size of the volume the SVM can create to whatever I want to sell them, and set max-volumes to 1 (or two, if the root volume also counts).
Yeah, one could do that (one vserver per customer volume) but... it feels a bit... ridiculous..? Something better is needed, I do agree with that. Hope there will be something in the not too distant future.
Can't remember now how many vservers are possible in a large ONTAP cluster, is it 1000? There's some limit. It depends on the number of nodes or some other factors, I think.
But this is a nice workaround -- I have to admit I still don't fully understand how it works, because I'm not too familiar with the 'security login role' stuff... (need to read up on it!)
::> vserver modify -vserver vs1 -max-volumes 50
::> security login role create -vserver <name> -role restricted -cmddirname "volume" -access all -query "-size <=50G"
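For what it's worth, my current understanding of the mechanism (happy to be corrected): the -query acts as a filter on the 'volume' command directory, so any volume operation where -size exceeds 50G is simply denied to users holding that role. You'd then bind the customer's vserver admin login to the role, something like this (user name made up by me):

::> security login create -vserver vs1 -user-or-group-name cust1_admin -application ssh -authentication-method password -role restricted
::> security login role show -vserver vs1 -role restricted

The show command is just to verify the role and its query came out the way you intended.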
Norbert Geissler wrote:
you should also ask them not to create 500 volumes of 1GB each.
So with the above set, and assuming no K8s "customer" can override it in any way, you're good, right? It will limit things for sure. Or did I miss something? Some other disadvantage or side effect of doing that 'role' command?
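If I've got the arithmetic right, the two settings together do bound the total: the worst case is max-volumes times max size, e.g.

::> vserver modify -vserver vs1 -max-volumes 10
::> security login role create -vserver vs1 -role restricted -cmddirname "volume" -access all -query "-size <=50G"

caps that vserver at 10 x 50G = 500G, and Norbert's 500 x 1GB scenario is off the table too, since volume number 11 is refused regardless of size.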
Some thoughts. The next step down this path is when the customer has MANY K8s clusters -- they are an internal "ISP" of sorts. Then they want one vserver per K8s cluster, and they want to create and remove those themselves, together with the K8s clusters. I.e.: be vserver admin and drive it via the API too, from their "portal", with their own automation.
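I haven't tried this myself, so take it as a sketch only: presumably you'd hand their portal a cluster-scoped login restricted to the 'vserver' command directory (role and user names made up by me):

::> security login role create -role svm_lifecycle -cmddirname "vserver" -access all
::> security login create -user-or-group-name k8s_portal -application http -authentication-method password -role svm_lifecycle

That would let their automation create and delete vservers over the API without being full cluster admin.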
But then you can no longer do anything like this; it's not in your control anymore to limit things in a vserver in any way by force:
::> vserver modify -vserver vs1 -max-volumes 10
What to do then? If you relinquish vserver creation control, then the K8s cluster admin ppl can do anything they like, and the only option that remains is to control things at the Aggr level. As best one can... If they run out of Aggr space, then... *boom*. Their problem. Still, there will be some sort of disruption and some sort of Incident mgmt there, one would think.
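The Aggr-level control I can think of is pinning their vservers to dedicated aggregates and then watching capacity, roughly like this (aggregate name made up):

::> vserver modify -vserver vs1 -aggr-list aggr_cust1
::> storage aggregate show -aggregate aggr_cust1 -fields size,usedsize,percent-used

The -aggr-list is set at cluster level, so the vserver admins can't widen it themselves; they can only fill up what they've been given.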
So finally we have the last step: the K8s cluster team purchases their own ONTAP cluster(s), all the NetApp HW on their CAPEX budget, and they own the whole thing; all the HW = all the OPEX created by the depreciation. The cost reclaim model is their problem, not mine. The only thing a Storage Ops Team does in that scenario (e.g. where I work internally at Ericsson in our R&D) is set up the baseline ONTAP cluster as it should be in the internal network and manage/support the HW (replace broken things etc), probably up to creating Aggrs, because the K8s ppl don't want to do that.
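In other words, the Storage Ops involvement ends at roughly this level (node name and disk count made up):

::> storage aggregate create -aggregate aggr_k8s_01 -node cluster1-01 -diskcount 24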
Then, from the vserver level of abstraction and up, they do whatever they want. 100% automated, under their control. A-hm. Did I just make myself (almost) obsolete..? ;-)
Yes, that's what Trident is for, really, from a K8s PoV, is it not? Making the storage Operations Team nearly obsolete. Infrastructure as Code, etc. For this to work out properly, the financial control (CAPEX, budgeting, OPEX, cost reclaim) has to be put in the hands of the K8s admin ppl. Each K8s cluster that gets created automatically gets a vserver created for it in one or other of the ONTAP clusters set up and available for this purpose. For this sole purpose. All the volume and performance mgmt (Demand & Capacity Mgmt) has to be done by the K8s admin ppl.
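Per K8s cluster, their automation would then fire off something like this -- via the REST API presumably, CLI equivalent shown, names made up:

::> vserver create -vserver svm_k8s_prod01 -rootvolume root_svm_k8s_prod01 -aggregate aggr_k8s_01 -rootvolume-security-style unix
::> vserver add-aggregates -vserver svm_k8s_prod01 -aggregates aggr_k8s_01

and the reverse ('vserver delete') when the K8s cluster is torn down.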
Right. I, the Storage Architect, go join that team (the K8s guys) instead, to plan and handle the ONTAP storage for PVCs for all the umpteen K8s clusters and server "POD"s [a future vision for my working life..?]
/M