"Jeff" == Jeff Mohler via Toasters toasters@teaparty.net writes:
Jeff> From: Jeff Mohler jmohler@yahooinc.com
Jeff> To: Toasters toasters@teaparty.net
Jeff> Date: Fri, 5 Aug 2022 10:15:13 -0700
Jeff> Subject: CVO on Standard/SATA PD
Jeff> Does anyone here have experience on a standard PD layer using CVO?
Jeff, I know you're deep in the weeds, but for those of us who aren't... can you spell out those acronyms please?
PD = Purple Dinosaur? CVO = Cheery Victory Orange?
*grin*
Jeff> If so, I am looking for -throughput- maximums, limitations,
Jeff> lessons learned, on per-volume write workloads.
Jeff> We are in a mode where we believe we were told there are
Jeff> 100-200MB/sec write "edges" of throughput to CVO on standard PD
Jeff> storage... just we're not quite in shape here to test that yet
Jeff> ourselves.
Yeah, you need to translate this section as well, please. Or at least give a little more context so people have a hope of understanding. Or better yet, so we can learn from your experiences!
Cheers, John
I didn't understand the Q either, if it's any consolation (or... something). I don't know what "PD layer" means. Is it some Public Cloud notion? I'm not using any such myself, but "D" is for Disk, I'm guessing; I just can't think right now what the "P" stands for, nor in *which* of the Hyperscale Clouds.
The notion of "Standard/SATA PD" I just can't suss out off the top of my head.
I don't understand what this means either, I have to admit:
'we were told there are 100-200MB/sec write "edges" of throughput to CVO on standard PD storage'
What's a 'write edge', exactly? And there it is again: "standard PD storage". It's something you can order in one Cloud or another that supports CVO, of course, I get that.
/M
CVO == Cloud Volumes ONTAP, the software version of ONTAP that you can run in the various CSPs (Cloud Service Providers), i.e. GCP, AWS, Azure.
https://cloud.netapp.com/ontap-cloud
PD == Persistent Disk, what GCP (Google Cloud Platform) calls its, uh, persistent disks.
https://cloud.google.com/compute/docs/disks
And to answer Jeff's question, we've not done any CVO testing in GCP, so I can't answer the throughput questions.
We have found that, in general, the VM Instance type that hosts the CVO instances matters, since the Instance type also determines networking connectivity.
Disk type (in this case GCP PDs) matters, too; I'd test with Local SSD to see where that gets you.
Lastly, throughput is dependent on workload, which determines the IO pattern. Maybe create an fio config that generates traffic close to what you expect, and use that to drive various CVO configurations: https://github.com/axboe/fio
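To give a concrete starting point, here is a minimal sketch of such a job file. Everything in it is an assumption to adjust: the mount point, block size, sequential-write pattern, and job/queue-depth counts are placeholders, not a known-good CVO benchmark profile.

    ; seq-write.fio: hypothetical sequential-write job, tune to your workload
    [global]
    ; assumes a Linux client with libaio available; all sizes and counts
    ; below are placeholders rather than recommended values
    ioengine=libaio
    direct=1
    rw=write
    bs=1M
    size=10g
    runtime=120
    time_based
    group_reporting

    [cvo-writer]
    ; placeholder path: an NFS mount of the CVO volume under test
    directory=/mnt/cvo_vol
    numjobs=4
    iodepth=16

Watch the aggregate write bandwidth fio reports, then re-run while varying numjobs/iodepth, the disk type, and the CVO instance size to see where the curve flattens.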
-Skottie
Cloud Volumes ONTAP
Physical disk
:)
Looking to see what MB/sec anyone has reached on writes to SATA physical-disk backends.
Hearing that there may be (in some clouds) a write throughput barrier that CVO may run into.
--
If it were me flying, Goose would be alive today.
On 2022-08-05 22:46, Scott Miller wrote:
And to answer Jeff's question, we've not done any CVO testing in GCP, so I can't answer the throughput questions.
Neither have I/we. And we won't, any time soon.
Scott Miller wrote:
We have found that, in general, the VM Instance type that hosts the CVO instances matters, since the Instance type also determines networking connectivity.
Yep, the VM instance type does matter very much. Bigger = better, basically... in AWS, for instance, there are some instance types that have much more bandwidth to the disk layer, and others that have more bandwidth to the [NFS] clients if needed (the "n" types). Of course, the CVO (ONTAP) must be able to actually drive that back-end bandwidth, and especially for writes this is not easy for it. CVO is a dog for writes, pretty much, compared to the real thing, a FAS appliance. 7.2K rpm or 10K rpm or SSD, it's the same.
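To make that concrete, here is a rough back-of-the-envelope sketch; every number in it is a made-up placeholder, not a published limit for any cloud, instance type, or disk. The idea is simply that the sustained write ceiling a CVO volume sees is roughly the minimum of the per-disk limit times the number of disks in the aggregate, the instance's disk-egress cap, the instance's client-facing network cap, and whatever the CVO instance itself can drive.

    # Rough write-ceiling estimate. Every figure below is a placeholder;
    # substitute the published limits for your actual instance and disk types.
    def write_ceiling_mb_s(per_disk_limit, disks_in_aggregate,
                           instance_disk_limit, instance_net_limit,
                           cvo_write_limit):
        """Sustained write throughput is bounded by the slowest layer."""
        backend = min(per_disk_limit * disks_in_aggregate, instance_disk_limit)
        return min(backend, instance_net_limit, cvo_write_limit)

    if __name__ == "__main__":
        print(write_ceiling_mb_s(per_disk_limit=120,       # placeholder per-disk cap
                                 disks_in_aggregate=6,     # placeholder disk count
                                 instance_disk_limit=400,  # placeholder VM-to-disk cap
                                 instance_net_limit=1000,  # placeholder VM network cap
                                 cvo_write_limit=250))     # placeholder ONTAP write cap

Whichever term comes out smallest is the "edge" you hit first, and it tells you whether a bigger instance, more/faster disks, or a different storage product is the next lever to pull.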
Scott Miller wrote:
Disk type (in this case GCP PDs) matters, too; I'd test with Local SSD to see where that gets you.
I agree. You'll get the spinning disks out of the equation first and see what you can get out of a CVO instance. Then try it with spinning PD (7.2K rpm or whatever).
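For example, assuming hypothetical mount points, you can define the same sequential-write job entirely on the fio command line and point it at each backend in turn:

    # placeholder paths: first a Local-SSD-backed volume, then a standard-PD one
    fio --name=seqwrite --rw=write --bs=1M --size=10g --runtime=120 --time_based \
        --ioengine=libaio --direct=1 --numjobs=4 --iodepth=16 \
        --directory=/mnt/local_ssd_test
    fio --name=seqwrite --rw=write --bs=1M --size=10g --runtime=120 --time_based \
        --ioengine=libaio --direct=1 --numjobs=4 --iodepth=16 \
        --directory=/mnt/pd_standard_test

If the Local-SSD-backed run is much faster, the PD back end is your limit; if the two come out close, the ceiling is in the CVO instance (or the network in front of it).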
Scott Miller wrote:
Lastly, throughput is dependent on workload, which determines the IO pattern. Maybe create an fio config that generates traffic close to what you expect, and use that to drive various CVO configurations.
This sounds like something very hard to do, unless your real-life workload really is very simplistic. I'd be hard pressed to even consider trying fio or iozone etc. for any (aggregated) workloads we have here; it won't lead to better knowledge with anything a bit more complex. The only way to get that knowledge is to run the "real thing": purchase the biggest supported CVO instance there is, configure it with as much "virtual gear" as you can, then throw the real workload at it and see what happens. Does it work OK? Good. No? You're out of luck, pretty much.

The only thing you can do then is create more CVO instances. If you need several dozen to handle your workload, it will probably be a nightmare to manage. You can't scale up CVO more than a little (and it becomes rather expensive!), and you can't scale out like you can with on-prem AFF/FAS appliances. So the scale-out tactic becomes having one (hopefully!) CVO for each of your applications. If you have 100 apps, you'll end up with 100 CVOs. And so on.
/M