From: tmac <tmacmd@gmail.com>
Sent: Saturday, 8 November 2025 04:32
To: Florian Schmid <fschmid@ubimet.com>
Cc: toasters <toasters@teaparty.net>
Subject: Re: Proxmox and NetApp NFS volumes

https://docs.netapp.com/us-en/netapp-solutions-virtualization/proxmox/index.html
Sent from Gmail Mobile.
On Fri, Nov 7, 2025 at 7:52 AM Florian Schmid via toasters <toasters@lists.teaparty.net> wrote:
---------- Forwarded message ----------
From: Florian Schmid <fschmid@ubimet.com>
To: toasters <toasters@teaparty.net>
Date: Fri, 7 Nov 2025 12:52:00 +0000
Subject: Proxmox and NetApp NFS volumes

Hello,
we are moving to Proxmox now, and I could use some help from you.
In the past I have always used NFSv3, because in our KVM environment we had issues with NFSv4.1 and head fail-overs: VMs got paused. I think VMware had similar issues. The problem, I believe, is that NFSv4.1 is stateful, and on a head fail-over that state is lost.
I'm on ONTAP 9.14.1.
As NFSv3 is quite old and Proxmox uses a newer kernel, I wanted to ask whether any of you have experience with Proxmox and NFS volumes from NetApp.
Do you use special mount options? Does NFSv4.2 work fine with head fail-overs, i.e. VMs don't pause or don't even notice them? We have 25 Gbit network interfaces bonded to our NetApp; are there any special options to configure there? Are you using "cache=none" on VM disks, or something different?
Unfortunately, Proxmox doesn't have much experience with NetApp either, so I hope someone here can help me build a good, fast and stable setup.
This is how a mount point looks with the default options from Proxmox and NFSv3:
nfs-root-01:/pve_DC_nfs_root_01 /mnt/pve/nfs-v3-root-01 nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.4.3,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=172.16.4.3 0 0
This is from our old oVirt virtualization:
nfs-root-01:/oVirt_DC_nfs_root_01 /rhev/data-center/mnt/nfs-root-01:_oVirt__DC__nfs__root__01 nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.4.3,mountvers=3,mountport=635,mountproto=udp,local_lock=all,addr=172.16.4.3 0 0
- oVirt was using soft, but I think I shouldn't use that on Proxmox and should use hard instead. oVirt had a special watchdog for this.
Best regards, Florian
________________________________

Hi,
thank you very much for your answer. I already know this page, but there is not much information about NFS in it at all. Yes, NetApp uses NFSv4.1 there and describes how to set up session trunking, but that's it. It also looks like NetApp used a two-year-old Proxmox setup, while Proxmox has recently upgraded to version 9 with a newer Debian release, too.
I would like to hear some real-life experiences.
Best regards, Florian
________________________________
From: Parisi, Justin <Justin.Parisi@netapp.com>
Sent: Monday, 10 November 2025 16:10
To: tmac <tmacmd@gmail.com>; Florian Schmid <fschmid@ubimet.com>
Cc: toasters <toasters@teaparty.net>
Subject: RE: Proxmox and NetApp NFS volumes

With any move from NFSv3 to NFSv4.x, you have to take into consideration the major changes in how the protocols work. In a virtualization use case, you would mostly care about the change from stateless to stateful, where storage failovers can introduce more delay/disruption in NFSv4.x than in v3 due to locks and states. There are no NFSv4.2 mount options that avoid this, but you can tune some of the lock grace periods on the SVM if needed.
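For reference, the SVM-side timers involved here can be inspected and, if needed, shortened. A rough sketch, assuming the ONTAP 9 advanced-privilege options -v4-lease-seconds and -v4-grace-seconds (verify the option names and safe values on your release before changing anything):

  set -privilege advanced
  vserver nfs show -vserver <svm_name> -fields v4-lease-seconds,v4-grace-seconds
  vserver nfs modify -vserver <svm_name> -v4-lease-seconds <seconds> -v4-grace-seconds <seconds>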
TR-4067 covers the differences in the protocol versions on page 42 and has some information about nondisruptive operations on page 63.
https://www.netapp.com/pdf.html?item=/media/10720-tr-4067.pdf
While NFSv3 is indeed old, it is still very viable for most use cases, provided you don’t require the added security or locking benefits in NFSv4.x.
________________________________
From: Florian Schmid <fschmid@ubimet.com>
Sent: Wednesday, November 12, 2025 5:56 AM
To: Parisi, Justin <Justin.Parisi@netapp.com>
Cc: toasters <toasters@teaparty.net>
Subject: Re: Proxmox and NetApp NFS volumes

Hello Justin,
Thank you very much for your reply.
Actually, we won't need the additional security NFSv4 would bring. Advanced locking is also not an issue, as the VM disks are only ever used on a single node.
Do you have any real-life experience with NetApp and Proxmox using NFS? NFSv3 has always worked for us; should we stay with it?
When using NFSv3, rsize and wsize are always 65536. Can this be increased, and if yes, should I do it? We have 25 Gbit from Proxmox to NetApp with an MTU of 1500.
Any other tweaks here: rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.4.3,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=172.16.4.3 0 0
Best regards, Flo
________________________________
From: Parisi, Justin <Justin.Parisi@netapp.com>
Sent: Wednesday, 12 November 2025 17:19
To: Florian Schmid <fschmid@ubimet.com>
Cc: toasters <toasters@teaparty.net>
Subject: RE: Proxmox and NetApp NFS volumes

NFSv3 is perfectly fine, provided you don’t need the added features of v4 (which it sounds like you don’t).
That said, v4.x does offer pNFS (for data locality/performance benefits) and NFS session trunking (for aggregation of interfaces for added performance), if that is interesting to you.
If not, then nconnect with NFSv3 is an option (up to 16 TCP connections with the nconnect=n mount option).
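As a sketch, reusing the export and mount point from earlier in the thread, such a mount would look roughly like this (nconnect needs a reasonably recent client kernel, which Proxmox should have):

  mount -t nfs -o vers=3,nconnect=4,hard,proto=tcp,timeo=600 \
      nfs-root-01:/pve_DC_nfs_root_01 /mnt/pve/nfs-v3-root-01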
I don’t have any Proxmox NFS stories, so that means it either just works or it isn’t used a ton. 😊 But it should work fine, as we do have a ton of VMware and OpenShift virt success stories with NFS.
As for wsize/rsize, the impact is usually determined by the application in use. For instance, if the rsize is 64K and an application sends 8KB of data, it all fits in a single request. If the app sends 256K of data, you need multiple requests to service that IO (256/64 = 4). We generally recommend setting the maximum transfer size to at least 256K on the ONTAP side; the client will then autonegotiate the wsize/rsize from that value unless it explicitly mounts with those options. Having the larger transfer size allows more flexibility when IO sizes differ across access to the mount. For instance, with a 256K transfer size, each of the following app request sizes would fit into one request:
4KB
8KB
64KB
128KB
256KB
A 1MB request would need 4 requests.
Larger transfer sizes will help performance for large-file workloads that use larger read and write sizes, but performance will be the same for smaller IO sizes. And again, you wouldn’t define it in the mount options – you would let the client and server negotiate the max value.
This covers it well:
https://docs.netapp.com/us-en/ontap-apps-dbs/oracle/oracle-storage-nfs-confi...
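The ONTAP-side knob referred to above is the NFS maximum TCP transfer size. A minimal sketch, assuming the ONTAP 9 advanced-privilege option -tcp-max-xfer-size (existing clients only pick up the new value after a remount):

  set -privilege advanced
  vserver nfs modify -vserver <svm_name> -tcp-max-xfer-size 262144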
Also, you may want to investigate larger MTU sizes for that same reason – especially with 25 Gbit interfaces.
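A sketch of what that could look like with jumbo frames, assuming MTU 9000 is carried end to end (interface and broadcast-domain names are placeholders; every switch in the path must match):

  # Proxmox side, /etc/network/interfaces
  auto bond0
  iface bond0 inet manual
      bond-slaves eno1 eno2
      bond-mode 802.3ad
      mtu 9000
  # any bridge or VLAN interface on top of the bond needs mtu 9000 as well

  # ONTAP side (placeholder broadcast domain for the NFS ports)
  network port broadcast-domain modify -broadcast-domain <nfs_bd> -mtu 9000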
________________________________
From: Florian Schmid <fschmid@ubimet.com>
Sent: Wednesday, November 12, 2025 11:47 AM
To: Parisi, Justin <Justin.Parisi@netapp.com>
Cc: toasters <toasters@teaparty.net>
Subject: Re: Proxmox and NetApp NFS volumes

Hello Justin,
Thank you very much for this great explanation.
I definitely want to try this out, as long as it has no impact on fail-over times when doing a NetApp upgrade.
* Could more connections cause longer fail-over times?
The vserver where the new volumes for Proxmox were created is also used by our old virtualization, and I wanted to ask whether those changes have any impact on the other volumes:
* Changing the wsize/rsize to 256K -> this also seems to be recommended by NetApp in your link. It would already be a 4x increase over our old setup that has been running for several years...
* For nconnect, I don't need to change anything on the filer, do I?
* Also, for nconnect, I don't need more LIFs on the filer, do I? I have only one LIF per node...
* What nconnect number is a good starting point? Should I start with 4 and then test later with 8?
I would use fio to test those changes. I have some experience with it already.
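For example, something along these lines would let me compare a plain mount against an nconnect mount on the same export (paths and job parameters are just examples):

  fio --name=nfs-test --directory=/mnt/pve/nfs-v3-root-01 \
      --rw=randwrite --bs=64k --size=4g --numjobs=4 \
      --ioengine=libaio --direct=1 --runtime=120 --time_based --group_reporting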
Best regards, Flo
________________________________
From: Parisi, Justin <Justin.Parisi@netapp.com>
Sent: Wednesday, 12 November 2025 18:38
To: Florian Schmid <fschmid@ubimet.com>
Cc: toasters <toasters@teaparty.net>
Subject: RE: Proxmox and NetApp NFS volumes

* NFSv4.x does add statefulness to connections, which can increase failover times compared to NFSv3 (up to 90 seconds, but tunable down a bit)
* Changing the TCP transfer size requires a remount of existing volumes, so you would want a maintenance window for that
* Nconnect requires no ONTAP changes, but the client OS needs to support nconnect as a mount option.
* I would start with 4 for nconnect and keep it to 8 max
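For the Proxmox side, a sketch of how that could be expressed in /etc/pve/storage.cfg, reusing the storage from earlier in the thread (whether nconnect is honoured when passed through the options field should be verified on the Proxmox release in use):

  nfs: nfs-v3-root-01
          export /pve_DC_nfs_root_01
          server nfs-root-01
          path /mnt/pve/nfs-v3-root-01
          content images
          options vers=3,nconnect=4,hard,proto=tcp,timeo=600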
________________________________

Hello,
I have now done a lot of tests with nconnect, and what can I say: the differences are huge.
I have checked our Grafana dashboards and the block size is always below 64k, so I won't change rsize/wsize, but I will definitely use nconnect.
Thank you very much for your detailed answers and help.
Best regards, Flo