You can modify the NFS server: [-tcp-max-xfer-size <integer>] - TCP Maximum Transfer Size (bytes) (privilege: advanced)
This optional parameter specifies the maximum transfer size (in bytes) that the storage system negotiates with the client for TCP transport of data for the NFSv3 and NFSv4.x protocols. The range is 8192 to 1048576. Setting it to the maximum of 1048576 allows the client's rsize/wsize to be set to the same value.
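For example, from the cluster shell (a sketch; "svm1" is a placeholder for your SVM name):

    cluster::> set -privilege advanced
    cluster::*> vserver nfs modify -vserver svm1 -tcp-max-xfer-size 1048576
    cluster::*> vserver nfs show -vserver svm1 -fields tcp-max-xfer-size

Note that existing mounts keep whatever size they negotiated at mount time; clients need to remount to pick up the larger maximum.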
If your NFS client supports it, setting "nconnect" to 4 can also help; see the example below.
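On the client that would look something like this (hypothetical export and mount point; nconnect needs a Linux 5.3 or newer kernel):

    mount -t nfs -o vers=3,nconnect=4,rsize=1048576,wsize=1048576 \
        nfs-server:/export /mnt/nfs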
On Wed, Nov 12, 2025 at 5:56 AM Florian Schmid via toasters <toasters@lists.teaparty.net> wrote:

Hello Justin,
Thank you very much for your reply.
Actually, we won't need the additional security NFSv4 would bring. Advanced locking is also not an issue, as each VM disk is only ever used on a single node.
Do you have any real-life experience with NetApp and Proxmox using NFS? NFSv3 has always worked for us; should we stay with it?
When using NFSv3, rsize and wsize are always at 65536. Can this be increased, and if so, should I do it? We have 25 Gbit from Proxmox to the NetApp with an MTU of 1500.
Any other tweaks here: rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.4.3,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=172.16.4.3 0 0
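For reference, this is how I check what the client actually negotiated after mounting (run on the Proxmox node):

    nfsstat -m

It lists each NFS mount with the effective rsize/wsize and the other options.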
Best regards, Flo
From: Parisi, Justin <Justin.Parisi@netapp.com>
Sent: Monday, 10 November 2025 16:10
To: tmac <tmacmd@gmail.com>; Florian Schmid <fschmid@ubimet.com>
Cc: toasters <toasters@teaparty.net>
Subject: RE: Proxmox and NetApp NFS volumes
With any move from NFSv3 to NFSv4.x, you have to take into consideration the major changes in how the protocols work. In a virt use case, you mainly care about the change from stateless to stateful, where storage failovers can introduce more delay/disruption in NFSv4.x than in v3, due to locks and states. NFSv4.2 has no mount options that avoid this, but you can tune some of the lock grace periods on the SVM if needed.
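If you do go to NFSv4.x, those timers live on the SVM's NFS server; a sketch at advanced privilege ("svm1" is a placeholder, and sensible values depend on your release, so check the TR below first):

    cluster::> set -privilege advanced
    cluster::*> vserver nfs show -vserver svm1 -fields v4-grace-seconds,v4-lease-seconds
    cluster::*> vserver nfs modify -vserver svm1 -v4-grace-seconds 45

Keep in mind the lease time has to stay below the grace period.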
TR-4067 covers the differences in the protocol versions on page 42 and has some information about nondisruptive operations on page 63.
https://www.netapp.com/pdf.html?item=/media/10720-tr-4067.pdf
While NFSv3 is indeed old, it is still very viable for most use cases, provided you don’t require the added security or locking benefits of NFSv4.x.
From: tmac <tmacmd@gmail.com>
Sent: Friday, November 7, 2025 10:32 PM
To: Florian Schmid <fschmid@ubimet.com>
Cc: toasters <toasters@teaparty.net>
Subject: Re: Proxmox and NetApp NFS volumes
https://docs.netapp.com/us-en/netapp-solutions-virtualization/proxmox/index.html
On Fri, Nov 7, 2025 at 7:52 AM Florian Schmid via toasters <toasters@lists.teaparty.net> wrote:
Hello,
We are moving to Proxmox now, and I need some help from you.
In the past I have always used NFSv3, because in our KVM environment we had issues with NFSv4.1 and head failovers -> VMs got paused.
I think VMware had similar issues. The problem, I think, is that NFSv4.1 is stateful, and on a head failover that state got lost.
I'm on 9.14.1.
As NFSv3 is quite old and Proxmox uses a newer kernel, I wanted to ask if any of you have experience with Proxmox and NFS volumes from NetApp.
Do you use special mount options?
Does NFSv4.2 work fine with head failovers -> VMs don't pause or don't even notice it?
We have 25 Gbit network interfaces as a bond to our NetApp; are there any special options to configure?
Are you using "cache=none" on the VM disks, or something different?
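To illustrate what I mean, per disk it would look something like this (hypothetical VM ID and storage name):

    qm set 100 --scsi0 nfs-v3-root-01:vm-100-disk-0,cache=none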
Unfortunately, the Proxmox community doesn't have much experience with NetApp either, so I hope someone here can help me create a good, fast, and stable setup.
This is how a mount point looks with the default options from Proxmox and NFSv3:
nfs-root-01:/pve_DC_nfs_root_01 /mnt/pve/nfs-v3-root-01 nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.4.3,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=172.16.4.3 0 0
This is from our old virtualization platform, oVirt:
nfs-root-01:/oVirt_DC_nfs_root_01 /rhev/data-center/mnt/nfs-root-01:_oVirt__DC__nfs__root__01 nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.4.3,mountvers=3,mountport=635,mountproto=udp,local_lock=all,addr=172.16.4.3 0 0
- oVirt was using soft mounts, but I think I shouldn't use that on Proxmox and should use hard instead. oVirt had a special watchdog for this.
Best regards,
Florian