Hi,
thank you very much for your answer. I already know that page, but it doesn't contain much information about NFS at all. Yes, NetApp uses NFS v4.1 and describes how to set up session trunking, but that's it. It also looks like NetApp based the guide on a roughly two-year-old Proxmox setup, while Proxmox has recently moved to version 9 with a newer Debian release, too.
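From what I understand of their guide, session trunking boils down to mounting the same export once per LIF, so the client adds each connection to a single NFSv4.1 session. A minimal sketch, assuming two LIFs and a kernel with max_connect support (5.16 or newer); the addresses and paths are placeholders from my own setup:

# first mount establishes the session, second adds a trunked connection to it
mount -t nfs -o vers=4.1,max_connect=2,hard 172.16.4.3:/pve_DC_nfs_root_01 /mnt/pve/nfs-root-01
mount -t nfs -o vers=4.1,max_connect=2,hard 172.16.4.4:/pve_DC_nfs_root_01 /mnt/pve/nfs-root-01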
I would like to hear about some real-life experiences.
Best regards,
Florian

________________________________
From: tmac <tmacmd@gmail.com>
Sent: Saturday, 8 November 2025 04:32
To: Florian Schmid <fschmid@ubimet.com>
Cc: toasters <toasters@teaparty.net>
Subject: Re: Proxmox and NetApp NFS volumes
https://docs.netapp.com/us-en/netapp-solutions-virtualization/proxmox/index....
Sent from Gmail Mobile.
On Fri, Nov 7, 2025 at 7:52 AM Florian Schmid via toasters <toasters@lists.teaparty.net> wrote:
---------- Forwarded message ----------
From: Florian Schmid <fschmid@ubimet.com>
To: toasters <toasters@teaparty.net>
Date: Fri, 7 Nov 2025 12:52:00 +0000
Subject: Proxmox and NetApp NFS volumes

Hello,
we are moving to Proxmox now and I would need some help from you.
In the past, I have always used NFSv3, because in our KVM environment we had issues with NFSv4.1 and head fail-overs -> VMs got paused. I think VMware had similar issues. The problem, I believe, is that NFSv4.1 is stateful, and on a head fail-over that state gets lost.
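If the pauses really come from v4 state recovery, the length of the server-side grace period should matter. As a sketch of what I would look at on the SVM (assuming ONTAP's v4-grace-seconds option; the value here is only illustrative):

vserver nfs modify -vserver <svm_name> -v4-grace-seconds 45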
I'm on 9.14.1.
As NFSv3 is quite old and Proxmox uses a newer kernel, I wanted to ask if any of you have experience with Proxmox and NFS volumes from NetApp.
Do you use special mount options? Does NFSv4.2 work fine with head fail-overs, i.e. the VMs don't pause or don't even notice the fail-over? We have 25 Gbit network interfaces in a bond to our NetApp; are there any special options to configure here? Are you using "cache=none" on the VM disks, or something different?
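To make the question more concrete, this is roughly what I would try in /etc/pve/storage.cfg; forcing the NFS version via the options line is my assumption, and nconnect only helps if the kernel supports it:

nfs: nfs-root-01
    server 172.16.4.3
    export /pve_DC_nfs_root_01
    path /mnt/pve/nfs-root-01
    content images
    options vers=4.2,hard,timeo=600,nconnect=4

The cache mode would then be set per disk in the VM config, e.g. scsi0: nfs-root-01:100/vm-100-disk-0.qcow2,cache=none.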
Unfortunately, Proxmox doesn't have much experience with NetApp either, so I hope someone among you can help me create a good, fast and stable setup.
This is what a mount looks like with the default options from Proxmox and NFSv3:

nfs-root-01:/pve_DC_nfs_root_01 /mnt/pve/nfs-v3-root-01 nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=172.16.4.3,mountvers=3,mountport=635,mountproto=udp,local_lock=none,addr=172.16.4.3 0 0
This is from our old virtualization platform, oVirt:

nfs-root-01:/oVirt_DC_nfs_root_01 /rhev/data-center/mnt/nfs-root-01:_oVirt__DC__nfs__root__01 nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,soft,nolock,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.4.3,mountvers=3,mountport=635,mountproto=udp,local_lock=all,addr=172.16.4.3 0 0
* oVirt was using soft mounts, but I think I shouldn't use that on Proxmox and should use hard instead; oVirt had a special watchdog for this.
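For comparison, the hard-mount line I have in mind would be along these lines (untested; the version and timeout values are my own guesses, not from any docs):

nfs-root-01:/pve_DC_nfs_root_01 /mnt/pve/nfs-root-01 nfs vers=4.2,hard,proto=tcp,timeo=600,retrans=2,sec=sys 0 0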
Best regards, Florian