hi,
I am migrating two 7-Mode iSCSI LUNs to an iSCSI SVM on a cDOT cluster (8.3.2). The migration itself is fine; I am now testing the LUNs from the client OS (Oracle Linux 5.6; I do not have any say in the client OS).
So I have followed the instructions in here: https://library.netapp.com/ecmdocs/ECMP1654943/html/index.html
I see the LUNs, but now I see 2 x 4 LUNs, because the cDOT cluster has 4 iSCSI LIFs.
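(For reference, a quick way to confirm that one iSCSI session is logged in per LIF; just a minimal sketch, output not shown here:)

# list the iSCSI sessions; with 4 LIFs there should be 4 sessions
iscsiadm -m session
# per-session detail (target portal, interface, SID)
iscsiadm -m session -P 1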
So I need to configure multipathd.
I have this /etc/multipath.conf:
defaults {
    user_friendly_names yes
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^sd[a-c]"
}

multipaths {
    multipath {
        wwid  3600a098038303370562b4946426b6742
        alias netapp_oracle_recovery
    }
    multipath {
        wwid  3600a098038303370562b4946426b6743
        alias netapp_oracle_datafile
    }
}

devices {
    device {
        vendor               "NETAPP"
        product              "LUN"
        path_grouping_policy group_by_prio
        features             "1 queue_if_no_path"
        prio_callout         "/sbin/mpath_prio_alua /dev/%n"
        path_checker         directio
        path_selector        "round-robin 0"
        failback             immediate
        hardware_handler     "1 alua"
        rr_weight            uniform
        rr_min_io            128
        getuid_callout       "/sbin/scsi_id -g -u -s /block/%n"
    }
}
I blacklist sda, sdb and sdc (local VMware disks).
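(As a sanity check, a minimal sketch of how the WWIDs and the blacklist can be verified; the scsi_id invocation is the same one used in getuid_callout above:)

# print the WWID of a path device; it should match the wwid lines in multipath.conf
/sbin/scsi_id -g -u -s /block/sdd
# verbose dry run of the multipath configuration; shows which devices get blacklisted
multipath -v3 -d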
This is the output of multipath -ll:
# multipath -ll
netapp_oracle_datafile (3600a098038303370562b4946426b6743) dm-3 NETAPP,LUN C-Mode
size=350G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 6:0:0:1 sdg 8:96  active ready running
| `- 3:0:0:1 sdf 8:80  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 4:0:0:1 sdj 8:144 active ready running
  `- 5:0:0:1 sdk 8:160 active ready running
netapp_oracle_recovery (3600a098038303370562b4946426b6742) dm-2 NETAPP,LUN C-Mode
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 3:0:0:0 sdd 8:48  active ready running
| `- 6:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 5:0:0:0 sdh 8:112 active ready running
  `- 4:0:0:0 sdi 8:128 active ready running
And after installing the NetApp host utilities, this is the output of sanlun lun show -p:
# sanlun lun show -p
        ONTAP Path: ALR-SVM-iSCSI:/vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile
               LUN: 1
          LUN Size: 350.1g
           Product: cDOT
       Host Device: netapp_oracle_datafile(3600a098038303370562b4946426b6743)
  Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdg     host6        iscsi_lif02b
up        primary    sdf     host3        iscsi_lif02a
up        secondary  sdj     host4        iscsi_lif01a
up        secondary  sdk     host5        iscsi_lif01b
        ONTAP Path: ALR-SVM-iSCSI:/vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery
               LUN: 0
          LUN Size: 300g
           Product: cDOT
       Host Device: netapp_oracle_recovery(3600a098038303370562b4946426b6742)
  Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdd     host3        iscsi_lif02a
up        primary    sde     host6        iscsi_lif02b
up        secondary  sdh     host5        iscsi_lif01b
up        secondary  sdi     host4        iscsi_lif01a
And this is sanlun lun show:

sanlun lun show
controller(7mode/E-Series)/                                                        device     host       lun
vserver(cDOT/FlashRay)   lun-pathname                                              filename   adapter    protocol   size     product
-----------------------------------------------------------------------------------------------------------------------------------------------
ALR-SVM-iSCSI   /vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile     /dev/sdk   host5      iSCSI      350.1g   cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery     /dev/sdi   host4      iSCSI      300g     cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile     /dev/sdj   host4      iSCSI      350.1g   cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery     /dev/sdh   host5      iSCSI      300g     cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile     /dev/sdf   host3      iSCSI      350.1g   cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile     /dev/sdg   host6      iSCSI      350.1g   cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery     /dev/sde   host6      iSCSI      300g     cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery     /dev/sdd   host3      iSCSI      300g     cDOT
In /dev/mapper I see this:
ls -l /dev/mapper/
total 0
crw------- 1 root root  10, 62 Jul 13 14:00 control
brw-rw---- 1 root disk 253,  3 Jul 13 14:01 netapp_oracle_datafile
brw-rw---- 1 root disk 253,  4 Jul 13 14:01 netapp_oracle_datafilep1
brw-rw---- 1 root disk 253,  2 Jul 13 14:01 netapp_oracle_recovery
brw-rw---- 1 root disk 253,  5 Jul 13 14:01 netapp_oracle_recoveryp1
brw-rw---- 1 root disk 253,  0 Jul 13 14:00 VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  1 Jul 13 14:00 VolGroup00-LogVol01
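(The ...p1 entries are the partition maps that kpartx creates on top of the multipath devices; a minimal sketch of how they can be listed or recreated by hand, assuming kpartx is installed along with device-mapper-multipath:)

# list the partition maps of a multipath device without changing anything
kpartx -l /dev/mapper/netapp_oracle_datafile
# (re)create the partition maps, e.g. after repartitioning the LUN
kpartx -a /dev/mapper/netapp_oracle_datafile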
Is this OK? What do I have to use in /etc/fstab: /dev/mapper/netapp_oracle_datafile and /dev/mapper/netapp_oracle_recovery? Or am I doing something terribly wrong? I called support, but as this is not break/fix stuff, they recommend calling professional services ...
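(To make the question concrete, this is the kind of entry I have in mind; the mount points and the ext3 filesystem type are assumptions on my part:)

# /etc/fstab sketch; _netdev postpones mounting until the network (and thus iSCSI) is up
/dev/mapper/netapp_oracle_datafilep1   /oradata       ext3   _netdev,defaults   0 0
/dev/mapper/netapp_oracle_recoveryp1   /orarecovery   ext3   _netdev,defaults   0 0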
Apologies for the wall of text.
Thanks in advance.
-- Regards, natxo
Hi,
Please see inline.
On Wed, Jul 13, 2016 at 2:24 PM, Natxo Asenjo natxo.asenjo@gmail.com wrote:
So I need to configure multipathd.
I have this /etc/multipath.conf
defaults { user_friendly_names yes }
I suggest you consult NetApp's documentation for the "Host Utilities"; it has specific recommended settings per OS release. IMO, "user_friendly_names no" is recommended.
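A minimal sketch of that setting; note that an explicit alias in the multipaths section still overrides it:

defaults {
    user_friendly_names no
}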
# multipath -ll
netapp_oracle_datafile (3600a098038303370562b4946426b6743) dm-3 NETAPP,LUN C-Mode
size=350G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 6:0:0:1 sdg 8:96  active ready running
| `- 3:0:0:1 sdf 8:80  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 4:0:0:1 sdj 8:144 active ready running
  `- 5:0:0:1 sdk 8:160 active ready running
netapp_oracle_recovery (3600a098038303370562b4946426b6742) dm-2 NETAPP,LUN C-Mode
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 3:0:0:0 sdd 8:48  active ready running
| `- 6:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 5:0:0:0 sdh 8:112 active ready running
  `- 4:0:0:0 sdi 8:128 active ready running
It looks OK, but if you can provide me with the output of "lsblk", I can tell you more about it.
It's important that the dm-multipath kernel module and the proper multipath.conf are included in the initrd file, a.k.a. the RAM disk. Otherwise the output above can look different after the OS reboots.
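A sketch of rebuilding the initrd on RHEL/OL 5 after changing /etc/multipath.conf (whether the config really ends up inside depends on the setup, so verify it afterwards):

# back up the current image, then rebuild it for the running kernel
cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)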
Is this OK? What do I have to use in /etc/fstab: /dev/mapper/netapp_oracle_datafile and /dev/mapper/netapp_oracle_recovery? Or am I doing something terribly wrong? I called support, but as this is not break/fix stuff, they recommend calling professional services ...
NetApp support rocks, right? =)
Are you planning to use Linux LVM on top of your multipath devices? If yes, then you need to add those to /etc/fstab.
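If you do go the LVM route, a minimal sketch (volume group, logical volume and filesystem names are made-up examples):

# put LVM on the multipath map, not on the individual sd* paths
pvcreate /dev/mapper/netapp_oracle_datafile
vgcreate vg_oradata /dev/mapper/netapp_oracle_datafile
lvcreate -l 100%FREE -n lv_oradata vg_oradata
mkfs.ext3 /dev/vg_oradata/lv_oradata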
The best thing you can do is the following: once you think you've done everything correctly SAN-wise, and before going "production" with it, do extensive testing, including target (filer) and initiator (your server) reboots. It will either reveal issues or you are going to see how awesome it is to have redundancy in place =)
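A sketch of the kind of check I mean; keep it running on the host while a node reboots or a takeover happens (the node name in the cDOT command is made up):

# on the Linux host: watch the path states during the failover tests
while true; do date; multipath -ll; sleep 10; done
# on the cluster, for example:
# storage failover takeover -ofnode cluster01-02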
Cheers, Vladimir
On Sun, Jul 17, 2016 at 5:48 PM, Momonth momonth@gmail.com wrote:
Hi,
Please see inline.
On Wed, Jul 13, 2016 at 2:24 PM, Natxo Asenjo natxo.asenjo@gmail.com wrote:
So I need to configure multipathd.
I have this /etc/multipath.conf
defaults { user_friendly_names yes }
I suggest you consult NetApp's documentation for the "Host Utilities"; it has specific recommended settings per OS release. IMO, "user_friendly_names no" is recommended.
ok, thanks, will do.
# multipath -ll
netapp_oracle_datafile (3600a098038303370562b4946426b6743) dm-3 NETAPP,LUN C-Mode
size=350G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 6:0:0:1 sdg 8:96  active ready running
| `- 3:0:0:1 sdf 8:80  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 4:0:0:1 sdj 8:144 active ready running
  `- 5:0:0:1 sdk 8:160 active ready running
netapp_oracle_recovery (3600a098038303370562b4946426b6742) dm-2 NETAPP,LUN C-Mode
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 3:0:0:0 sdd 8:48  active ready running
| `- 6:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 5:0:0:0 sdh 8:112 active ready running
  `- 4:0:0:0 sdi 8:128 active ready running
It looks OK, but if you can provide me with the output of "lsblk", I can tell you more about it.
Unfortunately this OS (Oracle Linux 5.6) does not have that command.
It's important that the dm-multipath kernel module and the proper multipath.conf are included in the initrd file, a.k.a. the RAM disk. Otherwise the output above can look different after the OS reboots.
aha, will take a look at that as well.
Is this OK? What do I have to use in /etc/fstab: /dev/mapper/netapp_oracle_datafile and /dev/mapper/netapp_oracle_recovery? Or am I doing something terribly wrong? I called support, but as this is not break/fix stuff, they recommend calling professional services ...
NetApp support rocks, right? =)
Mostly yes, I must say, but this time this list rocks ;-)
Are you planning to use Linux LVM on top of your multipath devices? If yes, then you need to add those to /etc/fstab.
The best thing you can do is the following: once you think you've done everything correctly SAN-wise, and before going "production" with it, do extensive testing, including target (filer) and initiator (your server) reboots. It will either reveal issues or you are going to see how awesome it is to have redundancy in place =)
Thanks for the tips!
On Mon, Jul 18, 2016 at 9:10 AM, Natxo Asenjo natxo.asenjo@gmail.com wrote:
It looks OK, but if you can provide me with the output of "lsblk", I can tell you more about it.
Unfortunately this OS (Oracle Linux 5.6) does not have that command.
On my CentOS 6.7 box, "lsblk" comes with the "util-linux-ng" RPM package; maybe you need to install it, if it is available.
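A quick way to check whether any package provides it on Oracle Linux 5 (it may simply not be packaged for EL5):

# ask yum which package, if any, ships lsblk
yum provides "*/lsblk"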
Vladimir