hi,
I am migrating two 7-Mode iSCSI LUNs to an iSCSI SVM on a cDOT cluster (8.3.2). The migration itself went fine; I am now testing the LUNs from the client OS (Oracle Linux 5.6; I have no say in the choice of client OS).
So I have followed the instructions here: https://library.netapp.com/ecmdocs/ECMP1654943/html/index.html
I can see the LUNs, but each of the two LUNs now shows up four times, because the cDOT cluster has four iSCSI LIFs.
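I assume the four copies of each LUN simply correspond to one iSCSI session per LIF; listing the sessions should confirm that (I expect to see four):

# iscsiadm -m session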
So I need to configure multipathd.
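For completeness: multipathd itself is enabled and running; on this OL 5.6 box that was roughly the standard RHEL 5 routine (so take this as a sketch, not gospel):

# modprobe dm-multipath
# chkconfig multipathd on
# service multipathd start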
I have this in /etc/multipath.conf:
defaults {
        user_friendly_names yes
}

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^sd[a-c]"
}

multipaths {
        multipath {
                wwid  3600a098038303370562b4946426b6742
                alias netapp_oracle_recovery
        }
        multipath {
                wwid  3600a098038303370562b4946426b6743
                alias netapp_oracle_datafile
        }
}

devices {
        device {
                vendor                  "NETAPP"
                product                 "LUN"
                path_grouping_policy    group_by_prio
                features                "1 queue_if_no_path"
                prio_callout            "/sbin/mpath_prio_alua /dev/%n"
                path_checker            directio
                path_selector           "round-robin 0"
                failback                immediate
                hardware_handler        "1 alua"
                rr_weight               uniform
                rr_min_io               128
                getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        }
}
I blacklist sda, sdb and sdc (local VMware disks).
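(The devnode blacklist works, but since sd* names can shift after a rescan or reboot, I could also blacklist the local disks by WWID instead; the same scsi_id call the config already uses would give me the ids, for example:

/sbin/scsi_id -g -u -s /block/sda

and then a blacklist { wwid <that id> } entry in multipath.conf. Just mentioning it in case the devnode approach is frowned upon.)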
This is the output of multipath -ll:
# multipath -ll
netapp_oracle_datafile (3600a098038303370562b4946426b6743) dm-3 NETAPP,LUN C-Mode
size=350G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 6:0:0:1 sdg 8:96  active ready running
| `- 3:0:0:1 sdf 8:80  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 4:0:0:1 sdj 8:144 active ready running
  `- 5:0:0:1 sdk 8:160 active ready running
netapp_oracle_recovery (3600a098038303370562b4946426b6742) dm-2 NETAPP,LUN C-Mode
size=300G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 3:0:0:0 sdd 8:48  active ready running
| `- 6:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 5:0:0:0 sdh 8:112 active ready running
  `- 4:0:0:0 sdi 8:128 active ready running
And after installing the NetApp host utilities, this is the output of sanlun lun show -p:
# sanlun lun show -p
        ONTAP Path: ALR-SVM-iSCSI:/vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile
               LUN: 1
          LUN Size: 350.1g
           Product: cDOT
       Host Device: netapp_oracle_datafile(3600a098038303370562b4946426b6743)
  Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdg     host6        iscsi_lif02b
up        primary    sdf     host3        iscsi_lif02a
up        secondary  sdj     host4        iscsi_lif01a
up        secondary  sdk     host5        iscsi_lif01b
        ONTAP Path: ALR-SVM-iSCSI:/vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery
               LUN: 0
          LUN Size: 300g
           Product: cDOT
       Host Device: netapp_oracle_recovery(3600a098038303370562b4946426b6742)
  Multipath Policy: round-robin 0
Multipath Provider: Native
--------- ---------- ------- ------------ ----------------------------------------------
host      vserver
path      path       /dev/   host         vserver
state     type       node    adapter      LIF
--------- ---------- ------- ------------ ----------------------------------------------
up        primary    sdd     host3        iscsi_lif02a
up        primary    sde     host6        iscsi_lif02b
up        secondary  sdh     host5        iscsi_lif01b
up        secondary  sdi     host4        iscsi_lif01a
And this is the output of sanlun lun show:

# sanlun lun show
controller(7mode/E-Series)/                                                              device     host      lun
vserver(cDOT/FlashRay)   lun-pathname                                                    filename   adapter   protocol   size     product
-----------------------------------------------------------------------------------------------------------------------------------------------
ALR-SVM-iSCSI   /vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile   /dev/sdk   host5   iSCSI   350.1g   cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery   /dev/sdi   host4   iSCSI   300g     cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile   /dev/sdj   host4   iSCSI   350.1g   cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery   /dev/sdh   host5   iSCSI   300g     cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile   /dev/sdf   host3   iSCSI   350.1g   cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_datafile/q_oracle_datafile/lun_oracle_datafile   /dev/sdg   host6   iSCSI   350.1g   cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery   /dev/sde   host6   iSCSI   300g     cDOT
ALR-SVM-iSCSI   /vol/vol_oracle_recovery/q_oracle_recovery/lun_oracle_recovery   /dev/sdd   host3   iSCSI   300g     cDOT
In /dev/mapper I see this:
ls -l /dev/mapper/
total 0
crw------- 1 root root  10, 62 Jul 13 14:00 control
brw-rw---- 1 root disk 253,  3 Jul 13 14:01 netapp_oracle_datafile
brw-rw---- 1 root disk 253,  4 Jul 13 14:01 netapp_oracle_datafilep1
brw-rw---- 1 root disk 253,  2 Jul 13 14:01 netapp_oracle_recovery
brw-rw---- 1 root disk 253,  5 Jul 13 14:01 netapp_oracle_recoveryp1
brw-rw---- 1 root disk 253,  0 Jul 13 14:00 VolGroup00-LogVol00
brw-rw---- 1 root disk 253,  1 Jul 13 14:00 VolGroup00-LogVol01
Is this OK? What do I have to use in /etc/fstab: /dev/mapper/netapp_oracle_datafile and /dev/mapper/netapp_oracle_recovery? Or am I doing something terribly wrong? I called support, but as this is not break/fix stuff, they recommended calling professional services ...
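In case it clarifies what I mean, this is the kind of /etc/fstab entry I had in mind (the mount points and the ext3 filesystem type are just placeholders; I am guessing I should point at the p1 partition maps rather than the whole-LUN devices, and use _netdev so the mounts wait for the iSCSI service):

/dev/mapper/netapp_oracle_datafilep1   /u02/oradata    ext3   _netdev,defaults   0 0
/dev/mapper/netapp_oracle_recoveryp1   /u02/orarecov   ext3   _netdev,defaults   0 0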
Apologies for the wall of text.
Thanks in advance.
-- Regards, natxo