Solaris 10: Configure IPMP

Published: 2020-08-07 19:15:54  Source: ITPUB Blog  Reads: 165  Author: leodinas_kong  Category: Server Setup

1. Link-Based IPMP

Request:

Configure the IP address “192.168.2.50” on e1000g1 and e1000g2 using link-based IPMP.

 

Step:1

Find out the installed NICs on the system and their status, and verify the ifconfig output as well.
Make sure the NICs are up and not already in use.
Arena-Node1#dladm show-dev
e1000g0         link: up        speed: 1000  Mbps       duplex: full
e1000g1         link: up        speed: 1000  Mbps       duplex: full
e1000g2         link: up        speed: 1000  Mbps       duplex: full
Arena-Node1#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
ether 0:c:29:ec:b3:af
Arena-Node1#


Step:2
Add the IP address to /etc/hosts and specify the netmask in /etc/netmasks, as shown below. The eeprom command sets local-mac-address?=true so that each physical interface uses its own factory-assigned MAC address, which IPMP requires (relevant on SPARC systems).
Arena-Node1#cat /etc/hosts |grep 192.168.2.50
192.168.2.50    arenagroupIP
Arena-Node1#cat /etc/netmasks |grep 192.168.2
192.168.2.0     255.255.255.0
Arena-Node1#eeprom "local-mac-address?=true"
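If you only want to check the current value before (or after) changing it, you can query the variable by name; the quotes simply keep the shell from treating the trailing ? as a glob, and the value shown here is the one we want to end up with:
Arena-Node1#eeprom "local-mac-address?"
local-mac-address?=true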


Step:3
Plumb the interfaces that you are going to use for the new IP address, then check their status in the ifconfig output.
Arena-Node1#ifconfig e1000g1 plumb
Arena-Node1#ifconfig e1000g2 plumb
Arena-Node1#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
ether 0:c:29:ec:b3:af
e1000g1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 0.0.0.0 netmask 0
ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask 0
ether 0:c:29:ec:b3:c3
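Before grouping them, it is worth confirming that both newly plumbed interfaces still report an active link; dladm show-dev accepts an interface name, so you can check them individually:
Arena-Node1#dladm show-dev e1000g1
e1000g1         link: up        speed: 1000  Mbps       duplex: full
Arena-Node1#dladm show-dev e1000g2
e1000g2         link: up        speed: 1000  Mbps       duplex: full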



Step:4
Configure the IP on the primary interface and add both interfaces to an IPMP group, using a group name of your choice.
Arena-Node1#ifconfig e1000g1 192.168.2.50 netmask 255.255.255.0 broadcast + up
Arena-Node1#ifconfig e1000g1
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
ether 0:c:29:ec:b3:b9
Arena-Node1#ifconfig e1000g1 group arenagroup-1
Arena-Node1#ifconfig e1000g2 group arenagroup-1
Arena-Node1#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
ether 0:c:29:ec:b3:af
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
groupname arenagroup-1
ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask 0
groupname arenagroup-1
ether 0:c:29:ec:b3:c3
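At this point the group address should answer locally. A quick reachability check (nothing IPMP-specific, just the standard ping) looks like this when everything is up:
Arena-Node1#ping 192.168.2.50
192.168.2.50 is alive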


----------------------------

Step:5
Now we have to verify that IPMP is working correctly. This can be done in two ways.

i. Test 1: Remove the primary LAN cable and observe the behavior. Here I have removed the LAN cable from e1000g1; let's see what happens.
Arena-Node1#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
ether 0:c:29:ec:b3:af
e1000g1: flags=19000802<BROADCAST,MULTICAST,IPv4,NOFAILOVER,FAILED> mtu 0 index 3
inet 0.0.0.0 netmask 0
groupname arenagroup-1
ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask 0
groupname arenagroup-1
ether 0:c:29:ec:b3:c3
e1000g2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
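You can also confirm what in.mpathd reacted to by checking the link state while the cable is out; assuming the same dladm output format as above, it should look roughly like this:
Arena-Node1#dladm show-dev e1000g1
e1000g1         link: down      speed: 0     Mbps       duplex: unknown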


Now I have connected the LAN cable back to e1000g1.
Arena-Node1#dladm show-dev
e1000g0         link: up        speed: 1000  Mbps       duplex: full
e1000g1         link: up        speed: 1000  Mbps       duplex: full
e1000g2         link: up        speed: 1000  Mbps       duplex: full
Arena-Node1#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
ether 0:c:29:ec:b3:af
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
groupname arenagroup-1
ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask 0
groupname arenagroup-1
ether 0:c:29:ec:b3:c3


Here the configured IP moves back to the original interface where it was running before, because "FAILBACK=yes" is set in /etc/default/mpathd. In the same file you can also set the failure detection time for in.mpathd with the "FAILURE_DETECTION_TIME" parameter, in milliseconds.
Arena-Node1#cat /etc/default/mpathd |grep -v "#"
FAILURE_DETECTION_TIME=10000
FAILBACK=yes
TRACK_INTERFACES_ONLY_WITH_GROUPS=yes
Arena-Node1#
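If you change /etc/default/mpathd on a running system, in.mpathd rereads the file when it receives a SIGHUP, so the new values can be applied without a reboot:
Arena-Node1#pkill -HUP in.mpathd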


ii. Test 2: Most Unix admins work from a remote site, so you will not be able to perform the test above. In this case, you can use the "if_mpadm" command to take the interface offline at the OS level.
First I am going to disable e1000g1 and see what happens.
Arena-Node1#if_mpadm -d e1000g1
Arena-Node1#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
ether 0:c:29:ec:b3:af
e1000g1: flags=89000842<BROADCAST,RUNNING,MULTICAST,IPv4,NOFAILOVER,OFFLINE> mtu 0 index 3
inet 0.0.0.0 netmask 0
groupname arenagroup-1
ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask 0
groupname arenagroup-1
ether 0:c:29:ec:b3:c3
e1000g2:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255



Now I am going to re-enable it.
Arena-Node1#if_mpadm -r e1000g1
Arena-Node1#ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
e1000g0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.2.5 netmask ffffff00 broadcast 192.168.2.255
ether 0:c:29:ec:b3:af
e1000g1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.2.50 netmask ffffff00 broadcast 192.168.2.255
groupname arenagroup-1
ether 0:c:29:ec:b3:b9
e1000g2: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 4
inet 0.0.0.0 netmask 0
groupname arenagroup-1
ether 0:c:29:ec:b3:c3


In the same way, you can manually fail the IP over from one interface to another.

In both tests, we can clearly see the IP moving from e1000g1 to e1000g2 automatically without any issues, so we have successfully configured link-based IPMP on Solaris.
The failover events are logged in /var/adm/messages, as shown below.
Jun 26 20:57:24 node1 in.mpathd[3800]: [ID 215189 daemon.error] The link has gone down on e1000g1
Jun 26 20:57:24 node1 in.mpathd[3800]: [ID 594170 daemon.error] NIC failure detected on e1000g1 of group arenagroup-1
Jun 26 20:57:24 node1 in.mpathd[3800]: [ID 832587 daemon.error] Successfully failed over from NIC e1000g1 to NIC e1000g2
Jun 26 20:57:57 node1 in.mpathd[3800]: [ID 820239 daemon.error] The link has come up on e1000g1
Jun 26 20:57:57 node1 in.mpathd[3800]: [ID 299542 daemon.error] NIC repair detected on e1000g1 of group arenagroup-1
Jun 26 20:57:57 node1 in.mpathd[3800]: [ID 620804 daemon.error] Successfully failed back to NIC e1000g1
Jun 26 21:03:59 node1 in.mpathd[3800]: [ID 832587 daemon.error] Successfully failed over from NIC e1000g1 to NIC e1000g2
Jun 26 21:04:07 node1 in.mpathd[3800]: [ID 620804 daemon.error] Successfully failed back to NIC e1000g1
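On a busy system it is easier to filter these entries out of the messages file, for example:
Arena-Node1#grep in.mpathd /var/adm/messages | tail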


To make the above configuration persistent across reboots, create configuration files for both network interfaces. The group name in these files must match the one used above (arenagroup-1).
Arena-Node1#cat /etc/hostname.e1000g1
arenagroupIP netmask + broadcast + group arenagroup-1 up
Arena-Node1#cat /etc/hostname.e1000g2
group arenagroup-1 up
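If you prefer not to depend on the /etc/hosts alias, the same file can carry the literal address instead; this is an equivalent configuration, just without the name lookup:
Arena-Node1#cat /etc/hostname.e1000g1
192.168.2.50 netmask + broadcast + group arenagroup-1 up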
