Title: NIC bonding on CentOS 7 with the bond driver

Author: 王亮    Time: 2020-12-13 16:16

(1) Lab environment
Physical NICs: eth0, eth1
Bonded virtual interface: bond0
IP address: 192.168.133.23
Gateway: 192.168.133.2
Netmask: 255.255.255.0
DNS: 8.8.8.8

(2) Load and verify the bonding module

[root@localhost ~]# modprobe --first-time bonding
[root@localhost ~]# lsmod|grep bonding           

bonding               132885  0

Once the module is loaded, the bond0 interface becomes visible:

[root@localhost ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:94:b5:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.133.23/24 brd 192.168.133.255 scope global eth0
    inet6 fe80::20c:29ff:fe94:b529/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:0c:29:94:b5:33 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 66:51:2f:37:a2:31 brd ff:ff:ff:ff:ff:ff
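
(Optional) You can also confirm that the module accepts the parameters used later in step (5); mode and miimon should both appear in the parameter list:

[root@localhost ~]# modinfo bonding | grep parm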

(3) Configure the virtual interface bond0

Create the ifcfg-bond0 file under the /etc/sysconfig/network-scripts/ directory:

vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.133.23
NETMASK=255.255.255.0
GATEWAY=192.168.133.2
DNS1=8.8.8.8
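
Side note: on CentOS 7 the bonding parameters can also be set directly in ifcfg-bond0 through BONDING_OPTS, which makes the modprobe.d options file in step (5) unnecessary. A minimal sketch of that variant:

DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=0 miimon=100"
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.133.23
NETMASK=255.255.255.0
GATEWAY=192.168.133.2
DNS1=8.8.8.8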

(4) Configure the physical NICs eth0 and eth1

vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes

vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
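
If NetworkManager manages these interfaces instead of the legacy network service, an equivalent bond can be built with nmcli (a sketch; the connection names are arbitrary):

[root@localhost ~]# nmcli con add type bond con-name bond0 ifname bond0 mode balance-rr ip4 192.168.133.23/24 gw4 192.168.133.2
[root@localhost ~]# nmcli con mod bond0 ipv4.dns 8.8.8.8
[root@localhost ~]# nmcli con add type bond-slave con-name eth0-slave ifname eth0 master bond0
[root@localhost ~]# nmcli con add type bond-slave con-name eth1-slave ifname eth1 master bond0
[root@localhost ~]# nmcli con up bond0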

(5) Edit the modprobe configuration

vim /etc/modprobe.d/bonding.conf
alias bond0 bonding

options bond0 miimon=100 mode=0   # mode 0; miimon enables MII link monitoring, and the value is the polling interval in ms
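
For reference, the bonding driver supports modes 0 through 6:

mode=0 (balance-rr)    round-robin: load balancing plus fault tolerance
mode=1 (active-backup) one active slave at a time; fault tolerance only
mode=2 (balance-xor)   transmits based on a MAC-address hash
mode=3 (broadcast)     transmits everything on all slaves
mode=4 (802.3ad)       dynamic link aggregation (LACP); requires switch support
mode=5 (balance-tlb)   adaptive transmit load balancing
mode=6 (balance-alb)   adaptive load balancing (tlb plus receive balancing)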

(6) Restart and test
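
To apply the configuration, restart the legacy network service (assuming NetworkManager is not managing these interfaces):

[root@localhost ~]# systemctl restart network

After the restart, both physical NICs carry the SLAVE flag and bond0 is UP: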

[root@localhost network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:94:b5:29 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP qlen 1000
    link/ether 00:0c:29:94:b5:29 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 00:0c:29:94:b5:29 brd ff:ff:ff:ff:ff:ff
    inet 192.168.133.23/24 brd 192.168.133.255 scope global bond0
    inet6 fe80::20c:29ff:fe94:b529/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever

Check the bonding status:

[root@localhost bonding]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 200
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:0c:29:94:b5:29
Slave queue ID: 0

Slave Interface: eth1
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:0c:29:94:b5:33
Slave queue ID: 0
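
A quick failover test, assuming both slaves show "MII Status: up" (in the capture above eth1 is still down): ping a remote host such as the gateway, take one slave offline, and confirm the ping keeps running:

[root@localhost ~]# ping 192.168.133.2          (leave running in one terminal)
[root@localhost ~]# ip link set eth0 down       (in a second terminal)
[root@localhost ~]# grep -A1 'Slave Interface' /proc/net/bonding/bond0
[root@localhost ~]# ip link set eth0 up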





