Linux Network Management: Configuring NIC Aliases and NIC Bonding

In day-to-day operations work you sometimes need to configure multiple IP addresses on a single physical NIC, which is the idea behind NIC sub-interfaces (aliases), and sometimes need to bond several NICs together so that they share one IP address. Below I walk through both setups in detail.

Creating a NIC Sub-interface

In CentOS the network is managed by the NetworkManager service, which provides a graphical interface. This service does not support configuring sub-interfaces on physical NICs, however, so we need to stop it before setting one up.

Stop it for the current session: service NetworkManager stop

Disable it permanently: chkconfig NetworkManager off

If you only need a temporary sub-interface, create it like this:

[root@server ~]# ip addr add 10.1.252.100/16 dev eth0 label eth0:0

Note: this address is lost as soon as the network service is restarted.
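The same command family also removes the address again. Below is a minimal sketch of the add/verify/delete cycle; it assumes the device is called eth0 and guards the calls with a root check, since ip addr needs privileges:

```shell
#!/bin/bash
# Add a temporary second address labelled eth0:0, show it, then remove it.
# Guarded: "ip addr add" needs root, and eth0 may not exist on every machine.
if [ "$(id -u)" -eq 0 ] && ip link show eth0 >/dev/null 2>&1; then
    ip addr add 10.1.252.100/16 dev eth0 label eth0:0   # create the alias
    ip -4 addr show dev eth0                            # both addresses listed
    ip addr del 10.1.252.100/16 dev eth0                # remove it again
    result=done
else
    result=skipped                                      # not root, or no eth0 here
fi
echo "$result"
```

Because the address lives only in the kernel, ip addr del (or a network restart) removes it and there is no file to clean up.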

To create a permanent sub-interface, the settings have to go into a NIC configuration file. These files live under /etc/sysconfig/network-scripts/ and are named ifcfg- followed by the device name; for the sub-interface in this example the file is ifcfg-eth0:0.

vim /etc/sysconfig/network-scripts/ifcfg-eth0:0 (if typing this long path every time you edit a NIC config file gets tedious, you can define a shell alias, or simply cd into the directory and work from there)
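For instance, a shell alias can shorten the trip into that directory (the name cdnet here is just my own pick):

```shell
# Define a short alias for jumping into the network-scripts directory;
# add this line to ~/.bashrc to keep it across logins.
alias cdnet='cd /etc/sysconfig/network-scripts'
# "alias cdnet" prints the definition back, confirming it is set
alias cdnet
```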

DEVICE=eth0:0          # name of the sub-interface
BOOTPROTO=none         # addressing protocol; here static, no DHCP
IPADDR=192.168.1.100   # IP address of the sub-interface
NETMASK=255.255.255.0  # netmask of the sub-interface
GATEWAY=192.168.1.254  # gateway for the sub-interface
DNS1=8.8.8.8           # DNS server for the sub-interface

After editing the configuration file, restart the network service:

[root@server network-scripts]# service network restart

[root@server network-scripts]# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:D1:18:FD
          inet addr:10.1.252.100  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fed1:18fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:47570 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1618 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3140045 (2.9 MiB)  TX bytes:135945 (132.7 KiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:D1:18:FD
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

At this point the network sub-interface is fully configured.

 

 

NIC Bonding

Before showing how to configure bonding, I will first explain how bonding works and what its working modes are, and then walk through the actual configuration.

bonding

Bonding binds multiple NICs to the same IP address to provide service, which gives you high availability or load balancing. Of course you cannot simply assign the same IP address to two NICs directly: bonding creates a virtual NIC that provides the outward-facing connection, and the physical NICs are changed to share the same MAC address.

Normally a NIC only accepts Ethernet frames whose destination MAC address is its own and filters out everything else to reduce load. But a NIC also supports promiscuous mode, in which it receives every frame on the wire; tcpdump runs in this mode, and so does bonding. The bonding driver rewrites both NICs' MAC addresses to the same value, so they accept frames addressed to that shared MAC and hand them to the bond driver for processing. The two NICs then appear as a single virtual NIC (bond0), which also needs a driver; that driver is named bonding.

Bonding working modes

mode 0 balance-rr

Round-robin policy: packets are transmitted sequentially on each slave interface in turn, from the first to the last. This mode provides load balancing and fault tolerance, and both NICs are active.

 

mode 1 active-backup

Active-backup policy: only one slave in the bond is active; another slave is activated if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port at a time, to avoid confusing the switch.

 

mode 3 broadcast

Broadcast policy: every packet is transmitted on every slave interface. This mode provides fault tolerance.
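The bonding driver itself documents all of its modes and options in its module metadata. A small sketch for inspecting them, guarded because the module may be absent (for example inside a container):

```shell
#!/bin/bash
# List the bonding kernel module's parameters (mode, miimon, and the rest)
# straight from the driver's own metadata, when the module is available.
if modinfo bonding >/dev/null 2>&1; then
    parm_lines=$(modinfo bonding | grep '^parm:')   # one line per option
    echo "$parm_lines"
    found=yes
else
    found=no                                        # module not on this machine
fi
```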

 

Here I will configure mode 1. I am running this experiment in a VMware virtual machine, so before starting I added a second virtual NIC so that the Linux system has two NICs.

Step 1: create the configuration file for the bonding device

[root@server network-scripts]# vim ifcfg-bond0

DEVICE=bond0
BOOTPROTO=none
IPADDR=10.1.252.100
NETMASK=255.255.0.0
GATEWAY=10.1.0.1
DNS1=8.8.8.8
BONDING_OPTS="miimon=100 mode=1"

Step 2: edit the configuration files of the two physical NICs

[root@server network-scripts]# vim ifcfg-eth0

DEVICE=eth0
MASTER=bond0
SLAVE=yes

[root@server network-scripts]# vim ifcfg-eth1

DEVICE=eth1
MASTER=bond0
SLAVE=yes

Note: miimon controls link monitoring. With miimon=100 the system checks the link state every 100 milliseconds, and if one link goes down, traffic fails over to the other.

mode=1 selects the active-backup working mode.

MASTER=bond0 makes bond0 the master device of each slave.
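As a sketch, the two steps above can also be scripted. The following writes all three files into a scratch directory first, so nothing under /etc/sysconfig/network-scripts is touched until you review and copy them over:

```shell
#!/bin/bash
# Generate ifcfg-bond0, ifcfg-eth0 and ifcfg-eth1 in a scratch directory;
# copy them into /etc/sysconfig/network-scripts/ once they look right.
dir=$(mktemp -d)

cat > "$dir/ifcfg-bond0" <<'EOF'
DEVICE=bond0
BOOTPROTO=none
IPADDR=10.1.252.100
NETMASK=255.255.0.0
GATEWAY=10.1.0.1
DNS1=8.8.8.8
BONDING_OPTS="miimon=100 mode=1"
EOF

for nic in eth0 eth1; do         # both slaves get the same three lines
cat > "$dir/ifcfg-$nic" <<EOF
DEVICE=$nic
MASTER=bond0
SLAVE=yes
EOF
done

ls "$dir"
```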

 

Once the files are in place, just restart the network service. To test it, ping the IP address of bond0 from another host, then check failover: take one of the NICs down and see whether the other takes over. If it does, the bonding works.

Check the bond status continuously: watch -n 1 cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:fd
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:07
Slave queue ID: 0
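Instead of eyeballing the watch output, a small helper function (my own sketch, not part of the bonding tooling) can pull the active slave out of the status file. Here it is demonstrated against a saved copy of the status text rather than the live /proc file:

```shell
# Report the currently active slave of a bond, given its status file.
active_slave() {
    awk -F': ' '/Currently Active Slave/ {print $2}' "$1"
}

# Live usage would be: active_slave /proc/net/bonding/bond0
# Demo against a saved snippet of the status output:
cat > /tmp/bond0.status <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth1
MII Status: up
EOF
active_slave /tmp/bond0.status    # prints: eth1
```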

 

After I take eth0 down, the currently active slave becomes eth1:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: down
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 1
Permanent HW addr: 00:0c:29:d1:18:fd
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:07
Slave queue ID: 0

Original article by fszxxxks. If reposting, please credit the source: http://www.www58058.com/42839
