Logical Volume Manager (LVM)
§·Introduction to the Logical Volume Manager (LVM)
※·A brief description of LVM logical volumes
LVM (Logical Volume Manager) lets filesystem capacity be adjusted elastically. It works with any block device and relies on the kernel's dm (device mapper) module, which assembles one or more underlying devices into a single logical device.
LVM's focus is this elastic capacity adjustment, not storage efficiency or data safety; read/write performance and data reliability are the problems RAID is meant to solve.
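To make the device-mapper layer visible, here is a minimal inspection sketch (device and VG/LV names are illustrative and will differ per system):

lsblk                 # logical volumes appear with TYPE "lvm" beneath their physical disks
dmsetup ls            # the raw device-mapper targets, e.g. vg01-lv0101 (253:0)
ls -l /dev/mapper/    # friendly names linking to the underlying /dev/dm-* nodes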
※·Pros and cons of LVM logical volumes
Advantages:
LVM's strength is elastic resizing of filesystem capacity. With ordinary partitions, the size is essentially fixed once created. For example, if /home on a Linux system runs out of room because users store too much data, you have to copy the data elsewhere and mount a larger partition before /home is enlarged — a cumbersome procedure.
Because LVM can resize partitions online, we can grow /home directly through LVM; and if another volume is oversized and wasting space, we can shrink it as well.
LVM also supports snapshots, allowing a complete backup of the data without interrupting service.
Disadvantages:
LVM sits on top of the operating system; if a software fault loses data, or the LVM metadata itself is damaged, recovery is comparatively troublesome.
※·Components of the LVM stack
A simple model of the stack:
PV: physical volume — LVM is built on top of physical disks (or partitions); adding physical disks expands the capacity available to the layers above.
VG: volume group — contains one or more physical volumes.
LV: logical volume — carved out of a VG; the partition-like unit that actually stores data and carries the filesystem.
PE: physical extent — the smallest allocation unit within a VG.
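Putting the four layers together, a minimal bottom-up sketch (assuming a spare partition /dev/sdb1 already tagged as type 8e; all names are illustrative):

pvcreate /dev/sdb1              # PV: stamp an LVM label onto the partition
vgcreate myvg /dev/sdb1         # VG: pool the PV's extents (PE size defaults to 4 MiB)
lvcreate -L 2G -n mylv myvg     # LV: allocate 512 of those PEs as a usable volume
mke2fs -t ext4 /dev/myvg/mylv   # the filesystem lives on the LV, not on the VG or PV
mount /dev/myvg/mylv /mnt/mylv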
§·LVM by example
※·LVM device names
/dev/mapper/VG_NAME-LV_NAME
For example: /dev/mapper/vol0-root <— /dev/vol0/root (a symbolic link)
※·LVM partition type:
Type: 8e (Linux LVM)
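On a lab box the type can even be set non-interactively by feeding fdisk its answers on stdin — a fragile but handy sketch, assuming /dev/sdb holds a single partition (interactively this is just the t command):

# t = change partition type, 8e = Linux LVM, w = write table and exit
printf 't\n8e\nw\n' | fdisk /dev/sdb
partprobe /dev/sdb    # have the kernel re-read the partition table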
※·PV-related commands
pvs: brief PV summary
pvdisplay: detailed PV information
pvcreate: create a PV
Example: pvcreate /dev/sda3
※·VG-related commands
vgs: brief VG summary
vgdisplay: detailed VG information
vgcreate: create a VG
Example: vgcreate myvg /dev/sda3
vgextend: add a PV to a VG
Example: vgextend myvg /dev/sda5
Shrinking a VG (see the combined sketch below):
1. Move the data on the PV onto the VG's other PVs:
pvmove /dev/sda5
2. Remove the PV from the VG:
vgreduce myvg /dev/sda5
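A minimal sketch of the whole shrink sequence, assuming the VG's remaining PVs have enough free extents to absorb the evacuated data:

pvs                       # confirm which PVs belong to myvg and how full each is
pvmove /dev/sda5          # migrate every allocated extent off /dev/sda5
vgreduce myvg /dev/sda5   # detach the now-empty PV from the VG
pvremove /dev/sda5        # optional: wipe the LVM label from the partition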
※·LV-related commands
Create an LV: lvcreate
-L #: specify the size directly
-n: the LV name, followed by the VG to allocate from
Example: lvcreate -L 2G -n mylv myvg (comparable to creating a partition)
# carve a 2G LV named mylv out of the (already-existing) VG
mke2fs -t ext4 -b 1024 -L mylv /dev/myvg/mylv (format the LV as ext4)
# build an ext4 filesystem on the LV device /dev/myvg/mylv
Extending an LV:
lvextend -L [+]#[MGT] /dev/vg_name/lv_name
Then have the filesystem recognize the new size: resize2fs /dev/myvg/mylv
Example: lvextend -L 5G /dev/myvg/mylv
Shrinking an LV:
First unmount mylv: umount /dev/myvg/mylv
Force a filesystem check: e2fsck -f /dev/myvg/mylv
Shrink the filesystem (the logical boundary): resize2fs /dev/myvg/mylv 3000M
Shrink the LV itself (the physical boundary): lvreduce -L 3000M /dev/myvg/mylv
Then remount and use it: mount /dev/myvg/mylv /mylvm/
※·LVM snapshots:
Creating a snapshot:
lvcreate -s -L #[GT] -p r -n snapshot_lv_name /dev/myvg/mylv
# -s: create a snapshot; -L: size of the snapshot space; -p r: make the snapshot read-only; -n: snapshot name
Example:
lvcreate -s -L 512M -p r -n mylv_snap /dev/myvg/mylv
mount /dev/myvg/mylv_snap /mnt/snap
Copy the files out of the snapshot; the snapshot volume mylv_snap can then be deleted.
Delete the snapshot volume: lvremove /dev/myvg/mylv_snap
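Strung together, the workflow above amounts to a short backup routine — a sketch with illustrative paths:

mkdir -p /mnt/snap /backup
lvcreate -s -L 512M -p r -n mylv_snap /dev/myvg/mylv       # freeze a point-in-time view
mount -o ro /dev/myvg/mylv_snap /mnt/snap
tar -czf /backup/mylv-$(date +%F).tar.gz -C /mnt/snap .    # archive the frozen view
umount /mnt/snap
lvremove -f /dev/myvg/mylv_snap                            # release the snapshot space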
§·Worked practice
1. Using three 20G disks, build an LVM setup: create two VGs (VG01, VG02) and two LVs on each (LV0101, LV0102, LV0201, LV0202), 5G per LV; then practice extending and shrinking the volumes, snapshots, and so on.
※·Step 1: add three disks, partition them, and tag each partition as type 8e (Linux LVM)
[root@love721 ~]# fdisk -l /dev/sdb

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xa49e4ef2

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    41943039    20970496   8e  Linux LVM

[root@love721 ~]# fdisk -l /dev/sdc

Disk /dev/sdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x05cfc514

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048    41943039    20970496   8e  Linux LVM

[root@love721 ~]# fdisk -l /dev/sdd

Disk /dev/sdd: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xc9626279

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048    41943039    20970496   8e  Linux LVM
※·Step 2: create PVs on the three partitions
[root@love721 ~]# pvs          # before any PVs are created there is nothing to show
[root@love721 ~]# pvdisplay
[root@love721 ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created
[root@love721 ~]# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created
[root@love721 ~]# pvcreate /dev/sdd1    # the PV creation command and its confirmation
  Physical volume "/dev/sdd1" successfully created
[root@love721 ~]# pvs          # brief PV summary
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sdb1       lvm2 ---  20.00g 20.00g
  /dev/sdc1       lvm2 ---  20.00g 20.00g
  /dev/sdd1       lvm2 ---  20.00g 20.00g
[root@love721 ~]# pvdisplay    # detailed PV information
  "/dev/sdd1" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd1
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               DiJsp3-PUu5-oFmp-min1-dfs8-q17e-E3dyb5

  "/dev/sdb1" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               xVVOUU-aRPa-oF0U-wVVb-HF9g-xLcg-dQWJuI

  "/dev/sdc1" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdc1
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               9BfbMg-Rvwt-0NwD-kYm3-Rld1-zngF-NxWymJ
※·Step 3: with PVs in place, create the VGs
[root@love721 ~]# vgcreate vg01 /dev/sdb1    # create vg01 on sdb1
  Volume group "vg01" successfully created
[root@love721 ~]# vgcreate vg02 /dev/sdb1    # test: a partition can belong to only one VG
  Physical volume '/dev/sdb1' is already in volume group 'vg01'
  Unable to add physical volume '/dev/sdb1' to volume group 'vg02'.
[root@love721 ~]# vgcreate vg02 /dev/sdc1    # create vg02 on sdc1
  Volume group "vg02" successfully created
[root@love721 ~]# vgdisplay    # inspect the VGs
  --- Volume group ---
  VG Name               vg01    # VG name
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write    # readable and writable
  VG Status             resizable     # can be resized
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB     # VG size
  PE Size               4.00 MiB      # PE size
  Total PE              5119
  Alloc PE / Size       0 / 0
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               1ImH19-1Y6G-mbnI-52c1-FB8C-jN9e-djU8rk

  --- Volume group ---
  VG Name               vg02
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       0 / 0
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               hXcukw-bsgg-iTJv-WTVx-paHt-6HEK-IdTDfO

[root@love721 ~]# pvdisplay    # PV info again; joining a VG changes what each PV reports
  --- Physical volume ---
  PV Name               /dev/sdb1    # sdb1 now belongs to vg01
  VG Name               vg01
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               xVVOUU-aRPa-oF0U-wVVb-HF9g-xLcg-dQWJuI

  --- Physical volume ---
  PV Name               /dev/sdc1    # sdc1 now belongs to vg02
  VG Name               vg02
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               9BfbMg-Rvwt-0NwD-kYm3-Rld1-zngF-NxWymJ

  "/dev/sdd1" is a new physical volume of "20.00 GiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdd1    # sdd1 is not in any VG yet
  VG Name
  PV Size               20.00 GiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               DiJsp3-PUu5-oFmp-min1-dfs8-q17e-E3dyb5
※·Test: add /dev/sdd1 to a VG and see how the PV information changes
[root@love721 ~]# vgextend vg01 /dev/sdd1    # add sdd1 to vg01
  Volume group "vg01" successfully extended
[root@love721 ~]# vgdisplay    # VG information
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB    # the VG grew because sdd1 joined it
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       0 / 0
  Free  PE / Size       10238 / 39.99 GiB
  VG UUID               1ImH19-1Y6G-mbnI-52c1-FB8C-jN9e-djU8rk

  --- Volume group ---
  VG Name               vg02
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       0 / 0
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               hXcukw-bsgg-iTJv-WTVx-paHt-6HEK-IdTDfO

[root@love721 ~]# pvdisplay    # detailed PV information
  --- Physical volume ---
  PV Name               /dev/sdb1
  VG Name               vg01
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               xVVOUU-aRPa-oF0U-wVVb-HF9g-xLcg-dQWJuI

  --- Physical volume ---
  PV Name               /dev/sdd1
  VG Name               vg01
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               DiJsp3-PUu5-oFmp-min1-dfs8-q17e-E3dyb5

  --- Physical volume ---
  PV Name               /dev/sdc1
  VG Name               vg02
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               5119
  Allocated PE          0
  PV UUID               9BfbMg-Rvwt-0NwD-kYm3-Rld1-zngF-NxWymJ
※·Step 4: allocate LVs from the VG
The LV is the space you actually use day to day; be sure you can tell PV, VG, and LV apart.

[root@love721 ~]# lvcreate -L 2G -n lv0101 vg01    # create two 2G LVs on vg01
  Logical volume "lv0101" created.
[root@love721 ~]# lvcreate -L 2G -n lv0102 vg01
  Logical volume "lv0102" created.
[root@love721 ~]# lvdisplay    # show LV information
  --- Logical volume ---
  LV Path                /dev/vg01/lv0101
  LV Name                lv0101
  VG Name                vg01
  LV UUID                Kv9y7w-cdLQ-T1hb-GLcb-he3E-Zca1-kffH0T
  LV Write Access        read/write
  LV Creation host, time love721.q.com, 2016-08-01 10:56:32 +0800
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg01/lv0102
  LV Name                lv0102
  VG Name                vg01
  LV UUID                eIDTga-iY8A-2BXg-TpSY-XoMH-vbsA-h3pmGd
  LV Write Access        read/write
  LV Creation host, time love721.q.com, 2016-08-01 10:56:40 +0800
  LV Status              available
  # open                 0
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1

[root@love721 ~]# vgdisplay    # show VG information
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       1024 / 4.00 GiB    # vg01 shows 4G in use: the two LVs just allocated
  Free  PE / Size       9214 / 35.99 GiB
  VG UUID               1ImH19-1Y6G-mbnI-52c1-FB8C-jN9e-djU8rk

  --- Volume group ---
  VG Name               vg02
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       0 / 0
  Free  PE / Size       5119 / 20.00 GiB
  VG UUID               hXcukw-bsgg-iTJv-WTVx-paHt-6HEK-IdTDfO
※·Step 5: format and mount the LVs
Allocating LVs out of the VG gives us usable space; after formatting and mounting, the two LVs are ready to use.

Format an LV:

[root@love721 ~]# mke2fs -t ext4 -b 1024 -L mylv0101 /dev/mapper/vg01-lv0101
# build an ext4 filesystem labelled mylv0101 on /dev/mapper/vg01-lv0101, the device node generated when the LV was created
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=mylv0101
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 2097152 blocks
104857 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=35651584
256 block groups
8192 blocks per group, 8192 fragments per group
512 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409, 663553,
        1024001, 1990657

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@love721 ~]# mkdir /mnt/mylv0101                            # create the mount point
[root@love721 ~]# mount /dev/mapper/vg01-lv0101 /mnt/mylv0101/   # mount mylv0101
[root@love721 ~]# cd /mnt/mylv0101/
[root@love721 mylv0101]# ls
lost+found
[root@love721 mylv0101]# cp -r /boot/* ./    # copy some files onto the LV
[root@love721 mylv0101]# ll
total 107037
-rw-r--r-- 1 root root   126426 Aug  1 11:07 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root     1024 Aug  1 11:07 grub
drwx------ 6 root root     1024 Aug  1 11:07 grub2
-rw-r--r-- 1 root root 57644379 Aug  1 11:07 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root 28097829 Aug  1 11:07 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 10190079 Aug  1 11:07 initrd-plymouth.img
drwx------ 2 root root    12288 Aug  1 11:05 lost+found
-rw-r--r-- 1 root root   252612 Aug  1 11:07 symvers-3.10.0-327.el7.x86_64.gz
-rw------- 1 root root  2963044 Aug  1 11:07 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x 1 root root  5156528 Aug  1 11:07 vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc
-rwxr-xr-x 1 root root  5156528 Aug  1 11:07 vmlinuz-3.10.0-327.el7.x86_64
※·Test: extend an LV online
Steps to extend an LV:
My VG has 40G of capacity while the LV holds only 3G, so the LV commands can grow it directly:
lvextend -L 10G /dev/vg01/lv0101
resize2fs /dev/vg01/lv0101
Check the current sizes of the LVs:
[root@love721 mylv0102]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda2                 40G  315M   40G   1% /
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    489M     0  489M   0% /dev/shm
tmpfs                    489M  6.8M  483M   2% /run
tmpfs                    489M     0  489M   0% /sys/fs/cgroup
/dev/sda3                 20G  2.6G   18G  13% /usr
/dev/sda6               1003K   23K  909K   3% /mnt/tools
/dev/sda1                485M  138M  348M  29% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/mapper/vg01-lv0101  2.9G  3.1M  2.8G   1% /mnt/mylv0101    # lv0101 is 3G
/dev/mapper/vg01-lv0102  1.9G  3.1M  1.8G   1% /mnt/mylv0102    # lv0102 is 2G
Both LVs are mounted on the host. Now extend them with the commands:
lv0101 from 3G to 5G; lv0102 from 2G to 6G.
[root@love721 mylv0102]# lvextend -L 5G /dev/vg01/lv0101    # extend lv0101
  Size of logical volume vg01/lv0101 changed from 2.93 GiB (750 extents) to 5.00 GiB (1280 extents).
  Logical volume lv0101 successfully resized.
[root@love721 mylv0102]# lvextend -L 6G /dev/vg01/lv0102    # extend lv0102
  Size of logical volume vg01/lv0102 changed from 1.95 GiB (500 extents) to 6.00 GiB (1536 extents).
  Logical volume lv0102 successfully resized.
[root@love721 mylv0102]# fdisk -l    # partition info; much of the output is omitted, jump straight to lv0101 and lv0102
………………………………………………………………………………………………………

Disk /dev/mapper/vg01-lv0101: 5368 MB, 5368709120 bytes, 10485760 sectors    # lv0101 is now 5G
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/vg01-lv0102: 6442 MB, 6442450944 bytes, 12582912 sectors    # lv0102 is now 6G
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@love721 mylv0102]# df -h    # the mounted filesystems still report the old sizes, not the extended ones
Filesystem               Size  Used Avail Use% Mounted on
……………………………………………………………………………………………………………
/dev/mapper/vg01-lv0101  2.9G  3.1M  2.8G   1% /mnt/mylv0101
/dev/mapper/vg01-lv0102  1.9G  3.1M  1.8G   1% /mnt/mylv0102
Given the discrepancy above, we run resize2fs so that the filesystem actually grows into the added space:
[root@love721 mylv0102]# resize2fs /dev/vg01/lv0101    # have the system re-read lv0101's size
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/vg01/lv0101 is mounted on /mnt/mylv0101; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/vg01/lv0101 is now 1310720 blocks long.

[root@love721 mylv0102]# resize2fs /dev/vg01/lv0102    # have the system re-read lv0102's size
resize2fs 1.42.9 (28-Dec-2013)
Filesystem at /dev/vg01/lv0102 is mounted on /mnt/mylv0102; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
The filesystem on /dev/vg01/lv0102 is now 1572864 blocks long.

[root@love721 mylv0102]# df -h    # the extended capacity now shows correctly
Filesystem               Size  Used Avail Use% Mounted on
/dev/sda2                 40G  315M   40G   1% /
devtmpfs                 475M     0  475M   0% /dev
/dev/mapper/vg01-lv0101  4.9G  4.0M  4.7G   1% /mnt/mylv0101
/dev/mapper/vg01-lv0102  5.9G  4.0M  5.7G   1% /mnt/mylv0102
Check whether the original files are still on the LVs: they all exist and open normally.
[root@love721 mylv0102]# ll /mnt/mylv0101
total 20
-rw-r--r-- 1 root root   683 Aug  1 13:04 fstab
drwx------ 2 root root 16384 Aug  1 13:03 lost+found
[root@love721 mylv0102]# ll /mnt/mylv0102
total 20
-rw-r--r-- 1 root root   119 Aug  1 13:04 issue
drwx------ 2 root root 16384 Aug  1 13:03 lost+found
[root@love721 mylv0102]#
※·Summary of online LVM extension:
1. The online-extension commands themselves are simple; what matters is the flow. In real work you would format a new partition as type 8e, create a PV on it, add the PV to the VG, and then allocate space from the VG to the LV. My VG happened to have plenty of free space, so I extended the LV directly.
2. After lvextend the system still reports the old filesystem size; run resize2fs to re-read it so the correct capacity is recognized.
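As an aside, reasonably recent lvm2 releases can fold the two steps into one: lvextend -r (--resizefs) calls fsadm to grow the filesystem right after the LV — a one-line sketch:

lvextend -r -L +2G /dev/vg01/lv0101    # grow the LV and its filesystem in a single step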
§·Test: shrink the LVs (lv0101 from 5G down to 2G, lv0102 from 6G down to 3G)
Steps to shrink an LV:
1. Unmount the mounted LV.
2. Force a filesystem check: e2fsck -f /dev/vg01/lv0101
3. resize2fs /dev/vg01/lv0101 2000M (adjust the logical boundary).
4. lvreduce -L 2000M /dev/vg01/lv0101 (adjust the physical boundary).
5. Remount the device and verify the files are intact.
Note: whatever the reason for shrinking, first confirm that the data on the LV occupies less than the post-shrink size, otherwise the shrink cannot work. And since you are shrinking the LV, it presumably holds no critical data (if it does, back it up first).
※·Unmount the two LVs, lv0101 and lv0102
[root@love721 mylv0102]# df -h    # both LVs are currently mounted
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                     98M     0   98M   0% /run/user/0
/dev/mapper/vg01-lv0101  4.9G  4.0M  4.7G   1% /mnt/mylv0101
/dev/mapper/vg01-lv0102  5.9G  4.0M  5.7G   1% /mnt/mylv0102
[root@love721 mylv0102]# umount /mnt/mylv0101
[root@love721 mylv0102]# cd ..    # mylv0102 cannot be unmounted while the shell's cwd is inside it
[root@love721 mnt]# umount /mnt/mylv0102
[root@love721 mnt]# df -h
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                     98M     0   98M   0% /run/user/0    # both LV volumes are unmounted now
※·Force-check the two LV filesystems
(If you skip the check and run resize2fs to shrink directly, it will demand the forced check anyway.)
[root@love721 mnt]# e2fsck -f /dev/vg01/lv0101    # force-check lv0101
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
mylv0101: 12/327680 files (0.0% non-contiguous), 29791/1310720 blocks
[root@love721 mnt]# e2fsck -f /dev/vg01/lv0102    # force-check lv0102
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
mylv0102: 12/393216 files (0.0% non-contiguous), 33903/1572864 blocks
※·Adjust the logical boundary with resize2fs, then the physical boundary with lvreduce
[root@love721 mnt]# resize2fs /dev/vg01/lv0101 2000M    # resize2fs lv0101 down to 2G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vg01/lv0101 to 512000 (4k) blocks.
The filesystem on /dev/vg01/lv0101 is now 512000 blocks long.

[root@love721 mnt]# resize2fs /dev/vg01/lv0102 3000M    # resize2fs lv0102 down to 3G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/vg01/lv0102 to 768000 (4k) blocks.
The filesystem on /dev/vg01/lv0102 is now 768000 blocks long.

[root@love721 mnt]# lvreduce -L 2000M /dev/vg01/lv0101    # lvreduce lv0101 to 2G
  WARNING: Reducing active logical volume to 1.95 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0101? [y/n]: y
  Size of logical volume vg01/lv0101 changed from 5.00 GiB (1280 extents) to 1.95 GiB (500 extents).
  Logical volume lv0101 successfully resized.
[root@love721 mnt]# lvreduce -L 3000M /dev/vg01/lv0102    # lvreduce lv0102 to 3G
  WARNING: Reducing active logical volume to 2.93 GiB
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0102? [y/n]: y
  Size of logical volume vg01/lv0102 changed from 6.00 GiB (1536 extents) to 2.93 GiB (750 extents).
  Logical volume lv0102 successfully resized.
※·Remount lv0101 and lv0102 and confirm the data is still there
[root@love721 mnt]# mount /dev/vg01/lv0101 /mnt/mylv0101    # mount lv0101
[root@love721 mnt]# mount /dev/vg01/lv0102 /mnt/mylv0102    # mount lv0102
[root@love721 mnt]# df -h
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                     98M     0   98M   0% /run/user/0
/dev/mapper/vg01-lv0101  1.9G  3.1M  1.9G   1% /mnt/mylv0101    # the sizes are adjusted
/dev/mapper/vg01-lv0102  2.9G  3.1M  2.8G   1% /mnt/mylv0102
[root@love721 mnt]# ll /mnt/mylv0101    # the original files are all there and read fine
total 20
-rw-r--r-- 1 root root   683 Aug  1 13:04 fstab
drwx------ 2 root root 16384 Aug  1 13:03 lost+found
[root@love721 mnt]# ll /mnt/mylv0102
total 20
-rw-r--r-- 1 root root   119 Aug  1 13:04 issue
drwx------ 2 root root 16384 Aug  1 13:03 lost+found
[root@love721 mnt]#
※·Notes on shrinking LVM space
That completes the shrink procedure. Why does shrinking require unmounting the volume while extending does not?
In practice the LV's data blocks may be scattered across the volume; a shrink must first consolidate the data into the region that will survive, so the volume has to be unmounted to prevent new writes from landing in blocks outside the reduced area while data is being moved.
Also remember: after unmounting, always force-check the filesystem, run resize2fs first and lvreduce second; doing otherwise can leave the LV corrupted.
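For completeness, newer lvm2 can also drive the shrink sequence with lvreduce -r (--resizefs), which has fsadm run the forced check and resize2fs before reducing the LV — a sketch, still performed on an unmounted volume:

umount /mnt/mylv0101
lvreduce -r -L 2000M /dev/vg01/lv0101   # fsadm does e2fsck + resize2fs, then the LV shrinks
mount /dev/vg01/lv0101 /mnt/mylv0101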
§·Testing the LVM snapshot mechanism (online backup)
A few concepts related to data backup:
Cold backup: the filesystem is unmounted; no reads, no writes.
Warm backup: the filesystem stays mounted; it can be read but not written.
Hot backup: the filesystem stays mounted; it can be both read and written.
Two things to note:
1) A snapshot is itself a logical volume.
2) Snapshots can only back up LVM logical volumes, and only an LV in the same volume group.
lvcreate -s -L #[GT] -p r -n snapshot_lv_name /dev/myvg/mylv
# -s: create a snapshot; -L: size of the snapshot space; -p r: read-only snapshot; -n: snapshot name
※·Snapshot lv0102 and back up its data
1. Check lv0102's size and look at its data;
2. Take the snapshot, then delete and modify data on the origin volume;
3. Check whether the data in the snapshot volume is still complete.
[root@love721 mylv0102]# ll -h    # the origin volume holds 105M of data
total 105M
-rw-r--r-- 1 root root 124K Aug  1 16:10 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root 4.0K Aug  1 16:10 grub
drwx------ 6 root root 4.0K Aug  1 16:10 grub2
-rw-r--r-- 1 root root  55M Aug  1 16:10 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root  27M Aug  1 16:10 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 9.8M Aug  1 16:10 initrd-plymouth.img
-rw-r--r-- 1 root root  119 Aug  1 13:04 issue
drwx------ 2 root root  16K Aug  1 13:03 lost+found
-rw-r--r-- 1 root root 247K Aug  1 16:10 symvers-3.10.0-327.el7.x86_64.gz
-rw------- 1 root root 2.9M Aug  1 16:10 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x 1 root root 5.0M Aug  1 16:10 vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc
-rwxr-xr-x 1 root root 5.0M Aug  1 16:10 vmlinuz-3.10.0-327.el7.x86_64
[root@love721 mylv0102]# df -h    # about 2.7G of space still free
Filesystem               Size  Used Avail Use% Mounted on
tmpfs                     98M     0   98M   0% /run/user/0
/dev/mapper/vg01-lv0102  2.9G  116M  2.7G   5% /mnt/mylv0102
Create the snapshot volume lv0102_snap at 500M. (Don't make the snapshot volume too small: if data on the origin changes frequently, the snapshot can run out of room to preserve all the original copies.)
[root@love721 mylv0102]# lvcreate -s -L 500M -n lv0102_snap -p r /dev/mapper/vg01-lv0102
# create lv0102_snap, a 500M read-only snapshot of lv0102
  Logical volume "lv0102_snap" created.
[root@love721 mylv0102]# lvdisplay    # LV information (part of the output omitted); the new snapshot volume is listed
…………………………………..
  --- Logical volume ---
  LV Path                /dev/vg01/lv0102_snap
  LV Name                lv0102_snap
  VG Name                vg01
  LV UUID                YAETxW-lPfi-af9a-RM41-t5Ok-5BBT-ourZfl
  LV Write Access        read only
  LV Creation host, time love721.q.com, 2016-08-01 16:16:34 +0800
  LV snapshot status     active destination for lv0102
  LV Status              available
  # open                 0
  LV Size                2.93 GiB
  Current LE             750
  COW-table size         500.00 MiB
  COW-table LE           125
  Allocated to snapshot  0.00%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
Mount the snapshot volume, then modify and delete data on lv0102 to test the snapshot's integrity.
[root@love721 mylv0102]# mkdir /mnt/snap
[root@love721 mylv0102]# mount /dev/vg01/lv0102_snap /mnt/snap    # mount the snapshot volume
mount: /dev/mapper/vg01-lv0102_snap is write-protected, mounting read-only
[root@love721 snap]# ll -h
total 105M
-rw-r--r-- 1 root root 124K Aug  1 16:10 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root 4.0K Aug  1 16:10 grub
drwx------ 6 root root 4.0K Aug  1 16:10 grub2
-rw-r--r-- 1 root root  55M Aug  1 16:10 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root  27M Aug  1 16:10 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 9.8M Aug  1 16:10 initrd-plymouth.img
-rw-r--r-- 1 root root  119 Aug  1 13:04 issue
drwx------ 2 root root  16K Aug  1 13:03 lost+found
-rw-r--r-- 1 root root 247K Aug  1 16:10 symvers-3.10.0-327.el7.x86_64.gz
-rw------- 1 root root 2.9M Aug  1 16:10 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x 1 root root 5.0M Aug  1 16:10 vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc
-rwxr-xr-x 1 root root 5.0M Aug  1 16:10 vmlinuz-3.10.0-327.el7.x86_64
Delete some data on the origin volume and modify one file:
[root@love721 mylv0102]# rm symvers-3.10.0-327.el7.x86_64.gz
rm: remove regular file ‘symvers-3.10.0-327.el7.x86_64.gz’? y
[root@love721 mylv0102]# rm vmlinuz-*
rm: remove regular file ‘vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc’? y
rm: remove regular file ‘vmlinuz-3.10.0-327.el7.x86_64’? y
# three files deleted from the origin volume lv0102
[root@love721 mylv0102]# ll
total 96736
-rw-r--r-- 1 root root   126426 Aug  1 16:10 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root     4096 Aug  1 16:10 grub
drwx------ 6 root root     4096 Aug  1 16:10 grub2
-rw-r--r-- 1 root root 57644379 Aug  1 16:10 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root 28097829 Aug  1 16:10 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 10190079 Aug  1 16:10 initrd-plymouth.img
-rw-r--r-- 1 root root      119 Aug  1 13:04 issue
drwx------ 2 root root    16384 Aug  1 13:03 lost+found
-rw------- 1 root root  2963044 Aug  1 16:10 System.map-3.10.0-327.el7.x86_64
[root@love721 mylv0102]# echo "1234567890" >> issue    # modify the issue file
[root@love721 mylv0102]# cat issue
\S
Kernel \r on an \m
Mage Education Learning Services
http://www.magedu.com

TTY is \l
HOSTNAME is \n
DATE is \t

1234567890
Check the snapshot's contents: the deleted files are still there, and issue still shows its pre-snapshot content.
[root@love721 mylv0102]# ll /mnt/snap/
total 107056
-rw-r--r-- 1 root root   126426 Aug  1 16:10 config-3.10.0-327.el7.x86_64
drwxr-xr-x 2 root root     4096 Aug  1 16:10 grub
drwx------ 6 root root     4096 Aug  1 16:10 grub2
-rw-r--r-- 1 root root 57644379 Aug  1 16:10 initramfs-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc.img
-rw-r--r-- 1 root root 28097829 Aug  1 16:10 initramfs-3.10.0-327.el7.x86_64.img
-rw-r--r-- 1 root root 10190079 Aug  1 16:10 initrd-plymouth.img
-rw-r--r-- 1 root root      119 Aug  1 13:04 issue
drwx------ 2 root root    16384 Aug  1 13:03 lost+found
-rw-r--r-- 1 root root   252612 Aug  1 16:10 symvers-3.10.0-327.el7.x86_64.gz
-rw------- 1 root root  2963044 Aug  1 16:10 System.map-3.10.0-327.el7.x86_64
-rwxr-xr-x 1 root root  5156528 Aug  1 16:10 vmlinuz-0-rescue-7d0dd8f054af463ca2d6ac1f4b210fdc
-rwxr-xr-x 1 root root  5156528 Aug  1 16:10 vmlinuz-3.10.0-327.el7.x86_64
[root@love721 mylv0102]# cat /mnt/snap/issue
\S
Kernel \r on an \m
Mage Education Learning Services
http://www.magedu.com

TTY is \l
HOSTNAME is \n
DATE is \t
That completes the snapshot; copy the files inside it to a backup server and the backup is done.
Delete the snapshot volume:
lvremove /dev/vg01/lv0102_snap
[root@love721 mylv0102]# lvremove /dev/vg01/lv0102_snap    # remove the snapshot volume and free its VG space
Do you really want to remove active logical volume lv0102_snap? [y/n]: y
  Logical volume "lv0102_snap" successfully removed
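Copying files out is one use of a snapshot; lvm2 can also roll the origin back to the snapshot wholesale with lvconvert --merge. A hedged sketch (the merge consumes the snapshot volume and discards every change made on the origin since the snapshot was taken):

umount /mnt/snap /mnt/mylv0102            # both volumes must be unmounted for the merge to start immediately
lvconvert --merge /dev/vg01/lv0102_snap   # the origin reverts to the snapshot's point in time
mount /dev/vg01/lv0102 /mnt/mylv0102      # lv0102_snap no longer exists afterwards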
※·Test: tearing down the whole LVM stack
1. Unmount every mounted LV volume.
2. Remove the LVs:
lvremove /dev/vg01/lv0101
lvremove /dev/vg01/lv0102
3. Remove the VGs:
vgremove vg01
vgremove vg02
4. Remove the PVs (a combined sketch follows):
pvremove /dev/sdd1
pvremove /dev/sdc1
pvremove /dev/sdb1
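The steps above as one runnable sketch (device names as in this exercise; wipefs is optional and merely scrubs any leftover signatures):

umount /mnt/mylv0101 /mnt/mylv0102
lvremove -f /dev/vg01/lv0101 /dev/vg01/lv0102
vgremove vg01 vg02
pvremove /dev/sdb1 /dev/sdc1 /dev/sdd1
pvs && vgs && lvs         # all three should now come back empty
wipefs -a /dev/sdb1       # optional; repeat per partition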
§·Extra exercises
1. Create a 2G filesystem with a block size of 2048 bytes, 1% reserved space, type ext4, and volume label TEST; the partition must mount automatically at /test on boot with the acl mount option enabled by default.
Step 1: partition and format
[root@centos68 ~]# lsblk    # a 2G partition (sdb1) has been created
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0 48.8G  0 part /
├─sda3   8:3    0 19.5G  0 part /testdir
├─sda4   8:4    0    1K  0 part
├─sda5   8:5    0    2G  0 part [SWAP]
├─sda6   8:6    0   10G  0 part
└─sda7   8:7    0   10G  0 part /home
sdb      8:16   0   20G  0 disk
└─sdb1   8:17   0    2G  0 part
sdd      8:48   0   20G  0 disk
sdc      8:32   0   20G  0 disk
sde      8:64   0   20G  0 disk
sr0     11:0    1  3.7G  0 rom  /media/CentOS_6.8_Final
[root@centos68 ~]# mke2fs -t ext4 -b 2048 -m 1 -L "TEST" /dev/sdb1
# format the partition: ext4, 2048-byte blocks, 1% reserved, volume label TEST
mke2fs 1.41.12 (17-May-2010)
Filesystem label=TEST    # volume label
OS type: Linux
Block size=2048 (log=1)    # block size
Fragment size=2048 (log=1)
Stride=0 blocks, Stripe width=0 blocks
131560 inodes, 1052240 blocks
10522 blocks (1.00%) reserved for the super user    # 1% reserved space
First data block=0
Maximum filesystem blocks=538968064
65 block groups
16384 blocks per group, 16384 fragments per group
2024 inodes per group
Superblock backups stored on blocks:
        16384, 49152, 81920, 114688, 147456, 409600, 442368, 802816

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@centos68 ~]# dumpe2fs -h /dev/sdb1    # review the details after formatting
dumpe2fs 1.41.12 (17-May-2010)
Filesystem volume name:   TEST
Last mounted on:          <not available>
Filesystem UUID:          e4e8efdb-9ae5-45b2-aac5-e447ca608626
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              131560
Block count:              1052240
Reserved block count:     10522
Free blocks:              998252
Free inodes:              131549
First block:              0
Block size:               2048
Fragment size:            2048
Reserved GDT blocks:      512
Blocks per group:         16384
Fragments per group:      16384
Inodes per group:         2024
Inode blocks per group:   253
Flex block group size:    16
Filesystem created:       Sat Aug 27 09:24:51 2016
Last mount time:          n/a
Last write time:          Sat Aug 27 09:24:53 2016
Mount count:              0
Maximum mount count:      24
Last checked:             Sat Aug 27 09:24:51 2016
Check interval:           15552000 (6 months)
Next check after:         Thu Feb 23 09:24:51 2017
Lifetime writes:          97 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      b0bc8c76-2b96-437e-98e3-ca2043607802
Journal backup:           inode blocks
Journal features:         (none)
Journal size:             64M
Journal length:           32768
Journal sequence:         0x00000001
Journal start:            0
Step 2: configure mounting at boot with ACL enabled
cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jul 19 18:10:17 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ca4c44c8-1c65-4896-a295-d55e5d5e5c5e /        ext4   defaults        1 1
UUID=2c97fd2d-e455-493b-822c-25ce8c330e2b /boot    ext4   defaults        1 2
UUID=1c6d09df-f7a1-4a72-b842-2b94063f38c7 /testdir ext4   defaults        1 2
UUID=ebd1d743-af4a-465b-98a3-6c9d3945c1d7 swap     swap   defaults        0 0
tmpfs                                     /dev/shm tmpfs  defaults        0 0
devpts                                    /dev/pts devpts gid=5,mode=620  0 0
sysfs                                     /sys     sysfs  defaults        0 0
proc                                      /proc    proc   defaults        0 0
UUID="466d9111-784b-4206-b212-35f91a8a56cc" /home  ext4   defaults,usrquota,grpquota 0 0
UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" /test  ext4   defaults,acl    0 0
2. Write a script that:
(1) lists all disk devices the current system recognizes;
(2) if there is exactly one disk, shows its space usage;
otherwise, shows the space usage of the last disk.
Approach: nums counts the disks; lastdisk extracts the last disk. (With a single disk, the last disk is that disk, so one code path satisfies both cases.)
[root@centos68 ~]# cat disk_num.sh
#!/bin/bash
nums=$(fdisk -l | grep -o "Disk /dev/sd." | cut -d" " -f2 | sort | wc -l)
lastdisk=$(fdisk -l | grep -o "Disk /dev/sd." | cut -d" " -f2 | sort | tail -1)
echo "disk nums is : $nums"
echo "lastdisk info : $(fdisk -l $lastdisk)"
3. Create a RAID1 device with 1G of usable space, chunk size 128k, ext4 filesystem, and one spare disk; it should mount automatically at /backup on boot.
Solution, step 1: prepare two 1G partitions (RAID1 mirrors them).
[root@centos68 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  200G  0 disk
├─sda1   8:1    0  200M  0 part /boot
├─sda2   8:2    0 48.8G  0 part /
├─sda3   8:3    0 19.5G  0 part /testdir
├─sda4   8:4    0    1K  0 part
├─sda5   8:5    0    2G  0 part [SWAP]
├─sda6   8:6    0   10G  0 part
└─sda7   8:7    0   10G  0 part /home
sdd      8:48   0   20G  0 disk
└─sdd1   8:49   0    1G  0 part
sdc      8:32   0   20G  0 disk
└─sdc1   8:33   0    1G  0 part
[root@centos68 ~]#
Step 2: create the RAID device.
[root@centos68 ~]# mdadm -C -l 1 -a yes -n 2 -c 128 /dev/md1 /dev/sdd1 /dev/sdc1
# create md1: RAID level 1, auto-create the device file, 2 member disks, 128K chunk
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@centos68 ~]# mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Sat Aug 27 10:15:59 2016
     Raid Level : raid1    # RAID level
     Array Size : 1059200 (1034.38 MiB 1084.62 MB)
  Used Dev Size : 1059200 (1034.38 MiB 1084.62 MB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sat Aug 27 10:16:12 2016
          State : clean, resyncing
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

  Resync Status : 81% complete

           Name : centos68.qq.com:1  (local to host centos68.qq.com)
           UUID : d721a5d7:a7ee3b35:2f42a5ff:7945abfb
         Events : 13

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       33        1      active sync   /dev/sdc1
[root@centos68 ~]#
Step 3: create the filesystem.
[root@centos68 ~]# mke2fs -t ext4 -L raid1-disk /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=raid1-disk
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
66240 inodes, 264800 blocks
13240 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=272629760
9 block groups
32768 blocks per group, 32768 fragments per group
7360 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

[root@centos68 ~]# blkid
/dev/sda1: UUID="2c97fd2d-e455-493b-822c-25ce8c330e2b" TYPE="ext4"
/dev/sda2: UUID="ca4c44c8-1c65-4896-a295-d55e5d5e5c5e" TYPE="ext4" LABEL="mydate2"
/dev/sda3: UUID="1c6d09df-f7a1-4a72-b842-2b94063f38c7" TYPE="ext4"
/dev/sda5: UUID="ebd1d743-af4a-465b-98a3-6c9d3945c1d7" TYPE="swap"
/dev/sdb1: UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" TYPE="ext4" LABEL="TEST"
/dev/sda7: LABEL="MYHOME" UUID="466d9111-784b-4206-b212-35f91a8a56cc" TYPE="ext4"
/dev/sdd1: UUID="d721a5d7-a7ee-3b35-2f42-a5ff7945abfb" UUID_SUB="f1be1939-6b90-6b6a-59aa-b07e20795a4e" LABEL="centos68.qq.com:1" TYPE="linux_raid_member"
/dev/sdc1: UUID="d721a5d7-a7ee-3b35-2f42-a5ff7945abfb" UUID_SUB="c4db6eb3-881b-7197-b043-b12ca769aa2d" LABEL="centos68.qq.com:1" TYPE="linux_raid_member"
/dev/md1: LABEL="raid1-disk" UUID="12510dfb-60d5-4bb3-9bd3-a819389b5708" TYPE="ext4"
Step 4: configure automatic mounting.
[root@centos68 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jul 19 18:10:17 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ca4c44c8-1c65-4896-a295-d55e5d5e5c5e /        ext4   defaults        1 1
UUID=2c97fd2d-e455-493b-822c-25ce8c330e2b /boot    ext4   defaults        1 2
UUID=1c6d09df-f7a1-4a72-b842-2b94063f38c7 /testdir ext4   defaults        1 2
UUID=ebd1d743-af4a-465b-98a3-6c9d3945c1d7 swap     swap   defaults        0 0
tmpfs                                     /dev/shm tmpfs  defaults        0 0
devpts                                    /dev/pts devpts gid=5,mode=620  0 0
sysfs                                     /sys     sysfs  defaults        0 0
proc                                      /proc    proc   defaults        0 0
UUID="466d9111-784b-4206-b212-35f91a8a56cc" /home   ext4  defaults,usrquota,grpquota 0 0
UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" /test   ext4  defaults,acl    0 0
UUID="12510dfb-60d5-4bb3-9bd3-a819389b5708" /backup ext4  defaults        0 0
4. Create a RAID5 device from three disks with 2G of usable space, chunk size 256k, ext4 filesystem, mounted automatically at /mydata on boot.
Solution, step 1: create three partitions of 1G each (3 × 1G in RAID5 yields 2G usable).
[root@centos68 ~]# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda        8:0    0  200G  0 disk
├─sda1     8:1    0  200M  0 part  /boot
├─sda2     8:2    0 48.8G  0 part  /
├─sda3     8:3    0 19.5G  0 part  /testdir
├─sda4     8:4    0    1K  0 part
├─sda5     8:5    0    2G  0 part  [SWAP]
├─sda6     8:6    0   10G  0 part
└─sda7     8:7    0   10G  0 part  /home
sdb        8:16   0   20G  0 disk
├─sdb1     8:17   0    2G  0 part
│ └─md1    9:1    0    1G  0 raid1
└─sdb2     8:18   0    1G  0 part
sdd        8:48   0   20G  0 disk
├─sdd1     8:49   0    1G  0 part
│ └─md1    9:1    0    1G  0 raid1
└─sdd2     8:50   0    1G  0 part
sdc        8:32   0   20G  0 disk
├─sdc1     8:33   0    1G  0 part
│ └─md1    9:1    0    1G  0 raid1
└─sdc2     8:34   0    1G  0 part
sde        8:64   0   20G  0 disk
sr0       11:0    1  3.7G  0 rom   /media/CentOS_6.8_Final
Step 2: create the RAID5 array.
[root@centos68 ~]# mdadm -C /dev/md5 -l 5 -c 256 -n 3 /dev/sd{b,c,d}2
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
[root@centos68 ~]# mdadm /dev/md5
/dev/md5: 2.02GiB raid5 3 devices, 0 spares. Use mdadm --detail for more detail.
[root@centos68 ~]# mdadm -D /dev/md5
/dev/md5:
        Version : 1.2
  Creation Time : Sat Aug 27 11:00:03 2016
     Raid Level : raid5
     Array Size : 2118144 (2.02 GiB 2.17 GB)
  Used Dev Size : 1059072 (1034.25 MiB 1084.49 MB)
   Raid Devices : 3
  Total Devices : 3
    Persistence : Superblock is persistent

    Update Time : Sat Aug 27 11:00:13 2016
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 256K

           Name : centos68.qq.com:5  (local to host centos68.qq.com)
           UUID : 4732a4aa:c4955360:c2e69a98:101c395c
         Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       18        0      active sync   /dev/sdb2
       1       8       34        1      active sync   /dev/sdc2
       3       8       50        2      active sync   /dev/sdd2
[root@centos68 ~]#
Step 3: format.
[root@centos68 ~]# mke2fs -t ext4 -L raid5_disk /dev/md5
mke2fs 1.41.12 (17-May-2010)
Filesystem label=raid5_disk
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=64 blocks, Stripe width=128 blocks
132464 inodes, 529536 blocks
26476 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=545259520
17 block groups
32768 blocks per group, 32768 fragments per group
7792 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

[root@centos68 ~]# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda        8:0    0  200G  0 disk
├─sda1     8:1    0  200M  0 part  /boot
├─sda2     8:2    0 48.8G  0 part  /
├─sda3     8:3    0 19.5G  0 part  /testdir
├─sda4     8:4    0    1K  0 part
├─sda5     8:5    0    2G  0 part  [SWAP]
├─sda6     8:6    0   10G  0 part
└─sda7     8:7    0   10G  0 part  /home
sdb        8:16   0   20G  0 disk
├─sdb1     8:17   0    2G  0 part
│ └─md1    9:1    0    1G  0 raid1
└─sdb2     8:18   0    1G  0 part
  └─md5    9:5    0    2G  0 raid5
sdd        8:48   0   20G  0 disk
├─sdd1     8:49   0    1G  0 part
│ └─md1    9:1    0    1G  0 raid1
└─sdd2     8:50   0    1G  0 part
  └─md5    9:5    0    2G  0 raid5
sdc        8:32   0   20G  0 disk
├─sdc1     8:33   0    1G  0 part
│ └─md1    9:1    0    1G  0 raid1
└─sdc2     8:34   0    1G  0 part
  └─md5    9:5    0    2G  0 raid5
sde        8:64   0   20G  0 disk
sr0       11:0    1  3.7G  0 rom   /media/CentOS_6.8_Final
Step 4: configure automatic mounting.
[root@centos68 ~]# blkid
/dev/sda1: UUID="2c97fd2d-e455-493b-822c-25ce8c330e2b" TYPE="ext4"
/dev/sda2: UUID="ca4c44c8-1c65-4896-a295-d55e5d5e5c5e" TYPE="ext4" LABEL="mydate2"
/dev/sda3: UUID="1c6d09df-f7a1-4a72-b842-2b94063f38c7" TYPE="ext4"
/dev/sda5: UUID="ebd1d743-af4a-465b-98a3-6c9d3945c1d7" TYPE="swap"
/dev/sdb1: UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" TYPE="ext4" LABEL="TEST"
/dev/sda7: LABEL="MYHOME" UUID="466d9111-784b-4206-b212-35f91a8a56cc" TYPE="ext4"
/dev/sdd1: UUID="d721a5d7-a7ee-3b35-2f42-a5ff7945abfb" UUID_SUB="f1be1939-6b90-6b6a-59aa-b07e20795a4e" LABEL="centos68.qq.com:1" TYPE="linux_raid_member"
/dev/sdc1: UUID="d721a5d7-a7ee-3b35-2f42-a5ff7945abfb" UUID_SUB="c4db6eb3-881b-7197-b043-b12ca769aa2d" LABEL="centos68.qq.com:1" TYPE="linux_raid_member"
/dev/md1: LABEL="raid1-disk" UUID="12510dfb-60d5-4bb3-9bd3-a819389b5708" TYPE="ext4"
/dev/sdb2: UUID="4732a4aa-c495-5360-c2e6-9a98101c395c" UUID_SUB="edba2fa3-3ba9-4053-8a56-873f9598faae" LABEL="centos68.qq.com:5" TYPE="linux_raid_member"
/dev/sdd2: UUID="4732a4aa-c495-5360-c2e6-9a98101c395c" UUID_SUB="81b8750e-7dfb-09a8-48cd-20202aea41ad" LABEL="centos68.qq.com:5" TYPE="linux_raid_member"
/dev/sdc2: UUID="4732a4aa-c495-5360-c2e6-9a98101c395c" UUID_SUB="4ec508cf-50a0-2ce9-f4ff-8793b2ac4a7a" LABEL="centos68.qq.com:5" TYPE="linux_raid_member"
/dev/md5: LABEL="raid5_disk" UUID="7c615c12-26f2-4de4-8798-c388e4bb7d48" TYPE="ext4"
[root@centos68 ~]# vim /etc/fstab
[root@centos68 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Jul 19 18:10:17 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ca4c44c8-1c65-4896-a295-d55e5d5e5c5e /        ext4   defaults        1 1
UUID=2c97fd2d-e455-493b-822c-25ce8c330e2b /boot    ext4   defaults        1 2
UUID=1c6d09df-f7a1-4a72-b842-2b94063f38c7 /testdir ext4   defaults        1 2
UUID=ebd1d743-af4a-465b-98a3-6c9d3945c1d7 swap     swap   defaults        0 0
tmpfs                                     /dev/shm tmpfs  defaults        0 0
devpts                                    /dev/pts devpts gid=5,mode=620  0 0
sysfs                                     /sys     sysfs  defaults        0 0
proc                                      /proc    proc   defaults        0 0
UUID="466d9111-784b-4206-b212-35f91a8a56cc" /home   ext4  defaults,usrquota,grpquota 0 0
UUID="e4e8efdb-9ae5-45b2-aac5-e447ca608626" /test   ext4  defaults,acl    0 0
UUID="12510dfb-60d5-4bb3-9bd3-a819389b5708" /backup ext4  defaults        0 0
UUID="7c615c12-26f2-4de4-8798-c388e4bb7d48" /mydate ext4  defaults        0 0
[root@centos68 ~]#
Original article by linux_root. If you repost it, please credit the source: http://www.www58058.com/40811
Text and screenshots together — every exercise's result is backed by evidence, and the care taken shows. One suggestion: don't let the table of contents take up quite so much space.