Hot Spares & CREATE in mdadm.conf


I. Lab Objectives


1. Create a hot spare together with the RAID, and verify the hot spare's role.

2. Verify that the "CREATE owner=root group=root mode=0640" line in mdadm.conf causes a newly created RAID's device node to be given the owner, group, and mode specified in the configuration file.
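
A minimal sketch of the mechanism under test: when mdadm creates a new array's device node, it consults the CREATE line for the node's defaults. The expected listing below is an assumption derived from mode=0640, not output captured in this lab:

# /etc/mdadm.conf
CREATE owner=root group=root mode=0640
# mode=0640 on a block node lists as:  brw-r----- 1 root root ...
# the pre-mdadm.conf default was:      brw-rw---- 1 root disk ...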



II. Lab Procedure

A. Pre-lab Information


1. Record the owner, group, and permission information of the RAID created earlier, before mdadm.conf existed (for comparison with the results below).

2. Check whether an mdadm.conf file exists.

3. Check whether mdadm.conf contains "CREATE owner=root group=root mode=0640"; if not, add it.



B. Disks


1. Add 4 disks to the virtual machine.


2. Carve one 2G partition of type fd out of each disk (a scripted sketch follows this list):

  • Create (n)

  • Change the type (t) to fd

  • Print (p)

  • Write and quit (w)
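
The same n/t/p/w sequence can be scripted across all four disks. A minimal sketch, assuming each disk already holds partitions 1 and 2 so the new partition is number 3 and the default first cylinder is accepted (fragile; verify interactively on one disk first):

# n, p, 3, <default first cylinder>, +2G, t, 3, fd, p, w
for d in e f g h; do
    printf 'n\np\n3\n\n+2G\nt\n3\nfd\np\nw\n' | fdisk /dev/sd$d
done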


3. Probe whether the new partitions are recognized:

  • partprobe /dev/sdx

  • partx -a /dev/sdx (either one is sufficient; a verification sketch follows)
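
Whichever command is used, the kernel's view can be confirmed directly afterwards. A small sketch:

partprobe /dev/sde                 # or: partx -a /dev/sde
grep 'sd[e-h]3' /proc/partitions   # the new partitions should now be listed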


4. View the partition information:

  • fdisk -l | grep "/dev/sd[e-h]3"



C. Creating the RAID


1. Create the RAID device (annotated sketch below):

  • mdadm --create --auto=yes /dev/md2 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd[e-h]3 /dev/sdi1
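
Flag by flag, the create command reads as follows (a sketch mirroring the command actually run in section III):

# --auto=yes          create the /dev/md2 device node if it does not yet exist
# --level=5           RAID5
# --raid-devices=4    four active members
# --spare-devices=1   one hot spare
# Five partitions are passed: sd[e-h]3 become active members, sdi1 the spare.
mdadm --create --auto=yes /dev/md2 --level=5 \
      --raid-devices=4 --spare-devices=1 /dev/sd[e-h]3 /dev/sdi1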


2. View the RAID information:

  • mdadm --detail /dev/md2

  • cat /proc/mdstat


3. Format:

  • mkfs -t ext4 /dev/md2


4. Mount:

  • mkdir /mnt/raid

  • mount /dev/md2 /mnt/raid

  • df -hT


5. Auto-mount (append to /etc/fstab):

  • /dev/md2 /mnt/raid ext4 defaults 0 0
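
So that the /dev/md2 name (and therefore the fstab entry) stays stable across reboots, the array can also be recorded in mdadm.conf. A sketch, assuming the DEVICE line in /etc/mdadm.conf is widened to cover the new member partitions:

mdadm --detail --scan | grep /dev/md2 >> /etc/mdadm.conf
echo '/dev/md2 /mnt/raid ext4 defaults 0 0' >> /etc/fstab
mount -a    # verify the new fstab entry mounts without errors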




D. Testing


1. Check whether the owner, group, and permission information matches mdadm.conf:

  • ls -lh /dev/md2 (the CREATE line governs the device node, not the mount point)


2. Simulate a failure (full-cycle sketch after this list):

  • mdadm --manage /dev/md2 --fail /dev/sde3

  • mdadm /dev/md2 -r /dev/sde3
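
The short options below are the equivalents of --fail/--remove/--add; re-adding the removed partition turns it into the new hot spare. A sketch of the full cycle:

mdadm /dev/md2 -f /dev/sde3    # mark faulty; the spare should start rebuilding
mdadm /dev/md2 -r /dev/sde3    # remove the faulty member from the array
mdadm /dev/md2 -a /dev/sde3    # optional: re-add it; it becomes the new spare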


3. Check whether the hot spare is pulled in automatically (live view below):

  • mdadm --detail /dev/md2

  • cat /proc/mdstat
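
While the spare rebuilds, the progress line in /proc/mdstat can be followed live; in that output, an (S) suffix marks a spare and a map such as [UUU_] shows one member down:

watch -n1 cat /proc/mdstat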




III. Implementation


A. Baseline Information


1. Record the owner, group, and permission information of the RAID created earlier, before mdadm.conf existed (for comparison with the new array):

[root@test2 dev]# ls -lh | grep md1
brw-rw----  1 root disk      9,   1 Jun 25 15:29 md1
[root@test2 dev]#
[root@test2 /]# ls -lh | grep mdata
drwxr-xr-x    2 root        root      4.0K Jun 16 16:26 mdata
drwxr-xr-x    3 root        root      4.0K Jun 20 13:10 mdata2
[root@test2 /]#


2. Check whether an mdadm.conf file exists:

[root@test2 dev]# locate mdadm.conf
/etc/.mdadm.conf.swp
/etc/mdadm.conf
/etc/mdadm.conf.bak
/usr/share/doc/mdadm-3.2.6/mdadm.conf-example
/usr/share/man/man5/mdadm.conf.5.gz
[root@test2 dev]#


3. Check whether mdadm.conf contains "CREATE owner=root group=root mode=0640"; if not, add it (an idempotent sketch follows the transcript):

[root@test2 etc]# cat /etc/mdadm.conf | grep -v "^#" 
DEVICE /dev/sd[egfh]2
ARRAY /dev/md1 level=raid5 num-devices=4 UUID=8f4b5df4:f380ce6c:2ff605af:1b33ebcd devices=/dev/sde2,/dev/sdg2,/dev/sdf2,/dev/sdh2
CREATE owner=root group=root mode=0640
[root@test2 etc]#
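
Here the line is already present. If it were missing, it could be appended without risk of duplication. A sketch:

grep -q '^CREATE ' /etc/mdadm.conf || \
    echo 'CREATE owner=root group=root mode=0640' >> /etc/mdadm.conf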



B. Disks


1. Add 4 disks to the virtual machine.


2. Carve one 2G partition of type fd out of each disk:

  • Create (n)

  • Change the type (t) to fd

  • Print (p)

  • Write and quit (w)


sde

[root@test2 jason]# fdisk /dev/sde

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (525-2610, default 525):
Using default value 525
Last cylinder, +cylinders or +size{K,M,G} (525-2610, default 2610): +2G

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sde: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe4b1d138

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1         262     2104483+  fd  Linux raid autodetect
/dev/sde2             263         524     2104515   fd  Linux raid autodetect
/dev/sde3             525         786     2104515   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

sdf

[root@test2 jason]# fdisk /dev/sdf

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (525-2610, default 525):
Using default value 525
Last cylinder, +cylinders or +size{K,M,G} (525-2610, default 2610): +2G

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdf: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x65c9c4f1

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1               1         262     2104483+  fd  Linux raid autodetect
/dev/sdf2             263         524     2104515   fd  Linux raid autodetect
/dev/sdf3             525         786     2104515   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

sdg

[root@test2 jason]# fdisk /dev/sdg

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (525-2610, default 525):
Using default value 525
Last cylinder, +cylinders or +size{K,M,G} (525-2610, default 2610): +2G

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdg: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x37970afa

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1               1         262     2104483+  fd  Linux raid autodetect
/dev/sdg2             263         524     2104515   fd  Linux raid autodetect
/dev/sdg3             525         786     2104515   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

sdh

[root@test2 jason]# fdisk /dev/sdh

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3
First cylinder (525-2610, default 525):
Using default value 525
Last cylinder, +cylinders or +size{K,M,G} (525-2610, default 2610): +2G

Command (m for help): t
Partition number (1-4): fd
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdh: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xdeb9d0d0

   Device Boot      Start         End      Blocks   Id  System
/dev/sdh1               1         262     2104483+  fd  Linux raid autodetect
/dev/sdh2             263         524     2104515   fd  Linux raid autodetect
/dev/sdh3             525         786     2104515   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.


3. Probe whether the new partitions are recognized:

  • partx -a /dev/sdx (partprobe /dev/sdx works equally well)

  • In the transcripts below, each command was first executed with Enter; the second time it was typed, Tab was pressed instead to list the device names now visible to the shell.


sde3

[root@test2 jason]# partx -a /dev/sde3
abc                  .dmrc                .gtk-bookmarks       .pulse/
.abrt/               Documents/           .gvfs/               .pulse-cookie
.bash_history        Downloads/           .ICEauthority        .recently-used.xbel
.bash_logout         .esd_auth            .imsettings.log      .ssh/
.bash_profile        file1                .local/              Templates/
.bashrc              .gconf/              mdadm.txt            Videos/
bootetc-bak.tar.bz2  .gconfd/             .mozilla/            .viminfo
.cache/              .gnome2/             Music/               .xsession-errors
.config/             .gnote/              .nautilus/           
.dbus/               .gnupg/              Pictures/           
Desktop/             .gstreamer-0.10/     Public/             
[root@test2 jason]# partx -a /dev/sde
sde   sde1  sde2  sde3

sdf3

[root@test2 jason]# partx -a /dev/sdf
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
[root@test2 jason]# partx -a /dev/sdf
sdf   sdf1  sdf2  sdf3

sdh3

[root@test2 jason]# partx -a /dev/sdh
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
[root@test2 jason]# partx -a /dev/sdh
sdh   sdh1  sdh2  sdh3 
[root@test2 jason]# partx -a /dev/sdh
sdh   sdh1  sdh2  sdh3

sdg3

[root@test2 jason]# partx -a /dev/sdg
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2
[root@test2 jason]# partx -a /dev/sdg
sdg   sdg1  sdg2  sdg3


4. View the partition information:

  • fdisk -l | grep "/dev/sd[e-h]3"


RAID members

[root@test2 /]# fdisk -l | grep "/dev/sd[e-h]3"
/dev/sdf3             525         786     2104515   fd  Linux raid autodetect
/dev/sdg3             525         786     2104515   fd  Linux raid autodetect
/dev/sdh3             525         786     2104515   fd  Linux raid autodetect
/dev/sde3             525         786     2104515   fd  Linux raid autodetect
[root@test2 /]#

Hot spare disk

[root@test2 jason]# fdisk -l | grep "sdi1"
/dev/sdi1               1         262     2104483+  fd  Linux raid autodetect
[root@test2 jason]#



C. Creating the RAID


1. Create the RAID device:

  • mdadm --create --auto=yes /dev/md2 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd[e-h]3 /dev/sdi1

[root@test2 jason]# mdadm --create --auto=yes /dev/md2 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd[e-h]3 /dev/sdi1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.


2. View the RAID information:

Detailed information for /dev/md2

[root@test2 jason]# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Jun 26 22:47:07 2016
     Raid Level : raid5
     Array Size : 6306816 (6.01 GiB 6.46 GB)
  Used Dev Size : 2102272 (2.00 GiB 2.15 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Sun Jun 26 22:47:18 2016
          State : clean, degraded, recovering 
 Active Devices : 3
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 14% complete

           Name : test2:2  (local to host test2)
           UUID : c486f242:0816a0f8:6956dbdc:98b5d3e2
         Events : 3

    Number   Major   Minor   RaidDevice State
       0       8       67        0      active sync   /dev/sde3
       1       8       83        1      active sync   /dev/sdf3
       2       8       99        2      active sync   /dev/sdg3
       5       8      115        3      spare rebuilding   /dev/sdh3

       4       8      129        -      spare   /dev/sdi1

cat /proc/mdstat

[root@test2 jason]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md2 : active raid5 sdh3[5] sdi1[4](S) sdg3[2] sdf3[1] sde3[0]
      6306816 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [=====>...............]  recovery = 27.5% (579456/2102272) finish=1.0min speed=24144K/sec

md1 : active raid5 sdf2[1] sdh2[5] sdg2[2] sde2[4]
      6306816 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
[root@test2 jason]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md2 : active raid5 sdh3[5] sdi1[4](S) sdg3[2] sdf3[1] sde3[0]
      6306816 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [==================>..]  recovery = 92.6% (1947648/2102272) finish=0.0min speed=26200K/sec

md1 : active raid5 sdf2[1] sdh2[5] sdg2[2] sde2[4]
      6306816 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

3. Format:

  • mkfs -t ext4 /dev/md2


4. Mount:

  • mkdir /mnt/raid

  • mount /dev/md2 /mnt/raid

  • df -hT


5. Auto-mount (append to /etc/fstab):

  • /dev/md2 /mnt/raid ext4 defaults 0 0




D. Testing


1. Check whether the owner, group, and permission information matches mdadm.conf:

  • ls -lh /dev/md2 (the CREATE line governs the device node, not the mount point)
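
This check was not captured in the lab transcript; the contrast below is the expected result inferred from mode=0640, not recorded output:

ls -lh /dev/md1 /dev/md2
# expected: /dev/md1  brw-rw---- root disk   (created before mdadm.conf)
#           /dev/md2  brw-r----- root root   (created with CREATE ... mode=0640)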


2. Simulate a failure:

  • mdadm --manage /dev/md2 --fail /dev/sde3

[root@test2 dev]# mdadm /dev/md2 -f /dev/sde3
mdadm: set /dev/sde3 faulty in /dev/md2
[root@test2 dev]# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Jun 26 22:47:07 2016
     Raid Level : raid5
     Array Size : 6306816 (6.01 GiB 6.46 GB)
  Used Dev Size : 2102272 (2.00 GiB 2.15 GB)
   Raid Devices : 4
  Total Devices : 5
    Persistence : Superblock is persistent

    Update Time : Sun Jun 26 23:05:36 2016
          State : clean, degraded, recovering 
 Active Devices : 3
Working Devices : 4
 Failed Devices : 1
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

 Rebuild Status : 35% complete

           Name : test2:2  (local to host test2)
           UUID : c486f242:0816a0f8:6956dbdc:98b5d3e2
         Events : 27

    Number   Major   Minor   RaidDevice State
       4       8      129        0      spare rebuilding   /dev/sdi1
       1       8       83        1      active sync   /dev/sdf3
       2       8       99        2      active sync   /dev/sdg3
       5       8      115        3      active sync   /dev/sdh3

       0       8       67        -      faulty   /dev/sde3
[root@test2 dev]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md2 : active raid5 sdh3[5] sdi1[4] sdg3[2] sdf3[1] sde3[0](F)
      6306816 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [_UUU]
      [==========>..........]  recovery = 51.3% (1079168/2102272) finish=0.2min speed=59953K/sec

md1 : active raid5 sdf2[1] sdh2[5] sdg2[2] sde2[4]
      6306816 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

  • mdadm /dev/md2 -r /dev/sde3


3. Check whether the hot spare was pulled in automatically:

  • mdadm --detail /dev/md2

  • cat /proc/mdstat
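
Once the rebuild completes, a quick health summary should show State : clean and a full [UUUU] member map. A sketch:

mdadm --detail /dev/md2 | grep -E 'State :|Rebuild'
grep -A 2 '^md2' /proc/mdstat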
