Configuring a RAID Array with mdadm on Linux

Failure and Recovery Procedures

Problem 1: Hard disks show as removed

  1. First run sudo mdadm -D /dev/md0; the detail output shows two disks as removed (a filtered one-liner is sketched after the output):
/dev/md0:
           Version : 1.2
     Creation Time : Thu Jan 2 15:23:16 2020
        Raid Level : raid10
        Array Size : 7813772288 (7451.79 GiB 8001.30 GB)
     Used Dev Size : 3906886144 (3725.90 GiB 4000.65 GB)
      Raid Devices : 4
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Tue Feb 18 20:55:04 2020
             State : clean, degraded
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : bitmap

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       48        1      active sync set-B
       -       0        0        2      removed
       4       8       80        3      active sync set-B
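
When the full detail output is too noisy, a quick filter gets you just the overall state and the member table. A minimal sketch, assuming the /dev/md0 device from this example:

    # Show only the overall array state plus removed/active member rows.
    sudo mdadm -D /dev/md0 | grep -E 'State :|Number|removed|sync'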
  2. Run sudo fdisk -l to list all disks and work out which ones dropped out of the array, e.g. /dev/sdc and /dev/sde here (a quick superblock check is sketched below).
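
Before re-adding, it can help to confirm that a candidate disk still carries this array's md superblock. A minimal check, assuming the example device name:

    # Inspect the md superblock on the candidate disk; the Array UUID
    # printed here should match the UUID shown by `sudo mdadm -D /dev/md0`.
    sudo mdadm --examine /dev/sdc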

  3. Run sudo mdadm --re-add /dev/md0 /dev/sdc to put each disk back into the array; on success mdadm prints a message saying the disk was added.
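
If --re-add is refused (typically because the disk's event count has fallen too far behind for the write-intent bitmap to bridge), the usual fallback is a plain --add, which rebuilds the member from scratch instead of doing a fast bitmap catch-up:

    # Fallback when --re-add fails; this triggers a full resync of the member.
    sudo mdadm --add /dev/md0 /dev/sdc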

  4. Running sudo mdadm -D /dev/md0 again now shows the re-added disks rebuilding:


    Consistency Policy : bitmap

        Rebuild Status : 99% complete

        Number   Major   Minor   RaidDevice State
           0       8       32        0      spare rebuilding   /dev/sdc
           1       8       48        1      active sync set-B
           2       8       96        2      spare rebuilding   /dev/sdg
           4       8       80        3      active sync set-B
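
If the rebuild is slow, the kernel's md resync throttle can be raised. A sketch using the standard md sysctls (values are in KiB/s; pick numbers your disks can actually sustain):

    # Raise the resync speed floor and ceiling for all md arrays.
    sudo sysctl -w dev.raid.speed_limit_min=50000
    sudo sysctl -w dev.raid.speed_limit_max=200000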
  5. Overall rebuild progress can also be checked with cat /proc/mdstat (see the sketch below).
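
To follow the resync through to completion without re-running the command by hand, a minimal sketch:

    # Refresh the kernel's md status every 5 seconds until the rebuild finishes.
    watch -n 5 cat /proc/mdstat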