Testing disaster recovery


active raid6 sdl[10] sdk[9](F) sdi[8] sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1](F) sda[0]
Update Time : Mon Sep  2 18:49:53 2019
             State : clean, degraded
    Active Devices : 9
   Working Devices : 9
    Failed Devices : 2
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : omvNAS1:raid6  (local to host omvNAS1)
              UUID : 4b63596b:c0babc7b:f8cb8b0d:ec5c853a
            Events : 236

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       -       0        0        1      removed
       2       8       32        2      active sync   /dev/sdc
       3       8       48        3      active sync   /dev/sdd
       4       8       64        4      active sync   /dev/sde
       5       8       80        5      active sync   /dev/sdf
       6       8       96        6      active sync   /dev/sdg
       7       8      112        7      active sync   /dev/sdh
       8       8      128        8      active sync   /dev/sdi
       -       0        0        9      removed
      10       8      176       10      active sync   /dev/sdl

       1       8       16        -      faulty   /dev/sdb
       9       8      160        -      faulty   /dev/sdk
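The "faulty" rows at the bottom of the detail output can be pulled out programmatically. A minimal sketch, using a hard-coded sample of the two rows above in place of the live `mdadm --detail /dev/md0` output:

```shell
# Extract faulty member devices from mdadm --detail style rows.
# The sample string below mirrors the two faulty rows shown above.
detail='       1       8       16        -      faulty   /dev/sdb
       9       8      160        -      faulty   /dev/sdk'
faulty=$(printf '%s\n' "$detail" | awk '/faulty/ {print $NF}')
printf '%s\n' "$faulty"
```

On the real system the same `awk` filter can be fed from `mdadm --detail /dev/md0` directly.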

Commands used to inspect the array

cat /proc/mdstat
mdadm --detail --verbose /dev/md0

active raid6 sdl[10] sdk[9](F) sdi[8] sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1](F) sda[0]
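In /proc/mdstat a failed member is tagged with "(F)" after its slot number. A small sketch that lists the failed members, using the line above as a stand-in for the live file:

```shell
# List member disks marked failed "(F)" in a /proc/mdstat array line.
# The sample string mirrors the output shown above.
mdstat_line='active raid6 sdl[10] sdk[9](F) sdi[8] sdh[7] sdg[6] sdf[5] sde[4] sdd[3] sdc[2] sdb[1](F) sda[0]'
failed=$(printf '%s\n' "$mdstat_line" | tr ' ' '\n' | grep '(F)' | sed 's/\[.*//')
printf '%s\n' "$failed"
```

Against the real file, `grep md0 /proc/mdstat` can replace the hard-coded sample.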

/dev/sdb and /dev/sdk are faulty; remove them from the array:

mdadm --fail /dev/md0 /dev/sdb
mdadm --remove /dev/md0 /dev/sdb
mdadm --remove /dev/md0 /dev/sdk

Add the new (replacement) disks back:

mdadm --add /dev/md0 /dev/sdb
mdadm --add /dev/md0 /dev/sdk
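The fail/remove/add cycle above is the same for every disk, so it can be wrapped in a helper. A hypothetical dry-run sketch (`replace_disk` is not an mdadm command; it only prints the command sequence, which can then be reviewed and run as root):

```shell
# Dry-run helper: print the mdadm command sequence for replacing one
# failed member of an array. Prints only; does not touch the array.
replace_disk() {
  array=$1
  disk=$2
  echo "mdadm --fail $array $disk"
  echo "mdadm --remove $array $disk"
  echo "mdadm --add $array $disk"
}

replace_disk /dev/md0 /dev/sdb
replace_disk /dev/md0 /dev/sdk
```

Note this assumes the replacement disk reuses the same device name, as in the steps above; a disk swapped into a different slot may appear under a new name.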

Repairing through the graphical interface

cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid6 sdk[12] sdh[7] sdc[2] sdi[8] sdf[5] sdl[10] sdb[11] sdd[3] sdg[6] sde[4] sda[0]
      52743513600 blocks super 1.2 level 6, 512k chunk, algorithm 2 [11/10] [UUUUUUUUU_U]
      [>....................]  recovery =  0.0% (50184/5860390400) finish=46703.3min speed=2091K/sec
      bitmap: 44/44 pages [176KB], 65536KB chunk
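In the mdstat output above, "[11/10]" means 11 configured members with 10 working, and in "[UUUUUUUUU_U]" each "U" is a healthy member while "_" marks the slot still rebuilding. The recovery line can be parsed to watch progress; a sketch using the line above as sample input:

```shell
# Parse percent complete and ETA from a /proc/mdstat recovery line.
# The sample string mirrors the recovery line shown above.
line='      [>....................]  recovery =  0.0% (50184/5860390400) finish=46703.3min speed=2091K/sec'
pct=$(printf '%s\n' "$line" | grep -o '[0-9.]*%')
eta=$(printf '%s\n' "$line" | sed -n 's/.*finish=\([^ ]*\).*/\1/p')
echo "recovery: $pct, ETA: $eta"
```

For live monitoring, `watch cat /proc/mdstat` gives the same information without parsing.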

Mark a device inside /dev/md0 as failed: mdadm --fail /dev/md0 /dev/sdb

Walkthrough: remove the damaged disk, then add a new disk to trigger a rebuild.

Remove the failed /dev/sdb from /dev/md0: mdadm --remove /dev/md0 /dev/sdb

mdadm --fail /dev/md0 /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
mdadm --remove /dev/md0 /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0

Add the new disk

The command is mdadm --add /dev/md0 /dev/sdb

Checking disk I/O in Cockpit: the newly added disk shows heavy write activity (the rebuild in progress).

Add /dev/sdb to /dev/md0: mdadm --add /dev/md0 /dev/sdb

Start /dev/md0: mdadm --auto-detect

Stop /dev/md0: mdadm --stop /dev/md0

mdadm --stop /dev/md0
mdadm --auto-detect
mdadm --assemble --scan
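One caveat on the sequence above: `mdadm --auto-detect` asks the kernel to auto-assemble only arrays on partitions with the 0xfd type and old 0.90 metadata, whereas this array uses superblock 1.2 (see the mdstat output above), so `mdadm --assemble --scan` is the reassembly step that applies here. After reassembly, it is worth confirming the array is back and active; a sketch using a sample line in place of the live /proc/mdstat:

```shell
# Verify md0 is active after reassembly. The sample string mirrors
# the md0 line from /proc/mdstat shown earlier.
mdstat='md0 : active raid6 sdk[12] sdh[7] sdc[2] sdi[8] sdf[5] sdl[10] sdb[11] sdd[3] sdg[6] sde[4] sda[0]'
if printf '%s\n' "$mdstat" | grep -q '^md0 : active'; then
  echo "md0 assembled"
else
  echo "md0 missing"
fi
```

On the real host, replace the sample with `grep '^md0' /proc/mdstat`.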
