CentOS 7: Replacing a failed drive in a 3-disk RAID 1 array (DRAFT)
Scenario: a drive in a 3-disk RAID 1 array failed and needs to be replaced.

[acool@localhost ~]$
[acool@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[acool@localhost ~]$
[acool@localhost ~]$ mdadm --version
mdadm - v3.4 - 28th January 2016
[acool@localhost ~]$
[acool@localhost ~]$

// Inspect the arrays (I manually disconnected the power and SATA cables on sda to simulate a hardware failure)
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/2] [UU_]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/2] [UU_]

md127 : active raid1 sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
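// Note the "[3/2] [UU_]" on every array: three member slots, only two active. The trailing
// underscore is the slot the now-missing sda partitions used to occupy.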
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md123
[sudo] password for acool:
/dev/md123:
Version : 1.0
Creation Time : Thu Jan 19 10:04:56 2017
Raid Level : raid1
Array Size : 205760 (200.94 MiB 210.70 MB)
Used Dev Size : 205760 (200.94 MiB 210.70 MB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 14:06:44 2017
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:boot_efi (local to host localhost.localdomain)
UUID : 89085253:47b4f9e9:dd804932:ef766c2a
Events : 46

Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
- 0 0 2 removed
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$

// The following commands fail because the unplugged drive's partitions are no longer
// visible to the kernel, so mdadm doesn't see them as components of the array...
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --fail sda4
mdadm: sda4 does not appear to be a component of /dev/md123
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --remove sda4
mdadm: sda4 does not appear to be a component of /dev/md123
[acool@localhost ~]$
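// Side note: when a failed member is still visible to the kernel it is normally failed and
// removed by device name; for a drive that has vanished entirely, mdadm also accepts the
// keywords "failed" and "detached" in place of a device, e.g.:
//   sudo mdadm --manage /dev/md123 --remove detached
// In this case it isn't needed: --detail above already shows the third slot as "removed".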

// So we'll just plug in a replacement drive (into the same SATA port)
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 465.8G 0 part
└─sda5 8:5 0 4G 0 part
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$

// Inspect the partition tables
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x90909090

Device Boot Start End Blocks Id System
/dev/sda1 * 63 976772789 488386363+ a5 FreeBSD
Partition 1 does not start on physical sector boundary.
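// Note: the replacement disk isn't blank; it still carries an old MS-DOS label with a single
// FreeBSD partition, which the sgdisk copy below will overwrite with a fresh GPT.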
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

// Copy the GPT partition table from the surviving sdc to the new disk (sda), then randomize its GUIDs
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo sgdisk /dev/sdc -R /dev/sda
[sudo] password for acool:
The operation has completed successfully.
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo sgdisk -G /dev/sda
The operation has completed successfully.
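// For reference (the direction is easy to get backwards): with sgdisk the positional device is
// the source and -R names the target, and -G afterwards randomizes the disk and partition GUIDs
// so the copy doesn't clash with the original:
//   sudo sgdisk /dev/<good-disk> -R /dev/<new-disk>
//   sudo sgdisk -G /dev/<new-disk>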
[acool@localhost ~]$


// Check the partition tables again
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
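// sda now shows the same five Linux RAID partitions as sdb and sdc, so it's ready to rejoin the arrays.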
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
├─sda2 8:2 0 6.9G 0 part
├─sda3 8:3 0 1G 0 part
├─sda4 8:4 0 201M 0 part
└─sda5 8:5 0 12G 0 part
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$

// Now we're ready to add the new sda partitions to the md devices
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sda4
[sudo] password for acool:
mdadm: added /dev/sda4
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sda5
mdadm: added /dev/sda5
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sda3
mdadm: added /dev/sda3
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sda2
mdadm: added /dev/sda2
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sda1
mdadm: added /dev/sda1
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
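// Once the rebuild finishes, it's worth making sure the machine can still boot if another disk dies.
// A minimal sketch, assuming the stock CentOS shim path on the EFI partition (adjust the label and
// loader path to your install), plus checking that mdadm.conf still matches the arrays:
//   sudo efibootmgr -c -d /dev/sda -p 4 -L "CentOS (sda)" -l '\EFI\centos\shimx64.efi'
//   sudo mdadm --detail --scan    # compare against /etc/mdadm.conf and update it if needed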


// Monitor the rebuild progress
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sda4[3] sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sda5[3] sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/2] [UU_]
[==>..................] recovery = 13.7% (1730176/12582912) finish=6.2min speed=28829K/sec
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sda3[3] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/2] [UU_]
resync=DELAYED
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sda2[3] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/2] [UU_]
resync=DELAYED

md127 : active raid1 sda1[3] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/2] [UU_]
resync=DELAYED
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
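// Tip: to follow the rebuild without re-running cat, either of these works:
//   watch -n 5 cat /proc/mdstat
//   sudo mdadm --wait /dev/md124    # blocks until the resync/recovery finishes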
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Thu Jan 19 10:05:04 2017
Raid Level : raid1
Array Size : 12582912 (12.00 GiB 12.88 GB)
Used Dev Size : 12582912 (12.00 GiB 12.88 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 15:18:53 2017
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 51% complete

Name : localhost.localdomain:home (local to host localhost.localdomain)
UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566
Events : 2220

Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
3 8 5 2 spare rebuilding /dev/sda5
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md123
/dev/md123:
Version : 1.0
Creation Time : Thu Jan 19 10:04:56 2017
Raid Level : raid1
Array Size : 205760 (200.94 MiB 210.70 MB)
Used Dev Size : 205760 (200.94 MiB 210.70 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 15:14:54 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:boot_efi (local to host localhost.localdomain)
UUID : 89085253:47b4f9e9:dd804932:ef766c2a
Events : 66

Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
3 8 4 2 active sync /dev/sda4
[acool@localhost ~]$
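// Once every array reports three active devices ([UUU] in /proc/mdstat and "State : clean" in
// mdadm --detail), the replacement is complete.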

