CentOS 7: Replace a not-yet-failed RAID1 member with a new hard drive. (DRAFT)
Scenario:

Replace a not-yet-failed RAID1 member with a new hard drive (sdd).
(http://unix.stackexchange.com/questions/74924/how-to-safely-replace-a-not-yet-failed-disk-in-a-linux-raid5-array)
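The whole procedure below can be sketched up front. This is a dry run only (nothing is executed, the commands are just printed for review); the device names and the md-to-partition mapping are the ones on this particular box, so adjust them for yours:

```shell
# Dry-run sketch of the replacement procedure; prints the commands
# instead of running them. The md<->partition mapping below is taken
# from the lsblk output on this machine.
gen_replace_cmds() {   # gen_replace_cmds <old-disk> <new-disk>
  local old=$1 new=$2
  echo "sgdisk /dev/$old -R /dev/$new   # clone GPT table"
  echo "sgdisk -G /dev/$new             # randomize GUIDs"
  for pair in md127:1 md126:2 md125:3 md123:4 md124:5; do
    local md=${pair%%:*} p=${pair##*:}
    echo "mdadm --manage /dev/$md --add /dev/$new$p"
    echo "mdadm --manage /dev/$md --replace /dev/$old$p --with /dev/$new$p"
  done
}

gen_replace_cmds sda sdd
```

Review the printed commands, then run them one by one as root (as done in the transcript below).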



// versions
[acool@localhost ~]$
[acool@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[acool@localhost ~]$
[acool@localhost ~]$ mdadm --version
mdadm - v3.4 - 28th January 2016
[acool@localhost ~]$
[acool@localhost ~]$



//check devices
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
[sudo] password for acool:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sda2 8:2 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sda3 8:3 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sda4 8:4 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sda5 8:5 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdd 8:48 0 111.8G 0 disk
├─sdd1 8:49 0 111.8G 0 part
└─sdd5 8:53 0 4G 0 part
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

//check partitions

[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID

Disk /dev/sdd: 120.0 GB, 120034123776 bytes, 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x90909090

Device Boot Start End Blocks Id System
/dev/sdd1 * 63 234441647 117220792+ a5 FreeBSD
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


//copy the GPT table from sda to the new sdd drive, then randomize the GUIDs
//(the caution below is expected: sdd is smaller than sda, so the backup GPT
//header position copied from sda falls past the end of sdd and sgdisk
//relocates it to the new disk's last sector)
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo sgdisk /dev/sda -R /dev/sdd
Caution! Secondary header was placed beyond the disk's limits! Moving the
header, but other problems may occur!
The operation has completed successfully.
[acool@localhost ~]$ sudo sgdisk -G /dev/sdd
The operation has completed successfully.
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
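Since the target disk is smaller than the source, it's worth a quick pre-flight check that the copied partitions actually fit. A tiny sketch (helper name is mine), using the sector counts from the fdisk output above:

```shell
# Does the target disk have room for the last partition?
fits_on_disk() {   # fits_on_disk <last-partition-end-sector> <disk-size-in-sectors>
  [ "$2" -gt "$1" ] && echo yes || echo no
}

# Last RAID partition on sda ends at sector 67303423; sdd has 234441648 sectors.
fits_on_disk 67303423 234441648
```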

//verify partitons
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdd: 120.0 GB, 120034123776 bytes, 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sda2 8:2 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sda3 8:3 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sda4 8:4 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sda5 8:5 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdd 8:48 0 111.8G 0 disk
├─sdd1 8:49 0 12G 0 part
├─sdd2 8:50 0 6.9G 0 part
├─sdd3 8:51 0 1G 0 part
├─sdd4 8:52 0 201M 0 part
└─sdd5 8:53 0 12G 0 part
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$


// replace sda with sdd
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sdd4
[sudo] password for acool:
mdadm: added /dev/sdd4
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sdd5
mdadm: added /dev/sdd5
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sdd3
mdadm: added /dev/sdd3
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sdd2
mdadm: added /dev/sdd2
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sdd1
mdadm: added /dev/sdd1
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --replace /dev/sda4 --with /dev/sdd4
mdadm: Marked /dev/sda4 (device 2 in /dev/md123) for replacement
mdadm: Marked /dev/sdd4 in /dev/md123 as replacement for device 2
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --replace /dev/sda5 --with /dev/sdd5
mdadm: Marked /dev/sda5 (device 2 in /dev/md124) for replacement
mdadm: Marked /dev/sdd5 in /dev/md124 as replacement for device 2
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --replace /dev/sda3 --with /dev/sdd3
mdadm: Marked /dev/sda3 (device 2 in /dev/md125) for replacement
mdadm: Marked /dev/sdd3 in /dev/md125 as replacement for device 2
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --replace /dev/sda2 --with /dev/sdd2
mdadm: Marked /dev/sda2 (device 2 in /dev/md126) for replacement
mdadm: Marked /dev/sdd2 in /dev/md126 as replacement for device 2
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --replace /dev/sda1 --with /dev/sdd1
mdadm: Marked /dev/sda1 (device 2 in /dev/md127) for replacement
mdadm: Marked /dev/sdd1 in /dev/md127 as replacement for device 2
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


// monitor progress
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdd4[4] sda4[3](F) sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdd5[4] sda5[3](F) sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdd3[4](R) sda3[3] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/3] [UUU]
resync=DELAYED
bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sdd2[4](R) sda2[3] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/3] [UUU]
[=>...................] recovery = 5.1% (370560/7208960) finish=3.6min speed=30880K/sec

md127 : active raid1 sdd1[4](R) sda1[3] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/3] [UUU]
resync=DELAYED
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md123
/dev/md123:
Version : 1.0
Creation Time : Thu Jan 19 07:04:56 2017
Raid Level : raid1
Array Size : 205760 (200.94 MiB 210.70 MB)
Used Dev Size : 205760 (200.94 MiB 210.70 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 15:58:24 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0

Name : localhost.localdomain:boot_efi (local to host localhost.localdomain)
UUID : 89085253:47b4f9e9:dd804932:ef766c2a
Events : 70

Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
4 8 52 2 active sync /dev/sdd4

3 8 4 - faulty /dev/sda4
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md126
/dev/md126:
Version : 1.2
Creation Time : Thu Jan 19 07:04:48 2017
Raid Level : raid1
Array Size : 7208960 (6.88 GiB 7.38 GB)
Used Dev Size : 7208960 (6.88 GiB 7.38 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sun Jan 22 16:06:59 2017
State : clean, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 13% complete

Name : localhost.localdomain:swap (local to host localhost.localdomain)
UUID : 0701fcab:0d6eadef:98a73bd8:45b1bd0b
Events : 64

Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 34 1 active sync /dev/sdc2
3 8 2 2 active sync /dev/sda2
4 8 50 2 spare rebuilding /dev/sdd2
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Thu Jan 19 07:05:04 2017
Raid Level : raid1
Array Size : 12582912 (12.00 GiB 12.88 GB)
Used Dev Size : 12582912 (12.00 GiB 12.88 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 16:11:37 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0

Name : localhost.localdomain:home (local to host localhost.localdomain)
UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566
Events : 2393

Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
4 8 53 2 active sync /dev/sdd5

3 8 5 - faulty /dev/sda5
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
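When several arrays are rebuilding, repeatedly eyeballing /proc/mdstat gets tedious. A small sketch (helper name is mine) that pulls out just the arrays whose member-status string shows a missing leg, e.g. [UU_] but not [UUU]:

```shell
# List arrays from an mdstat-format file whose status string
# contains an underscore (a missing or rebuilding member).
degraded_arrays() {
  awk '/^md/ {name=$1}
       /blocks/ && /\[[U_]*_+[U_]*\]/ {print name}' "$1"
}

# Usage with a captured snapshot:
cat >/tmp/mdstat.sample <<'EOF'
md123 : active raid1 sdd4[4] sdc4[1] sdb4[0]
      205760 blocks super 1.0 [3/3] [UUU]
md126 : active raid1 sdb2[0] sdc2[1]
      7208960 blocks super 1.2 [3/2] [UU_]
EOF
degraded_arrays /tmp/mdstat.sample   # -> md126
```

On a live box you'd point it at /proc/mdstat itself, or just use `watch cat /proc/mdstat`.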



//remove sda partitions from md devices
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdd4[4] sda4[3](F) sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdd5[4] sda5[3](F) sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdd3[4] sda3[3](F) sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sdd2[4] sda2[3](F) sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/3] [UUU]

md127 : active raid1 sdd1[4] sda1[3](F) sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --remove /dev/sda4
mdadm: hot removed /dev/sda4 from /dev/md123
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --remove /dev/sda5
mdadm: hot removed /dev/sda5 from /dev/md124
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --remove /dev/sda3
mdadm: hot removed /dev/sda3 from /dev/md125
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --remove /dev/sda2
mdadm: hot removed /dev/sda2 from /dev/md126
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --remove /dev/sda1
mdadm: hot removed /dev/sda1 from /dev/md127
[acool@localhost ~]$
[acool@localhost ~]$
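After the hot-remove, the arrays no longer reference sda, but the md superblocks are still written on its partitions, so the disk could be auto-assembled if it ever gets plugged into another machine. A dry-run sketch (helper name is mine) of wiping them with `mdadm --zero-superblock`; print, review, then run as root:

```shell
# Dry run: print the commands that would wipe the md superblocks
# from a retired disk's partitions.
gen_wipe_cmds() {   # gen_wipe_cmds <disk> <partition-numbers...>
  local disk=$1; shift
  for p in "$@"; do
    echo "mdadm --zero-superblock /dev/$disk$p"
  done
}

gen_wipe_cmds sda 1 2 3 4 5
```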



//verify
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Thu Jan 19 07:05:04 2017
Raid Level : raid1
Array Size : 12582912 (12.00 GiB 12.88 GB)
Used Dev Size : 12582912 (12.00 GiB 12.88 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 16:22:22 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:home (local to host localhost.localdomain)
UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566
Events : 2394

Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
4 8 53 2 active sync /dev/sdd5
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdd4[4] sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdd5[4] sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdd3[4] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sdd2[4] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/3] [UUU]

md127 : active raid1 sdd1[4] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
├─sda2 8:2 0 6.9G 0 part
├─sda3 8:3 0 1G 0 part
├─sda4 8:4 0 201M 0 part
└─sda5 8:5 0 12G 0 part
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdd 8:48 0 111.8G 0 disk
├─sdd1 8:49 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdd2 8:50 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdd3 8:51 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdd4 8:52 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdd5 8:53 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


// interesting fact: after shutting down, physically removing sda,
// and restarting, sdd became sda

[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
[sudo] password for acool:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 111.8G 0 disk
├─sda1 8:1 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sda2 8:2 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sda3 8:3 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sda4 8:4 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sda5 8:5 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


CentOS 7: Replace a failed drive in a 3-disk RAID1 array. (DRAFT)
Scenario: Replace a failed drive in a 3-disk RAID1 array.

[acool@localhost ~]$
[acool@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[acool@localhost ~]$
[acool@localhost ~]$ mdadm --version
mdadm - v3.4 - 28th January 2016
[acool@localhost ~]$
[acool@localhost ~]$

//Inspect (I manually disconnected power and SATA cables on sda to simulate a hw failure)
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/2] [UU_]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/2] [UU_]

md127 : active raid1 sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md123
[sudo] password for acool:
/dev/md123:
Version : 1.0
Creation Time : Thu Jan 19 10:04:56 2017
Raid Level : raid1
Array Size : 205760 (200.94 MiB 210.70 MB)
Used Dev Size : 205760 (200.94 MiB 210.70 MB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 14:06:44 2017
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:boot_efi (local to host localhost.localdomain)
UUID : 89085253:47b4f9e9:dd804932:ef766c2a
Events : 46

Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
- 0 0 2 removed
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$

// the following messages appear because the kernel has already dropped
// the missing drive, so mdadm can no longer address its members by name
// (the slot already shows as "removed" in the --detail output above)...
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --fail sda4
mdadm: sda4 does not appear to be a component of /dev/md123
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --remove sda4
mdadm: sda4 does not appear to be a component of /dev/md123
[acool@localhost ~]$
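For members that vanished like this, mdadm's --remove also accepts the keywords `failed` and `detached` (per the mdadm man page), which is likely the cleaner way to clear stale slots than naming a device node that no longer exists. A dry-run sketch (helper name is mine):

```shell
# Dry run: print --remove commands using mdadm's 'detached' keyword,
# which drops any member whose device node is gone.
gen_detach_cmds() {
  for md in "$@"; do
    echo "mdadm --manage /dev/$md --remove detached"
  done
}

gen_detach_cmds md123 md124 md125 md126 md127
```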

//...so we'll just plug a new hard drive into the same SATA port
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 465.8G 0 part
└─sda5 8:5 0 4G 0 part
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

//inspect partition tables
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x90909090

Device Boot Start End Blocks Id System
/dev/sda1 * 63 976772789 488386363+ a5 FreeBSD
Partition 1 does not start on physical sector boundary.
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

//copy GPT table to the new disk (sda) and randomize GUIDs
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo sgdisk /dev/sdc -R /dev/sda
[sudo] password for acool:
The operation has completed successfully.
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo sgdisk -G /dev/sda
The operation has completed successfully.
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


//check the partition tables again
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
├─sda2 8:2 0 6.9G 0 part
├─sda3 8:3 0 1G 0 part
├─sda4 8:4 0 201M 0 part
└─sda5 8:5 0 12G 0 part
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

// now we're ready to add the new partitions in sda to the md devices
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sda4
[sudo] password for acool:
mdadm: added /dev/sda4
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sda5
mdadm: added /dev/sda5
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sda3
mdadm: added /dev/sda3
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sda2
mdadm: added /dev/sda2
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sda1
mdadm: added /dev/sda1
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


// monitor progress
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sda4[3] sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sda5[3] sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/2] [UU_]
[==>..................] recovery = 13.7% (1730176/12582912) finish=6.2min speed=28829K/sec
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sda3[3] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/2] [UU_]
resync=DELAYED
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sda2[3] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/2] [UU_]
resync=DELAYED

md127 : active raid1 sda1[3] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/2] [UU_]
resync=DELAYED
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Thu Jan 19 10:05:04 2017
Raid Level : raid1
Array Size : 12582912 (12.00 GiB 12.88 GB)
Used Dev Size : 12582912 (12.00 GiB 12.88 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 15:18:53 2017
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 51% complete

Name : localhost.localdomain:home (local to host localhost.localdomain)
UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566
Events : 2220

Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
3 8 5 2 spare rebuilding /dev/sda5
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md123
/dev/md123:
Version : 1.0
Creation Time : Thu Jan 19 10:04:56 2017
Raid Level : raid1
Array Size : 205760 (200.94 MiB 210.70 MB)
Used Dev Size : 205760 (200.94 MiB 210.70 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 15:14:54 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:boot_efi (local to host localhost.localdomain)
UUID : 89085253:47b4f9e9:dd804932:ef766c2a
Events : 66

Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
3 8 4 2 active sync /dev/sda4
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


CentOS 7: Adding another member to an existing RAID1 array.
Scenario: I have a RAID1 array with two disks and want to add another drive to make it a 3-disk RAID1 array.
[acool@localhost ~]$ 
[acool@localhost ~]$
[acool@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[acool@localhost ~]$
[acool@localhost ~]$ mdadm --version
mdadm - v3.4 - 28th January 2016
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ # new HD was plugged into the next available SATA socket
[acool@localhost ~]$
[acool@localhost ~]$ # copy GPT table to new disk (sdc)
[acool@localhost ~]$ sudo sgdisk /dev/sda -R /dev/sdc
[acool@localhost ~]$
[acool@localhost ~]$ # randomize GUIDs
[acool@localhost ~]$ sudo sgdisk -G /dev/sdc
[acool@localhost ~]$
[acool@localhost ~]$ # reboot

[acool@localhost ~]$
[acool@localhost ~]$ # check partitions (sdc should match the others)
[acool@localhost ~]$
[acool@localhost ~]$ # add partitions to corresponding md devices
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sdc5
mdadm: added /dev/sdc5
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sdc4
mdadm: added /dev/sdc4
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sdc3
mdadm: added /dev/sdc3
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sdc2
mdadm: added /dev/sdc2
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sdc1
mdadm: added /dev/sdc1
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ # grow arrays from 2 to 3 raid devices
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md123
raid_disks for /dev/md123 set to 3
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md124
raid_disks for /dev/md124 set to 3
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md125
raid_disks for /dev/md125 set to 3
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md126
raid_disks for /dev/md126 set to 3
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md127
raid_disks for /dev/md127 set to 3
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ # monitor progress
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sda5[2] sdb5[0] sdc5[1]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md124 : active raid1 sda4[2] sdb4[0] sdc4[1]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md125 : active raid1 sda1[2] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/2] [UU_]
[======>..............] recovery = 30.1% (3792000/12582912) finish=4.4min speed=32926K/sec
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sda2[2] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/3] [UUU]

md127 : active raid1 sda3[2] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/2] [UU_]
resync=DELAYED
bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ # all md devices should have 3 Us when sync is finished.
[acool@localhost ~]$
[acool@localhost ~]$
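The "3 Us" check above can be scripted. Here is a minimal Python sketch that flags any array whose member string still contains a `_`; the parsing rules are an assumption based only on the /proc/mdstat output shown in this post:

```python
import re

def degraded_arrays(mdstat_text):
    """Return names of md arrays whose member string is not all 'U'."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r'^(md\d+)\s*:', line)
        if m:
            current = m.group(1)
        # status looks like "[3/3] [UUU]" or "[3/2] [UU_]"
        s = re.search(r'\[\d+/\d+\]\s*\[([U_]+)\]', line)
        if s and current and '_' in s.group(1):
            degraded.append(current)
    return degraded

sample = """\
md123 : active raid1 sda5[2] sdb5[0] sdc5[1]
      12582912 blocks super 1.2 [3/3] [UUU]

md125 : active raid1 sda1[2] sdb1[0] sdc1[1]
      12582912 blocks super 1.2 [3/2] [UU_]
"""
print(degraded_arrays(sample))  # ['md125']
```

When the sync finishes, the function returns an empty list for the real /proc/mdstat contents.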




Python: Tricks, Fun & More. 
Parsing a json string:
>>> import json
>>>
>>> jsonString = """[
... {"firstName":"John", "lastName":"Doe"},
... {"firstName":"Anna", "lastName":"Smith"},
... {"firstName":"Angel", "lastName":"Cool"}
... ]"""
>>>
>>>
>>> print jsonString
[
{"firstName":"John", "lastName":"Doe"},
{"firstName":"Anna", "lastName":"Smith"},
{"firstName":"Angel", "lastName":"Cool"}
]
>>>
>>> data = json.loads(jsonString)
>>>
>>> print data[2]['firstName']
Angel
>>> print data[2]['lastName']
Cool
>>>
>>> type(data)
<type 'list'>
>>>
>>>
>>> for person in data:
... print person['firstName'] + ' ' + person['lastName']
...
John Doe
Anna Smith
Angel Cool
>>>
>>> data.reverse()
>>>
>>> for person in data:
... print person['firstName'] + ' ' + person['lastName']
...
Angel Cool
Anna Smith
John Doe
>>>
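The parsed list can also be re-sorted and serialized back to JSON text. A small sketch in Python 3 syntax (the transcript above is Python 2):

```python
import json

json_string = """[
  {"firstName": "John",  "lastName": "Doe"},
  {"firstName": "Anna",  "lastName": "Smith"},
  {"firstName": "Angel", "lastName": "Cool"}
]"""

people = json.loads(json_string)

# sort the list of dicts by lastName
by_last = sorted(people, key=lambda p: p['lastName'])
print([p['lastName'] for p in by_last])  # ['Cool', 'Doe', 'Smith']

# and serialize it back to JSON text
print(json.dumps(by_last, indent=2))
```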

Reversing strings:
>>> print 'Donald Trump'[::-1]
pmurT dlanoD
>>> print 'lion oil'[::-1]
lio noil
>>> print 'A car, a man, a maraca.'[::-1]
.acaram a ,nam a ,rac A
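The `[::-1]` slice above is most of a palindrome test already. A short Python 3 sketch that also normalizes case and strips punctuation before comparing:

```python
def is_palindrome(text):
    # keep only letters/digits, lowercased, so case and punctuation don't matter
    cleaned = ''.join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]

print(is_palindrome('lion oil'))                 # True
print(is_palindrome('A car, a man, a maraca.'))  # True
print(is_palindrome('Donald Trump'))             # False
```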


Python: Getting started with virtualenv 
# install virtualenv
[aesteban@localhost ~]$ sudo pip install virtualenv

# create virtual environment
[aesteban@localhost ~]$ mkdir virt_env
[aesteban@localhost ~]$ virtualenv virt_env/virt1 --no-site-packages
New python executable in /home/aesteban/virt_env/virt1/bin/python
Installing setuptools, pip, wheel...done.
[aesteban@localhost ~]$

# load environment
[aesteban@localhost ~]$ source virt_env/virt1/bin/activate
(virt1) [aesteban@localhost ~]$

# deactivate environment
(virt1) [aesteban@localhost ~]$ deactivate
[aesteban@localhost ~]$
[aesteban@localhost ~]$

# listing installed packages with yolk
[aesteban@localhost ~]$ sudo pip install yolk
[aesteban@localhost ~]$ yolk -l

# installing yolk in our virtual environment
[aesteban@localhost ~]$ source virt_env/virt1/bin/activate
(virt1) [aesteban@localhost ~]$ pip install yolk
(virt1) [aesteban@localhost ~]$ yolk -l


# let's create another environment
(virt1) [aesteban@localhost ~]$ deactivate
[aesteban@localhost ~]$ virtualenv virt_env/virt2 --no-site-packages

# let's switch back to virt1 and install Pylons and SqlAlchemy
[aesteban@localhost ~]$ source virt_env/virt1/bin/activate
(virt1) [aesteban@localhost ~]$ pip install Pylons
...
(virt1) [aesteban@localhost ~]$ pip install SqlAlchemy

# compare virt1 and virt2 using: yolk -l
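A script can also detect whether it is running inside one of these environments. A minimal sketch; `sys.base_prefix` exists in Python 3 venvs, while classic virtualenv (as used above) sets `sys.real_prefix` instead:

```python
import sys

def in_virtualenv():
    """True when the interpreter is running inside a venv/virtualenv."""
    # classic virtualenv sets sys.real_prefix; Python 3 venv makes
    # sys.prefix differ from sys.base_prefix
    return getattr(sys, 'real_prefix', None) is not None \
        or sys.base_prefix != sys.prefix

print(in_virtualenv())
```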


Big shout-out to:
http://www.simononsoftware.com/virtualenv-tutorial/

Thank you guys!

PHP - Detecting palindromes example - not production grade. 
<?php

// Not production grade: case-sensitive and blind to punctuation.
function isPalindrome($string = null)
{
    if ($string === null) {
        return 0;
    }

    if (is_numeric($string)) {
        return 0;
    }

    // strip whitespace so phrases like 'lion oil' qualify
    $string = preg_replace('/\s+/', '', $string);

    $reverse = strrev($string);

    return ($reverse == $string) ? 1 : 0;
}

echo isPalindrome('lion oil'); // prints 1


Linux: Miscellaneous Commands 
# Finding users in a specific group
[root@storage1 ~]# getent group wheel
wheel:x:10:dsibitzky

# Sort directory content by size
du -hs * | sort -h

# Grep search showing several surrounding lines (4 before and 2 after)
grep -nir -B 4 -A 2 "Angel Cool" /home

# Grep search showing several surrounding lines (3 before and 3 after)
grep -nir -C 3 "Angel Cool" /home

# python package manager installation PIP (fedora 20)
[aesteban@localhost ~]$ sudo yum install python-pip.noarch

# example of using PIP
[aesteban@localhost ~]$ pip search memcache

# Test Disk I/O Performance With dd Command
[aesteban@localhost ~]$ dd if=/dev/zero of=test.img bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 4.90908 s, 219 MB/s
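The rate dd reports is simply bytes copied divided by elapsed seconds; a quick Python check of the numbers above (note dd prints decimal MB/s, not MiB/s):

```python
# sanity-check dd's reported rate: bytes copied / seconds elapsed
bytes_copied = 1073741824  # from "1073741824 bytes"
seconds = 4.90908          # from "copied, 4.90908 s"

mb_per_s = bytes_copied / seconds / 1e6  # decimal megabytes per second
print(round(mb_per_s))  # 219, matching dd's "219 MB/s"
```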



Passwordless SSH login - SSH key setup - copy SSH key to remote server
#generate new key if needed
[aesteban@localhost ~]$ ssh-keygen -t rsa

#copy key to remote server
[aesteban@localhost ~]$ ssh-copy-id root@storage3.example.com

#alternative way to copy key to remote server
[aesteban@localhost ~]$ cat ~/.ssh/id_rsa.pub | ssh root@storage3.example.com "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

# You should now be able to login to remote server without a password


Generate public key from private ssh key
[acool@localhost ~]$ ssh-keygen -f id_rsa_codedeployment -y > id_rsa_codedeployment.pub



Vim Little Corner
// commenting out a block in vim using visual block mode
1.- ctrl + v
2.- use the down arrow to select the lines
3.- shift + i (capital I), then type the comment marker (// or #)
4.- press ESC and give it a second



-----------------------------------------------------------------------------

This is an attempt to create a cleaner version of the Linux-related stuff in:

http://angelcool.net/sphpblog/comments. ... 420-124152

oooo yea!! (òÓ,)_\,,/



GPG: Tasks 101 
Common tasks:
[acool@localhost ~]$ gpg --version
...
[acool@localhost ~]$ gpg --gen-key
...
# make entropy faster
[acool@localhost ~]$ sudo rngd -r /dev/urandom
...
[acool@localhost ~]$ gpg --list-keys
...
# export public key
[acool@localhost ~]$ gpg --export --armor
...
[acool@localhost ~]$ gpg --export-secret-key --armor
...
# symmetric-encrypting a file
[acool@localhost ~]$ gpg --symmetric --cipher-algo AES256 top-secret.txt
...
[acool@localhost ~]$ gpg --interactive --edit-key "gpg-id-email"
gpg> showpref
...
[acool@localhost ~]$ gpg --fingerprint "gpg-id-email"
...
[acool@localhost ~]$ gpg --delete-keys "gpg-id-email"

Pass
[acool@localhost ~]$ pass init "gpg-id-email"
...



12/19/2020 - Encrypt and decrypt existing file.

[acool@localhost ~]$ echo "Confidential information goes here." > secret-message.txt
[acool@localhost ~]$
[acool@localhost ~]$ # encrypt it!
[acool@localhost ~]$ gpg --output secret-message.txt.gpg --recipient acool.pgp@10-network.net --armor --encrypt secret-message.txt
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ cat secret-message.txt.gpg
-----BEGIN PGP MESSAGE-----

hQIMA2ZQHKXHI4G5AQ//f1JwJREZGa904w5Ev1svX/dbyydDb7FBowjnVG2Ne3eO
BZS1VqV+Zjq9j0HgQ6W86j4bYOwwqgW0YYtixBRr3+TFBoN8bBSxaMTLo6w+MHEL
iPJo1FIyDm5gYyljQBE4CLISJan9wsIIuSX4RcH+yNV7kYlK/eWJDstlX/1GW7bA
P6gcNpGNtliFwZZzsC88+qCOg3kX9iEBPcAmRUNg1Tk1s1AHX3MD5pO/dV8HIJ8N
Byti/jyd+X/DbJT6r8rDwwsY320vwNWU4QrLnsoRc34ce2q3V6avXKX9Fj+94xXo
5c00NzI55MVE9jdTDakD2H4xrI40trFg6pOB2JRrxeS7AUg4Ae+tRidAOP0qGgM9
jsOY+ToPFcetMwVNZ+Tw4d7VCN/4/vimHeZ6DRAo6kfqVTkJVVtkYvyRr6r0UxvO
Bdk0p7rmrhMu1jHAcwxSGSZZLKRPYe01W/WIbazF5Q3Mb6fmg8K6sqoRaFdiY036
XIx+QjjjG2PoLJlZ31LwSyQyFhd4hGIk0mbXmXFwY+Vo9cpToaTIcahqAY8wdieM
fwP/w5w2jbhNYYte69e+YC9C38uYMihX47IEtUGYhVUQYRRH085cYL/0Rc68uMDd
oiXIynlT7fzbSMwD6aNTYe3dRwKpd+ZJ8ri6FO9E0bbA3E3feJseLktpmRZxETTS
cQF9fN7hl7fFK7dU/KAdpNJz0ihRV9UofEOrEM1svtcPViLBF0oYNMv5qbtLgO0y
NU4mCjZHH+F00AouErS5VrDRC6/D748t0nNOlTLAp/0MONRBSKDqzDEM8pPB1AIX
jrDesP+iYqNgBnZe9qRVSXQ3
=65wR
-----END PGP MESSAGE-----
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ #decrypt it!
[acool@localhost ~]$ gpg --output output.txt --decrypt secret-message.txt.gpg
[acool@localhost ~]$
[acool@localhost ~]$ # see decrypted information
[acool@localhost ~]$ cat output.txt
Confidential information goes here.
[acool@localhost ~]$



Cisco: Aironet 1242G Autonomous AP Configuration 
Cisco 1242G Access Point Configuration (AIR-LAP1242G-A-K9), Image: c1240-k9w7-mx.124-21a.JA1 Autonomous AP
/*reset everything*/
ap#write erase
ap#reload
ap>en
Password: Cisco //default password
ap#

/*configure AP's ip address*/
ap#config t
ap(config)#interface BVI1
ap(config-if)#ip address 192.168.0.100 255.255.255.0
ap(config-if)#no shut

/*other stuff*/
ap(config)#ip name-server 4.2.2.2 8.8.8.8
ap(config)#ip default-gateway 192.168.0.1
ap(config)#ip domain name example.com

/* configure ssid */
ap(config)#dot11 ssid 1242G
ap(config-ssid)#authentication open
ap(config-ssid)#authentication key-management wpa version 2
ap(config-ssid)#wpa-psk ascii 123456789 // psk
ap(config-ssid)#guest-mode //broadcasts ssid

/* associate ssid 1242G to the radio*/
ap(config)#interface dot11radio 0
ap(config-if)#encryption mode ciphers aes-ccm
ap(config-if)#ssid 1242G

/*ssh config*/
ap(config)#crypto key generate rsa // choose 1024
ap(config)#aaa new-model
ap(config)#aaa authentication login default local //use local database
ap(config)#username admin password admin

/*defaults http password*/
admin/Cisco

PHP: Poor man's alternative to array_chunk 
Useful when inserting a large data set into a database in chunks:
<?php
$numbers = array(
    '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11',
);

$output = array();

foreach ($numbers as $i => $number) {
    $output[] = $number;

    // flush every 3 items, or at the last key
    if (($i + 1) % 3 == 0 || $i == count($numbers) - 1) {
        var_dump($output);
        $output = array();
    }
}

The above outputs:
array(3) {
[0]=>
string(1) "1"
[1]=>
string(1) "2"
[2]=>
string(1) "3"
}
array(3) {
[0]=>
string(1) "4"
[1]=>
string(1) "5"
[2]=>
string(1) "6"
}
array(3) {
[0]=>
string(1) "7"
[1]=>
string(1) "8"
[2]=>
string(1) "9"
}
array(2) {
[0]=>
string(2) "10"
[1]=>
string(2) "11"
}

http://stackoverflow.com/questions/3549 ... -iteration
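The same idea maps naturally onto a Python generator; a minimal sketch producing the same groups of 3 as the PHP loop above:

```python
def chunks(items, size):
    """Yield successive fixed-size slices; the last one may be shorter."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

numbers = [str(n) for n in range(1, 12)]  # '1' .. '11'
for chunk in chunks(numbers, 3):
    print(chunk)
# ['1', '2', '3'] ... ['10', '11']
```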

Thank you very much for your time !!!!!!




2024 By Angel Cool