Ansible: Tasks 101 
More content will be added to this post in the future, stay tuned if you want...

-Angel

[acool@hydra2 ansible]$ # running a playbook
[acool@hydra2 ansible]$ ansible-playbook dev-hosts-playbook.yml -i dev-hosts.txt
[acool@hydra2 ansible]$
[acool@hydra2 ansible]$ # content of dev-hosts.txt
[acool@hydra2 ansible]$ cat dev-hosts.txt
acool2.10-network.net ansible_user=root
userA_2.10-network.net ansible_user=root
userB_2.10-network.net ansible_user=root
userC_2.10-network.net ansible_user=root


[acool@hydra2 ansible]$
[acool@hydra2 ansible]$ # playbook content
[acool@hydra2 ansible]$ cat dev-hosts-playbook.yml
---
- hosts: all
  tasks:
    - name: Installing EPEL repo.
      yum: pkg=epel-release.noarch state=installed
    - name: Installing RPMs
      yum: pkg={{item}} state=installed
      with_items:
        - centos-release-scl
        - centos-release-scl-rh
        - rh-php70
        - rh-php70-php-mysqlnd
        - rh-php70-php-bcmath
        - rh-php70-php-gd
        - rh-php70-php-soap
        - rh-php70-php-mbstring
        - rh-php70-php-fpm
        - sclo-php70-php-pecl-memcached
        - git
        #- rabbitmq-server
        - openvpn
        - nginx
        - composer
        - memcached
        - npm
        - http-parser
    - name: Open firewall ports
      firewalld:
        port: "{{item.port}}/tcp"
        zone: public
        permanent: true
        state: enabled
        immediate: yes
      with_items:
        - { port: '80' }
        - { port: '443' }
    - name: Starting services.
      action: service name={{item}} state=started enabled=yes
      with_items:
        - nginx
        - memcached
        - rh-php70-php-fpm
    - name: Enabling php 7
      copy:
        src: /home/acool/ansible/files/dev-vms/rh-php70.sh
        dest: /etc/profile.d/rh-php70.sh
    - name: Setting SELINUX to permissive.
      selinux:
        policy: targeted
        state: permissive
    - name: Copying nginx config files
      template:
        src: /home/acool/ansible/templates/dev-vms/10-network-net.conf
        dest: /etc/nginx/conf.d/10-network-net.conf
    - name: Installing gulp globally.
      command: npm install gulp -g
[acool@hydra2 ansible]$
[acool@hydra2 ansible]$
[acool@hydra2 ansible]$
[acool@hydra2 ansible]$
[acool@hydra2 ansible]$
[acool@hydra2 ansible]$
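A side note on syntax: from Ansible 2.5 onward, with_items is superseded by loop (the old form keeps working through a compatibility layer). This box runs 2.3, so the playbook stays as-is; the sketch below, stored in a shell variable and printed for reference, shows how the RPM task could be rewritten (package list shortened):

```shell
# Sketch only: the "Installing RPMs" task rewritten for Ansible >= 2.5,
# where "loop" replaces "with_items". Stored in a variable and printed.
loop_task='- name: Installing RPMs
  yum:
    name: "{{ item }}"
    state: installed
  loop:
    - git
    - nginx
    - memcached'
printf '%s\n' "$loop_task"
```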
[acool@hydra2 ansible]$ # Ad-Hoc commands, -i stands for inventory and -l for limit
[acool@hydra2 ansible]$ ansible all -i dev-hosts.txt -a 'free -h' -l acool2.10-network.net
acool2.10-network.net | SUCCESS | rc=0 >>
              total        used        free      shared  buff/cache   available
Mem:           1.8G        142M        1.5G        8.6M        165M        1.5G
Swap:          2.0G          0B        2.0G

[acool@hydra2 ansible]$
[acool@hydra2 ansible]$
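A few more ad-hoc forms worth keeping around (-m picks the module, -a passes its arguments, -b escalates with become). These assume the same dev-hosts.txt inventory and are collected as a cheat sheet rather than executed:

```shell
# Cheat sheet of ad-hoc invocations against the dev-hosts.txt inventory.
# Collected into a variable and printed; nothing here talks to the hosts.
adhoc_examples=$(cat <<'CMDS'
ansible all -i dev-hosts.txt -m ping                             # connectivity check
ansible all -i dev-hosts.txt -m setup -l acool2.10-network.net   # gather facts
ansible all -i dev-hosts.txt -b -m yum -a 'name=htop state=present'
CMDS
)
printf '%s\n' "$adhoc_examples"
```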



9/1/2018 - more stuff :)
[aesteban@localhost ansible]$ ## adding a new role
[aesteban@localhost ansible]$ ansible-galaxy init roles/dev --offline
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$ ll roles/
total 16
drwxrwxr-x 8 aesteban aesteban 4096 Sep 1 13:04 app
drwxrwxr-x 8 aesteban aesteban 4096 Sep 1 11:17 cms
drwxrwxr-x 8 aesteban aesteban 4096 Sep 1 11:10 common
drwxrwxr-x 8 aesteban aesteban 4096 Sep 1 13:04 dev
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$ ls -l
total 32
-rw-rw-r-- 1 aesteban aesteban 62 Sep 1 12:43 app-machines.yml
-rw-rw-r-- 1 aesteban aesteban 62 Sep 1 12:43 cms-machines.yml
-rw-rw-r-- 1 aesteban aesteban 70 Sep 1 16:24 dev-machines.yml
drwxrwxr-x 2 aesteban aesteban 4096 Sep 1 12:48 files
-rw-rw-r-- 1 aesteban aesteban 65 Sep 1 16:19 hosts.txt
drwxrwxr-x 6 aesteban aesteban 4096 Sep 1 13:04 roles
drwxrwxr-x 2 aesteban aesteban 4096 Sep 1 11:10 templates
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$ ansible-playbook -i hosts.txt dev-machines.yml --check --limit "dev3.example.com"
...
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$ cat hosts.txt
[devmachines]
dev3.example.com

[cmsmachines]

[appmachines]
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$ ansible --version
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.13 (default, May 10 2017, 20:04:36) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$
[aesteban@localhost ansible]$ ansible-playbook -i hosts.txt dev-machines.yml --syntax-check

playbook: dev-machines.yml
[aesteban@localhost ansible]$


LVM - Logical Volume Manager Commands 101 
[aesteban@localhost ~]$  # PVS, VGS and LVS commands
[aesteban@localhost ~]$
[aesteban@localhost ~]$ sudo pvs
  PV         VG     Fmt  Attr PSize   PFree
  /dev/sda2  fedora lvm2 a--  237.98g 4.00m
[aesteban@localhost ~]$
[aesteban@localhost ~]$ sudo vgs
  VG     #PV #LV #SN Attr   VSize   VFree
  fedora   1   3   0 wz--n- 237.98g 4.00m
[aesteban@localhost ~]$
[aesteban@localhost ~]$ sudo lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  home fedora -wi-ao---- 180.17g
  root fedora -wi-ao----  50.00g
  swap fedora -wi-ao----   7.81g
[aesteban@localhost ~]$
[aesteban@localhost ~]$

[aesteban@localhost ~]$ # LVSCAN and LVDISPLAY commands
[aesteban@localhost ~]$
[aesteban@localhost ~]$ sudo lvscan
ACTIVE '/dev/fedora/swap' [7.81 GiB] inherit
ACTIVE '/dev/fedora/home' [180.17 GiB] inherit
ACTIVE '/dev/fedora/root' [50.00 GiB] inherit
[aesteban@localhost ~]$
[aesteban@localhost ~]$ sudo lvdisplay /dev/fedora/home
  --- Logical volume ---
  LV Path                /dev/fedora/home
  LV Name                home
  VG Name                fedora
  LV UUID                V6WFgj-PA3l-TYA7-fZ2J-IC0z-3yL4-4Rttov
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2016-10-19 10:59:08 -0700
  LV Status              available
  # open                 1
  LV Size                180.17 GiB
  Current LE             46123
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

[aesteban@localhost ~]$
[aesteban@localhost ~]$

[aesteban@localhost ~]$ 
[aesteban@localhost ~]$ sudo lvm
lvm>
lvm>
lvm>
lvm> lvscan
ACTIVE '/dev/fedora/swap' [7.81 GiB] inherit
ACTIVE '/dev/fedora/home' [180.17 GiB] inherit
ACTIVE '/dev/fedora/root' [50.00 GiB] inherit
lvm>
lvm>


Physical volume commands:
pvcreate
pvmove
pvresize
...etc.

Volume group commands:
vgcreate
vgextend
vgconvert
vgreduce
...etc.

Logical volume commands:
lvmcache
lvmthin
lvconvert
lvchange
lvextend
lvreduce
lvremove
lvrename
...etc.

See the new kid on the block (as of 2017): SSM, the System Storage Manager.
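To tie those command families together, here is a minimal end-to-end sketch: partition to PV, PV to VG, VG to LV, LV to filesystem. The device and VG/LV names are made up, and since every command is destructive the helper only prints the plan by default (set RUN=1 to execute for real):

```shell
# Sketch: PV -> VG -> LV -> filesystem, with hypothetical names (/dev/sdX1,
# datavg, data). Prints the plan unless RUN=1, since these commands destroy data.
run() { if [ "${RUN:-0}" = 1 ]; then sudo "$@"; else echo "+ $*"; fi; }

run pvcreate /dev/sdX1               # initialize the partition as a physical volume
run vgcreate datavg /dev/sdX1        # build a volume group on top of it
run lvcreate -n data -L 10G datavg   # carve out a 10G logical volume
run mkfs.xfs /dev/datavg/data        # put a filesystem on it
```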


CentOS 7: Recovering data from a RAID 1 member.
Scenario: we have one surviving RAID 1 member; the other members are missing. We will re-assemble the MD array from it and mount it to recover the data.


[acool@localhost sdX]$ #connect the surviving HD to any available SATA port,
[acool@localhost sdX]$ #copy the partition from the surviving HD (sdd)
[acool@localhost sdX]$ sudo dd if=/dev/sdd1 of=./sdX1.img status=progress
[acool@localhost sdX]$
[acool@localhost sdX]$
[acool@localhost sdX]$ #set image as loop device
[acool@localhost sdX]$ sudo losetup /dev/loop200 sdX1.img
[acool@localhost sdX]$
[acool@localhost sdX]$
[acool@localhost sdX]$
[acool@localhost sdX]$ #examine raid1 member
[acool@localhost sdX]$ sudo mdadm --examine /dev/loop200
/dev/loop200:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : 626c8ef2:f11c73eb:d3fb3366:bbf7a200
Name : localhost.localdomain:root (local to host localhost.localdomain)
Creation Time : Thu Jan 19 10:05:17 2017
Raid Level : raid1
Raid Devices : 3

Avail Dev Size : 25165824 (12.00 GiB 12.88 GB)
Array Size : 12582912 (12.00 GiB 12.88 GB)
Data Offset : 16384 sectors
Super Offset : 8 sectors
Unused Space : before=16296 sectors, after=0 sectors
State : clean
Device UUID : 626c8ef2:f11c73eb:d3fb3366:bbf7ae4b

Internal Bitmap : 8 sectors from superblock
Update Time : Sat Jan 28 23:33:56 2017
Bad Block Log : 512 entries available at offset 72 sectors
Checksum : b33f59c9 - correct
Events : 1540


Device Role : Active device 0
Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)
[acool@localhost sdX]$
[acool@localhost sdX]$
[acool@localhost sdX]$ # reassemble array (I had to change UUID)
[acool@localhost sdX]$ sudo mdadm --assemble --run --force /dev/md200 --update=uuid --uuid=626c8ef2:f11c73eb:d3fb3366:bbf7a200 /dev/loop200
mdadm: /dev/md200 has been started with 1 drive (out of 3).
[acool@localhost sdX]$
[acool@localhost sdX]$
[acool@localhost sdX]$
[acool@localhost sdX]$ sudo mdadm --detail /dev/md200
/dev/md200:
Version : 1.2
Creation Time : Thu Jan 19 10:05:17 2017
Raid Level : raid1
Array Size : 12582912 (12.00 GiB 12.88 GB)
Used Dev Size : 12582912 (12.00 GiB 12.88 GB)
Raid Devices : 3
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sat Jan 28 23:33:56 2017
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:root (local to host localhost.localdomain)
UUID : 626c8ef2:f11c73eb:d3fb3366:bbf7a200
Events : 1540

Number Major Minor RaidDevice State
0 7 200 0 active sync /dev/loop200
- 0 0 1 removed
- 0 0 2 removed
[acool@localhost sdX]$
[acool@localhost sdX]$
[acool@localhost sdX]$ # mount md device in order to access content
[acool@localhost sdX]$ sudo mount /dev/md200 sdX1_mount/
[acool@localhost sdX]$
[acool@localhost sdX]$
[acool@localhost sdX]$ # you can now ls sdX1_mount directory to see contents
[acool@localhost sdX]$
[acool@localhost sdX]$ #also, see partscan option in losetup
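About that partscan note: losetup -P asks the kernel to scan the attached image for a partition table and expose each partition as /dev/loopNpN, which avoids dd-ing partitions out one by one. A sketch with hypothetical paths, printed rather than executed since it needs root and a real image:

```shell
# Sketch: attach a whole-disk image with partition scanning (-P). The kernel
# then creates /dev/loopNp1, /dev/loopNp2, ... for every partition it finds.
# The helper just prints each command; swap the echo for sudo "$@" to run them.
show() { echo "+ $*"; }

show losetup -f -P --show sdX-full-disk.img   # -f picks the first free loop device
show mdadm --examine /dev/loop0p1             # examine the RAID member partition
show losetup -d /dev/loop0                    # detach when done
```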


FDISK - GDISK: List of known partition types 
FDISK
[aesteban@localhost ~]$ 
[aesteban@localhost ~]$ sudo fdisk /dev/sda

Welcome to fdisk (util-linux 2.28.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): l

0 Empty 24 NEC DOS 81 Minix / old Lin bf Solaris
1 FAT12 27 Hidden NTFS Win 82 Linux swap / So c1 DRDOS/sec (FAT-
2 XENIX root 39 Plan 9 83 Linux c4 DRDOS/sec (FAT-
3 XENIX usr 3c PartitionMagic 84 OS/2 hidden or c6 DRDOS/sec (FAT-
4 FAT16 <32M 40 Venix 80286 85 Linux extended c7 Syrinx
5 Extended 41 PPC PReP Boot 86 NTFS volume set da Non-FS data
6 FAT16 42 SFS 87 NTFS volume set db CP/M / CTOS / .
7 HPFS/NTFS/exFAT 4d QNX4.x 88 Linux plaintext de Dell Utility
8 AIX 4e QNX4.x 2nd part 8e Linux LVM df BootIt
9 AIX bootable 4f QNX4.x 3rd part 93 Amoeba e1 DOS access
a OS/2 Boot Manag 50 OnTrack DM 94 Amoeba BBT e3 DOS R/O
b W95 FAT32 51 OnTrack DM6 Aux 9f BSD/OS e4 SpeedStor
c W95 FAT32 (LBA) 52 CP/M a0 IBM Thinkpad hi ea Rufus alignment
e W95 FAT16 (LBA) 53 OnTrack DM6 Aux a5 FreeBSD eb BeOS fs
f W95 Ext'd (LBA) 54 OnTrackDM6 a6 OpenBSD ee GPT
10 OPUS 55 EZ-Drive a7 NeXTSTEP ef EFI (FAT-12/16/
11 Hidden FAT12 56 Golden Bow a8 Darwin UFS f0 Linux/PA-RISC b
12 Compaq diagnost 5c Priam Edisk a9 NetBSD f1 SpeedStor
14 Hidden FAT16 <3 61 SpeedStor ab Darwin boot f4 SpeedStor
16 Hidden FAT16 63 GNU HURD or Sys af HFS / HFS+ f2 DOS secondary
17 Hidden HPFS/NTF 64 Novell Netware b7 BSDI fs fb VMware VMFS
18 AST SmartSleep 65 Novell Netware b8 BSDI swap fc VMware VMKCORE
1b Hidden W95 FAT3 70 DiskSecure Mult bb Boot Wizard hid fd Linux raid auto
1c Hidden W95 FAT3 75 PC/IX bc Acronis FAT32 L fe LANstep
1e Hidden W95 FAT1 80 Old Minix be Solaris boot ff BBT

Command (m for help): quit

[aesteban@localhost ~]$
[aesteban@localhost ~]$


GDISK
[aesteban@localhost ~]$ 
[aesteban@localhost ~]$ sudo gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: not present


***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format
in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by
typing 'q' if you don't want to convert your MBR partitions
to GPT format!
***************************************************************


Command (? for help): l
0700 Microsoft basic data 0c01 Microsoft reserved 2700 Windows RE
3000 ONIE boot 3001 ONIE config 3900 Plan 9
4100 PowerPC PReP boot 4200 Windows LDM data 4201 Windows LDM metadata
4202 Windows Storage Spac 7501 IBM GPFS 7f00 ChromeOS kernel
7f01 ChromeOS root 7f02 ChromeOS reserved 8200 Linux swap
8300 Linux filesystem 8301 Linux reserved 8302 Linux /home
8303 Linux x86 root (/) 8304 Linux x86-64 root (/ 8305 Linux ARM64 root (/)
8306 Linux /srv 8307 Linux ARM32 root (/) 8400 Intel Rapid Start
8e00 Linux LVM a500 FreeBSD disklabel a501 FreeBSD boot
a502 FreeBSD swap a503 FreeBSD UFS a504 FreeBSD ZFS
a505 FreeBSD Vinum/RAID a580 Midnight BSD data a581 Midnight BSD boot
a582 Midnight BSD swap a583 Midnight BSD UFS a584 Midnight BSD ZFS
a585 Midnight BSD Vinum a600 OpenBSD disklabel a800 Apple UFS
a901 NetBSD swap a902 NetBSD FFS a903 NetBSD LFS
a904 NetBSD concatenated a905 NetBSD encrypted a906 NetBSD RAID
ab00 Recovery HD af00 Apple HFS/HFS+ af01 Apple RAID
af02 Apple RAID offline af03 Apple label af04 AppleTV recovery
af05 Apple Core Storage bc00 Acronis Secure Zone be00 Solaris boot
bf00 Solaris root bf01 Solaris /usr & Mac Z bf02 Solaris swap
bf03 Solaris backup bf04 Solaris /var bf05 Solaris /home
bf06 Solaris alternate se bf07 Solaris Reserved 1 bf08 Solaris Reserved 2
Press the <Enter> key to see more codes:
bf09 Solaris Reserved 3 bf0a Solaris Reserved 4 bf0b Solaris Reserved 5
c001 HP-UX data c002 HP-UX service ea00 Freedesktop $BOOT
eb00 Haiku BFS ed00 Sony system partitio ed01 Lenovo system partit
ef00 EFI System ef01 MBR partition scheme ef02 BIOS boot partition
f800 Ceph OSD f801 Ceph dm-crypt OSD f802 Ceph journal
f803 Ceph dm-crypt journa f804 Ceph disk in creatio f805 Ceph dm-crypt disk i
fb00 VMWare VMFS fb01 VMWare reserved fc00 VMWare kcore crash p
fd00 Linux RAID

Command (? for help): quit
[aesteban@localhost ~]$
[aesteban@localhost ~]$


CentOS 7: Replace a not-yet-failed RAID 1 member with a new HD. (DRAFT)
Scenario:

Replace a not-yet-failed RAID 1 member with a new HD (sdd).
(http://unix.stackexchange.com/questions/74924/how-to-safely-replace-a-not-yet-failed-disk-in-a-linux-raid5-array)



// versions
[acool@localhost ~]$
[acool@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[acool@localhost ~]$
[acool@localhost ~]$ mdadm --version
mdadm - v3.4 - 28th January 2016
[acool@localhost ~]$
[acool@localhost ~]$



//check devices
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
[sudo] password for acool:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sda2 8:2 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sda3 8:3 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sda4 8:4 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sda5 8:5 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdd 8:48 0 111.8G 0 disk
├─sdd1 8:49 0 111.8G 0 part
└─sdd5 8:53 0 4G 0 part
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

//check partitions

[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID

Disk /dev/sdd: 120.0 GB, 120034123776 bytes, 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x90909090

Device Boot Start End Blocks Id System
/dev/sdd1 * 63 234441647 117220792+ a5 FreeBSD
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


//copy the GPT table to the sdd drive and generate random GUIDs
//(the caution message below likely appears because sdd is smaller than sda,
// so the copied backup GPT header would land beyond the new disk's end)
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo sgdisk /dev/sda -R /dev/sdd
Caution! Secondary header was placed beyond the disk's limits! Moving the
header, but other problems may occur!
The operation has completed successfully.
[acool@localhost ~]$ sudo sgdisk -G /dev/sdd
The operation has completed successfully.
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

//verify partitions
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdd: 120.0 GB, 120034123776 bytes, 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sda2 8:2 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sda3 8:3 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sda4 8:4 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sda5 8:5 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdd 8:48 0 111.8G 0 disk
├─sdd1 8:49 0 12G 0 part
├─sdd2 8:50 0 6.9G 0 part
├─sdd3 8:51 0 1G 0 part
├─sdd4 8:52 0 201M 0 part
└─sdd5 8:53 0 12G 0 part
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$


// replace sda with sdd
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sdd4
[sudo] password for acool:
mdadm: added /dev/sdd4
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sdd5
mdadm: added /dev/sdd5
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sdd3
mdadm: added /dev/sdd3
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sdd2
mdadm: added /dev/sdd2
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sdd1
mdadm: added /dev/sdd1
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --replace /dev/sda4 --with /dev/sdd4
mdadm: Marked /dev/sda4 (device 2 in /dev/md123) for replacement
mdadm: Marked /dev/sdd4 in /dev/md123 as replacement for device 2
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --replace /dev/sda5 --with /dev/sdd5
mdadm: Marked /dev/sda5 (device 2 in /dev/md124) for replacement
mdadm: Marked /dev/sdd5 in /dev/md124 as replacement for device 2
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --replace /dev/sda3 --with /dev/sdd3
mdadm: Marked /dev/sda3 (device 2 in /dev/md125) for replacement
mdadm: Marked /dev/sdd3 in /dev/md125 as replacement for device 2
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --replace /dev/sda2 --with /dev/sdd2
mdadm: Marked /dev/sda2 (device 2 in /dev/md126) for replacement
mdadm: Marked /dev/sdd2 in /dev/md126 as replacement for device 2
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --replace /dev/sda1 --with /dev/sdd1
mdadm: Marked /dev/sda1 (device 2 in /dev/md127) for replacement
mdadm: Marked /dev/sdd1 in /dev/md127 as replacement for device 2
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
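Worth spelling out why --add plus --replace/--with beats a fail/remove/add cycle: --replace rebuilds onto the new member while the old one is still active, so the array never runs degraded; the old member is only marked faulty once the copy finishes. The generic per-array pattern, as a printing sketch with hypothetical names:

```shell
# Sketch of the hot-replace pattern used above (names are hypothetical).
# Prints the two mdadm steps for one array; run them as root on real devices.
replace_member() {
    md=$1 old=$2 new=$3
    echo "+ mdadm --manage $md --add $new"
    echo "+ mdadm --manage $md --replace $old --with $new"
}

replace_member /dev/md127 /dev/sda1 /dev/sdd1
```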


// monitor progress
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdd4[4] sda4[3](F) sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdd5[4] sda5[3](F) sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdd3[4](R) sda3[3] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/3] [UUU]
resync=DELAYED
bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sdd2[4](R) sda2[3] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/3] [UUU]
[=>...................] recovery = 5.1% (370560/7208960) finish=3.6min speed=30880K/sec

md127 : active raid1 sdd1[4](R) sda1[3] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/3] [UUU]
resync=DELAYED
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md123
/dev/md123:
Version : 1.0
Creation Time : Thu Jan 19 07:04:56 2017
Raid Level : raid1
Array Size : 205760 (200.94 MiB 210.70 MB)
Used Dev Size : 205760 (200.94 MiB 210.70 MB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 15:58:24 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0

Name : localhost.localdomain:boot_efi (local to host localhost.localdomain)
UUID : 89085253:47b4f9e9:dd804932:ef766c2a
Events : 70

Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
4 8 52 2 active sync /dev/sdd4

3 8 4 - faulty /dev/sda4
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md126
/dev/md126:
Version : 1.2
Creation Time : Thu Jan 19 07:04:48 2017
Raid Level : raid1
Array Size : 7208960 (6.88 GiB 7.38 GB)
Used Dev Size : 7208960 (6.88 GiB 7.38 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Update Time : Sun Jan 22 16:06:59 2017
State : clean, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 13% complete

Name : localhost.localdomain:swap (local to host localhost.localdomain)
UUID : 0701fcab:0d6eadef:98a73bd8:45b1bd0b
Events : 64

Number Major Minor RaidDevice State
0 8 18 0 active sync /dev/sdb2
1 8 34 1 active sync /dev/sdc2
3 8 2 2 active sync /dev/sda2
4 8 50 2 spare rebuilding /dev/sdd2
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Thu Jan 19 07:05:04 2017
Raid Level : raid1
Array Size : 12582912 (12.00 GiB 12.88 GB)
Used Dev Size : 12582912 (12.00 GiB 12.88 GB)
Raid Devices : 3
Total Devices : 4
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 16:11:37 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 1
Spare Devices : 0

Name : localhost.localdomain:home (local to host localhost.localdomain)
UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566
Events : 2393

Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
4 8 53 2 active sync /dev/sdd5

3 8 5 - faulty /dev/sda5
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
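While the rebuilds run, a small filter helps surface only the arrays still syncing (the helper name is my own invention):

```shell
# Print only the md arrays with a resync/recovery still in flight, tagging
# each progress line with its array name. Reads /proc/mdstat by default,
# or any saved copy passed as the first argument.
md_in_progress() {
  awk '/^md/ {name=$1} /recovery|resync/ {print name": "$0}' "${1:-/proc/mdstat}"
}

if [ -r /proc/mdstat ]; then md_in_progress; fi
```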



//remove sda partitions from md devices
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdd4[4] sda4[3](F) sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdd5[4] sda5[3](F) sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdd3[4] sda3[3](F) sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sdd2[4] sda2[3](F) sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/3] [UUU]

md127 : active raid1 sdd1[4] sda1[3](F) sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --remove /dev/sda4
mdadm: hot removed /dev/sda4 from /dev/md123
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --remove /dev/sda5
mdadm: hot removed /dev/sda5 from /dev/md124
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --remove /dev/sda3
mdadm: hot removed /dev/sda3 from /dev/md125
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --remove /dev/sda2
mdadm: hot removed /dev/sda2 from /dev/md126
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --remove /dev/sda1
mdadm: hot removed /dev/sda1 from /dev/md127
[acool@localhost ~]$
[acool@localhost ~]$



//verify
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Thu Jan 19 07:05:04 2017
Raid Level : raid1
Array Size : 12582912 (12.00 GiB 12.88 GB)
Used Dev Size : 12582912 (12.00 GiB 12.88 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 16:22:22 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:home (local to host localhost.localdomain)
UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566
Events : 2394

Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
4 8 53 2 active sync /dev/sdd5
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdd4[4] sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdd5[4] sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdd3[4] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md126 : active raid1 sdd2[4] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/3] [UUU]

md127 : active raid1 sdd1[4] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
├─sda2 8:2 0 6.9G 0 part
├─sda3 8:3 0 1G 0 part
├─sda4 8:4 0 201M 0 part
└─sda5 8:5 0 12G 0 part
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdd 8:48 0 111.8G 0 disk
├─sdd1 8:49 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdd2 8:50 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdd3 8:51 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdd4 8:52 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdd5 8:53 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


// interesting fact: after shutting down, physically removing sda, and restarting,
// sdd became sda

[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
[sudo] password for acool:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 111.8G 0 disk
├─sda1 8:1 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sda2 8:2 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sda3 8:3 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sda4 8:4 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sda5 8:5 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


CentOS 7: Replacing a failed drive in a 3-disk RAID 1 array. (DRAFT) 
Scenario: Replace a failed drive in a 3-disk RAID 1 array.

[acool@localhost ~]$
[acool@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[acool@localhost ~]$
[acool@localhost ~]$ mdadm --version
mdadm - v3.4 - 28th January 2016
[acool@localhost ~]$
[acool@localhost ~]$

// Inspect (I manually disconnected the power and SATA cables on sda to simulate a hardware failure)
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/2] [UU_]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/2] [UU_]

md127 : active raid1 sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/2] [UU_]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md123
[sudo] password for acool:
/dev/md123:
Version : 1.0
Creation Time : Thu Jan 19 10:04:56 2017
Raid Level : raid1
Array Size : 205760 (200.94 MiB 210.70 MB)
Used Dev Size : 205760 (200.94 MiB 210.70 MB)
Raid Devices : 3
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 14:06:44 2017
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:boot_efi (local to host localhost.localdomain)
UUID : 89085253:47b4f9e9:dd804932:ef766c2a
Events : 46

Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
- 0 0 2 removed
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$

// the following commands fail because the failed drive (sda) is no
// longer visible to the kernel, so its partitions cannot be referenced
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --fail sda4
mdadm: sda4 does not appear to be a component of /dev/md123
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --remove sda4
mdadm: sda4 does not appear to be a component of /dev/md123
[acool@localhost ~]$

// ...so we'll just plug a new hard drive into the same SATA port
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 465.8G 0 part
└─sda5 8:5 0 4G 0 part
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

// inspect partition tables
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x90909090

Device Boot Start End Blocks Id System
/dev/sda1 * 63 976772789 488386363+ a5 FreeBSD
Partition 1 does not start on physical sector boundary.
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

// copy the GPT partition table to the new disk (sda) and randomize its GUIDs
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo sgdisk /dev/sdc -R /dev/sda
[sudo] password for acool:
The operation has completed successfully.
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo sgdisk -G /dev/sda
The operation has completed successfully.
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


// check the partition tables again
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo fdisk -l /dev/sd?
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt


# Start End Size Type Name
1 2048 25184255 12G Linux RAID
2 25184256 39610367 6.9G Linux RAID
3 39610368 41709567 1G Linux RAID
4 41709568 42121215 201M Linux RAID
5 42121216 67303423 12G Linux RAID
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 12G 0 part
├─sda2 8:2 0 6.9G 0 part
├─sda3 8:3 0 1G 0 part
├─sda4 8:4 0 201M 0 part
└─sda5 8:5 0 12G 0 part
sdb 8:16 0 55.9G 0 disk
├─sdb1 8:17 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdb2 8:18 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdb3 8:19 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdb4 8:20 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdb5 8:21 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sdc 8:32 0 232.9G 0 disk
├─sdc1 8:33 0 12G 0 part
│ └─md127 9:127 0 12G 0 raid1 /
├─sdc2 8:34 0 6.9G 0 part
│ └─md126 9:126 0 6.9G 0 raid1 [SWAP]
├─sdc3 8:35 0 1G 0 part
│ └─md125 9:125 0 1G 0 raid1 /boot
├─sdc4 8:36 0 201M 0 part
│ └─md123 9:123 0 201M 0 raid1 /boot/efi
└─sdc5 8:37 0 12G 0 part
└─md124 9:124 0 12G 0 raid1 /home
sr0 11:0 1 1024M 0 rom
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$

// now we're ready to add the new partitions on sda to the md devices
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sda4
[sudo] password for acool:
mdadm: added /dev/sda4
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sda5
mdadm: added /dev/sda5
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sda3
mdadm: added /dev/sda3
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sda2
mdadm: added /dev/sda2
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sda1
mdadm: added /dev/sda1
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


// monitor progress
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sda4[3] sdc4[1] sdb4[0]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md124 : active raid1 sda5[3] sdc5[1] sdb5[0]
12582912 blocks super 1.2 [3/2] [UU_]
[==>..................] recovery = 13.7% (1730176/12582912) finish=6.2min speed=28829K/sec
bitmap: 1/1 pages [4KB], 65536KB chunk

md125 : active raid1 sda3[3] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/2] [UU_]
resync=DELAYED
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sda2[3] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/2] [UU_]
resync=DELAYED

md127 : active raid1 sda1[3] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/2] [UU_]
resync=DELAYED
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
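The `[3/2] [UU_]` markers above are the thing to watch while the rebuild runs. As a quick illustration (not part of the original workflow), a few lines of Python can flag degraded arrays from mdstat-style text; the function takes the text as a string, so you could feed it the contents of /proc/mdstat:

```python
import re

def degraded_arrays(mdstat_text):
    """Return names of md devices whose status line shows a missing
    member, e.g. [UU_] instead of [UUU]."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r'^(md\d+)\s*:', line)
        if m:
            # a new array section starts, remember its name
            current = m.group(1)
        elif current and '_' in ''.join(re.findall(r'\[([U_]+)\]', line)):
            # the [UU_]-style status for the current array has a gap
            degraded.append(current)
            current = None
    return degraded
```

An array is healthy again once every `U_` pattern in its section turns into all `U`s.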
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md124
/dev/md124:
Version : 1.2
Creation Time : Thu Jan 19 10:05:04 2017
Raid Level : raid1
Array Size : 12582912 (12.00 GiB 12.88 GB)
Used Dev Size : 12582912 (12.00 GiB 12.88 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 15:18:53 2017
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 51% complete

Name : localhost.localdomain:home (local to host localhost.localdomain)
UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566
Events : 2220

Number Major Minor RaidDevice State
0 8 21 0 active sync /dev/sdb5
1 8 37 1 active sync /dev/sdc5
3 8 5 2 spare rebuilding /dev/sda5
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ sudo mdadm --detail /dev/md123
/dev/md123:
Version : 1.0
Creation Time : Thu Jan 19 10:04:56 2017
Raid Level : raid1
Array Size : 205760 (200.94 MiB 210.70 MB)
Used Dev Size : 205760 (200.94 MiB 210.70 MB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Sun Jan 22 15:14:54 2017
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0

Name : localhost.localdomain:boot_efi (local to host localhost.localdomain)
UUID : 89085253:47b4f9e9:dd804932:ef766c2a
Events : 66

Number Major Minor RaidDevice State
0 8 20 0 active sync /dev/sdb4
1 8 36 1 active sync /dev/sdc4
3 8 4 2 active sync /dev/sda4
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$


Centos 7: Adding another member to an existing RAID1 array. 
Scenario: I have a RAID1 array with two disks, I want to add another drive to make it a 3-disk RAID1 array.
[acool@localhost ~]$ 
[acool@localhost ~]$
[acool@localhost ~]$ cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
[acool@localhost ~]$
[acool@localhost ~]$ mdadm --version
mdadm - v3.4 - 28th January 2016
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ # new HD was plugged into the next available SATA port
[acool@localhost ~]$
[acool@localhost ~]$ # copy the GPT partition table to the new disk (sdc)
[acool@localhost ~]$ sudo sgdisk /dev/sda -R /dev/sdc
[acool@localhost ~]$
[acool@localhost ~]$ # randomize GUIDs
[acool@localhost ~]$ sudo sgdisk -G /dev/sdc
[acool@localhost ~]$
[acool@localhost ~]$ # reboot

[acool@localhost ~]$
[acool@localhost ~]$ # check partitions (sdc should match the others)
[acool@localhost ~]$
[acool@localhost ~]$ # add partitions to corresponding md devices
[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sdc5
mdadm: added /dev/sdc5
[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sdc4
mdadm: added /dev/sdc4
[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sdc3
mdadm: added /dev/sdc3
[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sdc2
mdadm: added /dev/sdc2
[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sdc1
mdadm: added /dev/sdc1
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ # grow each array to 3 devices
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md123
raid_disks for /dev/md123 set to 3
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md124
raid_disks for /dev/md124 set to 3
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md125
raid_disks for /dev/md125 set to 3
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md126
raid_disks for /dev/md126 set to 3
[acool@localhost ~]$ sudo mdadm --grow --raid-devices=3 /dev/md127
raid_disks for /dev/md127 set to 3
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
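Each array needs a matching `--add` and `--grow` pair, and with five arrays the device names are easy to typo. Purely as an illustration, a throwaway Python helper could generate the command strings from an explicit md-to-partition mapping (the mapping below is hypothetical; review the output and run it with sudo yourself):

```python
def mdadm_commands(mapping, raid_devices=3):
    """Given {md_device: new_partition}, return the mdadm --add and
    --grow command strings in order. Sketch only: it builds strings,
    it does not execute anything."""
    cmds = []
    for md, part in sorted(mapping.items()):
        cmds.append("mdadm --manage /dev/{} --add /dev/{}".format(md, part))
    for md in sorted(mapping):
        cmds.append("mdadm --grow --raid-devices={} /dev/{}".format(raid_devices, md))
    return cmds

for cmd in mdadm_commands({"md123": "sdc5", "md124": "sdc4"}):
    print(cmd)
```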
[acool@localhost ~]$ # monitor progress
[acool@localhost ~]$
[acool@localhost ~]$ cat /proc/mdstat
Personalities : [raid1]
md123 : active raid1 sda5[2] sdb5[0] sdc5[1]
12582912 blocks super 1.2 [3/3] [UUU]
bitmap: 1/1 pages [4KB], 65536KB chunk

md124 : active raid1 sda4[2] sdb4[0] sdc4[1]
205760 blocks super 1.0 [3/3] [UUU]
bitmap: 0/1 pages [0KB], 65536KB chunk

md125 : active raid1 sda1[2] sdb1[0] sdc1[1]
12582912 blocks super 1.2 [3/2] [UU_]
[======>..............] recovery = 30.1% (3792000/12582912) finish=4.4min speed=32926K/sec
bitmap: 1/1 pages [4KB], 65536KB chunk

md126 : active raid1 sda2[2] sdb2[0] sdc2[1]
7208960 blocks super 1.2 [3/3] [UUU]

md127 : active raid1 sda3[2] sdb3[0] sdc3[1]
1049536 blocks super 1.0 [3/2] [UU_]
resync=DELAYED
bitmap: 0/1 pages [0KB], 65536KB chunk

unused devices: <none>
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$
[acool@localhost ~]$ # all md devices should have 3 Us when sync is finished.
[acool@localhost ~]$
[acool@localhost ~]$




Python: Tricks, Fun & More. 
Parsing a json string:
>>> import json
>>>
>>> jsonString = """[
... {"firstName":"John", "lastName":"Doe"},
... {"firstName":"Anna", "lastName":"Smith"},
... {"firstName":"Angel", "lastName":"Cool"}
... ]"""
>>>
>>>
>>> print jsonString
[
{"firstName":"John", "lastName":"Doe"},
{"firstName":"Anna", "lastName":"Smith"},
{"firstName":"Angel", "lastName":"Cool"}
]
>>>
>>> data = json.loads(jsonString)
>>>
>>> print data[2]['firstName']
Angel
>>> print data[2]['lastName']
Cool
>>>
>>> type(data)
<type 'list'>
>>>
>>>
>>> for person in data:
...     print person['firstName'] + ' ' + person['lastName']
...
John Doe
Anna Smith
Angel Cool
>>>
>>> data.reverse()
>>>
>>> for person in data:
...     print person['firstName'] + ' ' + person['lastName']
...
Angel Cool
Anna Smith
John Doe
>>>
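Once parsed, the result is an ordinary Python list of dicts, so anything that works on lists works here too. For example, instead of just reversing, you can sort by a key (written in Python 3 syntax, unlike the Python 2 session above):

```python
import json

json_string = """[
{"firstName":"John", "lastName":"Doe"},
{"firstName":"Anna", "lastName":"Smith"},
{"firstName":"Angel", "lastName":"Cool"}
]"""

data = json.loads(json_string)

# sort the people by last name instead of just reversing the list
by_last = sorted(data, key=lambda p: p["lastName"])
print([p["lastName"] for p in by_last])   # → ['Cool', 'Doe', 'Smith']
```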

Reversing strings:
>>> print 'Donald Trump'[::-1]
pmurT dlanoD
>>> print 'lion oil'[::-1]
lio noil
>>> print 'A car, a man, a maraca.'[::-1]
.acaram a ,nam a ,rac A


Python: Getting started with virtualenv 
# install virtualenv
[aesteban@localhost ~]$ sudo pip install virtualenv

# create virtual environment
[aesteban@localhost ~]$ mkdir virt_env
[aesteban@localhost ~]$ virtualenv virt_env/virt1 --no-site-packages
New python executable in /home/aesteban/virt_env/virt1/bin/python
Installing setuptools, pip, wheel...done.
[aesteban@localhost ~]$

# load environment
[aesteban@localhost ~]$ source virt_env/virt1/bin/activate
(virt1) [aesteban@localhost ~]$

# deactivate environment
(virt1) [aesteban@localhost ~]$ deactivate
[aesteban@localhost ~]$
[aesteban@localhost ~]$

# listing installed packages with yolk
[aesteban@localhost ~]$ sudo pip install yolk
[aesteban@localhost ~]$ yolk -l

# installing yolk in our virtual environment
[aesteban@localhost ~]$ source virt_env/virt1/bin/activate
(virt1) [aesteban@localhost ~]$ pip install yolk
(virt1) [aesteban@localhost ~]$ yolk -l


# let's create another environment
(virt1) [aesteban@localhost ~]$ deactivate
[aesteban@localhost ~]$ virtualenv virt_env/virt2 --no-site-packages

# let's switch back to virt1 and install Pylons and SqlAlchemy
[aesteban@localhost ~]$ source virt_env/virt1/bin/activate
(virt1) [aesteban@localhost ~]$ pip install Pylons
...
(virt1) [aesteban@localhost ~]$ pip install SqlAlchemy

# compare virt1 and virt2 using: yolk -l
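As a side note, you can also check from inside Python whether an environment is active: classic virtualenv sets `sys.real_prefix`, while the stdlib `venv` module makes `sys.base_prefix` differ from `sys.prefix`. A small sketch covering both cases:

```python
import sys

def in_virtualenv():
    """True when running inside a virtualenv/venv: classic virtualenv
    sets sys.real_prefix; the stdlib venv module leaves base_prefix
    pointing at the original interpreter while changing sys.prefix."""
    return hasattr(sys, "real_prefix") or (
        getattr(sys, "base_prefix", sys.prefix) != sys.prefix
    )

print(in_virtualenv())
```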


Big shout-out to:
http://www.simononsoftware.com/virtualenv-tutorial/

Thank you guys!

PHP - Detecting palindromes example - not production grade. 
<?php

function isPalindrome($string=null)
{
    if($string === null)
        return 0;

    if(is_numeric($string))
        return 0;

    // strip whitespace so phrases like 'lion oil' qualify
    $string = preg_replace('/\s+/', '', $string);

    $reverse = strrev($string);

    if($reverse == $string)
        return 1;
    else
        return 0;
}

echo isPalindrome('lion oil');





2024 By Angel Cool