<?xml version="1.0" encoding="UTF-8"?>
<feed version="0.3" xmlns="http://purl.org/atom/ns#" xml:lang="en-US">
	<title>Angel's Blog</title>
	<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php" />
	<modified>2026-05-13T17:56:15Z</modified>
	<author>
		<name>Angel</name>
	</author>
	<copyright>Copyright 2026, Angel</copyright>
	<generator url="http://www.sourceforge.net/projects/sphpblog" version="0.7.0">SPHPBLOG</generator>
	<entry>
		<title>Iperf - 1000BASE-LX SMF LC/LC Fiber Link Speed Test</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry240303-221911" />
		<content type="text/html" mode="escaped"><![CDATA[HOST A - SERVER<br /><pre><br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$ date<br />Sun Mar  3 02:04:35 PM PST 2024<br /><br /># IPv4<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$ iperf -s<br />------------------------------------------------------------<br />Server listening on TCP port 5001<br />TCP window size:  128 KByte (default)<br />------------------------------------------------------------<br />[  1] local 192.168.1.184 port 5001 connected with 192.168.1.192 port 57642 (icwnd/mss/irtt=14/1448/515)<br />[ ID] Interval       Transfer     Bandwidth<br />[  1] 0.00-10.01 sec  1.10 GBytes   941 Mbits/sec<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$<br /><br /># IPv6<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$ iperf -s -V<br />------------------------------------------------------------<br />Server listening on TCP port 5001<br />TCP window size:  128 KByte (default)<br />------------------------------------------------------------<br />[  1] local 2603:8000:6a00:5748:xxxx:xxxx:xxxx:xxxx port 5001 connected with 2603:8000:6a00:5748:xxxx:xxxx:xxxx:xxxx port 56868 (icwnd/mss/irtt=13/1428/460)<br />[ ID] Interval       Transfer     Bandwidth<br />[  1] 0.00-10.02 sec  1.08 GBytes   928 Mbits/sec<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$<br /></pre><br /><br />HOST B - CLIENT<br /><pre><br />[acool@localhost ~]$<br /># IPv4<br />[acool@localhost ~]$ iperf -c 192.168.1.184<br />------------------------------------------------------------<br />Client connecting to 192.168.1.184, TCP port 5001<br />TCP window size: 16.0 KByte (default)<br />------------------------------------------------------------<br />[  1] local 192.168.1.192 port 57642 connected with 192.168.1.184 port 5001 (icwnd/mss/irtt=14/1448/731)<br />[ ID] Interval       Transfer     Bandwidth<br />[  1] 0.00-10.02 sec  1.10 GBytes   
940 Mbits/sec<br /><br /># IPv6<br />[acool@localhost ~]$ iperf -c 2603:8000:6a00:xxxx:xxxx:xxxx:xxxx<br />------------------------------------------------------------<br />Client connecting to 2603:8000:6a00:xxxx:xxxx:xxxx:xxxx, TCP port 5001<br />TCP window size: 16.0 KByte (default)<br />------------------------------------------------------------<br />[  1] local 2603:8000:6a00:5748:: port 56868 connected with 2603:8000:6a00:5748:xxxx:xxxx:xxxx:xxxx port 5001 (icwnd/mss/irtt=13/1428/783)<br />[ ID] Interval       Transfer     Bandwidth<br />[  1] 0.00-10.02 sec  1.08 GBytes   928 Mbits/sec<br />[acool@localhost ~]$ <br /></pre>]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry240303-221911</id>
		<issued>2024-03-03T00:00:00Z</issued>
		<modified>2024-03-03T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Terraform: AWS VPC with IPv6 support</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry210705-012044" />
		<content type="text/html" mode="escaped"><![CDATA[<pre>[acool@localhost EC2-VPC]$ <br />[acool@localhost EC2-VPC]$ date<br />Sun Jul  4 06:19:34 PM PDT 2021<br />[acool@localhost EC2-VPC]$ cat /etc/redhat-release <br />Fedora release 33 (Thirty Three)<br />[acool@localhost EC2-VPC]$ aws --version<br />aws-cli/1.18.223 Python/3.9.5 Linux/5.12.13-200.fc33.x86_64 botocore/1.19.63<br />[acool@localhost EC2-VPC]$ terraform -v<br />Terraform v1.0.1<br />on linux_amd64<br />+ provider registry.terraform.io/hashicorp/aws v3.48.0<br />[acool@localhost EC2-VPC]$<br /></pre><br />The gist of this post:<br /><pre> <br />[acool@localhost EC2-VPC]$ <br />[acool@localhost EC2-VPC]$ cat main.tf <br /># extract public ssh key from private ssh key<br /># [acool@localhost EC2-VPC]$ ssh-keygen -y -f ./COOL_SSH_PRIVATEKEY.pem &gt; COOL_SSH_PUBLICKEY.pub <br /><br />// a.- set region to use<br />provider &quot;aws&quot; {<br />    region = &quot;us-east-2&quot;<br />}<br /><br />// b.- create ssh key pair<br />resource &quot;aws_key_pair&quot; &quot;COOL_KEY_PAIR&quot; {<br />  key_name   = &quot;COOL_SSH_KEYPAIR&quot;<br />  public_key = &quot;${file(&quot;./COOL_SSH_PUBLICKEY.pub&quot;)}&quot;<br />}<br /><br />// c.- create vpc resource<br />resource &quot;aws_vpc&quot; &quot;COOL_VPC&quot; {<br />    enable_dns_support = true<br />    enable_dns_hostnames = true<br />    assign_generated_ipv6_cidr_block = true<br />    cidr_block = &quot;10.0.0.0/16&quot;<br />}<br /><br />// d.- create subnet<br />resource &quot;aws_subnet&quot; &quot;COOL_VPC_SUBNET&quot; {<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />    cidr_block = &quot;${cidrsubnet(aws_vpc.COOL_VPC.cidr_block, 4, 1)}&quot;<br />    map_public_ip_on_launch = true<br /><br />    ipv6_cidr_block = &quot;${cidrsubnet(aws_vpc.COOL_VPC.ipv6_cidr_block, 8, 1)}&quot;<br />    assign_ipv6_address_on_creation = true<br />}<br /><br />// e.- create internet gateway<br />resource &quot;aws_internet_gateway&quot; 
&quot;COOL_GATEWAY&quot; {<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />}<br /><br />// f.- create routing table<br />resource &quot;aws_default_route_table&quot; &quot;COOL_VPC_ROUTING_TABLE&quot; {<br />    default_route_table_id = &quot;${aws_vpc.COOL_VPC.default_route_table_id}&quot;<br /><br />    route {<br />        cidr_block = &quot;0.0.0.0/0&quot;<br />        gateway_id = &quot;${aws_internet_gateway.COOL_GATEWAY.id}&quot;<br />    }<br /><br />    route {<br />        ipv6_cidr_block = &quot;::/0&quot;<br />        gateway_id = &quot;${aws_internet_gateway.COOL_GATEWAY.id}&quot;<br />    }<br />}<br /><br />// g.- create some sort of association needed<br />resource &quot;aws_route_table_association&quot; &quot;COOL_SUBNET_ROUTE_TABLE_ASSOCIATION&quot; {<br />    subnet_id      = &quot;${aws_subnet.COOL_VPC_SUBNET.id}&quot;<br />    route_table_id = &quot;${aws_default_route_table.COOL_VPC_ROUTING_TABLE.id}&quot;<br />}<br /><br />// h.- create security group<br />resource &quot;aws_security_group&quot; &quot;COOL_SECURITY_GROUP&quot; {<br />    name = &quot;COOL_SECURITY_GROUP&quot;<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />    <br />    ingress {<br />        from_port = 22<br />        to_port = 22<br />        protocol = &quot;tcp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    ingress {<br />        from_port = 22<br />        to_port = 22<br />        protocol = &quot;tcp&quot;<br />        ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmpv6&quot;<br />        ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br /><br />    egress {<br />      from_port 
= 0<br />      to_port = 0<br />      protocol = &quot;-1&quot;<br />      cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    egress {<br />      from_port = 0<br />      to_port = 0<br />      protocol = &quot;-1&quot;<br />      ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br />}<br /><br />// i.- create EC2 instance<br />resource &quot;aws_instance&quot; &quot;COOL_INSTANCE_APP01&quot; {<br />    ami = &quot;ami-01d5ac8f5f8804300&quot;<br />    key_name = &quot;COOL_SSH_KEYPAIR&quot;<br />    instance_type = &quot;t2.micro&quot;<br />    subnet_id = &quot;${aws_subnet.COOL_VPC_SUBNET.id}&quot;<br />    ipv6_address_count = 1<br />    vpc_security_group_ids = [&quot;${aws_security_group.COOL_SECURITY_GROUP.id}&quot;]<br /><br />    tags = {<br />        Name = &quot;COOL_INSTANCE_APP01&quot;<br />    }<br /><br />    depends_on = [aws_internet_gateway.COOL_GATEWAY]<br />}<br /><br />//j.- print instance IPs<br />output &quot;COOL_INSTANCE_APP01_IPv4&quot; {<br />  value = &quot;${aws_instance.COOL_INSTANCE_APP01.public_ip}&quot;<br />}<br /><br />output &quot;COOL_INSTANCE_APP01_IPv6&quot; {<br />  value = [&quot;${aws_instance.COOL_INSTANCE_APP01.ipv6_addresses}&quot;]<br />}<br />[acool@localhost EC2-VPC]$<br />[acool@localhost EC2-VPC]$ terraform init<br />...<br />[acool@localhost EC2-VPC]$ <br />[acool@localhost EC2-VPC]$ terraform apply<br />...<br />[acool@localhost EC2-VPC]$</pre><br /><br />Happy 4th of July, 2021! 
and cheers!<br /><br /><br />UPDATE - November 9, 2021<br />Added &#039;app_servers&#039; variable to create multiple aws_instances.<br />Commit message: &#039;Added EIP and specified private ip addresses.&#039;<br /><br />main.tf :<br /><pre><br /># extract public ssh key from private ssh key<br /># [acool@localhost EC2-VPC]$ ssh-keygen -y -f ./COOL_SSH_PRIVATEKEY.pem &gt; COOL_SSH_PUBLICKEY.pub <br /><br />// set region to use<br />provider &quot;aws&quot; {<br />    region = &quot;us-east-2&quot;<br />}<br /><br />// create ssh key pair<br />resource &quot;aws_key_pair&quot; &quot;COOL_KEY_PAIR&quot; {<br />  key_name   = &quot;COOL_SSH_KEYPAIR&quot;<br />  public_key = &quot;${file(&quot;./COOL_SSH_PUBLICKEY.pub&quot;)}&quot;<br />}<br /><br />// create vpc resource<br />resource &quot;aws_vpc&quot; &quot;COOL_VPC&quot; {<br />    enable_dns_support = true<br />    enable_dns_hostnames = true<br />    assign_generated_ipv6_cidr_block = true<br />    cidr_block = &quot;10.0.0.0/16&quot;<br />}<br /><br />// create subnet<br />resource &quot;aws_subnet&quot; &quot;COOL_PVC_SUBNET&quot; {<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />    cidr_block = &quot;${cidrsubnet(aws_vpc.COOL_VPC.cidr_block, 4, 1)}&quot;<br />    map_public_ip_on_launch = true<br /><br />    ipv6_cidr_block = &quot;${cidrsubnet(aws_vpc.COOL_VPC.ipv6_cidr_block, 8, 1)}&quot;<br />    assign_ipv6_address_on_creation = true<br />}<br /><br />// create internet gateway<br />resource &quot;aws_internet_gateway&quot; &quot;COOL_GATEWAY&quot; {<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />}<br /><br />// create routing table<br />resource &quot;aws_default_route_table&quot; &quot;COOL_VPC_ROUTING_TABLE&quot; {<br />    default_route_table_id = &quot;${aws_vpc.COOL_VPC.default_route_table_id}&quot;<br /><br />    route {<br />        cidr_block = &quot;0.0.0.0/0&quot;<br />        gateway_id = &quot;${aws_internet_gateway.COOL_GATEWAY.id}&quot;<br />    }<br /><br />    
route {<br />        ipv6_cidr_block = &quot;::/0&quot;<br />        gateway_id = &quot;${aws_internet_gateway.COOL_GATEWAY.id}&quot;<br />    }<br />}<br /><br />// create some sort of association needed<br />resource &quot;aws_route_table_association&quot; &quot;COOL_SUBNET_ROUTE_TABLE_ASSOCIATION&quot; {<br />    subnet_id      = &quot;${aws_subnet.COOL_PVC_SUBNET.id}&quot;<br />    route_table_id = &quot;${aws_default_route_table.COOL_VPC_ROUTING_TABLE.id}&quot;<br />}<br /><br />// create security group<br />resource &quot;aws_security_group&quot; &quot;COOL_SECURITY_GROUP&quot; {<br />    name = &quot;COOL_SECURITY_GROUP&quot;<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />    <br />    ingress {<br />        from_port = 22<br />        to_port = 22<br />        protocol = &quot;tcp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    ingress {<br />        from_port = 22<br />        to_port = 22<br />        protocol = &quot;tcp&quot;<br />        ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmpv6&quot;<br />        ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br /><br />    egress {<br />      from_port = 0<br />      to_port = 0<br />      protocol = &quot;-1&quot;<br />      cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    egress {<br />      from_port = 0<br />      to_port = 0<br />      protocol = &quot;-1&quot;<br />      ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br />}<br /><br />// server names<br />variable app_servers {<br />    description = &quot;name of app servers&quot;<br />    type = list(map(any))<br />    default = [<br />        
{name:&quot;COOL_LB01&quot;, ip:&quot;10.0.16.4&quot;},<br />        {name:&quot;COOL_LB02&quot;, ip:&quot;10.0.16.5&quot;},<br />        {name:&quot;COOL_APP01&quot;, ip:&quot;10.0.16.6&quot;},<br />        {name:&quot;COOL_APP02&quot;, ip:&quot;10.0.16.7&quot;},<br />    ]<br />}<br /><br />// create EC2 instance<br />resource &quot;aws_instance&quot; &quot;COOL_SERVERS&quot; {<br />    ami = &quot;ami-01d5ac8f5f8804300&quot;<br />    key_name = &quot;COOL_SSH_KEYPAIR&quot;<br />    instance_type = &quot;t2.micro&quot;<br />    subnet_id = &quot;${aws_subnet.COOL_PVC_SUBNET.id}&quot;<br />    ipv6_address_count = 1<br />    vpc_security_group_ids = [&quot;${aws_security_group.COOL_SECURITY_GROUP.id}&quot;]<br />    for_each = {for server in var.app_servers:  server.name =&gt; server}<br />    private_ip = each.value[&quot;ip&quot;]<br /><br />    tags = {<br />        Name = each.value[&quot;name&quot;]<br />    }<br /><br />    depends_on = [aws_internet_gateway.COOL_GATEWAY]<br />}<br /><br />// elastic IP<br />resource &quot;aws_eip&quot; &quot;COOL_EIP&quot; {<br />  instance = aws_instance.COOL_SERVERS[&quot;COOL_LB01&quot;].id<br />  vpc      = true<br />}<br /><br />// print instance IPs<br />output &quot;COOL_INSTANCE_APP01_IPv4&quot; {<br />    value = {for k, v in aws_instance.COOL_SERVERS: k =&gt; v.public_ip}<br />}<br /><br />output &quot;COOL_INSTANCE_APP01_IPv6&quot; {<br />  value = {for k, v in aws_instance.COOL_SERVERS: k =&gt; v.ipv6_addresses}<br />}<br /><br />output &quot;COOL_VPC_IPV6_BLOCK&quot; {<br />  value = aws_subnet.COOL_PVC_SUBNET.ipv6_cidr_block<br />}<br /><br />// SSH to instance:<br />// [acool@localhost EC2-VPC]$ ssh -i ./COOL_SSH_PRIVATEKEY.pem centos@ip_address<br /><br />// remove eip from COOL_LB01<br />// [acool@localhost EC2-VPC]$ aws ec2 disassociate-address --region us-east-2 --public-ip 3.131.249.150<br /><br />// assign eip to COOL_LB02, adjust instance id to match LB02. 
The same commands work to return eip to LB01<br />// [acool@localhost EC2-VPC]$ aws ec2 associate-address --region us-east-2 --public-ip 3.131.249.150 --instance-id i-05a634252654b7b34<br /></pre><br />]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry210705-012044</id>
		<issued>2021-07-05T00:00:00Z</issued>
		<modified>2021-07-05T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Terraform: AWS EC2 single instance example</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry210704-200235" />
		<content type="text/html" mode="escaped"><![CDATA[<pre>[acool@localhost terraform-tests]$ terraform --version<br />Terraform v1.0.1<br />...<br />[acool@localhost terraform-tests]$ aws --version<br />aws-cli/1.18.223 Python/3.9.5 Linux/5.12.12-200.fc33.x86_64 botocore/1.19.63<br />...<br /></pre><br />The gist of this post:<br /><pre>[acool@localhost EC2-SINGLE-INSTANCE]$ cat main.tf <br />provider &quot;aws&quot; {<br />    region = &quot;us-east-2&quot;<br />}<br /><br />// create ssh key<br />resource &quot;tls_private_key&quot; &quot;COOL_SSH_PK&quot; {<br />  algorithm = &quot;RSA&quot;<br />  rsa_bits  = 4096<br />}<br /><br />// create ssh key pair<br />resource &quot;aws_key_pair&quot; &quot;COOL_KEY_PAIR&quot; {<br />  key_name   = &quot;COOL_SSH_KEYNAME&quot;<br />  public_key = tls_private_key.COOL_SSH_PK.public_key_openssh<br /><br />  provisioner &quot;local-exec&quot; { # Create &quot;myKey.pem&quot; to your computer!!<br />    command = &quot;echo &#039;${tls_private_key.COOL_SSH_PK.private_key_pem}&#039; &gt; ./COOL_SSH_PK.pem&quot;<br />  }<br />}<br /><br />// create aws ec2 instance<br />resource &quot;aws_instance&quot; &quot;COOLAPP01&quot; {<br />    ami = &quot;ami-01d5ac8f5f8804300&quot;<br />    instance_type = &quot;t2.micro&quot;<br />    key_name = aws_key_pair.COOL_KEY_PAIR.key_name<br />    vpc_security_group_ids = [aws_security_group.COOLAPP01_security_group.id]<br /><br />  tags = {<br />    Name = &quot;COOLAPP01_tag_name&quot;<br />  }<br />}<br /><br />// create security group<br />resource &quot;aws_security_group&quot; &quot;COOLAPP01_security_group&quot; {<br /><br />    name=&quot;terraform_COOLAPP01_security_group&quot;<br /><br />    // allow port 80 tcp<br />    ingress{<br />        from_port = 80<br />        to_port = 80<br />        protocol = &quot;tcp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow port 22 tcp<br />    ingress{<br />        from_port = 22<br />        
to_port = 22<br />        protocol = &quot;tcp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow all outbound traffic<br />    egress {<br />        from_port   = 0<br />        to_port     = 0<br />        protocol    = &quot;-1&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br />}<br /><br />// TODO: enable IPv6<br /><br />output &quot;public_ip&quot; {<br />    value = aws_instance.COOLAPP01.public_ip<br />    description = &quot;public ip for COOLAPP01&quot;<br />}<br />[acool@localhost EC2-SINGLE-INSTANCE]$ <br />[acool@localhost EC2-SINGLE-INSTANCE]$ terraform apply<br />...</pre><br /><br />Happy 4th of July, 2021 y&#039;all!!]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry210704-200235</id>
		<issued>2021-07-04T00:00:00Z</issued>
		<modified>2021-07-04T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Highly Available HAproxy Balancer with Keepalived</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry210522-030737" />
		<content type="text/html" mode="escaped"><![CDATA[We&#039;re gonna use Keepalived&#039;s VRRP feature.<br /><br />Floating ip address will be 192.168.121.179<br /><br />Vagrantfile needed parameters:<br /><br />config.vm.box = &quot;centos/8&quot;<br />config.vm.network &quot;private_network&quot;, ip: &quot;192.168.121.180&quot;<br />config.vm.hostname = &quot;lb01.localhost&quot;<br /><br />config.vm.box = &quot;centos/8&quot;<br />config.vm.network &quot;private_network&quot;, ip: &quot;192.168.121.181&quot;<br />config.vm.hostname = &quot;lb02.localhost&quot;<br /><br />config.vm.box = &quot;centos/8&quot;<br />config.vm.network &quot;private_network&quot;, ip: &quot;192.168.121.191&quot;<br />config.vm.hostname = &quot;app01.localhost&quot;<br /><br />config.vm.box = &quot;centos/8&quot;<br />config.vm.network &quot;private_network&quot;, ip: &quot;192.168.121.192&quot;<br />config.vm.hostname = &quot;app02.localhost&quot;<br /><br />------------------------------------------------------------------------<br />app01 and app02 will have nginx installed running its default welcome page.<br /><pre><br />angel@acool:~/Documents/haproxy-cluster$ date<br />Fri 21 May 2021 07:11:52 PM PDT<br />angel@acool:~/Documents/haproxy-cluster$ cat /etc/lsb-release<br />DISTRIB_ID=Ubuntu<br />DISTRIB_RELEASE=20.04<br />DISTRIB_CODENAME=focal<br />DISTRIB_DESCRIPTION=&quot;Ubuntu 20.04.2 LTS&quot;<br />angel@acool:~/Documents/haproxy-cluster$ <br />angel@acool:~/Documents/haproxy-cluster$ tree<br />.<br />├── app01<br />│   └── Vagrantfile<br />├── app02<br />│   └── Vagrantfile<br />├── lb01<br />│   └── Vagrantfile<br />├── lb02<br />│   └── Vagrantfile<br />└── NOTES.txt<br /><br />4 directories, 5 files<br />angel@acool:~/Documents/haproxy-cluster$<br />angel@acool:~/Documents/haproxy-cluster$ sudo vagrant global-status<br />id       name    provider state   directory                                   <br 
/>------------------------------------------------------------------------------<br />1553a24  default libvirt shutoff /home/angel/Documents/haproxy-cluster/lb01  <br />3c33424  default libvirt shutoff /home/angel/Documents/haproxy-cluster/lb02  <br />1d9af06  default libvirt shutoff /home/angel/Documents/haproxy-cluster/app01 <br />5bc8220  default libvirt shutoff /home/angel/Documents/haproxy-cluster/app02 <br />...<br />angel@acool:~/Documents/haproxy-cluster$<br />angel@acool:~/Documents/haproxy-cluster$<br />angel@acool:~/Documents/haproxy-cluster/lb01$ vagrant --version<br />Vagrant 2.2.6<br />angel@acool:~/Documents/haproxy-cluster$<br />angel@acool:~/Documents/haproxy-cluster$ cd lb01/<br />angel@acool:~/Documents/haproxy-cluster/lb01$ sudo vagrant up<br />...<br />angel@acool:~/Documents/haproxy-cluster/lb01$ sudo vagrant ssh<br />Last login: Sat May 22 02:08:45 2021 from 192.168.121.1<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ cat /etc/redhat-release <br />CentOS Linux release 8.3.2011<br />[vagrant@lb01 ~]$ sudo dnf install haproxy keepalived<br /><br />[vagrant@lb01 ~]$ haproxy -v<br />HA-Proxy version 1.8.23 2019/11/25<br />Copyright 2000-2019 Willy Tarreau &lt;willy@haproxy.org&gt;<br /><br />[vagrant@lb01 ~]$ keepalived --version<br />Keepalived v2.0.10 (11/12,2018)<br />...<br />[vagrant@lb01 ~]$<br />[vagrant@lb01 ~]$ # HAProxy need this to bind to floating ip when ip is missing locally <br />[vagrant@lb01 ~]$ cat /etc/sysctl.conf <br />...<br />net.ipv4.ip_nonlocal_bind=1<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ sudo sysctl -p<br />net.ipv4.ip_nonlocal_bind = 1<br />[vagrant@lb01 ~]$<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ cat /etc/haproxy/haproxy.cfg <br />...<br />## enable stats<br />listen stats<br />    bind :9000<br />    stats enable<br />    stats uri /stats<br />    stats refresh 10s<br />    stats admin if LOCALHOST<br /><br />## enable www frontend, bind floating ip address<br />frontend www<br />    bind 
192.168.121.179:80<br />    mode http<br />    default_backend www_servers<br /><br />## enable www backend<br />backend www_servers<br />    balance roundrobin<br />    option forwardfor<br />    http-request set-header X-Forwarded-Port %[dst_port]<br />    http-request add-header X-Forwarded-Proto https if { ssl_fc }<br />    option httpchk HEAD / HTTP/1.1\r\nHost:localhost<br />    server app01 192.168.121.191:80 check<br />    server app02 192.168.121.192:80 check<br /><br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ cat /etc/keepalived/keepalived.conf<br />     vrrp_script chk_haproxy {      # Requires keepalived-1.1.13<br />       #script &quot;killall -0 haproxy&quot;  # cheaper than pidof<br />       script &quot;pidof haproxy&quot;  # this one worked better for me.<br />       interval 2 # check every 2 seconds<br />       weight 2 # add 2 points of priority if OK<br />     }<br />     vrrp_instance VI_1 {<br />       interface eth0<br />       state MASTER<br />       virtual_router_id 51<br />       priority 101 # 101 on lb01, 100 on lb02<br />       virtual_ipaddress {<br />         192.168.121.179<br />       }<br />       track_script {<br />         chk_haproxy<br />       }<br />     }<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ # this should be the end result, the floating ip should be listed.<br />[vagrant@lb01 ~]$ ip a |grep 179<br />    inet 192.168.121.179/32 scope global eth0<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ # if you stop haproxy (or shutdown lb01), lb02 should take over the floating ip!<br />[vagrant@lb01 ~]$ # when haproxy is back, lb01 will reclaim the floating ip, the end result is<br />[vagrant@lb01 ~]$ # the floating ip will be available even if lb01 goes down.<br /></pre><br /><br />Cheers!<br /><br />UPDATE: November 11, 2021 -  Adding lb02 details in order to remove ambiguities when I see this post in the future.<br /><pre><br />[vagrant@lb02 ~]$ cat /etc/sysctl.conf <br />...<br />net.ipv4.ip_nonlocal_bind=1<br 
/>[vagrant@lb02 ~]$<br /></pre><br /><pre><br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ cat /etc/keepalived/keepalived.conf<br />     vrrp_script chk_haproxy {      # Requires keepalived-1.1.13<br />       #script &quot;killall -0 haproxy&quot;  # cheaper than pidof<br />       script &quot;pidof haproxy&quot;<br />       interval 2 # check every 2 seconds<br />       weight 2 # add 2 points of priority if OK<br />     }<br />     vrrp_instance VI_1 {<br />       interface eth0<br />       state MASTER<br />       virtual_router_id 51<br />       priority 100 # 101 on primary, 100 on secondary<br />       virtual_ipaddress {<br />         192.168.121.179<br />       }<br />       track_script {<br />         chk_haproxy<br />       }<br />     }<br /><br />[vagrant@lb02 ~]$<br /></pre><br /><pre><br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ cat /etc/haproxy/haproxy.cfg<br />#---------------------------------------------------------------------<br /># Example configuration for a possible web application.  See the<br /># full configuration options online.<br />#<br />#   <a href="https://www.haproxy.org/download/1.8/doc/configuration.txt" >https://www.haproxy.org/download/1.8/doc/configuration.txt</a><br />#<br />#---------------------------------------------------------------------<br /><br />#---------------------------------------------------------------------<br /># Global settings<br />#---------------------------------------------------------------------<br />global<br />    # to have these messages end up in /var/log/haproxy.log you will<br />    # need to:<br />    #<br />    # 1) configure syslog to accept network log events.  This is done<br />    #    by adding the &#039;-r&#039; option to the SYSLOGD_OPTIONS in<br />    #    /etc/sysconfig/syslog<br />    #<br />    # 2) configure local2 events to go to the /var/log/haproxy.log<br />    #   file. 
A line like the following can be added to<br />    #   /etc/sysconfig/syslog<br />    #<br />    #    local2.*                       /var/log/haproxy.log<br />    #<br />    log         127.0.0.1 local2<br /><br />    chroot      /var/lib/haproxy<br />    pidfile     /var/run/haproxy.pid<br />    maxconn     4000<br />    user        haproxy<br />    group       haproxy<br />    daemon<br /><br />    # turn on stats unix socket<br />    stats socket /var/lib/haproxy/stats<br /><br />    # utilize system-wide crypto-policies<br />    ssl-default-bind-ciphers PROFILE=SYSTEM<br />    ssl-default-server-ciphers PROFILE=SYSTEM<br /><br />#---------------------------------------------------------------------<br /># common defaults that all the &#039;listen&#039; and &#039;backend&#039; sections will<br /># use if not designated in their block<br />#---------------------------------------------------------------------<br />defaults<br />    mode                    http<br />    log                     global<br />    option                  httplog<br />    option                  dontlognull<br />    option http-server-close<br />    option forwardfor       except 127.0.0.0/8<br />    option                  redispatch<br />    retries                 3<br />    timeout http-request    10s<br />    timeout queue           1m<br />    timeout connect         10s<br />    timeout client          1m<br />    timeout server          1m<br />    timeout http-keep-alive 10s<br />    timeout check           10s<br />    maxconn                 3000<br /><br /># ME: enable stats<br />listen stats<br />    bind :9000<br />    stats enable<br />    stats uri /stats<br />    stats refresh 10s<br />    stats admin if LOCALHOST<br /><br /># ME: <br />frontend www<br />    bind 192.168.121.179:80<br />    mode http<br />    default_backend www_servers<br /><br /># ME:<br />backend www_servers<br />    balance roundrobin<br />    option forwardfor<br />    http-request set-header 
X-Forwarded-Port %[dst_port]<br />    http-request add-header X-Forwarded-Proto https if { ssl_fc }<br />    option httpchk HEAD / HTTP/1.1\r\nHost:localhost<br />    server app01 192.168.121.191:80 check<br />    server app02 192.168.121.192:80 check<br /><br />#---------------------------------------------------------------------<br /># main frontend which proxys to the backends<br />#---------------------------------------------------------------------<br />frontend main<br />    bind *:5000<br />    acl url_static       path_beg       -i /static /images /javascript /stylesheets<br />    acl url_static       path_end       -i .jpg .gif .png .css .js<br /><br />    use_backend static          if url_static<br />    default_backend             app<br /><br />#---------------------------------------------------------------------<br /># static backend for serving up images, stylesheets and such<br />#---------------------------------------------------------------------<br />backend static<br />    balance     roundrobin<br />    server      static 127.0.0.1:4331 check<br /><br />#---------------------------------------------------------------------<br /># round robin balancing between the various backends<br />#---------------------------------------------------------------------<br />backend app<br />    balance     roundrobin<br />    server  app1 127.0.0.1:5001 check<br />    server  app2 127.0.0.1:5002 check<br />    server  app3 127.0.0.1:5003 check<br />    server  app4 127.0.0.1:5004 check<br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ <br /></pre><br /><br /><br />]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry210522-030737</id>
		<issued>2021-05-22T00:00:00Z</issued>
		<modified>2021-05-22T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Docker: reference information for SWARMS, NODES, SERVICES, STACKS and NETWORKS</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry201211-183259" />
		<content type="text/html" mode="escaped"><![CDATA[<pre>[vagrant@box1 ~]$ date<br />Fri Dec 11 18:34:51 UTC 2020<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ docker --version<br />Docker version 20.10.0, build 7287ab3<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$  ########## Docker SWARM info ##########<br />[vagrant@box1 ~]$ docker swarm<br /><br />Usage:  docker swarm COMMAND<br /><br />Manage Swarm<br /><br />Commands:<br />  ca          Display and rotate the root CA<br />  init        Initialize a swarm<br />  join        Join a swarm as a node and/or manager<br />  join-token  Manage join tokens<br />  leave       Leave the swarm<br />  unlock      Unlock swarm<br />  unlock-key  Manage the unlock key<br />  update      Update the swarm<br /><br />Run &#039;docker swarm COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$  ########## Docker NODE info ##########<br />[vagrant@box1 ~]$ docker node<br /><br />Usage:  docker node COMMAND<br /><br />Manage Swarm nodes<br /><br />Commands:<br />  demote      Demote one or more nodes from manager in the swarm<br />  inspect     Display detailed information on one or more nodes<br />  ls          List nodes in the swarm<br />  promote     Promote one or more nodes to manager in the swarm<br />  ps          List tasks running on one or more nodes, defaults to current node<br />  rm          Remove one or more nodes from the swarm<br />  update      Update a node<br /><br />Run &#039;docker node COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$  ########## Docker SERVICE info ##########<br />[vagrant@box1 ~]$ docker service<br /><br />Usage:  docker service COMMAND<br /><br />Manage services<br /><br />Commands:<br />  create      Create a new service<br />  inspect     Display detailed information on one or more services<br />  logs        Fetch 
the logs of a service or task<br />  ls          List services<br />  ps          List the tasks of one or more services<br />  rm          Remove one or more services<br />  rollback    Revert changes to a service&#039;s configuration<br />  scale       Scale one or multiple replicated services<br />  update      Update a service<br /><br />Run &#039;docker service COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$  ########## Docker STACK info ##########<br />[vagrant@box1 ~]$ docker stack<br /><br />Usage:  docker stack [OPTIONS] COMMAND<br /><br />Manage Docker stacks<br /><br />Options:<br />      --orchestrator string   Orchestrator to use (swarm|kubernetes|all)<br /><br />Commands:<br />  deploy      Deploy a new stack or update an existing stack<br />  ls          List stacks<br />  ps          List the tasks in the stack<br />  rm          Remove one or more stacks<br />  services    List the services in the stack<br /><br />Run &#039;docker stack COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$  ########## Docker NETWORK info ##########<br />[vagrant@box1 ~]$ docker network<br /><br />Usage:  docker network COMMAND<br /><br />Manage networks<br /><br />Commands:<br />  connect     Connect a container to a network<br />  create      Create a network<br />  disconnect  Disconnect a container from a network<br />  inspect     Display detailed information on one or more networks<br />  ls          List networks<br />  prune       Remove all unused networks<br />  rm          Remove one or more networks<br /><br />Run &#039;docker network COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$  ########## All the crap available under Docker binary ##########<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ docker<br /><br />Usage:  docker 
[OPTIONS] COMMAND<br /><br />A self-sufficient runtime for containers<br /><br />Options:<br />      --config string      Location of client config files (default &quot;/home/vagrant/.docker&quot;)<br />  -c, --context string     Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with &quot;docker<br />                           context use&quot;)<br />  -D, --debug              Enable debug mode<br />  -H, --host list          Daemon socket(s) to connect to<br />  -l, --log-level string   Set the logging level (&quot;debug&quot;|&quot;info&quot;|&quot;warn&quot;|&quot;error&quot;|&quot;fatal&quot;) (default &quot;info&quot;)<br />      --tls                Use TLS; implied by --tlsverify<br />      --tlscacert string   Trust certs signed only by this CA (default &quot;/home/vagrant/.docker/ca.pem&quot;)<br />      --tlscert string     Path to TLS certificate file (default &quot;/home/vagrant/.docker/cert.pem&quot;)<br />      --tlskey string      Path to TLS key file (default &quot;/home/vagrant/.docker/key.pem&quot;)<br />      --tlsverify          Use TLS and verify the remote<br />  -v, --version            Print version information and quit<br /><br />Management Commands:<br />  app*        Docker App (Docker Inc., v0.9.1-beta3)<br />  builder     Manage builds<br />  buildx*     Build with BuildKit (Docker Inc., v0.4.2-docker)<br />  config      Manage Docker configs<br />  container   Manage containers<br />  context     Manage contexts<br />  image       Manage images<br />  manifest    Manage Docker image manifests and manifest lists<br />  network     Manage networks<br />  node        Manage Swarm nodes<br />  plugin      Manage plugins<br />  secret      Manage Docker secrets<br />  service     Manage services<br />  stack       Manage Docker stacks<br />  swarm       Manage Swarm<br />  system      Manage Docker<br />  trust       Manage trust on Docker images<br />  volume      Manage volumes<br 
/><br />Commands:<br />  attach      Attach local standard input, output, and error streams to a running container<br />  build       Build an image from a Dockerfile<br />  commit      Create a new image from a container&#039;s changes<br />  cp          Copy files/folders between a container and the local filesystem<br />  create      Create a new container<br />  diff        Inspect changes to files or directories on a container&#039;s filesystem<br />  events      Get real time events from the server<br />  exec        Run a command in a running container<br />  export      Export a container&#039;s filesystem as a tar archive<br />  history     Show the history of an image<br />  images      List images<br />  import      Import the contents from a tarball to create a filesystem image<br />  info        Display system-wide information<br />  inspect     Return low-level information on Docker objects<br />  kill        Kill one or more running containers<br />  load        Load an image from a tar archive or STDIN<br />  login       Log in to a Docker registry<br />  logout      Log out from a Docker registry<br />  logs        Fetch the logs of a container<br />  pause       Pause all processes within one or more containers<br />  port        List port mappings or a specific mapping for the container<br />  ps          List containers<br />  pull        Pull an image or a repository from a registry<br />  push        Push an image or a repository to a registry<br />  rename      Rename a container<br />  restart     Restart one or more containers<br />  rm          Remove one or more containers<br />  rmi         Remove one or more images<br />  run         Run a command in a new container<br />  save        Save one or more images to a tar archive (streamed to STDOUT by default)<br />  search      Search the Docker Hub for images<br />  start       Start one or more stopped containers<br />  stats       Display a live stream of container(s) resource usage 
statistics<br />  stop        Stop one or more running containers<br />  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE<br />  top         Display the running processes of a container<br />  unpause     Unpause all processes within one or more containers<br />  update      Update configuration of one or more containers<br />  version     Show the Docker version information<br />  wait        Block until one or more containers stop, then print their exit codes<br /><br />Run &#039;docker COMMAND --help&#039; for more information on a command.<br />To get more help with docker, check out guides at <a href="https://docs.docker.com/go/guides/" >https://docs.docker.com/go/guides/</a><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ </pre>]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry201211-183259</id>
		<issued>2020-12-11T00:00:00Z</issued>
		<modified>2020-12-11T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Nagios: Miscellaneous notes on installing and configuring Nagios.</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry201208-014642" />
		<content type="text/html" mode="escaped"><![CDATA[<pre>[acool@localhost ~]$ <br />[acool@localhost ~]$ date<br />Mon 07 Dec 2020 05:45:53 PM PST<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cat /etc/redhat-release <br />Fedora release 31 (Thirty One)<br />[acool@localhost ~]$  <br />[acool@localhost ~]$ sudo dnf install httpd nagios nagios-common nagios-plugins-all<br />Last metadata expiration check: 0:36:12 ago on Mon 07 Dec 2020 05:09:52 PM PST.<br />Package httpd-2.4.46-1.fc31.x86_64 is already installed.<br />Package nagios-4.4.5-7.fc31.x86_64 is already installed.<br />Package nagios-common-4.4.5-7.fc31.x86_64 is already installed.<br />Package nagios-plugins-all-2.3.3-2.fc31.x86_64 is already installed.<br />Dependencies resolved.<br />Nothing to do.<br />Complete!<br />[acool@localhost ~]$<br />[acool@localhost ~]$ cat /etc/httpd/conf.d/nagios.conf<br />...<br />[acool@localhost ~]$<br />[acool@localhost ~]$ # default password for web ui nagiosadmin:nagiosadmin? I think yes.<br />[acool@localhost ~]$ ll /etc/nagios/<br />total 92<br />-rw-rw-r--. 1 root root   13699 Apr  7  2020 cgi.cfg<br />-rw-rw-r--. 1 root root   45886 Nov  4 23:23 nagios.cfg<br />-rw-r--r--. 1 root root   12839 Apr 29  2020 nrpe.cfg<br />drwxr-x---. 2 root nagios  4096 Nov  5 11:05 objects<br />-rw-r-----. 1 root apache    27 Apr  7  2020 passwd<br />drwxr-x---. 2 root nagios  4096 Nov  3 12:22 private<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ # <a href="http://localhost:8080/nagios/" >http://localhost:8080/nagios/</a> should now load (adjust port as needed)<br /></pre><br /><br />TODO: nagiosgraph. 
NEEDS TESTING!!<br /><br />A.- Looks like we need this in commands.cfg :<br /><br />define command {<br />  command_name process-service-perfdata-for-nagiosgraph<br />  command_line /usr/local/nagiosgraph/bin/insert.pl<br />}<br /><br />B.- And this in templates.cfg :<br /><br />define service {<br />      name              graphed-service<br />      action_url        /nagiosgraph/cgi-bin/show.cgi?host=$HOSTNAME$&amp;service=$SERVICEDESC$&#039; onMouseOver=&#039;showGraphPopup(this)&#039; onMouseOut=&#039;hideGraphPopup()&#039; rel=&#039;/nagiosgraph/cgi-bin/showgraph.cgi?host=$HOSTNAME$&amp;service=$SERVICEDESC$&amp;period=week&amp;rrdopts=-w+450+-j<br />      register        0<br />}<br /><br />C.- Then we need to add &#039;graphed-service&#039; to services in localhost.cfg, for example:<br /><br /># Define a service to &quot;ping&quot; the local machine<br />define service {<br /><br />    use                     local-service,graphed-service; Name of service template to use<br />    host_name               localhost<br />    service_description     PING<br />    check_command           check_ping!100.0,20%!500.0,60%<br />}<br /><br />D.- Add these to /etc/nagios/nagios.cfg : - NEEDS TO BE VERIFIED<br /><br />process_performance_data=1<br />service_perfdata_file=/tmp/perfdata.log<br />service_perfdata_file_template=$LASTSERVICECHECK$||$HOSTNAME$||$SERVICEDESC$||$SERVICEOUTPUT$||$SERVICEPERFDATA$<br />service_perfdata_file_mode=a<br />service_perfdata_file_processing_interval=30<br />service_perfdata_file_processing_command=process-service-perfdata-for-nagiosgraph<br /><br />More hints :<br /><pre>[root@localhost nagiosgraph]# <br />[root@localhost nagiosgraph]# grep -nri nagiosgraph /etc/httpd/<br />/etc/httpd/conf/httpd.conf:354:#### NAGIOSGRAPH #####<br />/etc/httpd/conf/httpd.conf:355:include /usr/local/nagiosgraph/etc/nagiosgraph-apache.conf<br />[root@localhost nagiosgraph]#</pre><br /><br />See nagiosgraph settings:<br /><br /><a 
href="http://localhost:8080/nagiosgraph/cgi-bin/showconfig.cgi" >http://localhost:8080/nagiosgraph/cgi-b ... config.cgi</a><br />]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry201208-014642</id>
		<issued>2020-12-08T00:00:00Z</issued>
		<modified>2020-12-08T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Solr: Starting Solr 4.7 for development purposes.</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry200925-183146" />
		<content type="text/html" mode="escaped"><![CDATA[<pre>[acool@localhost solr-4.7.0]$ date<br />Fri 25 Sep 2020 09:33:39 AM PDT<br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ sudo yum install java-1.8.0-openjdk<br />...<br />[acool@localhost solr-4.7.0]$ java -version<br />openjdk version &quot;1.8.0_265&quot;<br />OpenJDK Runtime Environment (build 1.8.0_265-b01)<br />OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode)<br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ ll<br />total 460<br />-rw-r--r--.  1 acool acool 362968 Feb 21  2014 CHANGES.txt<br />drwxr-xr-x. 12 acool acool   4096 Feb 21  2014 contrib<br />drwxrwxr-x.  4 acool acool   4096 Feb  1  2020 dist<br />drwxrwxr-x. 17 acool acool   4096 Feb  1  2020 docs<br />drwxr-xr-x. 15 acool acool   4096 Feb  2  2020 example<br />drwxr-xr-x.  2 acool acool  32768 Feb  1  2020 licenses<br />-rw-r--r--.  1 acool acool  12646 Feb 18  2014 LICENSE.txt<br />-rw-r--r--.  1 acool acool  26762 Feb 18  2014 NOTICE.txt<br />-rw-r--r--.  1 acool acool   5344 Feb 18  2014 README.txt<br />-rw-r--r--.  
1 acool acool    686 Feb 18  2014 SYSTEM_REQUIREMENTS.txt<br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ # Starting server<br />[acool@localhost solr-4.7.0]$ cd example/<br />[acool@localhost example]$ <br />[acool@localhost example]$ java -jar start.jar <br />...<br />[acool@localhost example]$ <br />[acool@localhost example]$  # <a href="http://localhost:8983/solr" >http://localhost:8983/solr</a> should now render the dashboard<br />[acool@localhost example]$</pre><br /><br />12/7/2020 Sample query:<br /><pre>http://app01.example.com:8098/search/query/article_index?sort=score DESC<br />&amp;q={!edismax}how to become a millionaire<br />&amp;qf=authorName^6 objectId^4 headline^2 deck<br />&amp;fq={!lucene}<br />    edition:us<br />    AND statusId:4<br />    AND objectTypeId:(1 2 4 12 15)<br />    AND publicationDateISO8601:[NOW-10YEAR TO NOW]<br />&amp;qs=5<br />&amp;bq=publicationDateISO8601:[NOW-2YEAR TO NOW]<br />&amp;fl=*,score<br />&amp;hl=true<br />&amp;mm=3&lt;80%<br />&amp;wt=json<br />&amp;rows=20<br />&amp;start=0<br />&amp;df=entspellcheck<br />&amp;spellcheck=true<br />&amp;spellcheck.q=&quot;how to become a millionaire&quot;~10<br />&amp;spellcheck.collate=true<br />&amp;spellcheck.maxCollations=30<br />&amp;spellcheck.maxCollationTries=30<br />&amp;spellcheck.maxCollationEvaluations=30<br />&amp;spellcheck.collateExtendedResults=true<br />&amp;spellcheck.collateMaxCollectDocs=30<br />&amp;spellcheck.count=10<br />&amp;spellcheck.extendedResults=true<br />&amp;spellcheck.maxResultsForSuggest=5<br />&amp;spellcheck.alternativeTermCount=10<br />&amp;spellcheck.accuracy=0.5</pre>]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry200925-183146</id>
		<issued>2020-09-25T00:00:00Z</issued>
		<modified>2020-09-25T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Docker: Swarm Demo</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry200223-012017" />
		<content type="text/html" mode="escaped"><![CDATA[In this demo I:<br /><br />a) create 3 CentOS 7 Vagrant VMs<br />b) install Docker in each VM<br />c) create a Docker Swarm (Swarm mode) with one manager and 2 workers<br />d) create a service from the nginx image, update it to use the httpd image, adjust the replicas&#039; memory limit, and scale the service<br /><br /><pre>[acool@localhost docker-swarm-demo]$ date<br />Sat 22 Feb 2020 04:35:36 PM PST<br />[acool@localhost docker-swarm-demo]$ cat /etc/redhat-release <br />Fedora release 31 (Thirty One)<br />[acool@localhost docker-swarm-demo]$ vagrant --version<br />Vagrant 2.2.6<br />[acool@localhost docker-swarm-demo]$ tree<br />.<br />├── vagrant-box-1<br />│   └── Vagrantfile<br />├── vagrant-box-2<br />│   └── Vagrantfile<br />└── vagrant-box-3<br />    └── Vagrantfile<br /><br />3 directories, 3 files<br />[acool@localhost docker-swarm-demo]$ <br />[acool@localhost docker-swarm-demo]$ cd vagrant-box-1<br />[acool@localhost vagrant-box-1]$ vagrant up<br />...<br />[acool@localhost vagrant-box-1]$ vagrant ssh<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ cat /etc/redhat-release <br />CentOS Linux release 7.6.1810 (Core)<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ ip address show eth0<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:de:6e:43 brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.102/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       valid_lft 3307sec preferred_lft 3307sec<br />    inet6 fe80::5054:ff:fede:6e43/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ sudo yum install docker<br />...<br />[vagrant@box1 ~]$ sudo systemctl start docker<br />[vagrant@box1 ~]$ sudo docker version<br />Client:<br /> Version:         1.13.1<br /> API version:     1.26<br /> Package version: docker-1.13.1-108.git4ef4b30.el7.centos.x86_64<br 
/> Go version:      go1.10.3<br /> Git commit:      4ef4b30/1.13.1<br /> Built:           Tue Jan 21 17:16:25 2020<br /> OS/Arch:         linux/amd64<br /><br />Server:<br /> Version:         1.13.1<br /> API version:     1.26 (minimum version 1.12)<br /> Package version: docker-1.13.1-108.git4ef4b30.el7.centos.x86_64<br /> Go version:      go1.10.3<br /> Git commit:      4ef4b30/1.13.1<br /> Built:           Tue Jan 21 17:16:25 2020<br /> OS/Arch:         linux/amd64<br /> Experimental:    false<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ # disable firewall for the sake of keeping this demo simple<br />[vagrant@box1 ~]$ sudo systemctl disable firewalld.service<br />[vagrant@box1 ~]$<br /><br />[acool@localhost docker-swarm-demo]$ # create box2 and box3 via vagrant<br /><br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$ # install and start docker as previously shown in box1 <br />[vagrant@box2 ~]$ # disable firewall as previously shown in box1<br />[vagrant@box2 ~]$<br />[vagrant@box2 ~]$ ip address show eth0<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:e1:c4:f9 brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.27/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       valid_lft 3436sec preferred_lft 3436sec<br />    inet6 fe80::5054:ff:fee1:c4f9/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@box2 ~]$<br /><br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ # install and start docker as previously shown in box1 <br />[vagrant@box3 ~]$ # disable firewall as previously shown in box1<br />[vagrant@box3 ~]$<br />[vagrant@box3 ~]$ ip address show eth0<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:18:5a:8c brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.88/24 brd 192.168.122.255 scope global noprefixroute dynamic 
eth0<br />       valid_lft 3323sec preferred_lft 3323sec<br />    inet6 fe80::5054:ff:fe18:5a8c/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ # make sure all boxes can ping each other<br />[vagrant@box3 ~]$ ping -c2 192.168.122.102<br />PING 192.168.122.102 (192.168.122.102) 56(84) bytes of data.<br />64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=0.562 ms<br />64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.619 ms<br /><br />--- 192.168.122.102 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 1000ms<br />rtt min/avg/max/mdev = 0.562/0.590/0.619/0.037 ms<br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ ping -c2 192.168.122.27<br />PING 192.168.122.27 (192.168.122.27) 56(84) bytes of data.<br />64 bytes from 192.168.122.27: icmp_seq=1 ttl=64 time=0.457 ms<br />64 bytes from 192.168.122.27: icmp_seq=2 ttl=64 time=0.312 ms<br /><br />--- 192.168.122.27 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 1000ms<br />rtt min/avg/max/mdev = 0.312/0.384/0.457/0.075 ms<br />[vagrant@box3 ~]$<br /><br /><br /><br />The gist of this demo:<br /><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker swarm init --advertise-addr 192.168.122.102<br />Swarm initialized: current node (325hn4zrumoinjslhiw3p9c1j) is now a manager.<br /><br />To add a worker to this swarm, run the following command:<br /><br />    docker swarm join \<br />    --token SWMTKN-1-1qm592qpo4j2ka5nxqx98vizi6z9dtag4rou49zxvrr7rww72g-agsgzbalcyw0c7saupqvk90sl \<br />    192.168.122.102:2377<br /><br />To add a manager to this swarm, run &#039;docker swarm join-token manager&#039; and follow the instructions.<br /><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br /><br /><br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$ sudo docker swarm join \<br 
/>&gt;     --token SWMTKN-1-1qm592qpo4j2ka5nxqx98vizi6z9dtag4rou49zxvrr7rww72g-agsgzbalcyw0c7saupqvk90sl \<br />&gt;     192.168.122.102:2377<br />This node joined a swarm as a worker.<br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$<br /><br /><br />[vagrant@box3 ~]$<br />[vagrant@box3 ~]$<br />[vagrant@box3 ~]$ sudo docker swarm join \<br />&gt;     --token SWMTKN-1-1qm592qpo4j2ka5nxqx98vizi6z9dtag4rou49zxvrr7rww72g-agsgzbalcyw0c7saupqvk90sl \<br />&gt;     192.168.122.102:2377<br />This node joined a swarm as a worker.<br />[vagrant@box3 ~]$<br />[vagrant@box3 ~]$<br /><br /><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker node ls<br />ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS<br />325hn4zrumoinjslhiw3p9c1j *  box1      Ready   Active        Leader<br />78uis92n6z7lg2glmsbkzuag0    box3      Ready   Active        <br />ehjej7f2ol2svf4nci0k9x4if    box2      Ready   Active        <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ # lets create a service<br />[vagrant@box1 ~]$ sudo docker service create --replicas 5 -p 80:80 --name web nginx<br />ytr9c94iieku7akjlp1gsq8mt<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service ls<br />ID            NAME  MODE        REPLICAS  IMAGE<br />ytr9c94iieku  web   replicated  0/5       nginx:latest<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ sudo docker service ps web<br />ID            NAME   IMAGE         NODE  DESIRED STATE  CURRENT STATE                   ERROR  PORTS<br />n4n6xun4dlmn  web.1  nginx:latest  box2  Running        Preparing 20 seconds ago               <br />ks1cnh8oko1r  web.2  nginx:latest  box3  Running        Running less than a second ago         <br />lhqha4nd2sj2  web.3  nginx:latest  box1  Running        Preparing 20 seconds ago               <br />dy48ok6b1clb  web.4  nginx:latest  box2  Running        Preparing 20 seconds ago               <br />81dkfenyjrbz  web.5  nginx:latest  box3  Running        
Running less than a second ago         <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # nginx should be available via any box ip in your browser: <a href="http://192.168.122.88/" >http://192.168.122.88/</a>, <a href="http://192.168.122.27/" >http://192.168.122.27/</a> or <a href="http://192.168.122.102/" >http://192.168.122.102/</a><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # we can try curl too<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ curl 192.168.122.102<br />...<br />[vagrant@box1 ~]$ curl 192.168.122.88<br />...<br />[vagrant@box1 ~]$ curl 192.168.122.27<br />...<br /><br />[vagrant@box2 ~]$ # lets see how much memory each replica is assigned<br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$ sudo docker stats --no-stream<br />CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O             BLOCK I/O           PIDS<br />19467a26755f        0.00%               1.402 MiB / 487.1 MiB   0.29%               8.65 kB / 9.52 kB   0 B / 0 B           2<br />427cf3658a03        0.00%               1.383 MiB / 487.1 MiB   0.28%               4.65 kB / 2.86 kB   1.83 MB / 0 B       2<br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$<br /><br />[vagrant@box1 ~]$ # lets update each replica memory limit to 250M<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service update --limit-memory 250M web<br />web<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$<br /><br /><br />[vagrant@box3 ~]$  # verify memory adjustment<br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ sudo docker stats --no-stream<br />CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS<br />8990a7fa2489        0.00%               1.375 MiB / 250 MiB   0.55%               2.19 kB / 1.31 kB   0 B / 0 B           2<br />e6d71ec0caf8        0.00%               1.375 MiB / 250 MiB   0.55%               2.62 kB / 1.31 kB   0 B / 0 B           2<br />[vagrant@box3 ~]$<br 
/><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # lets update our service with a different image, we&#039;ll try httpd instead of nginx :)<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service update --image httpd web<br />web<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service ps web<br />ID            NAME       IMAGE         NODE  DESIRED STATE  CURRENT STATE                    ERROR  PORTS<br />opf7ks9q5rj4  web.1      httpd:latest  box2  Running        Starting less than a second ago         <br />sbbs4g9shkzm   \_ web.1  nginx:latest  box2  Shutdown       Shutdown 5 seconds ago                  <br />n4n6xun4dlmn   \_ web.1  nginx:latest  box2  Shutdown       Shutdown 3 minutes ago                  <br />vvv6018iym4j  web.2      nginx:latest  box3  Running        Running 3 minutes ago                   <br />ks1cnh8oko1r   \_ web.2  nginx:latest  box3  Shutdown       Shutdown 3 minutes ago                  <br />nl0oddf682d3  web.3      nginx:latest  box1  Running        Running 3 minutes ago                   <br />lhqha4nd2sj2   \_ web.3  nginx:latest  box1  Shutdown       Shutdown 3 minutes ago                  <br />xgcgisnlz5kd  web.4      nginx:latest  box1  Running        Running 3 minutes ago                   <br />dy48ok6b1clb   \_ web.4  nginx:latest  box2  Shutdown       Shutdown 3 minutes ago                  <br />jw9btp4h734o  web.5      nginx:latest  box3  Running        Running 3 minutes ago                   <br />81dkfenyjrbz   \_ web.5  nginx:latest  box3  Shutdown       Shutdown 3 minutes ago                  <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service ls<br />ID            NAME  MODE        REPLICAS  IMAGE<br />ytr9c94iieku  web   replicated  5/5       httpd:latest<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # all nodes should render apache httpd welcome message now! 
<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # lets increase the number of replicas<br />[vagrant@box1 ~]$ sudo docker service scale web=8<br />web scaled to 8<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service ls<br />ID            NAME  MODE        REPLICAS  IMAGE<br />ytr9c94iieku  web   replicated  8/8       httpd:latest<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ exit<br />logout<br />Connection to 192.168.122.102 closed.<br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$ </pre><br /><br />Enjoy!]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry200223-012017</id>
		<issued>2020-02-23T00:00:00Z</issued>
		<modified>2020-02-23T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Vagrant: Creating two CentOS VMs and ping each other.</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry200221-232810" />
		<content type="text/html" mode="escaped"><![CDATA[<pre>[acool@localhost ~]$ date<br />Fri 21 Feb 2020 02:53:59 PM PST<br />[acool@localhost ~]$<br />[acool@localhost ~]$ cat /etc/redhat-release <br />Fedora release 31 (Thirty One)<br />[acool@localhost ~]$<br />[acool@localhost ~]$ sudo dnf install vagrant-libvirt<br />...<br />[acool@localhost ~]$ vagrant --version<br />Vagrant 2.2.6<br />[acool@localhost ~]$<br />[acool@localhost ~]$ mkdir vagrant-box-1<br />[acool@localhost ~]$ cd vagrant-box-1/<br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$ vagrant init centos/7<br />A `Vagrantfile` has been placed in this directory. You are now<br />ready to `vagrant up` your first virtual environment! Please read<br />the comments in the Vagrantfile as well as documentation on<br />`vagrantup.com` for more information on using Vagrant.<br />[acool@localhost vagrant-box-1]<br />[acool@localhost vagrant-box-1]$ vagrant up<br />...<br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$ vagrant status<br />Current machine states:<br /><br />default                   running (libvirt)<br /><br />The Libvirt domain is running. To stop this machine, you can run<br />`vagrant halt`. 
To destroy the machine, you can run `vagrant destroy`.<br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$ # you can now visually access this VM via &quot;Boxes&quot; which is like virt-manager<br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$ # or you can ssh into this box via vagrant<br />[acool@localhost vagrant-box-1]$ vagrant ssh<br />Last login: Fri Feb 21 23:05:54 2020 from 192.168.122.1<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ cat /etc/redhat-release <br />CentOS Linux release 7.6.1810 (Core) <br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ ip a<br />1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000<br />    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br />    inet 127.0.0.1/8 scope host lo<br />       valid_lft forever preferred_lft forever<br />    inet6 ::1/128 scope host <br />       valid_lft forever preferred_lft forever<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:7f:e1:0c brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.194/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       valid_lft 3068sec preferred_lft 3068sec<br />    inet6 fe80::5054:ff:fe7f:e10c/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ exit<br />logout<br />Connection to 192.168.122.194 closed.<br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$ # lets create another box<br />[acool@localhost vagrant-box-1]$ cd ../ &amp;&amp; mkdir vagrant-box-2<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cd vagrant-box-2<br />[acool@localhost vagrant-box-2]$ <br />[acool@localhost vagrant-box-2]$ vagrant init centos/7<br />A `Vagrantfile` has 
been placed in this directory. You are now<br />ready to `vagrant up` your first virtual environment! Please read<br />the comments in the Vagrantfile as well as documentation on<br />`vagrantup.com` for more information on using Vagrant.<br />[acool@localhost vagrant-box-2]$ <br />[acool@localhost vagrant-box-2]$ vagrant up<br />...<br />[acool@localhost vagrant-box-2]$ <br />[acool@localhost vagrant-box-2]$ vagrant ssh<br />Last login: Fri Feb 21 23:17:20 2020 from 192.168.122.1<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ ip a show eth0<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:69:74:25 brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.27/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       valid_lft 2908sec preferred_lft 2908sec<br />    inet6 fe80::5054:ff:fe69:7425/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ # let&#039;s ping box-1 from box-2<br />[vagrant@localhost ~]$ ping -c 2 192.168.122.194<br />PING 192.168.122.194 (192.168.122.194) 56(84) bytes of data.<br />64 bytes from 192.168.122.194: icmp_seq=1 ttl=64 time=0.589 ms<br />64 bytes from 192.168.122.194: icmp_seq=2 ttl=64 time=0.548 ms<br /><br />--- 192.168.122.194 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 999ms<br />rtt min/avg/max/mdev = 0.548/0.568/0.589/0.031 ms<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ cat /etc/redhat-release <br />CentOS Linux release 7.6.1810 (Core) <br />[vagrant@localhost ~]$ exit<br />logout<br />Connection to 192.168.122.27 closed.<br />[acool@localhost vagrant-box-2]$<br />[acool@localhost vagrant-box-2]$<br />[acool@localhost vagrant-box-2]$<br />[acool@localhost vagrant-box-2]$ # let&#039;s clean up our tests<br />[acool@localhost vagrant-box-2]$ vagrant destroy<br />...<br />[acool@localhost vagrant-box-1]$ 
vagrant destroy<br /></pre>]]></content>
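The per-box workflow above (make a directory, `vagrant init`, `vagrant up`) repeats verbatim for each box, so it can be generated as a dry run first. This is only a sketch: the function prints the commands rather than running them, and the `vagrant-box-N` directory naming is the one used in the transcript.

```shell
# Dry-run sketch: print the commands that would create and boot box N,
# mirroring the vagrant-box-1 / vagrant-box-2 steps in the post.
box_up_cmds() {
  n=$1
  printf 'mkdir -p vagrant-box-%s\n' "$n"
  printf 'cd vagrant-box-%s; vagrant init centos/7; vagrant up\n' "$n"
}
box_up_cmds 1
box_up_cmds 2
```

Piping the output to `sh` would execute the steps for real.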
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry200221-232810</id>
		<issued>2020-02-21T00:00:00Z</issued>
		<modified>2020-02-21T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Docker: CentOS 7 Fun.</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry181212-030136" />
		<content type="text/html" mode="escaped"><![CDATA[<pre>[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ cat /etc/redhat-release <br />Fedora release 24 (Twenty Four)<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker images<br />REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE<br />[aesteban@localhost ~]$ sudo docker ps -a<br />CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker pull centos:7<br />Trying to pull repository docker.io/library/centos ... <br />7: Pulling from docker.io/library/centos<br /><br />a02a4930cb5d: Pull complete <br />Digest: sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426<br />Status: Downloaded newer image for docker.io/centos:7<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker images<br />REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE<br />docker.io/centos    7                   1e1148e4cc2c        6 days ago          201.8 MB<br />[aesteban@localhost ~]$ sudo docker ps -a<br />CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES<br />[aesteban@localhost ~]$ sudo docker run -d --privileged -p 80:80 docker.io/centos:7 /sbin/init<br />f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker exec -it f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36  bash<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# yum install epel-release<br />...<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# yum install nginx<br />...<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br 
/>[root@f0faf6197fbc /]# systemctl enable nginx<br />Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl start nginx<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl status nginx<br />● nginx.service - The nginx HTTP and reverse proxy server<br />   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)<br />   Active: active (running) since Wed 2018-12-12 02:55:32 UTC; 4s ago<br />  Process: 2643 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)<br />  Process: 2642 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)<br />  Process: 2641 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)<br /> Main PID: 2644 (nginx)<br />   CGroup: /system.slice/docker-f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36.scope/system.slice/nginx.service<br />           ├─2644 nginx: master process /usr/sbin/nginx<br />           ├─2645 nginx: worker process<br />           ├─2646 nginx: worker process<br />           ├─2647 nginx: worker process<br />           └─2648 nginx: worker process<br /><br />Dec 12 02:55:31 f0faf6197fbc systemd[1]: Starting The nginx HTTP and reverse proxy server...<br />Dec 12 02:55:31 f0faf6197fbc nginx[2642]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok<br />Dec 12 02:55:31 f0faf6197fbc nginx[2642]: nginx: configuration file /etc/nginx/nginx.conf test is successful<br />Dec 12 02:55:32 f0faf6197fbc systemd[1]: Started The nginx HTTP and reverse proxy server.<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]#<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# exit<br />exit<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ # localhost should now be accessible in browser<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br 
/>[aesteban@localhost ~]$ sudo docker exec -it f0faf6197fbc  systemctl status nginx<br />● nginx.service - The nginx HTTP and reverse proxy server<br />   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)<br />   Active: active (running) since Wed 2018-12-12 02:55:32 UTC; 1min 57s ago<br />  Process: 2643 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)<br />  Process: 2642 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)<br />  Process: 2641 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)<br /> Main PID: 2644 (nginx)<br />   CGroup: /system.slice/docker-f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36.scope/system.slice/nginx.service<br />           ├─2644 nginx: master process /usr/sbin/nginx<br />           ├─2645 nginx: worker process<br />           ├─2646 nginx: worker process<br />           ├─2647 nginx: worker process<br />           └─2648 nginx: worker process<br /><br />Dec 12 02:55:31 f0faf6197fbc systemd[1]: Starting The nginx HTTP and reverse proxy server...<br />Dec 12 02:55:31 f0faf6197fbc nginx[2642]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok<br />Dec 12 02:55:31 f0faf6197fbc nginx[2642]: nginx: configuration file /etc/nginx/nginx.conf test is successful<br />Dec 12 02:55:32 f0faf6197fbc systemd[1]: Started The nginx HTTP and reverse proxy server.<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker exec -it f0faf6197fbc  bash<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl status postfix<br />Unit postfix.service could not be found.<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl status memcached<br />Unit memcached.service could not be found.<br />[root@f0faf6197fbc /]# <br 
/>[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]#  # We can install postfix and memcached with yum using the same procedure! <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]#  # Exercise Done :) !! </pre>]]></content>
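One convenience worth noting: docker accepts any unique prefix of the 64-character container ID, which is why the later commands in the transcript get away with just `f0faf6197fbc`. A quick sketch of trimming the full ID from the `docker run` output down to the conventional 12 characters:

```shell
# Full container ID as printed by `docker run -d` above; docker
# commands accept any unique prefix, conventionally 12 characters.
cid=f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36
short=$(printf '%s' "$cid" | cut -c1-12)
echo "$short"   # prints f0faf6197fbc
```

So `sudo docker exec -it f0faf6197fbc bash` works just as well as pasting the full ID.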
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry181212-030136</id>
		<issued>2018-12-12T00:00:00Z</issued>
		<modified>2018-12-12T00:00:00Z</modified>
	</entry>
	<entry>
		<title>PostgreSQL Fun</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry180506-013458" />
		<content type="text/html" mode="escaped"><![CDATA[<pre><br />#Fedora 23<br />[acool@localhost ~]$ sudo dnf install postgresql-server.x86_64<br />[acool@localhost ~]$ sudo postgresql-setup --initdb<br />[acool@localhost ~]$ sudo systemctl start postgresql<br />[acool@localhost ~]$ sudo su - postgres<br />-bash-4.3$ <br />-bash-4.3$ <br />-bash-4.3$ createuser --pwprompt acool<br />Enter password for new role: <br />Enter it again: <br />-bash-4.3$ <br />-bash-4.3$ <br />-bash-4.3$ psql<br />psql (9.4.9)<br />Type &quot;help&quot; for help.<br />postgres=# <br />postgres=# <br />postgres=#<br />postgres-# # list databases<br />postgres-# \l<br />                                  List of databases<br />   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   <br />-----------+----------+----------+-------------+-------------+-----------------------<br /> postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | <br /> template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +<br />           |          |          |             |             | postgres=CTc/postgres<br /> template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +<br />           |          |          |             |             | postgres=CTc/postgres<br />(3 rows)<br /><br />postgres-#<br />postgres-# #list users (roles)<br />postgres-# \du<br />                             List of roles<br /> Role name |                   Attributes                   | Member of <br />-----------+------------------------------------------------+-----------<br /> acool     |                                                | {}<br /> postgres  | Superuser, Create role, Create DB, Replication | {}<br /><br />postgres-#<br />postgres=# <br />postgres=# ALTER USER acool SUPERUSER;<br />ALTER ROLE<br />postgres=# ALTER USER acool CREATEROLE;<br />ALTER ROLE<br />postgres=# ALTER USER acool CREATEDB;<br />ALTER ROLE<br />postgres=# ALTER 
USER acool REPLICATION;<br />ALTER ROLE<br />postgres=# \du<br />                             List of roles<br /> Role name |                   Attributes                   | Member of <br />-----------+------------------------------------------------+-----------<br /> acool     | Superuser, Create role, Create DB, Replication | {}<br /> postgres  | Superuser, Create role, Create DB, Replication | {}<br /><br />postgres=#<br /><br /><br />[acool@localhost ~]$<br />[acool@localhost ~]$<br />[acool@localhost ~]$ cat postgresql-test.sql <br />CREATE TABLE soccer_teams<br />(<br />	name varchar(250),<br />	city varchar(250)<br />);<br />[acool@localhost ~]$  <br />[acool@localhost ~]$ createdb my-db-test<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ psql -d my-db-test<br />psql (9.4.9)<br />Type &quot;help&quot; for help.<br /><br />my-db-test=# <br />my-db-test=#<br />my-db-test=# \i postgresql-test.sql <br />CREATE TABLE<br />my-db-test=#<br />my-db-test=# \dt<br />           List of relations<br /> Schema |     Name     | Type  | Owner <br />--------+--------------+-------+-------<br /> public | soccer_teams | table | acool<br />(1 row)<br /><br />my-db-test=#<br />my-db-test-# <br />my-db-test-# \d+ soccer_teams <br />                             Table &quot;public.soccer_teams&quot;<br /> Column |          Type          | Modifiers | Storage  | Stats target | Description <br />--------+------------------------+-----------+----------+--------------+-------------<br /> name   | character varying(250) |           | extended |              | <br /> city   | character varying(250) |           | extended |              | <br /><br />my-db-test=# <br />my-db-test=# INSERT INTO soccer_teams(name,city) VALUES(&#039;LA Galaxy&#039;,&#039;Los Angeles&#039;);<br />INSERT 0 1<br />my-db-test=# <br />my-db-test=# SELECT * FROM soccer_teams;<br />   name    |    city     <br />-----------+-------------<br /> LA Galaxy | Los Angeles<br />(1 row)<br /><br />my-db-test-# <br 
/>my-db-test-# \q<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ psql -V<br />psql (PostgreSQL) 9.4.9<br />[acool@localhost ~]$<br />[acool@localhost ~]$<br />[acool@localhost ~]$<br /><br />// other crap<br />postgres=# # delete a user: DROP USER is an alias for DROP ROLE, and it fails if the role still owns objects<br />postgres=# DROP USER acool;<br />DROP ROLE<br /></pre>]]></content>
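The interactive `\i postgresql-test.sql` step can also be driven entirely from the shell. A sketch that recreates the same DDL file without an editor; loading it afterwards with `psql -d my-db-test -f postgresql-test.sql` assumes the database from the transcript already exists:

```shell
# Recreate the DDL file shown in the transcript.
printf 'CREATE TABLE soccer_teams\n(\n\tname varchar(250),\n\tcity varchar(250)\n);\n' > postgresql-test.sql
cat postgresql-test.sql
```

`psql -f` runs the file non-interactively, which is handy for scripting the same setup on a fresh machine.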
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry180506-013458</id>
		<issued>2018-05-06T00:00:00Z</issued>
		<modified>2018-05-06T00:00:00Z</modified>
	</entry>
	<entry>
		<title>AWS Notes</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry180504-224203" />
		<content type="text/html" mode="escaped"><![CDATA[S3 crap<br /><pre><br />#list s3 buckets<br />[acool@acool2 www]$ aws s3 ls<br /><br />#list files in bucket<br />[acool@acool2 www]$ aws s3 ls s3://logs-fastly/www/<br />...<br />[aesteban@localhost ~]$ aws s3 ls s3://my-db-backup/all_dbs-2019-10-08/ --human-readable --summarize<br />...<br /><br /><br />#downloading a list of files from bucket<br />[acool@acool2 www]$ for i in  x.log y.log z.log; do aws s3 cp &quot;s3://logs-fastly/www/$i&quot; .; done</pre>]]></content>
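When the file list gets long, it helps to generate the `aws s3 cp` commands as a dry run before executing anything. The bucket path and file names here are the ones from the example above:

```shell
# Dry run: print the copy commands instead of running them.
bucket='s3://logs-fastly/www'
for i in x.log y.log z.log; do
  echo "aws s3 cp \"$bucket/$i\" ."
done
```

Piping the output to `sh` performs the downloads; for whole prefixes, `aws s3 sync` avoids the loop entirely.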
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry180504-224203</id>
		<issued>2018-05-04T00:00:00Z</issued>
		<modified>2018-05-04T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Docker: Tasks 101</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry180411-220752" />
		<content type="text/html" mode="escaped"><![CDATA[So in my opinion containers are more like chroot-on-steroids isolation than fully virtualized environments. That&#039;s how it feels to me so far as of 4/11/2018. - Angel<br /><br /><pre><br /># installing docker (1.10.3)<br />[aesteban@localhost ~]$ sudo dnf install docker<br /><br /># start and enable it<br />[aesteban@localhost ~]$ sudo systemctl start docker<br />[aesteban@localhost ~]$ sudo systemctl enable docker<br /><br /># download and run hello-world image<br />[aesteban@localhost ~]$ sudo docker run hello-world <br /><br /># download ubuntu image and run it<br />[aesteban@localhost ~]$ sudo docker run -it ubuntu bash<br /><br /># listing images<br />[aesteban@localhost ~]$<br />[aesteban@localhost ~]$ sudo docker images<br />REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE<br />docker.io/hello-world   latest              e38bc07ac18e        3 hours ago         1.848 kB<br />docker.io/ubuntu        latest              f975c5035748        5 weeks ago         112.4 MB<br />[aesteban@localhost ~]$ <br /><br /># listing all containers<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker ps -a<br />CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                     PORTS               NAMES<br />61d35d591f53        ubuntu              &quot;bash&quot;              3 minutes ago       Exited (0) 3 minutes ago                       clever_brahmagupta<br />1c61c7a6f4d7        hello-world         &quot;/hello&quot;            4 minutes ago       Exited (0) 4 minutes ago                       tiny_boyd<br />[aesteban@localhost ~]$ <br /><br /># removing a container (container id)<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker rm 61d35d591f53<br />61d35d591f53<br />[aesteban@localhost ~]$ <br /><br /># removing an image (image id)<br />[aesteban@localhost ~]$ sudo docker rmi f975c5035748<br
/>Untagged: docker.io/ubuntu:latest<br />Untagged: docker.io/ubuntu@sha256:e348fbbea0e0a0e73ab0370de151e7800684445c509d46195aef73e090a49bd6<br />Deleted: sha256:f975c50357489439eb9145dbfa16bb7cd06c02c31aa4df45c77de4d2baa4e232<br />Deleted: sha256:0bd983fc698ee9453dd7d21f8572ea1016ec9255346ceabb0f9e173b4348644f<br />Deleted: sha256:08fe90e1a1644431accc00cc80f519f4628dbf06a653c76800b116d3333d2b6d<br />Deleted: sha256:5dc5eef2b94edd185b4d39586e7beb385a54b6bac05d165c9d47494492448235<br />Deleted: sha256:14a40a140881d18382e13b37588b3aa70097bb4f3fb44085bc95663bdc68fe20<br />Deleted: sha256:a94e0d5a7c404d0e6fa15d8cd4010e69663bd8813b5117fbad71365a73656df9<br />[aesteban@localhost ~]$ <br /><br /># deleting all containers<br />[aesteban@localhost ~]$ sudo docker rm $(sudo docker ps -a -q)<br />1c61c7a6f4d7<br />[aesteban@localhost ~]$ <br /><br /># deleting all images<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker rmi $(sudo docker images -q)<br />Untagged: docker.io/hello-world:latest<br />Untagged: docker.io/hello-world@sha256:6c88d0eedd6a5e71f0affaf150f8b7b286c7bdc679f23d726d12781803e727d3<br />Deleted: sha256:e38bc07ac18ee64e6d59cf2eafcdddf9cec2364dfe129fe0af75f1b0194e0c96<br />Deleted: sha256:2b8cbd0846c5aeaa7265323e7cf085779eaf244ccbdd982c4931aef9be0d2faf<br />[aesteban@localhost ~]$ <br /><br /># installing docker-compose<br />[aesteban@localhost ~]$ sudo dnf install docker-compose<br /><br /># saving a modified container (eg. 
after installing RPMs)<br />[aesteban@localhost ~]$ sudo docker commit &quot;container-id&quot; &quot;image_name&quot; # repo name<br /><br />#<br />[aesteban@localhost ~]$ sudo docker start &quot;container-id&quot;<br /><br />#<br />[aesteban@localhost ~]$ sudo docker attach &quot;container-id&quot;<br /><br /># exporting an image to a file<br />[aesteban@localhost ~]$ sudo docker save &quot;image-id&quot; | gzip &gt; image_name.tgz<br /><br /># importing an image from a file<br />[aesteban@localhost ~]$ sudo docker load &lt; image_name.tgz<br /><br /># sample docker-compose.yml<br />[aesteban@localhost docker-practice]$ cat docker-compose.yml <br />web:<br /> image: nginx:latest<br /> ports:<br />  - &quot;8888:80&quot;<br /> volumes:<br />  - ./code:/code<br />  - ./site.conf:/etc/nginx/conf.d/site.conf<br /> links:<br />  - php<br />php:<br />    image: cytopia/php-fpm-7.1 # can also be an image ID <br />    volumes:<br />      - ./code/:/code/<br />[aesteban@localhost docker-practice]$<br /><br /># executing docker-compose.yml<br />[aesteban@localhost docker-practice]$ sudo docker-compose up<br />...<br /><br /># sample Dockerfile<br />[aesteban@localhost docker-practice]$ cat Dockerfile <br />FROM miveo/centos-php-fpm:7.1<br />RUN yum -y install php71u-pecl-memcached.x86_64<br />[aesteban@localhost docker-practice]$<br />[aesteban@localhost docker-practice]$<br /><br /># building a Dockerfile<br />[aesteban@localhost docker-practice]$ sudo docker build  -t ent:ent .<br />...<br /><br /># verifying build<br />[aesteban@localhost docker-practice]$ <br />[aesteban@localhost docker-practice]$ sudo docker images<br />REPOSITORY                       TAG                 IMAGE ID            CREATED              SIZE<br />ent                              ent                 3d83d51abac9        About a minute ago   429.3 MB<br />docker.io/nginx                  latest              b175e7467d66        27 hours ago         
108.9 MB<br />docker.io/cytopia/php-fpm-7.1    latest              927ad858fb6a        7 months ago         1.098 GB<br />docker.io/miveo/centos-php-fpm   7.1                 95cae7821f24        15 months ago        287.6 MB<br />[aesteban@localhost docker-practice]$ <br />[aesteban@localhost docker-practice]$ sudo docker run -it 3d83d51abac9  bash<br />[root@43bb9a049b54 /]# <br />[root@43bb9a049b54 /]# rpm -qa | grep php | grep memcached<br />php71u-pecl-memcached-3.0.4-2.ius.centos7.x86_64<br />[root@43bb9a049b54 /]# <br />[root@43bb9a049b54 /]# <br /></pre><br /><br />12/11/2018<br /><pre><br />#stop container id<br />[aesteban@localhost docker-practice-2]$ sudo docker stop 2d900ed18675<br />2d900ed18675<br />[aesteban@localhost docker-practice-2]$</pre>]]></content>
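A note on the export/import pair: `docker save` streams a tar of the image to stdout and `docker load` reads it back from stdin (handling gzip transparently), so the whole thing is just a pipeline through gzip. The plumbing can be checked with plain data standing in for a real image:

```shell
# Stand-in for: sudo docker save "image-id" | gzip > image_name.tgz
printf 'fake image tar stream' | gzip > image_name.tgz
# Stand-in for: sudo docker load, which reads the stream back on stdin
gzip -dc image_name.tgz   # prints: fake image tar stream
```

The important detail is that gzip must write to a redirect (or `-c` to stdout); naming the output file as a plain argument would make gzip try to compress that file instead of the stream.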
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry180411-220752</id>
		<issued>2018-04-11T00:00:00Z</issued>
		<modified>2018-04-11T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Virsh: Moving a VM image to a new machine.</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry171020-180737" />
		<content type="text/html" mode="escaped"><![CDATA[1.- rsync the .img image file to the new machine.<br />2.- Dump the XML definition on the source machine:<br /><pre>[acool@oldmachine ~]$ sudo virsh dumpxml VMNAME &gt; VMNAME.xml</pre><br />3.- Adjust the path of the .img image file in the XML file as needed.<br />4.- Define the new VM on the destination machine:<br /><pre>[acool@newmachine ~]$ sudo virsh define VMNAME.xml</pre><br />5.- Start the VM.<br /><br />Big shout-out to <a href="https://serverfault.com/questions/434064/correct-way-to-move-kvm-vm" >dyasny</a>.]]></content>
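The five steps can be strung together as commands. This sketch only echoes them (remove the `echo` in `run` to execute); VMNAME and the machine names are placeholders, and the /var/lib/libvirt/images path is an assumption about where the image lives:

```shell
# Dry-run sketch of the migration steps; nothing runs until echo is removed.
run() { echo "$*"; }
run rsync -avP /var/lib/libvirt/images/VMNAME.img newmachine:/var/lib/libvirt/images/
run sudo virsh dumpxml VMNAME '>' VMNAME.xml
run scp VMNAME.xml newmachine:
# on newmachine: edit the disk source path inside VMNAME.xml, then:
run sudo virsh define VMNAME.xml
run sudo virsh start VMNAME
```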
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry171020-180737</id>
		<issued>2017-10-20T00:00:00Z</issued>
		<modified>2017-10-20T00:00:00Z</modified>
	</entry>
	<entry>
		<title>Ansible: Tasks 101</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry170716-003842" />
		<content type="text/html" mode="escaped"><![CDATA[More crap will be added to this post sometime in the future, stay tuned if you want...<br /><br />-Angel<br /><pre><br />[acool@hydra2 ansible]$ # running a playbook<br />[acool@hydra2 ansible]$ ansible-playbook dev-hosts-playbook.yml -i dev-hosts.txt <br />[acool@hydra2 ansible]$<br />[acool@hydra2 ansible]$# content of dev-hosts.txt<br />[acool@hydra2 ansible]$ cat dev-hosts.txt <br />acool2.10-network.net ansible_user=root<br />userA_2.10-network.net ansible_user=root<br />userB_2.10-network.net ansible_user=root<br />userC_2.10-network.net ansible_user=root<br /><br /><br />[acool@hydra2 ansible]$ <br />[acool@hydra2 ansible]$ # playbook content<br />[acool@hydra2 ansible]$ cat dev-hosts-playbook.yml <br />---<br />- hosts: all<br />  tasks:<br />  - name: Installing EPEL repo.<br />    yum: pkg=epel-release.noarch state=installed<br />  - name : Installing RPMs<br />    yum: pkg={{item}} state=installed<br />    with_items:<br />        - centos-release-scl<br />        - centos-release-scl-rh<br />        - rh-php70<br />        - rh-php70-php-mysqlnd<br />        - rh-php70-php-bcmath<br />        - rh-php70-php-gd<br />        - rh-php70-php-soap<br />        - rh-php70-php-mbstring<br />        - rh-php70-php-fpm<br />        - sclo-php70-php-pecl-memcached<br />        - git<br />        #- rabbitmq-server<br />        - openvpn<br />        - nginx<br />        - composer<br />        - memcached<br />        - npm<br />        - http-parser<br />  - name: Open firewall ports<br />    firewalld:<br />        port: &quot;{{item.port}}/tcp&quot;<br />        zone: public<br />        permanent: true<br />        state: enabled<br />        immediate: yes<br />    with_items:<br />        - { port: &#039;80&#039; }<br />        - { port: &#039;443&#039; }        <br />  - name: Starting services.<br />    action: service name={{item}} state=started enabled=yes<br />    with_items:<br />        - nginx<br 
/>        - memcached<br />        - rh-php70-php-fpm<br />  - name: Enabling php 7<br />    copy:<br />        src: /home/acool/ansible/files/dev-vms/rh-php70.sh<br />        dest: /etc/profile.d/rh-php70.sh<br />  - name: Setting SELINUX to permissive.<br />    selinux:<br />        policy: targeted<br />        state: permissive<br />  - name: Copying nginx config files<br />    template:<br />        src: /home/acool/ansible/templates/dev-vms/10-network-net.conf<br />        dest: /etc/nginx/conf.d/10-network-net.conf<br />  - name: Installing gulp globally.<br />    command: npm install gulp -g<br />[acool@hydra2 ansible]$ <br />[acool@hydra2 ansible]$<br />[acool@hydra2 ansible]$<br />[acool@hydra2 ansible]$<br />[acool@hydra2 ansible]$ <br />[acool@hydra2 ansible]$ <br />[acool@hydra2 ansible]$ # Ad-Hoc commands, -i stands for inventory and -l for limit<br />[acool@hydra2 ansible]$  ansible all -i dev-hosts.txt -a &#039;free -h&#039; -l acool2.10-network.net<br />acool2.10-network.net | SUCCESS | rc=0 &gt;&gt;<br />              total        used        free      shared  buff/cache   available<br />Mem:           1.8G        142M        1.5G        8.6M        165M        1.5G<br />Swap:          2.0G          0B        2.0G<br /><br />[acool@hydra2 ansible]$ <br />[acool@hydra2 ansible]$<br /></pre><br /><br /><br />9/1/2018 - more stuff :)<br /><pre>[aesteban@localhost ansible]$ ## adding a new role<br />[aesteban@localhost ansible]$ ansible-galaxy init roles/dev --offline<br />[aesteban@localhost ansible]$<br />[aesteban@localhost ansible]$ ll roles/<br />total 16<br />drwxrwxr-x 8 aesteban aesteban 4096 Sep  1 13:04 app<br />drwxrwxr-x 8 aesteban aesteban 4096 Sep  1 11:17 cms<br />drwxrwxr-x 8 aesteban aesteban 4096 Sep  1 11:10 common<br />drwxrwxr-x 8 aesteban aesteban 4096 Sep  1 13:04 dev<br />[aesteban@localhost ansible]$ <br />[aesteban@localhost ansible]$ <br />[aesteban@localhost ansible]$ ls -l<br />total 32<br />-rw-rw-r-- 1 aesteban aesteban  
 62 Sep  1 12:43 app-machines.yml<br />-rw-rw-r-- 1 aesteban aesteban   62 Sep  1 12:43 cms-machines.yml<br />-rw-rw-r-- 1 aesteban aesteban   70 Sep  1 16:24 dev-machines.yml<br />drwxrwxr-x 2 aesteban aesteban 4096 Sep  1 12:48 files<br />-rw-rw-r-- 1 aesteban aesteban   65 Sep  1 16:19 hosts.txt<br />drwxrwxr-x 6 aesteban aesteban 4096 Sep  1 13:04 roles<br />drwxrwxr-x 2 aesteban aesteban 4096 Sep  1 11:10 templates<br />[aesteban@localhost ansible]$<br />[aesteban@localhost ansible]$<br />[aesteban@localhost ansible]$ ansible-playbook -i hosts.txt dev-machines.yml --check --limit &quot;dev3.example.com&quot;<br />...<br />[aesteban@localhost ansible]$<br />[aesteban@localhost ansible]$ cat hosts.txt <br />[devmachines]<br />dev3.example.com<br /><br />[cmsmachines]<br /><br />[appmachies]<br />[aesteban@localhost ansible]$ <br />[aesteban@localhost ansible]$ ansible --version<br />ansible 2.3.1.0<br />  config file = /etc/ansible/ansible.cfg<br />  configured module search path = Default w/o overrides<br />  python version = 2.7.13 (default, May 10 2017, 20:04:36) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]<br />[aesteban@localhost ansible]$ <br />[aesteban@localhost ansible]$ <br />[aesteban@localhost ansible]$ ansible-playbook -i hosts.txt dev-machines.yml  --syntax-check<br /><br />playbook: dev-machines.yml<br />[aesteban@localhost ansible]$ <br /></pre>]]></content>
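Since the INI inventory is plain text, group membership can be checked without Ansible at all. A sketch that recreates `hosts.txt` exactly as shown above (including its empty groups) and lists the hosts in the `devmachines` group with awk:

```shell
# Recreate the inventory from the transcript, then print the hosts
# that belong to the [devmachines] group.
printf '[devmachines]\ndev3.example.com\n\n[cmsmachines]\n\n[appmachies]\n' > hosts.txt
awk '/^\[devmachines\]/{f=1;next} /^\[/{f=0} f {if (NF) print}' hosts.txt
```

The same check is available natively via `ansible devmachines -i hosts.txt --list-hosts`.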
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry170716-003842</id>
		<issued>2017-07-16T00:00:00Z</issued>
		<modified>2017-07-16T00:00:00Z</modified>
	</entry>
	<entry>
		<title>LVM - Logical Volume Manager Commands 101</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry170426-014224" />
		<content type="text/html" mode="escaped"><![CDATA[<pre>[aesteban@localhost ~]$  # PVS, VGS and LVS commands<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo pvs<br />  PV         VG     Fmt  Attr PSize   PFree<br />  /dev/sda2  fedora lvm2 a--  237.98g 4.00m<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo vgs<br />  VG     #PV #LV #SN Attr   VSize   VFree<br />  fedora   1   3   0 wz--n- 237.98g 4.00m<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo lvs<br />  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert<br />  home fedora -wi-ao---- 180.17g                                                    <br />  root fedora -wi-ao----  50.00g                                                    <br />  swap fedora -wi-ao----   7.81g                                                    <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ </pre><br /><pre>[aesteban@localhost ~]$ # LVSCAN and LVDISPLAY commands<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo lvscan <br />  ACTIVE            &#039;/dev/fedora/swap&#039; [7.81 GiB] inherit<br />  ACTIVE            &#039;/dev/fedora/home&#039; [180.17 GiB] inherit<br />  ACTIVE            &#039;/dev/fedora/root&#039; [50.00 GiB] inherit<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo lvdisplay /dev/fedora/home<br />  --- Logical volume ---<br />  LV Path                /dev/fedora/home<br />  LV Name                home<br />  VG Name                fedora<br />  LV UUID                V6WFgj-PA3l-TYA7-fZ2J-IC0z-3yL4-4Rttov<br />  LV Write Access        read/write<br />  LV Creation host, time localhost.localdomain, 2016-10-19 10:59:08 -0700<br />  LV Status              available<br />  # open                 1<br />  LV Size                180.17 GiB<br />  Current LE             46123<br />  Segments               1<br />  Allocation             inherit<br />  Read ahead sectors     auto<br />  - currently set 
to     256<br />  Block device           253:2<br />   <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ </pre><br /><pre>[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo lvm<br />lvm&gt; <br />lvm&gt; <br />lvm&gt; <br />lvm&gt; lvscan<br />  ACTIVE            &#039;/dev/fedora/swap&#039; [7.81 GiB] inherit<br />  ACTIVE            &#039;/dev/fedora/home&#039; [180.17 GiB] inherit<br />  ACTIVE            &#039;/dev/fedora/root&#039; [50.00 GiB] inherit<br />lvm&gt; <br />lvm&gt; </pre><br /><br />Physical volumes commands:<br />pvcreate<br />pvmove<br />pvresize<br />...etc.<br /><br />Volume groups commands:<br />vgcreate<br />vgextend<br />vgconvert<br />vgreduce<br />...etc.<br /><br />Logical volumes commands:<br />lvmcache<br />lvmthin<br />lvconvert<br />lvchange<br />lvextend<br />lvreduce<br />lvremove<br />lvrename<br />...etc.<br /><br />See new kid on the block (as of 2017) : SSM, system storage manager.<br />]]></content>
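A quick cross-check of the `lvdisplay` numbers above: the `Current LE` count times the physical extent size should reproduce the `LV Size`. The 4 MiB extent size is the LVM default and is an assumption here (confirm with `vgdisplay`):

```shell
# 46123 logical extents x 4 MiB per extent, expressed in GiB.
awk 'BEGIN { printf "%.2f GiB\n", 46123 * 4 / 1024 }'   # prints 180.17 GiB
```

which matches the reported LV Size of 180.17 GiB for /dev/fedora/home.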
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry170426-014224</id>
		<issued>2017-04-26T00:00:00Z</issued>
		<modified>2017-04-26T00:00:00Z</modified>
	</entry>
	<entry>
		<title>CentOS 7: Recovering data from RAID 1 member.</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry170211-195449" />
		<content type="text/html" mode="escaped"><![CDATA[Scenario: We have a RAID 1 member, the other members are missing. We will re-assemble the MD array and mount it to recover the data.<br /><br /><pre><br />[acool@localhost sdX]$ #connect surviving hd in any available sata port,<br />[acool@localhost sdX]$ #copy partition from surviving HD (sdd)<br />[acool@localhost sdX]$ sudo dd if=/dev/sdd1 of=./sdX1.img status=progress<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$ #set image as loop device<br />[acool@localhost sdX]$ sudo losetup /dev/loop200 sdX1.img <br />[acool@localhost sdX]$<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$ <br />[acool@localhost sdX]$ #examine raid1 member <br />[acool@localhost sdX]$ sudo mdadm --examine /dev/loop200 <br />/dev/loop200:<br />          Magic : a92b4efc<br />        Version : 1.2<br />    Feature Map : 0x1<br />     Array UUID : 626c8ef2:f11c73eb:d3fb3366:bbf7a200<br />           Name : localhost.localdomain:root  (local to host localhost.localdomain)<br />  Creation Time : Thu Jan 19 10:05:17 2017<br />     Raid Level : raid1<br />   Raid Devices : 3<br /><br /> Avail Dev Size : 25165824 (12.00 GiB 12.88 GB)<br />     Array Size : 12582912 (12.00 GiB 12.88 GB)<br />    Data Offset : 16384 sectors<br />   Super Offset : 8 sectors<br />   Unused Space : before=16296 sectors, after=0 sectors<br />          State : clean<br />    Device UUID : 626c8ef2:f11c73eb:d3fb3366:bbf7ae4b<br /><br />Internal Bitmap : 8 sectors from superblock<br />    Update Time : Sat Jan 28 23:33:56 2017<br />  Bad Block Log : 512 entries available at offset 72 sectors<br />       Checksum : b33f59c9 - correct<br />         Events : 1540<br /><br /><br />   Device Role : Active device 0<br />   Array State : AAA (&#039;A&#039; == active, &#039;.&#039; == missing, &#039;R&#039; == replacing)<br />[acool@localhost sdX]$ <br />[acool@localhost sdX]$<br />[acool@localhost sdX]$ # reassemble array (I had to 
change UUID)<br />[acool@localhost sdX]$ sudo mdadm --assemble --run --force /dev/md200 --update=uuid --uuid=626c8ef2:f11c73eb:d3fb3366:bbf7a200 /dev/loop200<br />mdadm: /dev/md200 has been started with 1 drive (out of 3).<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$ sudo mdadm --detail /dev/md200<br />/dev/md200:<br />        Version : 1.2<br />  Creation Time : Thu Jan 19 10:05:17 2017<br />     Raid Level : raid1<br />     Array Size : 12582912 (12.00 GiB 12.88 GB)<br />  Used Dev Size : 12582912 (12.00 GiB 12.88 GB)<br />   Raid Devices : 3<br />  Total Devices : 1<br />    Persistence : Superblock is persistent<br /><br />  Intent Bitmap : Internal<br /><br />    Update Time : Sat Jan 28 23:33:56 2017<br />          State : clean, degraded <br /> Active Devices : 1<br />Working Devices : 1<br /> Failed Devices : 0<br />  Spare Devices : 0<br /><br />           Name : localhost.localdomain:root  (local to host localhost.localdomain)<br />           UUID : 626c8ef2:f11c73eb:d3fb3366:bbf7a200<br />         Events : 1540<br /><br />    Number   Major   Minor   RaidDevice State<br />       0       7      200        0      active sync   /dev/loop200<br />       -       0        0        1      removed<br />       -       0        0        2      removed<br />[acool@localhost sdX]$ <br />[acool@localhost sdX]$ <br />[acool@localhost sdX]$ # mount md device in order to access content<br />[acool@localhost sdX]$ sudo mount /dev/md200 sdX1_mount/<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$ # you can now ls sdX1_mount directory to see contents<br />[acool@localhost sdX]$<br />[acool@localhost sdX]$ #also, see partscan option in losetup </pre>]]></content>
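Once the data has been copied off, the recovery stack unwinds in reverse order: unmount, stop the array, detach the loop device. A dry-run sketch (the `run` helper only echoes; the device and mount-point names are the ones used above):

```shell
# Tear down the recovery setup; remove the echo in run() to execute.
run() { echo "sudo $*"; }
run umount sdX1_mount/
run mdadm --stop /dev/md200
run losetup -d /dev/loop200
```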
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry170211-195449</id>
		<issued>2017-02-11T00:00:00Z</issued>
		<modified>2017-02-11T00:00:00Z</modified>
	</entry>
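The recovery session in the entry above boils down to a short sequence of commands. Below is a condensed, dry-run replay: each command is only echoed, not executed, since the real steps need root and a surviving member at /dev/sdd1 (the device names, /dev/loop200, and the mount point are taken from the session; the final stop/detach cleanup commands are an assumed addition not shown in the transcript).

```shell
#!/bin/sh
# Dry-run sketch of the single-member RAID 1 recovery above.
# run() only prints each command; drop the echo to execute for real (as root).
run() { echo "$@"; }

run dd if=/dev/sdd1 of=./sdX1.img status=progress          # image the surviving member
run losetup /dev/loop200 sdX1.img                          # expose the image as a block device
run mdadm --examine /dev/loop200                           # confirm it carries an md superblock
run mdadm --assemble --run --force /dev/md200 /dev/loop200 # start the array degraded (1 of 3)
run mount /dev/md200 sdX1_mount/                           # mount and copy the data out
run umount sdX1_mount/                                     # hypothetical cleanup afterwards
run mdadm --stop /dev/md200
run losetup -d /dev/loop200
```

Working from an image rather than the disk itself means a mistake during assembly cannot damage the original member.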
	<entry>
		<title>FDISK - GDISK: List of known partition types</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry170210-204159" />
		<content type="text/html" mode="escaped"><![CDATA[FDISK<br /><pre>[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo fdisk /dev/sda<br /><br />Welcome to fdisk (util-linux 2.28.2).<br />Changes will remain in memory only, until you decide to write them.<br />Be careful before using the write command.<br /><br /><br />Command (m for help): l<br /><br /> 0  Empty           24  NEC DOS         81  Minix / old Lin bf  Solaris        <br /> 1  FAT12           27  Hidden NTFS Win 82  Linux swap / So c1  DRDOS/sec (FAT-<br /> 2  XENIX root      39  Plan 9          83  Linux           c4  DRDOS/sec (FAT-<br /> 3  XENIX usr       3c  PartitionMagic  84  OS/2 hidden or  c6  DRDOS/sec (FAT-<br /> 4  FAT16 &lt;32M      40  Venix 80286     85  Linux extended  c7  Syrinx         <br /> 5  Extended        41  PPC PReP Boot   86  NTFS volume set da  Non-FS data    <br /> 6  FAT16           42  SFS             87  NTFS volume set db  CP/M / CTOS / .<br /> 7  HPFS/NTFS/exFAT 4d  QNX4.x          88  Linux plaintext de  Dell Utility   <br /> 8  AIX             4e  QNX4.x 2nd part 8e  Linux LVM       df  BootIt         <br /> 9  AIX bootable    4f  QNX4.x 3rd part 93  Amoeba          e1  DOS access     <br /> a  OS/2 Boot Manag 50  OnTrack DM      94  Amoeba BBT      e3  DOS R/O        <br /> b  W95 FAT32       51  OnTrack DM6 Aux 9f  BSD/OS          e4  SpeedStor      <br /> c  W95 FAT32 (LBA) 52  CP/M            a0  IBM Thinkpad hi ea  Rufus alignment<br /> e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a5  FreeBSD         eb  BeOS fs        <br /> f  W95 Ext&#039;d (LBA) 54  OnTrackDM6      a6  OpenBSD         ee  GPT            <br />10  OPUS            55  EZ-Drive        a7  NeXTSTEP        ef  EFI (FAT-12/16/<br />11  Hidden FAT12    56  Golden Bow      a8  Darwin UFS      f0  Linux/PA-RISC b<br />12  Compaq diagnost 5c  Priam Edisk     a9  NetBSD          f1  SpeedStor      <br />14  Hidden FAT16 &lt;3 61  SpeedStor       ab  Darwin boot     f4  SpeedStor      <br />16  
Hidden FAT16    63  GNU HURD or Sys af  HFS / HFS+      f2  DOS secondary  <br />17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fb  VMware VMFS    <br />18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fc  VMware VMKCORE <br />1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid fd  Linux raid auto<br />1c  Hidden W95 FAT3 75  PC/IX           bc  Acronis FAT32 L fe  LANstep        <br />1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot    ff  BBT            <br /><br />Command (m for help): quit<br /><br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ </pre><br /><br />GDISK<br /><pre>[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo gdisk /dev/sda<br />GPT fdisk (gdisk) version 1.0.1<br /><br />Partition table scan:<br />  MBR: MBR only<br />  BSD: not present<br />  APM: not present<br />  GPT: not present<br /><br /><br />***************************************************************<br />Found invalid GPT and valid MBR; converting MBR to GPT format<br />in memory. THIS OPERATION IS POTENTIALLY DESTRUCTIVE! Exit by<br />typing &#039;q&#039; if you don&#039;t want to convert your MBR partitions<br />to GPT format!<br />***************************************************************<br /><br /><br />Command (? 
for help): l<br />0700 Microsoft basic data  0c01 Microsoft reserved    2700 Windows RE          <br />3000 ONIE boot             3001 ONIE config           3900 Plan 9              <br />4100 PowerPC PReP boot     4200 Windows LDM data      4201 Windows LDM metadata<br />4202 Windows Storage Spac  7501 IBM GPFS              7f00 ChromeOS kernel     <br />7f01 ChromeOS root         7f02 ChromeOS reserved     8200 Linux swap          <br />8300 Linux filesystem      8301 Linux reserved        8302 Linux /home         <br />8303 Linux x86 root (/)    8304 Linux x86-64 root (/  8305 Linux ARM64 root (/)<br />8306 Linux /srv            8307 Linux ARM32 root (/)  8400 Intel Rapid Start   <br />8e00 Linux LVM             a500 FreeBSD disklabel     a501 FreeBSD boot        <br />a502 FreeBSD swap          a503 FreeBSD UFS           a504 FreeBSD ZFS         <br />a505 FreeBSD Vinum/RAID    a580 Midnight BSD data     a581 Midnight BSD boot   <br />a582 Midnight BSD swap     a583 Midnight BSD UFS      a584 Midnight BSD ZFS    <br />a585 Midnight BSD Vinum    a600 OpenBSD disklabel     a800 Apple UFS           <br />a901 NetBSD swap           a902 NetBSD FFS            a903 NetBSD LFS          <br />a904 NetBSD concatenated   a905 NetBSD encrypted      a906 NetBSD RAID         <br />ab00 Recovery HD           af00 Apple HFS/HFS+        af01 Apple RAID          <br />af02 Apple RAID offline    af03 Apple label           af04 AppleTV recovery    <br />af05 Apple Core Storage    bc00 Acronis Secure Zone   be00 Solaris boot        <br />bf00 Solaris root          bf01 Solaris /usr &amp; Mac Z  bf02 Solaris swap        <br />bf03 Solaris backup        bf04 Solaris /var          bf05 Solaris /home       <br />bf06 Solaris alternate se  bf07 Solaris Reserved 1    bf08 Solaris Reserved 2  <br />Press the &lt;Enter&gt; key to see more codes: <br />bf09 Solaris Reserved 3    bf0a Solaris Reserved 4    bf0b Solaris Reserved 5  <br />c001 HP-UX data            c002 HP-UX service         
ea00 Freedesktop $BOOT   <br />eb00 Haiku BFS             ed00 Sony system partitio  ed01 Lenovo system partit<br />ef00 EFI System            ef01 MBR partition scheme  ef02 BIOS boot partition <br />f800 Ceph OSD              f801 Ceph dm-crypt OSD     f802 Ceph journal        <br />f803 Ceph dm-crypt journa  f804 Ceph disk in creatio  f805 Ceph dm-crypt disk i<br />fb00 VMWare VMFS           fb01 VMWare reserved       fc00 VMWare kcore crash p<br />fd00 Linux RAID            <br /><br />Command (? for help): quit<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ </pre>]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry170210-204159</id>
		<issued>2017-02-10T00:00:00Z</issued>
		<modified>2017-02-10T00:00:00Z</modified>
	</entry>
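The one-byte codes fdisk lists above live at a fixed spot on disk: each of the four 16-byte MBR partition entries starts at offset 446 in the first sector, and the type code is the fifth byte (offset +4) of its entry. The sketch below builds a synthetic 512-byte sector, marks partition 1 as 0x83 (Linux in fdisk's list), and reads the code back; no real disk is touched.

```shell
#!/bin/sh
# Where fdisk's one-byte type codes are stored: MBR partition entry 1 starts
# at offset 446, and its type byte sits at offset 446 + 4 = 450.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null     # blank fake sector
printf '\203' | dd of="$img" bs=1 seek=450 conv=notrunc 2>/dev/null  # 0x83 = Linux
type_code=$(od -An -tx1 -j450 -N1 "$img" | tr -d ' ')
echo "partition 1 type code: $type_code"                 # prints: partition 1 type code: 83
rm -f "$img"
```

gdisk's four-hex-digit codes (8300, fd00, ...) are its own shorthand for full GPT type GUIDs, which have no such single-byte slot; that is why the two lists differ.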
	<entry>
		<title>CentOS 7: Replace a not-yet-failed RAID1 member with a new HD (DRAFT)</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry170129-034959" />
		<content type="text/html" mode="escaped"><![CDATA[Scenario:<br /><br />Replace a not-yet-failed RAID1 member with a new HD (sdd).<br />(http://unix.stackexchange.com/questions/74924/how-to-safely-replace-a-not-yet-failed-disk-in-a-linux-raid5-array)<br /><br /><pre><br /><br />// versions<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cat /etc/redhat-release <br />CentOS Linux release 7.3.1611 (Core) <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ mdadm --version<br />mdadm - v3.4 - 28th January 2016<br />[acool@localhost ~]$ <br />[acool@localhost ~]$<br /><br /><br /><br />// check devices<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo lsblk <br />[sudo] password for acool: <br />NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT<br />sda         8:0    0 465.8G  0 disk  <br />├─sda1      8:1    0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sda2      8:2    0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sda3      8:3    0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sda4      8:4    0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sda5      8:5    0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdb         8:16   0  55.9G  0 disk  <br />├─sdb1      8:17   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdb2      8:18   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdb3      8:19   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdb4      8:20   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdb5      8:21   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdc         8:32   0 232.9G  0 disk  <br />├─sdc1      8:33   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdc2      8:34   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdc3      
8:35   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdc4      8:36   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdc5      8:37   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdd         8:48   0 111.8G  0 disk  <br />├─sdd1      8:49   0 111.8G  0 part  <br />└─sdd5      8:53   0     4G  0 part  <br />sr0        11:0    1  1024M  0 rom   <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br />//check partitions<br /><br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo fdisk -l /dev/sd?<br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.<br /><br />Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 4096 bytes<br />I/O size (minimum/optimal): 4096 bytes / 4096 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. 
Use at your own discretion.<br /><br />Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.<br /><br />Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br /><br />Disk /dev/sdd: 120.0 GB, 120034123776 bytes, 234441648 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: dos<br />Disk identifier: 0x90909090<br /><br />   Device Boot      Start         End      Blocks   Id  System<br />/dev/sdd1   *          63   234441647   117220792+  a5  FreeBSD<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br /><br />//copy GPT table to sdd 
drive and generate random GUIDs<br />// (the caution below appeared because sdd is smaller than sda, so the copied backup GPT header initially landed beyond the end of the disk; sgdisk relocated it)<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo sgdisk /dev/sda -R /dev/sdd<br />Caution! Secondary header was placed beyond the disk&#039;s limits! Moving the<br />header, but other problems may occur!<br />The operation has completed successfully.<br />[acool@localhost ~]$ sudo sgdisk -G /dev/sdd<br />The operation has completed successfully.<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$<br /><br />// verify partitions<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo fdisk -l /dev/sd?<br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.<br /><br />Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 4096 bytes<br />I/O size (minimum/optimal): 4096 bytes / 4096 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. 
Use at your own discretion.<br /><br />Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.<br /><br />Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. 
Use at your own discretion.<br /><br />Disk /dev/sdd: 120.0 GB, 120034123776 bytes, 234441648 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo lsblk <br />NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT<br />sda         8:0    0 465.8G  0 disk  <br />├─sda1      8:1    0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sda2      8:2    0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sda3      8:3    0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sda4      8:4    0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sda5      8:5    0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdb         8:16   0  55.9G  0 disk  <br />├─sdb1      8:17   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdb2      8:18   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdb3      8:19   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdb4      8:20   0   201M  0 part  <br />│ 
└─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdb5      8:21   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdc         8:32   0 232.9G  0 disk  <br />├─sdc1      8:33   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdc2      8:34   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdc3      8:35   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdc4      8:36   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdc5      8:37   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdd         8:48   0 111.8G  0 disk  <br />├─sdd1      8:49   0    12G  0 part  <br />├─sdd2      8:50   0   6.9G  0 part  <br />├─sdd3      8:51   0     1G  0 part  <br />├─sdd4      8:52   0   201M  0 part  <br />└─sdd5      8:53   0    12G  0 part  <br />sr0        11:0    1  1024M  0 rom   <br />[acool@localhost ~]$ <br /><br /><br />// replace sda with sdd<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sdd4<br />[sudo] password for acool: <br />mdadm: added /dev/sdd4<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sdd5<br />mdadm: added /dev/sdd5<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sdd3<br />mdadm: added /dev/sdd3<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sdd2<br />mdadm: added /dev/sdd2<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sdd1<br />mdadm: added /dev/sdd1<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --replace /dev/sda4 --with /dev/sdd4<br />mdadm: Marked /dev/sda4 (device 2 in /dev/md123) for replacement<br />mdadm: Marked /dev/sdd4 in /dev/md123 as replacement for device 2<br />[acool@localhost ~]$ sudo 
mdadm --manage /dev/md124 --replace /dev/sda5 --with /dev/sdd5<br />mdadm: Marked /dev/sda5 (device 2 in /dev/md124) for replacement<br />mdadm: Marked /dev/sdd5 in /dev/md124 as replacement for device 2<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --replace /dev/sda3 --with /dev/sdd3<br />mdadm: Marked /dev/sda3 (device 2 in /dev/md125) for replacement<br />mdadm: Marked /dev/sdd3 in /dev/md125 as replacement for device 2<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --replace /dev/sda2 --with /dev/sdd2<br />mdadm: Marked /dev/sda2 (device 2 in /dev/md126) for replacement<br />mdadm: Marked /dev/sdd2 in /dev/md126 as replacement for device 2<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --replace /dev/sda1 --with /dev/sdd1<br />mdadm: Marked /dev/sda1 (device 2 in /dev/md127) for replacement<br />mdadm: Marked /dev/sdd1 in /dev/md127 as replacement for device 2<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br /><br />// monitor progress<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cat /proc/mdstat <br />Personalities : [raid1] <br />md123 : active raid1 sdd4[4] sda4[3](F) sdc4[1] sdb4[0]<br />      205760 blocks super 1.0 [3/3] [UUU]<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />md124 : active raid1 sdd5[4] sda5[3](F) sdc5[1] sdb5[0]<br />      12582912 blocks super 1.2 [3/3] [UUU]<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />md125 : active raid1 sdd3[4](R) sda3[3] sdb3[0] sdc3[1]<br />      1049536 blocks super 1.0 [3/3] [UUU]<br />      	resync=DELAYED<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />md126 : active raid1 sdd2[4](R) sda2[3] sdb2[0] sdc2[1]<br />      7208960 blocks super 1.2 [3/3] [UUU]<br />      [=&gt;...................]  
recovery =  5.1% (370560/7208960) finish=3.6min speed=30880K/sec<br />      <br />md127 : active raid1 sdd1[4](R) sda1[3] sdb1[0] sdc1[1]<br />      12582912 blocks super 1.2 [3/3] [UUU]<br />      	resync=DELAYED<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />unused devices: &lt;none&gt;<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --detail /dev/md123<br />/dev/md123:<br />        Version : 1.0<br />  Creation Time : Thu Jan 19 07:04:56 2017<br />     Raid Level : raid1<br />     Array Size : 205760 (200.94 MiB 210.70 MB)<br />  Used Dev Size : 205760 (200.94 MiB 210.70 MB)<br />   Raid Devices : 3<br />  Total Devices : 4<br />    Persistence : Superblock is persistent<br /><br />  Intent Bitmap : Internal<br /><br />    Update Time : Sun Jan 22 15:58:24 2017<br />          State : clean <br /> Active Devices : 3<br />Working Devices : 3<br /> Failed Devices : 1<br />  Spare Devices : 0<br /><br />           Name : localhost.localdomain:boot_efi  (local to host localhost.localdomain)<br />           UUID : 89085253:47b4f9e9:dd804932:ef766c2a<br />         Events : 70<br /><br />    Number   Major   Minor   RaidDevice State<br />       0       8       20        0      active sync   /dev/sdb4<br />       1       8       36        1      active sync   /dev/sdc4<br />       4       8       52        2      active sync   /dev/sdd4<br /><br />       3       8        4        -      faulty   /dev/sda4<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --detail /dev/md126<br />/dev/md126:<br />        Version : 1.2<br />  Creation Time : Thu Jan 19 07:04:48 2017<br />     Raid Level : raid1<br />     Array Size : 7208960 (6.88 GiB 7.38 GB)<br />  Used Dev Size : 7208960 (6.88 GiB 7.38 GB)<br />   Raid Devices : 3<br />  Total Devices : 4<br />    Persistence : 
Superblock is persistent<br /><br />    Update Time : Sun Jan 22 16:06:59 2017<br />          State : clean, recovering <br /> Active Devices : 3<br />Working Devices : 4<br /> Failed Devices : 0<br />  Spare Devices : 1<br /><br /> Rebuild Status : 13% complete<br /><br />           Name : localhost.localdomain:swap  (local to host localhost.localdomain)<br />           UUID : 0701fcab:0d6eadef:98a73bd8:45b1bd0b<br />         Events : 64<br /><br />    Number   Major   Minor   RaidDevice State<br />       0       8       18        0      active sync   /dev/sdb2<br />       1       8       34        1      active sync   /dev/sdc2<br />       3       8        2        2      active sync   /dev/sda2<br />       4       8       50        2      spare rebuilding   /dev/sdd2<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --detail /dev/md124<br />/dev/md124:<br />        Version : 1.2<br />  Creation Time : Thu Jan 19 07:05:04 2017<br />     Raid Level : raid1<br />     Array Size : 12582912 (12.00 GiB 12.88 GB)<br />  Used Dev Size : 12582912 (12.00 GiB 12.88 GB)<br />   Raid Devices : 3<br />  Total Devices : 4<br />    Persistence : Superblock is persistent<br /><br />  Intent Bitmap : Internal<br /><br />    Update Time : Sun Jan 22 16:11:37 2017<br />          State : clean <br /> Active Devices : 3<br />Working Devices : 3<br /> Failed Devices : 1<br />  Spare Devices : 0<br /><br />           Name : localhost.localdomain:home  (local to host localhost.localdomain)<br />           UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566<br />         Events : 2393<br /><br />    Number   Major   Minor   RaidDevice State<br />       0       8       21        0      active sync   /dev/sdb5<br />       1       8       37        1      active sync   /dev/sdc5<br />       4       8       53        2      active sync   /dev/sdd5<br /><br />       3       8        5        -      faulty   /dev/sda5<br 
/>[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$<br /><br /><br /><br />//remove sda partitions from md devices<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cat /proc/mdstat <br />Personalities : [raid1] <br />md123 : active raid1 sdd4[4] sda4[3](F) sdc4[1] sdb4[0]<br />      205760 blocks super 1.0 [3/3] [UUU]<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />md124 : active raid1 sdd5[4] sda5[3](F) sdc5[1] sdb5[0]<br />      12582912 blocks super 1.2 [3/3] [UUU]<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />md125 : active raid1 sdd3[4] sda3[3](F) sdb3[0] sdc3[1]<br />      1049536 blocks super 1.0 [3/3] [UUU]<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />md126 : active raid1 sdd2[4] sda2[3](F) sdb2[0] sdc2[1]<br />      7208960 blocks super 1.2 [3/3] [UUU]<br />      <br />md127 : active raid1 sdd1[4] sda1[3](F) sdb1[0] sdc1[1]<br />      12582912 blocks super 1.2 [3/3] [UUU]<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />unused devices: &lt;none&gt;<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --remove /dev/sda4<br />mdadm: hot removed /dev/sda4 from /dev/md123<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --remove /dev/sda5<br />mdadm: hot removed /dev/sda5 from /dev/md124<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --remove /dev/sda3<br />mdadm: hot removed /dev/sda3 from /dev/md125<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --remove /dev/sda2<br />mdadm: hot removed /dev/sda2 from /dev/md126<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --remove /dev/sda1<br />mdadm: hot removed /dev/sda1 from /dev/md127<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br /><br /><br />//verify<br />[acool@localhost ~]$ <br />[acool@localhost ~]$  <br 
/>[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --detail /dev/md124<br />/dev/md124:<br />        Version : 1.2<br />  Creation Time : Thu Jan 19 07:05:04 2017<br />     Raid Level : raid1<br />     Array Size : 12582912 (12.00 GiB 12.88 GB)<br />  Used Dev Size : 12582912 (12.00 GiB 12.88 GB)<br />   Raid Devices : 3<br />  Total Devices : 3<br />    Persistence : Superblock is persistent<br /><br />  Intent Bitmap : Internal<br /><br />    Update Time : Sun Jan 22 16:22:22 2017<br />          State : clean <br /> Active Devices : 3<br />Working Devices : 3<br /> Failed Devices : 0<br />  Spare Devices : 0<br /><br />           Name : localhost.localdomain:home  (local to host localhost.localdomain)<br />           UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566<br />         Events : 2394<br /><br />    Number   Major   Minor   RaidDevice State<br />       0       8       21        0      active sync   /dev/sdb5<br />       1       8       37        1      active sync   /dev/sdc5<br />       4       8       53        2      active sync   /dev/sdd5<br />[acool@localhost ~]$ cat /proc/mdstat <br />Personalities : [raid1] <br />md123 : active raid1 sdd4[4] sdc4[1] sdb4[0]<br />      205760 blocks super 1.0 [3/3] [UUU]<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />md124 : active raid1 sdd5[4] sdc5[1] sdb5[0]<br />      12582912 blocks super 1.2 [3/3] [UUU]<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />md125 : active raid1 sdd3[4] sdb3[0] sdc3[1]<br />      1049536 blocks super 1.0 [3/3] [UUU]<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />md126 : active raid1 sdd2[4] sdb2[0] sdc2[1]<br />      7208960 blocks super 1.2 [3/3] [UUU]<br />      <br />md127 : active raid1 sdd1[4] sdb1[0] sdc1[1]<br />      12582912 blocks super 1.2 [3/3] [UUU]<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />unused devices: &lt;none&gt;<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br 
/>[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo lsblk <br />NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT<br />sda         8:0    0 465.8G  0 disk  <br />├─sda1      8:1    0    12G  0 part  <br />├─sda2      8:2    0   6.9G  0 part  <br />├─sda3      8:3    0     1G  0 part  <br />├─sda4      8:4    0   201M  0 part  <br />└─sda5      8:5    0    12G  0 part  <br />sdb         8:16   0  55.9G  0 disk  <br />├─sdb1      8:17   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdb2      8:18   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdb3      8:19   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdb4      8:20   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdb5      8:21   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdc         8:32   0 232.9G  0 disk  <br />├─sdc1      8:33   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdc2      8:34   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdc3      8:35   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdc4      8:36   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdc5      8:37   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdd         8:48   0 111.8G  0 disk  <br />├─sdd1      8:49   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdd2      8:50   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdd3      8:51   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdd4      8:52   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdd5      8:53   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sr0        11:0    1  1024M  0 rom   <br 
/>[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br /><br />// interesting fact: after shutting down, physically removing sda, and restarting,<br />// sdd became sda<br /><br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo lsblk <br />[sudo] password for acool: <br />NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT<br />sda         8:0    0 111.8G  0 disk  <br />├─sda1      8:1    0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sda2      8:2    0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sda3      8:3    0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sda4      8:4    0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sda5      8:5    0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdb         8:16   0  55.9G  0 disk  <br />├─sdb1      8:17   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdb2      8:18   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdb3      8:19   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdb4      8:20   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdb5      8:21   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdc         8:32   0 232.9G  0 disk  <br />├─sdc1      8:33   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdc2      8:34   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdc3      8:35   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdc4      8:36   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdc5      8:37   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sr0        11:0    1  1024M  0 rom   <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ 
<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /></pre>]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry170129-034959</id>
		<issued>2017-01-29T00:00:00Z</issued>
		<modified>2017-01-29T00:00:00Z</modified>
	</entry>
	<entry>
		<title>CentOS 7: Replacing a failed drive in a 3-disk RAID 1 array. (DRAFT)</title>
		<link rel="alternate" type="text/html" href="https://angelcool.net/sphpblog/blog_index.php?entry=entry170129-034126" />
		<content type="text/html" mode="escaped"><![CDATA[Scenario: Replacing a failed drive in a 3-disk RAID 1 array.<br /><pre><br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cat /etc/redhat-release <br />CentOS Linux release 7.3.1611 (Core) <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ mdadm --version<br />mdadm - v3.4 - 28th January 2016<br />[acool@localhost ~]$ <br />[acool@localhost ~]$<br /><br />// Inspect (I manually disconnected the power and SATA cables on sda to simulate a hardware failure)<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cat /proc/mdstat <br />Personalities : [raid1] <br />md123 : active raid1 sdc4[1] sdb4[0]<br />      205760 blocks super 1.0 [3/2] [UU_]<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />md124 : active raid1 sdc5[1] sdb5[0]<br />      12582912 blocks super 1.2 [3/2] [UU_]<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />md125 : active raid1 sdb3[0] sdc3[1]<br />      1049536 blocks super 1.0 [3/2] [UU_]<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />md126 : active raid1 sdb2[0] sdc2[1]<br />      7208960 blocks super 1.2 [3/2] [UU_]<br />      <br />md127 : active raid1 sdb1[0] sdc1[1]<br />      12582912 blocks super 1.2 [3/2] [UU_]<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />unused devices: &lt;none&gt;<br />[acool@localhost ~]$ <br />[acool@localhost ~]$<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --detail /dev/md123<br />[sudo] password for acool: <br />/dev/md123:<br />        Version : 1.0<br />  Creation Time : Thu Jan 19 10:04:56 2017<br />     Raid Level : raid1<br />     Array Size : 205760 (200.94 MiB 210.70 MB)<br />  Used Dev Size : 205760 (200.94 MiB 210.70 MB)<br />   Raid Devices : 3<br />  Total Devices : 2<br />    Persistence : Superblock is persistent<br /><br />  Intent Bitmap : Internal<br /><br />    Update Time : Sun Jan 22 14:06:44 2017<br />          State : clean, degraded <br /> Active Devices : 2<br />Working Devices : 
2<br /> Failed Devices : 0<br />  Spare Devices : 0<br /><br />           Name : localhost.localdomain:boot_efi  (local to host localhost.localdomain)<br />           UUID : 89085253:47b4f9e9:dd804932:ef766c2a<br />         Events : 46<br /><br />    Number   Major   Minor   RaidDevice State<br />       0       8       20        0      active sync   /dev/sdb4<br />       1       8       36        1      active sync   /dev/sdc4<br />       -       0        0        2      removed<br />[acool@localhost ~]$ <br />[acool@localhost ~]$<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo lsblk <br />NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT<br />sdb         8:16   0  55.9G  0 disk  <br />├─sdb1      8:17   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdb2      8:18   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdb3      8:19   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdb4      8:20   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdb5      8:21   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdc         8:32   0 232.9G  0 disk  <br />├─sdc1      8:33   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdc2      8:34   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdc3      8:35   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdc4      8:36   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdc5      8:37   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sr0        11:0    1  1024M  0 rom   <br />[acool@localhost ~]$<br /><br />// the following messages appear because<br />// this drive is no longer available imo...<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --fail sda4<br />mdadm: sda4 does not appear to be a component of /dev/md123<br />[acool@localhost ~]$ sudo mdadm 
--manage /dev/md123 --remove sda4<br />mdadm: sda4 does not appear to be a component of /dev/md123<br />[acool@localhost ~]$<br /><br />//.. so we&#039;ll just plug in a new hd (in same SATA port)<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo lsblk <br />NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT<br />sda         8:0    0 465.8G  0 disk  <br />├─sda1      8:1    0 465.8G  0 part  <br />└─sda5      8:5    0     4G  0 part  <br />sdb         8:16   0  55.9G  0 disk  <br />├─sdb1      8:17   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdb2      8:18   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdb3      8:19   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdb4      8:20   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdb5      8:21   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdc         8:32   0 232.9G  0 disk  <br />├─sdc1      8:33   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdc2      8:34   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdc3      8:35   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdc4      8:36   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdc5      8:37   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sr0        11:0    1  1024M  0 rom   <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br />//inspect partition tables<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo fdisk -l /dev/sd?<br /><br />Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 4096 bytes<br />I/O size (minimum/optimal): 4096 bytes / 4096 bytes<br />Disk label type: dos<br />Disk identifier: 
0x90909090<br /><br />   Device Boot      Start         End      Blocks   Id  System<br />/dev/sda1   *          63   976772789   488386363+  a5  FreeBSD<br />Partition 1 does not start on physical sector boundary.<br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.<br /><br />Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. 
Use at your own discretion.<br /><br />Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br />//copy gpt table to new disk (sda) and randomize guids<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo sgdisk /dev/sdc -R /dev/sda<br />[sudo] password for acool: <br />The operation has completed successfully.<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo sgdisk -G /dev/sda<br />The operation has completed successfully.<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br /><br />//check again partition tables<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo fdisk -l /dev/sd?<br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. 
Use at your own discretion.<br /><br />Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 4096 bytes<br />I/O size (minimum/optimal): 4096 bytes / 4096 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.<br /><br />Disk /dev/sdb: 60.0 GB, 60022480896 bytes, 117231408 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. 
Use at your own discretion.<br /><br />Disk /dev/sdc: 250.1 GB, 250059350016 bytes, 488397168 sectors<br />Units = sectors of 1 * 512 = 512 bytes<br />Sector size (logical/physical): 512 bytes / 512 bytes<br />I/O size (minimum/optimal): 512 bytes / 512 bytes<br />Disk label type: gpt<br /><br /><br />#         Start          End    Size  Type            Name<br /> 1         2048     25184255     12G  Linux RAID      <br /> 2     25184256     39610367    6.9G  Linux RAID      <br /> 3     39610368     41709567      1G  Linux RAID      <br /> 4     41709568     42121215    201M  Linux RAID      <br /> 5     42121216     67303423     12G  Linux RAID      <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo lsblk <br />NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT<br />sda         8:0    0 465.8G  0 disk  <br />├─sda1      8:1    0    12G  0 part  <br />├─sda2      8:2    0   6.9G  0 part  <br />├─sda3      8:3    0     1G  0 part  <br />├─sda4      8:4    0   201M  0 part  <br />└─sda5      8:5    0    12G  0 part  <br />sdb         8:16   0  55.9G  0 disk  <br />├─sdb1      8:17   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdb2      8:18   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdb3      8:19   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdb4      8:20   0   201M  0 part  <br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdb5      8:21   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sdc         8:32   0 232.9G  0 disk  <br />├─sdc1      8:33   0    12G  0 part  <br />│ └─md127   9:127  0    12G  0 raid1 /<br />├─sdc2      8:34   0   6.9G  0 part  <br />│ └─md126   9:126  0   6.9G  0 raid1 [SWAP]<br />├─sdc3      8:35   0     1G  0 part  <br />│ └─md125   9:125  0     1G  0 raid1 /boot<br />├─sdc4      8:36   0   201M  0 part  
<br />│ └─md123   9:123  0   201M  0 raid1 /boot/efi<br />└─sdc5      8:37   0    12G  0 part  <br />  └─md124   9:124  0    12G  0 raid1 /home<br />sr0        11:0    1  1024M  0 rom   <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$<br /><br />// now we&#039;re ready to add the new partitions in sda to the md devices<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --manage /dev/md123 --add /dev/sda4<br />[sudo] password for acool: <br />mdadm: added /dev/sda4<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md124 --add /dev/sda5<br />mdadm: added /dev/sda5<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md125 --add /dev/sda3<br />mdadm: added /dev/sda3<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md126 --add /dev/sda2<br />mdadm: added /dev/sda2<br />[acool@localhost ~]$ sudo mdadm --manage /dev/md127 --add /dev/sda1<br />mdadm: added /dev/sda1<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /><br /><br />// monitor progress<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cat /proc/mdstat <br />Personalities : [raid1] <br />md123 : active raid1 sda4[3] sdc4[1] sdb4[0]<br />      205760 blocks super 1.0 [3/3] [UUU]<br />      bitmap: 0/1 pages [0KB], 65536KB chunk<br /><br />md124 : active raid1 sda5[3] sdc5[1] sdb5[0]<br />      12582912 blocks super 1.2 [3/2] [UU_]<br />      [==&gt;..................]  
recovery = 13.7% (1730176/12582912) finish=6.2min speed=28829K/sec<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />md125 : active raid1 sda3[3] sdb3[0] sdc3[1]<br />      1049536 blocks super 1.0 [3/2] [UU_]<br />      	resync=DELAYED<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />md126 : active raid1 sda2[3] sdb2[0] sdc2[1]<br />      7208960 blocks super 1.2 [3/2] [UU_]<br />      	resync=DELAYED<br />      <br />md127 : active raid1 sda1[3] sdb1[0] sdc1[1]<br />      12582912 blocks super 1.2 [3/2] [UU_]<br />      	resync=DELAYED<br />      bitmap: 1/1 pages [4KB], 65536KB chunk<br /><br />unused devices: &lt;none&gt;<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --detail /dev/md124<br />/dev/md124:<br />        Version : 1.2<br />  Creation Time : Thu Jan 19 10:05:04 2017<br />     Raid Level : raid1<br />     Array Size : 12582912 (12.00 GiB 12.88 GB)<br />  Used Dev Size : 12582912 (12.00 GiB 12.88 GB)<br />   Raid Devices : 3<br />  Total Devices : 3<br />    Persistence : Superblock is persistent<br /><br />  Intent Bitmap : Internal<br /><br />    Update Time : Sun Jan 22 15:18:53 2017<br />          State : clean, degraded, recovering <br /> Active Devices : 2<br />Working Devices : 3<br /> Failed Devices : 0<br />  Spare Devices : 1<br /><br /> Rebuild Status : 51% complete<br /><br />           Name : localhost.localdomain:home  (local to host localhost.localdomain)<br />           UUID : 24ec8d5c:94b7c61c:3eed2130:fbec1566<br />         Events : 2220<br /><br />    Number   Major   Minor   RaidDevice State<br />       0       8       21        0      active sync   /dev/sdb5<br />       1       8       37        1      active sync   /dev/sdc5<br />       3       8        5        2      spare rebuilding   /dev/sda5<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ sudo mdadm --detail /dev/md123<br />/dev/md123:<br />        
Version : 1.0<br />  Creation Time : Thu Jan 19 10:04:56 2017<br />     Raid Level : raid1<br />     Array Size : 205760 (200.94 MiB 210.70 MB)<br />  Used Dev Size : 205760 (200.94 MiB 210.70 MB)<br />   Raid Devices : 3<br />  Total Devices : 3<br />    Persistence : Superblock is persistent<br /><br />  Intent Bitmap : Internal<br /><br />    Update Time : Sun Jan 22 15:14:54 2017<br />          State : clean <br /> Active Devices : 3<br />Working Devices : 3<br /> Failed Devices : 0<br />  Spare Devices : 0<br /><br />           Name : localhost.localdomain:boot_efi  (local to host localhost.localdomain)<br />           UUID : 89085253:47b4f9e9:dd804932:ef766c2a<br />         Events : 66<br /><br />    Number   Major   Minor   RaidDevice State<br />       0       8       20        0      active sync   /dev/sdb4<br />       1       8       36        1      active sync   /dev/sdc4<br />       3       8        4        2      active sync   /dev/sda4<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br />[acool@localhost ~]$ <br /></pre>]]></content>
		<id>https://angelcool.net/sphpblog/blog_index.php?entry=entry170129-034126</id>
		<issued>2017-01-29T00:00:00Z</issued>
		<modified>2017-01-29T00:00:00Z</modified>
	</entry>
</feed>
