<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
	<channel>
		<title>Angel's Blog</title>
		<link>https://angelcool.net/sphpblog/blog_index.php</link>
		<description><![CDATA[No Footer]]></description>
		<copyright>Copyright 2026, Angel</copyright>
		<managingEditor>Angel</managingEditor>
		<language>en-US</language>
		<generator>SPHPBLOG 0.7.0</generator>
		<item>
			<title>Iperf - 1000BASE-LX SMF LC/LC Fiber Link Speed Test</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry240303-221911</link>
			<description><![CDATA[HOST A - SERVER<br /><pre><br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$ date<br />Sun Mar  3 02:04:35 PM PST 2024<br /><br /># IPv4<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$ iperf -s<br />------------------------------------------------------------<br />Server listening on TCP port 5001<br />TCP window size:  128 KByte (default)<br />------------------------------------------------------------<br />[  1] local 192.168.1.184 port 5001 connected with 192.168.1.192 port 57642 (icwnd/mss/irtt=14/1448/515)<br />[ ID] Interval       Transfer     Bandwidth<br />[  1] 0.00-10.01 sec  1.10 GBytes   941 Mbits/sec<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$<br /><br /># IPv6<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$ iperf -s -V<br />------------------------------------------------------------<br />Server listening on TCP port 5001<br />TCP window size:  128 KByte (default)<br />------------------------------------------------------------<br />[  1] local 2603:8000:6a00:5748:xxxx:xxxx:xxxx:xxxx port 5001 connected with 2603:8000:6a00:5748:xxxx:xxxx:xxxx:xxxx port 56868 (icwnd/mss/irtt=13/1428/460)<br />[ ID] Interval       Transfer     Bandwidth<br />[  1] 0.00-10.02 sec  1.08 GBytes   928 Mbits/sec<br />angelcool@2603-8000-6a00-5748-xxxx-xxxx-xxxx-xxxx:~$<br /></pre><br /><br />HOST B - CLIENT<br /><pre><br />[acool@localhost ~]$<br /># IPv4<br />[acool@localhost ~]$ iperf -c 192.168.1.184<br />------------------------------------------------------------<br />Client connecting to 192.168.1.184, TCP port 5001<br />TCP window size: 16.0 KByte (default)<br />------------------------------------------------------------<br />[  1] local 192.168.1.192 port 57642 connected with 192.168.1.184 port 5001 (icwnd/mss/irtt=14/1448/731)<br />[ ID] Interval       Transfer     Bandwidth<br />[  1] 0.00-10.02 sec  1.10 GBytes   940 Mbits/sec<br /><br /># 
IPv6<br />[acool@localhost ~]$ iperf -c 2603:8000:6a00:xxxx:xxxx:xxxx:xxxx<br />------------------------------------------------------------<br />Client connecting to 2603:8000:6a00:xxxx:xxxx:xxxx:xxxx, TCP port 5001<br />TCP window size: 16.0 KByte (default)<br />------------------------------------------------------------<br />[  1] local 2603:8000:6a00:5748:: port 56868 connected with 2603:8000:6a00:5748:xxxx:xxxx:xxxx:xxxx port 5001 (icwnd/mss/irtt=13/1428/783)<br />[ ID] Interval       Transfer     Bandwidth<br />[  1] 0.00-10.02 sec  1.08 GBytes   928 Mbits/sec<br />[acool@localhost ~]$ <br /></pre>]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry240303-221911</guid>
			<author>Angel</author>
			<pubDate>Sun, 03 Mar 2024 22:19:11 GMT</pubDate>
		</item>
		<item>
			<title>Terraform: AWS VPC with IPv6 support</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry210705-012044</link>
			<description><![CDATA[<pre>[acool@localhost EC2-VPC]$ <br />[acool@localhost EC2-VPC]$ date<br />Sun Jul  4 06:19:34 PM PDT 2021<br />[acool@localhost EC2-VPC]$ cat /etc/redhat-release <br />Fedora release 33 (Thirty Three)<br />[acool@localhost EC2-VPC]$ aws --version<br />aws-cli/1.18.223 Python/3.9.5 Linux/5.12.13-200.fc33.x86_64 botocore/1.19.63<br />[acool@localhost EC2-VPC]$ terraform -v<br />Terraform v1.0.1<br />on linux_amd64<br />+ provider registry.terraform.io/hashicorp/aws v3.48.0<br />[acool@localhost EC2-VPC]$<br /></pre><br />The gist of this post:<br /><pre> <br />[acool@localhost EC2-VPC]$ <br />[acool@localhost EC2-VPC]$ cat main.tf <br /># extract public ssh key from private ssh key<br /># [acool@localhost EC2-VPC]$ ssh-keygen -y -f ./COOL_SSH_PRIVATEKEY.pem &gt; COOL_SSH_PUBLICKEY.pub <br /><br />// a.- set region to use<br />provider &quot;aws&quot; {<br />    region = &quot;us-east-2&quot;<br />}<br /><br />// b.- create ssh key pair<br />resource &quot;aws_key_pair&quot; &quot;COOL_KEY_PAIR&quot; {<br />  key_name   = &quot;COOL_SSH_KEYPAIR&quot;<br />  public_key = &quot;${file(&quot;./COOL_SSH_PUBLICKEY.pub&quot;)}&quot;<br />}<br /><br />// c.- create vpc resource<br />resource &quot;aws_vpc&quot; &quot;COOL_VPC&quot; {<br />    enable_dns_support = true<br />    enable_dns_hostnames = true<br />    assign_generated_ipv6_cidr_block = true<br />    cidr_block = &quot;10.0.0.0/16&quot;<br />}<br /><br />// d.- create subnet<br />resource &quot;aws_subnet&quot; &quot;COOL_VPC_SUBNET&quot; {<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />    cidr_block = &quot;${cidrsubnet(aws_vpc.COOL_VPC.cidr_block, 4, 1)}&quot;<br />    map_public_ip_on_launch = true<br /><br />    ipv6_cidr_block = &quot;${cidrsubnet(aws_vpc.COOL_VPC.ipv6_cidr_block, 8, 1)}&quot;<br />    assign_ipv6_address_on_creation = true<br />}<br /><br />// e.- create internet gateway<br />resource &quot;aws_internet_gateway&quot; &quot;COOL_GATEWAY&quot; {<br />    
vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />}<br /><br />// f.- create routing table<br />resource &quot;aws_default_route_table&quot; &quot;COOL_VPC_ROUTING_TABLE&quot; {<br />    default_route_table_id = &quot;${aws_vpc.COOL_VPC.default_route_table_id}&quot;<br /><br />    route {<br />        cidr_block = &quot;0.0.0.0/0&quot;<br />        gateway_id = &quot;${aws_internet_gateway.COOL_GATEWAY.id}&quot;<br />    }<br /><br />    route {<br />        ipv6_cidr_block = &quot;::/0&quot;<br />        gateway_id = &quot;${aws_internet_gateway.COOL_GATEWAY.id}&quot;<br />    }<br />}<br /><br />// g.- associate the subnet with the routing table<br />resource &quot;aws_route_table_association&quot; &quot;COOL_SUBNET_ROUTE_TABLE_ASSOCIATION&quot; {<br />    subnet_id      = &quot;${aws_subnet.COOL_VPC_SUBNET.id}&quot;<br />    route_table_id = &quot;${aws_default_route_table.COOL_VPC_ROUTING_TABLE.id}&quot;<br />}<br /><br />// h.- create security group<br />resource &quot;aws_security_group&quot; &quot;COOL_SECURITY_GROUP&quot; {<br />    name = &quot;COOL_SECURITY_GROUP&quot;<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />    <br />    ingress {<br />        from_port = 22<br />        to_port = 22<br />        protocol = &quot;tcp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    ingress {<br />        from_port = 22<br />        to_port = 22<br />        protocol = &quot;tcp&quot;<br />        ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmpv6&quot;<br />        ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br /><br />    egress {<br />      from_port = 0<br />      to_port = 0<br />    
  protocol = &quot;-1&quot;<br />      cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    egress {<br />      from_port = 0<br />      to_port = 0<br />      protocol = &quot;-1&quot;<br />      ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br />}<br /><br />// i.- create EC2 instance<br />resource &quot;aws_instance&quot; &quot;COOL_INSTANCE_APP01&quot; {<br />    ami = &quot;ami-01d5ac8f5f8804300&quot;<br />    key_name = &quot;COOL_SSH_KEYPAIR&quot;<br />    instance_type = &quot;t2.micro&quot;<br />    subnet_id = &quot;${aws_subnet.COOL_VPC_SUBNET.id}&quot;<br />    ipv6_address_count = 1<br />    vpc_security_group_ids = [&quot;${aws_security_group.COOL_SECURITY_GROUP.id}&quot;]<br /><br />    tags = {<br />        Name = &quot;COOL_INSTANCE_APP01&quot;<br />    }<br /><br />    depends_on = [aws_internet_gateway.COOL_GATEWAY]<br />}<br /><br />//j.- print instance IPs<br />output &quot;COOL_INSTANCE_APP01_IPv4&quot; {<br />  value = &quot;${aws_instance.COOL_INSTANCE_APP01.public_ip}&quot;<br />}<br /><br />output &quot;COOL_INSTANCE_APP01_IPv6&quot; {<br />  value = [&quot;${aws_instance.COOL_INSTANCE_APP01.ipv6_addresses}&quot;]<br />}<br />[acool@localhost EC2-VPC]$<br />[acool@localhost EC2-VPC]$ terraform init<br />...<br />[acool@localhost EC2-VPC]$ <br />[acool@localhost EC2-VPC]$ terraform apply<br />...<br />[acool@localhost EC2-VPC]$</pre><br /><br />Happy 4th of July, 2021! 
and cheers!<br /><br /><br />UPDATE - November 9, 2021<br />Added &#039;app_servers&#039; variable to create multiple aws_instances.<br />Commit message: &#039;Added EIP and specified private ip addresses.&#039;<br /><br />main.tf :<br /><pre><br /># extract public ssh key from private ssh key<br /># [acool@localhost EC2-VPC]$ ssh-keygen -y -f ./COOL_SSH_PRIVATEKEY.pem &gt; COOL_SSH_PUBLICKEY.pub <br /><br />// set region to use<br />provider &quot;aws&quot; {<br />    region = &quot;us-east-2&quot;<br />}<br /><br />// create ssh key pair<br />resource &quot;aws_key_pair&quot; &quot;COOL_KEY_PAIR&quot; {<br />  key_name   = &quot;COOL_SSH_KEYPAIR&quot;<br />  public_key = &quot;${file(&quot;./COOL_SSH_PUBLICKEY.pub&quot;)}&quot;<br />}<br /><br />// create vpc resource<br />resource &quot;aws_vpc&quot; &quot;COOL_VPC&quot; {<br />    enable_dns_support = true<br />    enable_dns_hostnames = true<br />    assign_generated_ipv6_cidr_block = true<br />    cidr_block = &quot;10.0.0.0/16&quot;<br />}<br /><br />// create subnet<br />resource &quot;aws_subnet&quot; &quot;COOL_PVC_SUBNET&quot; {<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />    cidr_block = &quot;${cidrsubnet(aws_vpc.COOL_VPC.cidr_block, 4, 1)}&quot;<br />    map_public_ip_on_launch = true<br /><br />    ipv6_cidr_block = &quot;${cidrsubnet(aws_vpc.COOL_VPC.ipv6_cidr_block, 8, 1)}&quot;<br />    assign_ipv6_address_on_creation = true<br />}<br /><br />// create internet gateway<br />resource &quot;aws_internet_gateway&quot; &quot;COOL_GATEWAY&quot; {<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />}<br /><br />// create routing table<br />resource &quot;aws_default_route_table&quot; &quot;COOL_VPC_ROUTING_TABLE&quot; {<br />    default_route_table_id = &quot;${aws_vpc.COOL_VPC.default_route_table_id}&quot;<br /><br />    route {<br />        cidr_block = &quot;0.0.0.0/0&quot;<br />        gateway_id = &quot;${aws_internet_gateway.COOL_GATEWAY.id}&quot;<br />    }<br /><br />    
route {<br />        ipv6_cidr_block = &quot;::/0&quot;<br />        gateway_id = &quot;${aws_internet_gateway.COOL_GATEWAY.id}&quot;<br />    }<br />}<br /><br />// associate the subnet with the routing table<br />resource &quot;aws_route_table_association&quot; &quot;COOL_SUBNET_ROUTE_TABLE_ASSOCIATION&quot; {<br />    subnet_id      = &quot;${aws_subnet.COOL_PVC_SUBNET.id}&quot;<br />    route_table_id = &quot;${aws_default_route_table.COOL_VPC_ROUTING_TABLE.id}&quot;<br />}<br /><br />// create security group<br />resource &quot;aws_security_group&quot; &quot;COOL_SECURITY_GROUP&quot; {<br />    name = &quot;COOL_SECURITY_GROUP&quot;<br />    vpc_id = &quot;${aws_vpc.COOL_VPC.id}&quot;<br />    <br />    ingress {<br />        from_port = 22<br />        to_port = 22<br />        protocol = &quot;tcp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    ingress {<br />        from_port = 22<br />        to_port = 22<br />        protocol = &quot;tcp&quot;<br />        ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmpv6&quot;<br />        ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br /><br />    egress {<br />      from_port = 0<br />      to_port = 0<br />      protocol = &quot;-1&quot;<br />      cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    egress {<br />      from_port = 0<br />      to_port = 0<br />      protocol = &quot;-1&quot;<br />      ipv6_cidr_blocks = [&quot;::/0&quot;]<br />    }<br />}<br /><br />// server names<br />variable app_servers {<br />    description = &quot;name of app servers&quot;<br />    type = list(map(any))<br />    default = [<br />        
{name:&quot;COOL_LB01&quot;, ip:&quot;10.0.16.4&quot;},<br />        {name:&quot;COOL_LB02&quot;, ip:&quot;10.0.16.5&quot;},<br />        {name:&quot;COOL_APP01&quot;, ip:&quot;10.0.16.6&quot;},<br />        {name:&quot;COOL_APP02&quot;, ip:&quot;10.0.16.7&quot;},<br />    ]<br />}<br /><br />// create EC2 instance<br />resource &quot;aws_instance&quot; &quot;COOL_SERVERS&quot; {<br />    ami = &quot;ami-01d5ac8f5f8804300&quot;<br />    key_name = &quot;COOL_SSH_KEYPAIR&quot;<br />    instance_type = &quot;t2.micro&quot;<br />    subnet_id = &quot;${aws_subnet.COOL_PVC_SUBNET.id}&quot;<br />    ipv6_address_count = 1<br />    vpc_security_group_ids = [&quot;${aws_security_group.COOL_SECURITY_GROUP.id}&quot;]<br />    for_each = {for server in var.app_servers:  server.name =&gt; server}<br />    private_ip = each.value[&quot;ip&quot;]<br /><br />    tags = {<br />        Name = each.value[&quot;name&quot;]<br />    }<br /><br />    depends_on = [aws_internet_gateway.COOL_GATEWAY]<br />}<br /><br />// elastic IP<br />resource &quot;aws_eip&quot; &quot;COOL_EIP&quot; {<br />  instance = aws_instance.COOL_SERVERS[&quot;COOL_LB01&quot;].id<br />  vpc      = true<br />}<br /><br />// print instance IPs<br />output &quot;COOL_INSTANCE_APP01_IPv4&quot; {<br />    value = {for k, v in aws_instance.COOL_SERVERS: k =&gt; v.public_ip}<br />}<br /><br />output &quot;COOL_INSTANCE_APP01_IPv6&quot; {<br />  value = {for k, v in aws_instance.COOL_SERVERS: k =&gt; v.ipv6_addresses}<br />}<br /><br />output &quot;COOL_VPC_IPV6_BLOCK&quot; {<br />  value = aws_subnet.COOL_PVC_SUBNET.ipv6_cidr_block<br />}<br /><br />// SSH to instance:<br />// [acool@localhost EC2-VPC]$ ssh -i ./COOL_SSH_PRIVATEKEY.pem centos@ip_address<br /><br />// remove eip from COOL_LB01<br />// [acool@localhost EC2-VPC]$ aws ec2 disassociate-address --region us-east-2 --public-ip 3.131.249.150<br /><br />// assign eip to COOL_LB02, adjust instance id to match LB02. 
The same commands work to return eip to LB01<br />// [acool@localhost EC2-VPC]$ aws ec2 associate-address --region us-east-2 --public-ip 3.131.249.150 --instance-id i-05a634252654b7b34<br /></pre><br />]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry210705-012044</guid>
			<author>Angel</author>
			<pubDate>Mon, 05 Jul 2021 01:20:44 GMT</pubDate>
		</item>
		<item>
			<title>Terraform: AWS EC2 single instance example</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry210704-200235</link>
			<description><![CDATA[<pre>[acool@localhost terraform-tests]$ terraform --version<br />Terraform v1.0.1<br />...<br />[acool@localhost terraform-tests]$ aws --version<br />aws-cli/1.18.223 Python/3.9.5 Linux/5.12.12-200.fc33.x86_64 botocore/1.19.63<br />...<br /></pre><br />The gist of this post:<br /><pre>[acool@localhost EC2-SINGLE-INSTANCE]$ cat main.tf <br />provider &quot;aws&quot; {<br />    region = &quot;us-east-2&quot;<br />}<br /><br />// create ssh key<br />resource &quot;tls_private_key&quot; &quot;COOL_SSH_PK&quot; {<br />  algorithm = &quot;RSA&quot;<br />  rsa_bits  = 4096<br />}<br /><br />// create ssh key pair<br />resource &quot;aws_key_pair&quot; &quot;COOL_KEY_PAIR&quot; {<br />  key_name   = &quot;COOL_SSH_KEYNAME&quot;<br />  public_key = tls_private_key.COOL_SSH_PK.public_key_openssh<br /><br />  provisioner &quot;local-exec&quot; { # writes the private key to ./COOL_SSH_PK.pem on your machine!!<br />    command = &quot;echo &#039;${tls_private_key.COOL_SSH_PK.private_key_pem}&#039; &gt; ./COOL_SSH_PK.pem&quot;<br />  }<br />}<br /><br />// create aws ec2 instance<br />resource &quot;aws_instance&quot; &quot;COOLAPP01&quot; {<br />    ami = &quot;ami-01d5ac8f5f8804300&quot;<br />    instance_type = &quot;t2.micro&quot;<br />    key_name = aws_key_pair.COOL_KEY_PAIR.key_name<br />    vpc_security_group_ids = [aws_security_group.COOLAPP01_security_group.id]<br /><br />  tags = {<br />    Name = &quot;COOLAPP01_tag_name&quot;<br />  }<br />}<br /><br />// create security group<br />resource &quot;aws_security_group&quot; &quot;COOLAPP01_security_group&quot; {<br /><br />    name=&quot;terraform_COOLAPP01_security_group&quot;<br /><br />    // allow port 80 tcp<br />    ingress{<br />        from_port = 80<br />        to_port = 80<br />        protocol = &quot;tcp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow port 22 tcp<br />    ingress{<br />        from_port = 22<br />        to_port = 22<br />        
protocol = &quot;tcp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow ping<br />    ingress{<br />        from_port = -1<br />        to_port = -1<br />        protocol = &quot;icmp&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br /><br />    // allow all outbound traffic<br />    egress {<br />        from_port   = 0<br />        to_port     = 0<br />        protocol    = &quot;-1&quot;<br />        cidr_blocks = [&quot;0.0.0.0/0&quot;]<br />    }<br />}<br /><br />// TODO: enable IPv6<br /><br />output &quot;public_ip&quot; {<br />    value = aws_instance.COOLAPP01.public_ip<br />    description = &quot;public ip for COOLAPP01&quot;<br />}<br />[acool@localhost EC2-SINGLE-INSTANCE]$ <br />[acool@localhost EC2-SINGLE-INSTANCE]$ terraform apply<br />...</pre><br /><br />Happy 4th of July, 2021 y&#039;all!!]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry210704-200235</guid>
			<author>Angel</author>
			<pubDate>Sun, 04 Jul 2021 20:02:35 GMT</pubDate>
		</item>
		<item>
			<title>Highly Available HAProxy Balancer with Keepalived</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry210522-030737</link>
			<description><![CDATA[We&#039;re gonna use Keepalived&#039;s VRRP feature.<br /><br />Floating ip address will be 192.168.121.179<br /><br />Vagrantfile needed parameters:<br /><br />config.vm.box = &quot;centos/8&quot;<br />config.vm.network &quot;private_network&quot;, ip: &quot;192.168.121.180&quot;<br />config.vm.hostname = &quot;lb01.localhost&quot;<br /><br />config.vm.box = &quot;centos/8&quot;<br />config.vm.network &quot;private_network&quot;, ip: &quot;192.168.121.181&quot;<br />config.vm.hostname = &quot;lb02.localhost&quot;<br /><br />config.vm.box = &quot;centos/8&quot;<br />config.vm.network &quot;private_network&quot;, ip: &quot;192.168.121.191&quot;<br />config.vm.hostname = &quot;app01.localhost&quot;<br /><br />config.vm.box = &quot;centos/8&quot;<br />config.vm.network &quot;private_network&quot;, ip: &quot;192.168.121.192&quot;<br />config.vm.hostname = &quot;app02.localhost&quot;<br /><br />------------------------------------------------------------------------<br />app01 and app02 will have nginx installed running its default welcome page.<br /><pre><br />angel@acool:~/Documents/haproxy-cluster$ date<br />Fri 21 May 2021 07:11:52 PM PDT<br />angel@acool:~/Documents/haproxy-cluster$ cat /etc/lsb-release<br />DISTRIB_ID=Ubuntu<br />DISTRIB_RELEASE=20.04<br />DISTRIB_CODENAME=focal<br />DISTRIB_DESCRIPTION=&quot;Ubuntu 20.04.2 LTS&quot;<br />angel@acool:~/Documents/haproxy-cluster$ <br />angel@acool:~/Documents/haproxy-cluster$ tree<br />.<br />├── app01<br />│   └── Vagrantfile<br />├── app02<br />│   └── Vagrantfile<br />├── lb01<br />│   └── Vagrantfile<br />├── lb02<br />│   └── Vagrantfile<br />└── NOTES.txt<br /><br />4 directories, 5 files<br />angel@acool:~/Documents/haproxy-cluster$<br />angel@acool:~/Documents/haproxy-cluster$ sudo vagrant global-status<br />id       name    provider state   directory                                   <br />------------------------------------------------------------------------------<br />1553a24  
default libvirt shutoff /home/angel/Documents/haproxy-cluster/lb01  <br />3c33424  default libvirt shutoff /home/angel/Documents/haproxy-cluster/lb02  <br />1d9af06  default libvirt shutoff /home/angel/Documents/haproxy-cluster/app01 <br />5bc8220  default libvirt shutoff /home/angel/Documents/haproxy-cluster/app02 <br />...<br />angel@acool:~/Documents/haproxy-cluster$<br />angel@acool:~/Documents/haproxy-cluster$<br />angel@acool:~/Documents/haproxy-cluster/lb01$ vagrant --version<br />Vagrant 2.2.6<br />angel@acool:~/Documents/haproxy-cluster$<br />angel@acool:~/Documents/haproxy-cluster$ cd lb01/<br />angel@acool:~/Documents/haproxy-cluster/lb01$ sudo vagrant up<br />...<br />angel@acool:~/Documents/haproxy-cluster/lb01$ sudo vagrant ssh<br />Last login: Sat May 22 02:08:45 2021 from 192.168.121.1<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ cat /etc/redhat-release <br />CentOS Linux release 8.3.2011<br />[vagrant@lb01 ~]$ sudo dnf install haproxy keepalived<br /><br />[vagrant@lb01 ~]$ haproxy -v<br />HA-Proxy version 1.8.23 2019/11/25<br />Copyright 2000-2019 Willy Tarreau &lt;willy@haproxy.org&gt;<br /><br />[vagrant@lb01 ~]$ keepalived --version<br />Keepalived v2.0.10 (11/12,2018)<br />...<br />[vagrant@lb01 ~]$<br />[vagrant@lb01 ~]$ # HAProxy needs this to bind to the floating ip when the ip is not present locally <br />[vagrant@lb01 ~]$ cat /etc/sysctl.conf <br />...<br />net.ipv4.ip_nonlocal_bind=1<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ sudo sysctl -p<br />net.ipv4.ip_nonlocal_bind = 1<br />[vagrant@lb01 ~]$<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ cat /etc/haproxy/haproxy.cfg <br />...<br />## enable stats<br />listen stats<br />    bind :9000<br />    stats enable<br />    stats uri /stats<br />    stats refresh 10s<br />    stats admin if LOCALHOST<br /><br />## enable www frontend, bind floating ip address<br />frontend www<br />    bind 192.168.121.179:80<br />    mode http<br />    default_backend www_servers<br /><br />## enable www 
backend<br />backend www_servers<br />    balance roundrobin<br />    option forwardfor<br />    http-request set-header X-Forwarded-Port %[dst_port]<br />    http-request add-header X-Forwarded-Proto https if { ssl_fc }<br />    option httpchk HEAD / HTTP/1.1\r\nHost:localhost<br />    server app01 192.168.121.191:80 check<br />    server app02 192.168.121.192:80 check<br /><br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ cat /etc/keepalived/keepalived.conf<br />     vrrp_script chk_haproxy {      # Requires keepalived-1.1.13<br />       #script &quot;killall -0 haproxy&quot;  # cheaper than pidof<br />       script &quot;pidof haproxy&quot;  # this one worked better for me.<br />       interval 2 # check every 2 seconds<br />       weight 2 # add 2 points of priority if OK<br />     }<br />     vrrp_instance VI_1 {<br />       interface eth0<br />       state MASTER<br />       virtual_router_id 51<br />       priority 101 # 101 on lb01, 100 on lb02<br />       virtual_ipaddress {<br />         192.168.121.179<br />       }<br />       track_script {<br />         chk_haproxy<br />       }<br />     }<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ # this should be the end result, the floating ip should be listed.<br />[vagrant@lb01 ~]$ ip a |grep 179<br />    inet 192.168.121.179/32 scope global eth0<br />[vagrant@lb01 ~]$ <br />[vagrant@lb01 ~]$ # if you stop haproxy (or shutdown lb01), lb02 should take over the floating ip!<br />[vagrant@lb01 ~]$ # when haproxy is back, lb01 will reclaim the floating ip, the end result is<br />[vagrant@lb01 ~]$ # the floating ip will be available even if lb01 goes down.<br /></pre><br /><br />Cheers!<br /><br />UPDATE: November 11, 2021 -  Adding lb02 details in order to remove ambiguities when I see this post in the future.<br /><pre><br />[vagrant@lb02 ~]$ cat /etc/sysctl.conf <br />...<br />net.ipv4.ip_nonlocal_bind=1<br />[vagrant@lb02 ~]$<br /></pre><br /><pre><br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ <br 
/>[vagrant@lb02 ~]$ cat /etc/keepalived/keepalived.conf<br />     vrrp_script chk_haproxy {      # Requires keepalived-1.1.13<br />       #script &quot;killall -0 haproxy&quot;  # cheaper than pidof<br />       script &quot;pidof haproxy&quot;<br />       interval 2 # check every 2 seconds<br />       weight 2 # add 2 points of priority if OK<br />     }<br />     vrrp_instance VI_1 {<br />       interface eth0<br />       state MASTER<br />       virtual_router_id 51<br />       priority 100 # 101 on primary, 100 on secondary<br />       virtual_ipaddress {<br />         192.168.121.179<br />       }<br />       track_script {<br />         chk_haproxy<br />       }<br />     }<br /><br />[vagrant@lb02 ~]$<br /></pre><br /><pre><br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ cat /etc/haproxy/haproxy.cfg<br />#---------------------------------------------------------------------<br /># Example configuration for a possible web application.  See the<br /># full configuration options online.<br />#<br />#   <a href="https://www.haproxy.org/download/1.8/doc/configuration.txt" >https://www.haproxy.org/download/1.8/doc/configuration.txt</a><br />#<br />#---------------------------------------------------------------------<br /><br />#---------------------------------------------------------------------<br /># Global settings<br />#---------------------------------------------------------------------<br />global<br />    # to have these messages end up in /var/log/haproxy.log you will<br />    # need to:<br />    #<br />    # 1) configure syslog to accept network log events.  This is done<br />    #    by adding the &#039;-r&#039; option to the SYSLOGD_OPTIONS in<br />    #    /etc/sysconfig/syslog<br />    #<br />    # 2) configure local2 events to go to the /var/log/haproxy.log<br />    #   file. 
A line like the following can be added to<br />    #   /etc/sysconfig/syslog<br />    #<br />    #    local2.*                       /var/log/haproxy.log<br />    #<br />    log         127.0.0.1 local2<br /><br />    chroot      /var/lib/haproxy<br />    pidfile     /var/run/haproxy.pid<br />    maxconn     4000<br />    user        haproxy<br />    group       haproxy<br />    daemon<br /><br />    # turn on stats unix socket<br />    stats socket /var/lib/haproxy/stats<br /><br />    # utilize system-wide crypto-policies<br />    ssl-default-bind-ciphers PROFILE=SYSTEM<br />    ssl-default-server-ciphers PROFILE=SYSTEM<br /><br />#---------------------------------------------------------------------<br /># common defaults that all the &#039;listen&#039; and &#039;backend&#039; sections will<br /># use if not designated in their block<br />#---------------------------------------------------------------------<br />defaults<br />    mode                    http<br />    log                     global<br />    option                  httplog<br />    option                  dontlognull<br />    option http-server-close<br />    option forwardfor       except 127.0.0.0/8<br />    option                  redispatch<br />    retries                 3<br />    timeout http-request    10s<br />    timeout queue           1m<br />    timeout connect         10s<br />    timeout client          1m<br />    timeout server          1m<br />    timeout http-keep-alive 10s<br />    timeout check           10s<br />    maxconn                 3000<br /><br /># ME: enable stats<br />listen stats<br />    bind :9000<br />    stats enable<br />    stats uri /stats<br />    stats refresh 10s<br />    stats admin if LOCALHOST<br /><br /># ME: <br />frontend www<br />    bind 192.168.121.179:80<br />    mode http<br />    default_backend www_servers<br /><br /># ME:<br />backend www_servers<br />    balance roundrobin<br />    option forwardfor<br />    http-request set-header 
X-Forwarded-Port %[dst_port]<br />    http-request add-header X-Forwarded-Proto https if { ssl_fc }<br />    option httpchk HEAD / HTTP/1.1\r\nHost:localhost<br />    server app01 192.168.121.191:80 check<br />    server app02 192.168.121.192:80 check<br /><br />#---------------------------------------------------------------------<br /># main frontend which proxys to the backends<br />#---------------------------------------------------------------------<br />frontend main<br />    bind *:5000<br />    acl url_static       path_beg       -i /static /images /javascript /stylesheets<br />    acl url_static       path_end       -i .jpg .gif .png .css .js<br /><br />    use_backend static          if url_static<br />    default_backend             app<br /><br />#---------------------------------------------------------------------<br /># static backend for serving up images, stylesheets and such<br />#---------------------------------------------------------------------<br />backend static<br />    balance     roundrobin<br />    server      static 127.0.0.1:4331 check<br /><br />#---------------------------------------------------------------------<br /># round robin balancing between the various backends<br />#---------------------------------------------------------------------<br />backend app<br />    balance     roundrobin<br />    server  app1 127.0.0.1:5001 check<br />    server  app2 127.0.0.1:5002 check<br />    server  app3 127.0.0.1:5003 check<br />    server  app4 127.0.0.1:5004 check<br />[vagrant@lb02 ~]$ <br />[vagrant@lb02 ~]$ <br /></pre><br /><br /><br />]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry210522-030737</guid>
			<author>Angel</author>
			<pubDate>Sat, 22 May 2021 03:07:37 GMT</pubDate>
		</item>
		<item>
			<title>Docker: reference information for SWARMS, NODES, SERVICES, STACKS and NETWORKS</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry201211-183259</link>
			<description><![CDATA[<pre>[vagrant@box1 ~]$ date<br />Fri Dec 11 18:34:51 UTC 2020<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ docker --version<br />Docker version 20.10.0, build 7287ab3<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$  ########## Docker SWARM info ##########<br />[vagrant@box1 ~]$ docker swarm<br /><br />Usage:  docker swarm COMMAND<br /><br />Manage Swarm<br /><br />Commands:<br />  ca          Display and rotate the root CA<br />  init        Initialize a swarm<br />  join        Join a swarm as a node and/or manager<br />  join-token  Manage join tokens<br />  leave       Leave the swarm<br />  unlock      Unlock swarm<br />  unlock-key  Manage the unlock key<br />  update      Update the swarm<br /><br />Run &#039;docker swarm COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$  ########## Docker NODE info ##########<br />[vagrant@box1 ~]$ docker node<br /><br />Usage:  docker node COMMAND<br /><br />Manage Swarm nodes<br /><br />Commands:<br />  demote      Demote one or more nodes from manager in the swarm<br />  inspect     Display detailed information on one or more nodes<br />  ls          List nodes in the swarm<br />  promote     Promote one or more nodes to manager in the swarm<br />  ps          List tasks running on one or more nodes, defaults to current node<br />  rm          Remove one or more nodes from the swarm<br />  update      Update a node<br /><br />Run &#039;docker node COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$  ########## Docker SERVICE info ##########<br />[vagrant@box1 ~]$ docker service<br /><br />Usage:  docker service COMMAND<br /><br />Manage services<br /><br />Commands:<br />  create      Create a new service<br />  inspect     Display detailed information on one or more services<br />  logs        Fetch the logs of a service or 
task<br />  ls          List services<br />  ps          List the tasks of one or more services<br />  rm          Remove one or more services<br />  rollback    Revert changes to a service&#039;s configuration<br />  scale       Scale one or multiple replicated services<br />  update      Update a service<br /><br />Run &#039;docker service COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$  ########## Docker STACK info ##########<br />[vagrant@box1 ~]$ docker stack<br /><br />Usage:  docker stack [OPTIONS] COMMAND<br /><br />Manage Docker stacks<br /><br />Options:<br />      --orchestrator string   Orchestrator to use (swarm|kubernetes|all)<br /><br />Commands:<br />  deploy      Deploy a new stack or update an existing stack<br />  ls          List stacks<br />  ps          List the tasks in the stack<br />  rm          Remove one or more stacks<br />  services    List the services in the stack<br /><br />Run &#039;docker stack COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$  ########## Docker NETWORK info ##########<br />[vagrant@box1 ~]$ docker network<br /><br />Usage:  docker network COMMAND<br /><br />Manage networks<br /><br />Commands:<br />  connect     Connect a container to a network<br />  create      Create a network<br />  disconnect  Disconnect a container from a network<br />  inspect     Display detailed information on one or more networks<br />  ls          List networks<br />  prune       Remove all unused networks<br />  rm          Remove one or more networks<br /><br />Run &#039;docker network COMMAND --help&#039; for more information on a command.<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$  ########## All the crap available under the Docker binary ##########<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ docker<br /><br />Usage:  docker [OPTIONS] COMMAND<br /><br />A
self-sufficient runtime for containers<br /><br />Options:<br />      --config string      Location of client config files (default &quot;/home/vagrant/.docker&quot;)<br />  -c, --context string     Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with &quot;docker<br />                           context use&quot;)<br />  -D, --debug              Enable debug mode<br />  -H, --host list          Daemon socket(s) to connect to<br />  -l, --log-level string   Set the logging level (&quot;debug&quot;|&quot;info&quot;|&quot;warn&quot;|&quot;error&quot;|&quot;fatal&quot;) (default &quot;info&quot;)<br />      --tls                Use TLS; implied by --tlsverify<br />      --tlscacert string   Trust certs signed only by this CA (default &quot;/home/vagrant/.docker/ca.pem&quot;)<br />      --tlscert string     Path to TLS certificate file (default &quot;/home/vagrant/.docker/cert.pem&quot;)<br />      --tlskey string      Path to TLS key file (default &quot;/home/vagrant/.docker/key.pem&quot;)<br />      --tlsverify          Use TLS and verify the remote<br />  -v, --version            Print version information and quit<br /><br />Management Commands:<br />  app*        Docker App (Docker Inc., v0.9.1-beta3)<br />  builder     Manage builds<br />  buildx*     Build with BuildKit (Docker Inc., v0.4.2-docker)<br />  config      Manage Docker configs<br />  container   Manage containers<br />  context     Manage contexts<br />  image       Manage images<br />  manifest    Manage Docker image manifests and manifest lists<br />  network     Manage networks<br />  node        Manage Swarm nodes<br />  plugin      Manage plugins<br />  secret      Manage Docker secrets<br />  service     Manage services<br />  stack       Manage Docker stacks<br />  swarm       Manage Swarm<br />  system      Manage Docker<br />  trust       Manage trust on Docker images<br />  volume      Manage volumes<br /><br />Commands:<br />  attach 
     Attach local standard input, output, and error streams to a running container<br />  build       Build an image from a Dockerfile<br />  commit      Create a new image from a container&#039;s changes<br />  cp          Copy files/folders between a container and the local filesystem<br />  create      Create a new container<br />  diff        Inspect changes to files or directories on a container&#039;s filesystem<br />  events      Get real time events from the server<br />  exec        Run a command in a running container<br />  export      Export a container&#039;s filesystem as a tar archive<br />  history     Show the history of an image<br />  images      List images<br />  import      Import the contents from a tarball to create a filesystem image<br />  info        Display system-wide information<br />  inspect     Return low-level information on Docker objects<br />  kill        Kill one or more running containers<br />  load        Load an image from a tar archive or STDIN<br />  login       Log in to a Docker registry<br />  logout      Log out from a Docker registry<br />  logs        Fetch the logs of a container<br />  pause       Pause all processes within one or more containers<br />  port        List port mappings or a specific mapping for the container<br />  ps          List containers<br />  pull        Pull an image or a repository from a registry<br />  push        Push an image or a repository to a registry<br />  rename      Rename a container<br />  restart     Restart one or more containers<br />  rm          Remove one or more containers<br />  rmi         Remove one or more images<br />  run         Run a command in a new container<br />  save        Save one or more images to a tar archive (streamed to STDOUT by default)<br />  search      Search the Docker Hub for images<br />  start       Start one or more stopped containers<br />  stats       Display a live stream of container(s) resource usage statistics<br />  stop        Stop 
one or more running containers<br />  tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE<br />  top         Display the running processes of a container<br />  unpause     Unpause all processes within one or more containers<br />  update      Update configuration of one or more containers<br />  version     Show the Docker version information<br />  wait        Block until one or more containers stop, then print their exit codes<br /><br />Run &#039;docker COMMAND --help&#039; for more information on a command.<br />To get more help with docker, check out guides at <a href="https://docs.docker.com/go/guides/" >https://docs.docker.com/go/guides/</a><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ </pre>]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry201211-183259</guid>
			<author>Angel</author>
			<pubDate>Fri, 11 Dec 2020 18:32:59 GMT</pubDate>
		</item>
		<item>
			<title>Nagios: Miscellaneous notes on installing and configuring Nagios.</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry201208-014642</link>
			<description><![CDATA[<pre>[acool@localhost ~]$ <br />[acool@localhost ~]$ date<br />Mon 07 Dec 2020 05:45:53 PM PST<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cat /etc/redhat-release <br />Fedora release 31 (Thirty One)<br />[acool@localhost ~]$  <br />[acool@localhost ~]$ sudo dnf install httpd nagios nagios-common nagios-plugins-all<br />Last metadata expiration check: 0:36:12 ago on Mon 07 Dec 2020 05:09:52 PM PST.<br />Package httpd-2.4.46-1.fc31.x86_64 is already installed.<br />Package nagios-4.4.5-7.fc31.x86_64 is already installed.<br />Package nagios-common-4.4.5-7.fc31.x86_64 is already installed.<br />Package nagios-plugins-all-2.3.3-2.fc31.x86_64 is already installed.<br />Dependencies resolved.<br />Nothing to do.<br />Complete!<br />[acool@localhost ~]$<br />[acool@localhost ~]$ cat /etc/httpd/conf.d/nagios.conf<br />...<br />[acool@localhost ~]$<br />[acool@localhost ~]$ # default password for web ui nagiosadmin:nagiosadmin? I think yes.<br />[acool@localhost ~]$ ll /etc/nagios/<br />total 92<br />-rw-rw-r--. 1 root root   13699 Apr  7  2020 cgi.cfg<br />-rw-rw-r--. 1 root root   45886 Nov  4 23:23 nagios.cfg<br />-rw-r--r--. 1 root root   12839 Apr 29  2020 nrpe.cfg<br />drwxr-x---. 2 root nagios  4096 Nov  5 11:05 objects<br />-rw-r-----. 1 root apache    27 Apr  7  2020 passwd<br />drwxr-x---. 2 root nagios  4096 Nov  3 12:22 private<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ # <a href="http://localhost:8080/nagios/" >http://localhost:8080/nagios/</a> should now load (adjust port as needed)<br /></pre><br /><br />TODO: nagiosgraph. 
NEEDS TESTING!!<br /><br />A.- Looks like we need this in commands.cfg :<br /><br />define command {<br />  command_name process-service-perfdata-for-nagiosgraph<br />  command_line /usr/local/nagiosgraph/bin/insert.pl<br />}<br /><br />B.- And this in templates.cfg :<br /><br />define service {<br />      name              graphed-service<br />      action_url        /nagiosgraph/cgi-bin/show.cgi?host=$HOSTNAME$&amp;service=$SERVICEDESC$&#039; onMouseOver=&#039;showGraphPopup(this)&#039; onMouseOut=&#039;hideGraphPopup()&#039; rel=&#039;/nagiosgraph/cgi-bin/showgraph.cgi?host=$HOSTNAME$&amp;service=$SERVICEDESC$&amp;period=week&amp;rrdopts=-w+450+-j<br />      register        0<br />}<br /><br />C.- Then we need to add &#039;graphed-service&#039; to services in localhost.cfg, for example:<br /><br /># Define a service to &quot;ping&quot; the local machine<br />define service {<br /><br />    use                     local-service,graphed-service; Name of service template to use<br />    host_name               localhost<br />    service_description     PING<br />    check_command           check_ping!100.0,20%!500.0,60%<br />}<br /><br />D.- And these in /etc/nagios/nagios.cfg (NEEDS TO BE VERIFIED) :<br /><br />process_performance_data=1<br />service_perfdata_file=/tmp/perfdata.log<br />service_perfdata_file_template=$LASTSERVICECHECK$||$HOSTNAME$||$SERVICEDESC$||$SERVICEOUTPUT$||$SERVICEPERFDATA$<br />service_perfdata_file_mode=a<br />service_perfdata_file_processing_interval=30<br />service_perfdata_file_processing_command=process-service-perfdata-for-nagiosgraph<br /><br />More hints:<br /><pre>[root@localhost nagiosgraph]# <br />[root@localhost nagiosgraph]# grep -nri nagiosgraph /etc/httpd/<br />/etc/httpd/conf/httpd.conf:354:#### NAGIOSGRAPH #####<br />/etc/httpd/conf/httpd.conf:355:include /usr/local/nagiosgraph/etc/nagiosgraph-apache.conf<br />[root@localhost nagiosgraph]#</pre><br /><br />See nagiosgraph settings:<br /><br /><a href="http://localhost:8080/nagiosgraph/cgi-bin/showconfig.cgi" >http://localhost:8080/nagiosgraph/cgi-b ... config.cgi</a><br />]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry201208-014642</guid>
			<author>Angel</author>
			<pubDate>Tue, 08 Dec 2020 01:46:42 GMT</pubDate>
		</item>
		<item>
			<title>Solr: Starting Solr 4.7 for development purposes.</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry200925-183146</link>
			<description><![CDATA[<pre>[acool@localhost solr-4.7.0]$ date<br />Fri 25 Sep 2020 09:33:39 AM PDT<br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ sudo yum install java-1.8.0-openjdk<br />...<br />[acool@localhost solr-4.7.0]$ java -version<br />openjdk version &quot;1.8.0_265&quot;<br />OpenJDK Runtime Environment (build 1.8.0_265-b01)<br />OpenJDK 64-Bit Server VM (build 25.265-b01, mixed mode)<br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ ll<br />total 460<br />-rw-r--r--.  1 acool acool 362968 Feb 21  2014 CHANGES.txt<br />drwxr-xr-x. 12 acool acool   4096 Feb 21  2014 contrib<br />drwxrwxr-x.  4 acool acool   4096 Feb  1  2020 dist<br />drwxrwxr-x. 17 acool acool   4096 Feb  1  2020 docs<br />drwxr-xr-x. 15 acool acool   4096 Feb  2  2020 example<br />drwxr-xr-x.  2 acool acool  32768 Feb  1  2020 licenses<br />-rw-r--r--.  1 acool acool  12646 Feb 18  2014 LICENSE.txt<br />-rw-r--r--.  1 acool acool  26762 Feb 18  2014 NOTICE.txt<br />-rw-r--r--.  1 acool acool   5344 Feb 18  2014 README.txt<br />-rw-r--r--.  
1 acool acool    686 Feb 18  2014 SYSTEM_REQUIREMENTS.txt<br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ <br />[acool@localhost solr-4.7.0]$ # Starting server<br />[acool@localhost solr-4.7.0]$ cd example/<br />[acool@localhost example]$ <br />[acool@localhost example]$ java -jar start.jar <br />...<br />[acool@localhost example]$ <br />[acool@localhost example]$  # <a href="http://localhost:8983/solr" >http://localhost:8983/solr</a> should now render the dashboard<br />[acool@localhost example]$</pre><br /><br />12/7/2020 Sample query:<br /><pre>http://app01.example.com:8098/search/query/article_index?sort=score DESC<br />&amp;q={!edismax}how to become a millionaire<br />&amp;qf=authorName^6 objectId^4 headline^2 deck<br />&amp;fq={!lucene}<br />    edition:us<br />    AND statusId:4<br />    AND objectTypeId:(1 2 4 12 15)<br />    AND publicationDateISO8601:[NOW-10YEAR TO NOW]<br />&amp;qs=5<br />&amp;bq=publicationDateISO8601:[NOW-2YEAR TO NOW]<br />&amp;fl=*,score<br />&amp;hl=true<br />&amp;mm=3&lt;80%<br />&amp;wt=json<br />&amp;rows=20<br />&amp;start=0<br />&amp;df=entspellcheck<br />&amp;spellcheck=true<br />&amp;spellcheck.q=&quot;how to become a millionaire&quot;~10<br />&amp;spellcheck.collate=true<br />&amp;spellcheck.maxCollations=30<br />&amp;spellcheck.maxCollationTries=30<br />&amp;spellcheck.maxCollationEvaluations=30<br />&amp;spellcheck.collateExtendedResults=true<br />&amp;spellcheck.collateMaxCollectDocs=30<br />&amp;spellcheck.count=10<br />&amp;spellcheck.extendedResults=true<br />&amp;spellcheck.maxResultsForSuggest=5<br />&amp;spellcheck.alternativeTermCount=10<br />&amp;spellcheck.accuracy=0.5</pre>]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry200925-183146</guid>
			<author>Angel</author>
			<pubDate>Fri, 25 Sep 2020 18:31:46 GMT</pubDate>
		</item>
		<item>
			<title>Docker: Swarm Demo</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry200223-012017</link>
			<description><![CDATA[In this demo I:<br /><br />a) create 3 CentOS 7 Vagrant VMs<br />b) install Docker in each VM<br />c) create a Docker Swarm (Swarm mode) with one manager and two workers<br />d) create a service with the nginx image, update the service to use the httpd image, and update the replicas&#039; memory limit<br /><br /><pre>[acool@localhost docker-swarm-demo]$ date<br />Sat 22 Feb 2020 04:35:36 PM PST<br />[acool@localhost docker-swarm-demo]$ cat /etc/redhat-release <br />Fedora release 31 (Thirty One)<br />[acool@localhost docker-swarm-demo]$ vagrant --version<br />Vagrant 2.2.6<br />[acool@localhost docker-swarm-demo]$ tree<br />.<br />├── vagrant-box-1<br />│   └── Vagrantfile<br />├── vagrant-box-2<br />│   └── Vagrantfile<br />└── vagrant-box-3<br />    └── Vagrantfile<br /><br />3 directories, 3 files<br />[acool@localhost docker-swarm-demo]$ <br />[acool@localhost docker-swarm-demo]$ cd vagrant-box-1<br />[acool@localhost vagrant-box-1]$ vagrant up<br />...<br />[acool@localhost vagrant-box-1]$ vagrant ssh<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ cat /etc/redhat-release <br />CentOS Linux release 7.6.1810 (Core)<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ ip address show eth0<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:de:6e:43 brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.102/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       valid_lft 3307sec preferred_lft 3307sec<br />    inet6 fe80::5054:ff:fede:6e43/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ sudo yum install docker<br />...<br />[vagrant@box1 ~]$ sudo systemctl start docker<br />[vagrant@box1 ~]$ sudo docker version<br />Client:<br /> Version:         1.13.1<br /> API version:     1.26<br /> Package version: docker-1.13.1-108.git4ef4b30.el7.centos.x86_64<br /> Go version:      
go1.10.3<br /> Git commit:      4ef4b30/1.13.1<br /> Built:           Tue Jan 21 17:16:25 2020<br /> OS/Arch:         linux/amd64<br /><br />Server:<br /> Version:         1.13.1<br /> API version:     1.26 (minimum version 1.12)<br /> Package version: docker-1.13.1-108.git4ef4b30.el7.centos.x86_64<br /> Go version:      go1.10.3<br /> Git commit:      4ef4b30/1.13.1<br /> Built:           Tue Jan 21 17:16:25 2020<br /> OS/Arch:         linux/amd64<br /> Experimental:    false<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ # disable firewall for the sake of keeping this demo simple<br />[vagrant@box1 ~]$ sudo systemctl disable firewalld.service<br />[vagrant@box1 ~]$<br /><br />[acool@localhost docker-swarm-demo]$ # create box2 and box3 via vagrant<br /><br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$ # install and start docker as previously shown in box1 <br />[vagrant@box2 ~]$ # disable firewall as previously shown in box1<br />[vagrant@box2 ~]$<br />[vagrant@box2 ~]$ ip address show eth0<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:e1:c4:f9 brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.27/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       valid_lft 3436sec preferred_lft 3436sec<br />    inet6 fe80::5054:ff:fee1:c4f9/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@box2 ~]$<br /><br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ # install and start docker as previously shown in box1 <br />[vagrant@box3 ~]$ # disable firewall as previously shown in box1<br />[vagrant@box3 ~]$<br />[vagrant@box3 ~]$ ip address show eth0<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:18:5a:8c brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.88/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       
valid_lft 3323sec preferred_lft 3323sec<br />    inet6 fe80::5054:ff:fe18:5a8c/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ # make sure all boxes can ping each other<br />[vagrant@box3 ~]$ ping -c2 192.168.122.102<br />PING 192.168.122.102 (192.168.122.102) 56(84) bytes of data.<br />64 bytes from 192.168.122.102: icmp_seq=1 ttl=64 time=0.562 ms<br />64 bytes from 192.168.122.102: icmp_seq=2 ttl=64 time=0.619 ms<br /><br />--- 192.168.122.102 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 1000ms<br />rtt min/avg/max/mdev = 0.562/0.590/0.619/0.037 ms<br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ ping -c2 192.168.122.27<br />PING 192.168.122.27 (192.168.122.27) 56(84) bytes of data.<br />64 bytes from 192.168.122.27: icmp_seq=1 ttl=64 time=0.457 ms<br />64 bytes from 192.168.122.27: icmp_seq=2 ttl=64 time=0.312 ms<br /><br />--- 192.168.122.27 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 1000ms<br />rtt min/avg/max/mdev = 0.312/0.384/0.457/0.075 ms<br />[vagrant@box3 ~]$<br /><br /><br /><br />The gist of this demo:<br /><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker swarm init --advertise-addr 192.168.122.102<br />Swarm initialized: current node (325hn4zrumoinjslhiw3p9c1j) is now a manager.<br /><br />To add a worker to this swarm, run the following command:<br /><br />    docker swarm join \<br />    --token SWMTKN-1-1qm592qpo4j2ka5nxqx98vizi6z9dtag4rou49zxvrr7rww72g-agsgzbalcyw0c7saupqvk90sl \<br />    192.168.122.102:2377<br /><br />To add a manager to this swarm, run &#039;docker swarm join-token manager&#039; and follow the instructions.<br /><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br /><br /><br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$ sudo docker swarm join \<br />&gt;     
--token SWMTKN-1-1qm592qpo4j2ka5nxqx98vizi6z9dtag4rou49zxvrr7rww72g-agsgzbalcyw0c7saupqvk90sl \<br />&gt;     192.168.122.102:2377<br />This node joined a swarm as a worker.<br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$<br /><br /><br />[vagrant@box3 ~]$<br />[vagrant@box3 ~]$<br />[vagrant@box3 ~]$ sudo docker swarm join \<br />&gt;     --token SWMTKN-1-1qm592qpo4j2ka5nxqx98vizi6z9dtag4rou49zxvrr7rww72g-agsgzbalcyw0c7saupqvk90sl \<br />&gt;     192.168.122.102:2377<br />This node joined a swarm as a worker.<br />[vagrant@box3 ~]$<br />[vagrant@box3 ~]$<br /><br /><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker node ls<br />ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS<br />325hn4zrumoinjslhiw3p9c1j *  box1      Ready   Active        Leader<br />78uis92n6z7lg2glmsbkzuag0    box3      Ready   Active        <br />ehjej7f2ol2svf4nci0k9x4if    box2      Ready   Active        <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ # lets create a service<br />[vagrant@box1 ~]$ sudo docker service create --replicas 5 -p 80:80 --name web nginx<br />ytr9c94iieku7akjlp1gsq8mt<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service ls<br />ID            NAME  MODE        REPLICAS  IMAGE<br />ytr9c94iieku  web   replicated  0/5       nginx:latest<br />[vagrant@box1 ~]$<br />[vagrant@box1 ~]$ sudo docker service ps web<br />ID            NAME   IMAGE         NODE  DESIRED STATE  CURRENT STATE                   ERROR  PORTS<br />n4n6xun4dlmn  web.1  nginx:latest  box2  Running        Preparing 20 seconds ago               <br />ks1cnh8oko1r  web.2  nginx:latest  box3  Running        Running less than a second ago         <br />lhqha4nd2sj2  web.3  nginx:latest  box1  Running        Preparing 20 seconds ago               <br />dy48ok6b1clb  web.4  nginx:latest  box2  Running        Preparing 20 seconds ago               <br />81dkfenyjrbz  web.5  nginx:latest  box3  Running        Running 
less than a second ago         <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # nginx should be available via any box ip in your browser: <a href="http://192.168.122.88/" >http://192.168.122.88/</a>, <a href="http://192.168.122.27/" >http://192.168.122.27/</a> or <a href="http://192.168.122.102/" >http://192.168.122.102/</a><br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # we can try curl too<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ curl 192.168.122.102<br />...<br />[vagrant@box1 ~]$ curl 192.168.122.88<br />...<br />[vagrant@box1 ~]$ curl 192.168.122.27<br />...<br /><br />[vagrant@box2 ~]$ # lets see how much memory each replica is assigned<br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$ sudo docker stats --no-stream<br />CONTAINER           CPU %               MEM USAGE / LIMIT       MEM %               NET I/O             BLOCK I/O           PIDS<br />19467a26755f        0.00%               1.402 MiB / 487.1 MiB   0.29%               8.65 kB / 9.52 kB   0 B / 0 B           2<br />427cf3658a03        0.00%               1.383 MiB / 487.1 MiB   0.28%               4.65 kB / 2.86 kB   1.83 MB / 0 B       2<br />[vagrant@box2 ~]$ <br />[vagrant@box2 ~]$<br /><br />[vagrant@box1 ~]$ # lets update each replica memory limit to 250M<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service update --limit-memory 250M web<br />web<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$<br /><br /><br />[vagrant@box3 ~]$  # verify memory adjustment<br />[vagrant@box3 ~]$ <br />[vagrant@box3 ~]$ sudo docker stats --no-stream<br />CONTAINER           CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS<br />8990a7fa2489        0.00%               1.375 MiB / 250 MiB   0.55%               2.19 kB / 1.31 kB   0 B / 0 B           2<br />e6d71ec0caf8        0.00%               1.375 MiB / 250 MiB   0.55%               2.62 kB / 1.31 kB   0 B / 0 B           2<br />[vagrant@box3 ~]$<br /><br 
/>[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # lets update our service with a different image, we&#039;ll try httpd instead of nginx :)<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service update --image httpd web<br />web<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service ps web<br />ID            NAME       IMAGE         NODE  DESIRED STATE  CURRENT STATE                    ERROR  PORTS<br />opf7ks9q5rj4  web.1      httpd:latest  box2  Running        Starting less than a second ago         <br />sbbs4g9shkzm   \_ web.1  nginx:latest  box2  Shutdown       Shutdown 5 seconds ago                  <br />n4n6xun4dlmn   \_ web.1  nginx:latest  box2  Shutdown       Shutdown 3 minutes ago                  <br />vvv6018iym4j  web.2      nginx:latest  box3  Running        Running 3 minutes ago                   <br />ks1cnh8oko1r   \_ web.2  nginx:latest  box3  Shutdown       Shutdown 3 minutes ago                  <br />nl0oddf682d3  web.3      nginx:latest  box1  Running        Running 3 minutes ago                   <br />lhqha4nd2sj2   \_ web.3  nginx:latest  box1  Shutdown       Shutdown 3 minutes ago                  <br />xgcgisnlz5kd  web.4      nginx:latest  box1  Running        Running 3 minutes ago                   <br />dy48ok6b1clb   \_ web.4  nginx:latest  box2  Shutdown       Shutdown 3 minutes ago                  <br />jw9btp4h734o  web.5      nginx:latest  box3  Running        Running 3 minutes ago                   <br />81dkfenyjrbz   \_ web.5  nginx:latest  box3  Shutdown       Shutdown 3 minutes ago                  <br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service ls<br />ID            NAME  MODE        REPLICAS  IMAGE<br />ytr9c94iieku  web   replicated  5/5       httpd:latest<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # all nodes should render apache httpd welcome message now! 
<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ # lets increase the number of replicas<br />[vagrant@box1 ~]$ sudo docker service scale web=8<br />web scaled to 8<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ sudo docker service ls<br />ID            NAME  MODE        REPLICAS  IMAGE<br />ytr9c94iieku  web   replicated  8/8       httpd:latest<br />[vagrant@box1 ~]$ <br />[vagrant@box1 ~]$ exit<br />logout<br />Connection to 192.168.122.102 closed.<br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$ </pre><br /><br />Enjoy!]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry200223-012017</guid>
			<author>Angel</author>
			<pubDate>Sun, 23 Feb 2020 01:20:17 GMT</pubDate>
		</item>
		<item>
			<title>Vagrant: Creating two CentOS VMs and pinging each other.</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry200221-232810</link>
			<description><![CDATA[<pre>[acool@localhost ~]$ date<br />Fri 21 Feb 2020 02:53:59 PM PST<br />[acool@localhost ~]$<br />[acool@localhost ~]$ cat /etc/redhat-release <br />Fedora release 31 (Thirty One)<br />[acool@localhost ~]$<br />[acool@localhost ~]$ sudo dnf install vagrant-libvirt<br />...<br />[acool@localhost ~]$ vagrant --version<br />Vagrant 2.2.6<br />[acool@localhost ~]$<br />[acool@localhost ~]$ mkdir vagrant-box-1<br />[acool@localhost ~]$ cd vagrant-box-1/<br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$ vagrant init centos/7<br />A `Vagrantfile` has been placed in this directory. You are now<br />ready to `vagrant up` your first virtual environment! Please read<br />the comments in the Vagrantfile as well as documentation on<br />`vagrantup.com` for more information on using Vagrant.<br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$ vagrant up<br />...<br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$ vagrant status<br />Current machine states:<br /><br />default                   running (libvirt)<br /><br />The Libvirt domain is running. To stop this machine, you can run<br />`vagrant halt`. 
To destroy the machine, you can run `vagrant destroy`.<br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$ # you can now visually access this VM via &quot;Boxes&quot; which is like virt-manager<br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$ # or you can ssh into this box via vagrant<br />[acool@localhost vagrant-box-1]$ vagrant ssh<br />Last login: Fri Feb 21 23:05:54 2020 from 192.168.122.1<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ cat /etc/redhat-release <br />CentOS Linux release 7.6.1810 (Core) <br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ ip a<br />1: lo: &lt;LOOPBACK,UP,LOWER_UP&gt; mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000<br />    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br />    inet 127.0.0.1/8 scope host lo<br />       valid_lft forever preferred_lft forever<br />    inet6 ::1/128 scope host <br />       valid_lft forever preferred_lft forever<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:7f:e1:0c brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.194/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       valid_lft 3068sec preferred_lft 3068sec<br />    inet6 fe80::5054:ff:fe7f:e10c/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ exit<br />logout<br />Connection to 192.168.122.194 closed.<br />[acool@localhost vagrant-box-1]$ <br />[acool@localhost vagrant-box-1]$<br />[acool@localhost vagrant-box-1]$ # lets create another box<br />[acool@localhost vagrant-box-1]$ cd ../ &amp;&amp; mkdir vagrant-box-2<br />[acool@localhost ~]$ <br />[acool@localhost ~]$ cd vagrant-box-2<br />[acool@localhost vagrant-box-2]$ <br />[acool@localhost vagrant-box-2]$ vagrant init centos/7<br />A `Vagrantfile` has 
been placed in this directory. You are now<br />ready to `vagrant up` your first virtual environment! Please read<br />the comments in the Vagrantfile as well as documentation on<br />`vagrantup.com` for more information on using Vagrant.<br />[acool@localhost vagrant-box-2]$ <br />[acool@localhost vagrant-box-2]$ vagrant up<br />...<br />[acool@localhost vagrant-box-2]$ <br />[acool@localhost vagrant-box-2]$ vagrant ssh<br />Last login: Fri Feb 21 23:17:20 2020 from 192.168.122.1<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ ip a show eth0<br />2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc pfifo_fast state UP group default qlen 1000<br />    link/ether 52:54:00:69:74:25 brd ff:ff:ff:ff:ff:ff<br />    inet 192.168.122.27/24 brd 192.168.122.255 scope global noprefixroute dynamic eth0<br />       valid_lft 2908sec preferred_lft 2908sec<br />    inet6 fe80::5054:ff:fe69:7425/64 scope link <br />       valid_lft forever preferred_lft forever<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ # let&#039;s ping box-1 from box-2<br />[vagrant@localhost ~]$ ping -c 2 192.168.122.194<br />PING 192.168.122.194 (192.168.122.194) 56(84) bytes of data.<br />64 bytes from 192.168.122.194: icmp_seq=1 ttl=64 time=0.589 ms<br />64 bytes from 192.168.122.194: icmp_seq=2 ttl=64 time=0.548 ms<br /><br />--- 192.168.122.194 ping statistics ---<br />2 packets transmitted, 2 received, 0% packet loss, time 999ms<br />rtt min/avg/max/mdev = 0.548/0.568/0.589/0.031 ms<br />[vagrant@localhost ~]$ <br />[vagrant@localhost ~]$ cat /etc/redhat-release <br />CentOS Linux release 7.6.1810 (Core) <br />[vagrant@localhost ~]$ exit<br />logout<br />Connection to 192.168.122.27 closed.<br />[acool@localhost vagrant-box-2]$<br />[acool@localhost vagrant-box-2]$<br />[acool@localhost vagrant-box-2]$<br />[acool@localhost vagrant-box-2]$ # let&#039;s clean up our tests<br />[acool@localhost vagrant-box-2]$ vagrant destroy<br />...<br />[acool@localhost vagrant-box-1]$ vagrant destroy<br /></pre>]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry200221-232810</guid>
			<author>Angel</author>
			<pubDate>Fri, 21 Feb 2020 23:28:10 GMT</pubDate>
		</item>
		<item>
			<title>Docker: CentOS 7 Fun.</title>
			<link>https://angelcool.net/sphpblog/blog_index.php?entry=entry181212-030136</link>
			<description><![CDATA[<pre>[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ cat /etc/redhat-release <br />Fedora release 24 (Twenty Four)<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker images<br />REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE<br />[aesteban@localhost ~]$ sudo docker ps -a<br />CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker pull centos:7<br />Trying to pull repository docker.io/library/centos ... <br />7: Pulling from docker.io/library/centos<br /><br />a02a4930cb5d: Pull complete <br />Digest: sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426<br />Status: Downloaded newer image for docker.io/centos:7<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker images<br />REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE<br />docker.io/centos    7                   1e1148e4cc2c        6 days ago          201.8 MB<br />[aesteban@localhost ~]$ sudo docker ps -a<br />CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES<br />[aesteban@localhost ~]$ sudo docker run -d --privileged -p 80:80 docker.io/centos:7 /sbin/init<br />f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker exec -it f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36  bash<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# yum install epel-release<br />...<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# yum install nginx<br />...<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl enable 
nginx<br />Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl start nginx<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl status nginx<br />● nginx.service - The nginx HTTP and reverse proxy server<br />   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)<br />   Active: active (running) since Wed 2018-12-12 02:55:32 UTC; 4s ago<br />  Process: 2643 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)<br />  Process: 2642 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)<br />  Process: 2641 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)<br /> Main PID: 2644 (nginx)<br />   CGroup: /system.slice/docker-f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36.scope/system.slice/nginx.service<br />           ├─2644 nginx: master process /usr/sbin/nginx<br />           ├─2645 nginx: worker process<br />           ├─2646 nginx: worker process<br />           ├─2647 nginx: worker process<br />           └─2648 nginx: worker process<br /><br />Dec 12 02:55:31 f0faf6197fbc systemd[1]: Starting The nginx HTTP and reverse proxy server...<br />Dec 12 02:55:31 f0faf6197fbc nginx[2642]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok<br />Dec 12 02:55:31 f0faf6197fbc nginx[2642]: nginx: configuration file /etc/nginx/nginx.conf test is successful<br />Dec 12 02:55:32 f0faf6197fbc systemd[1]: Started The nginx HTTP and reverse proxy server.<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]#<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# exit<br />exit<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ # localhost should now be accessible in browser<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker exec -it 
f0faf6197fbc  systemctl status nginx<br />● nginx.service - The nginx HTTP and reverse proxy server<br />   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)<br />   Active: active (running) since Wed 2018-12-12 02:55:32 UTC; 1min 57s ago<br />  Process: 2643 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)<br />  Process: 2642 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)<br />  Process: 2641 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)<br /> Main PID: 2644 (nginx)<br />   CGroup: /system.slice/docker-f0faf6197fbc696796333bfc81f25d537a1aba170b81f2076010222e84284b36.scope/system.slice/nginx.service<br />           ├─2644 nginx: master process /usr/sbin/nginx<br />           ├─2645 nginx: worker process<br />           ├─2646 nginx: worker process<br />           ├─2647 nginx: worker process<br />           └─2648 nginx: worker process<br /><br />Dec 12 02:55:31 f0faf6197fbc systemd[1]: Starting The nginx HTTP and reverse proxy server...<br />Dec 12 02:55:31 f0faf6197fbc nginx[2642]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok<br />Dec 12 02:55:31 f0faf6197fbc nginx[2642]: nginx: configuration file /etc/nginx/nginx.conf test is successful<br />Dec 12 02:55:32 f0faf6197fbc systemd[1]: Started The nginx HTTP and reverse proxy server.<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$<br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ <br />[aesteban@localhost ~]$ sudo docker exec -it f0faf6197fbc  bash<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl status postfix<br />Unit postfix.service could not be found.<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# systemctl status memcached<br />Unit memcached.service could not be found.<br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]#  # We can 
install postfix and memcached with yum using the same procedure! <br />[root@f0faf6197fbc /]# <br />[root@f0faf6197fbc /]#  # Exercise Done :) !! </pre>]]></description>
			<category>- Linux Notes</category>
			<guid isPermaLink="true">https://angelcool.net/sphpblog/blog_index.php?entry=entry181212-030136</guid>
			<author>Angel</author>
			<pubDate>Wed, 12 Dec 2018 03:01:36 GMT</pubDate>
		</item>
	</channel>
</rss>
