
Setting up MySQL master-slave, semi-synchronous, and master-master replication architectures

April 11, 2019 - LINUX

Example topology:

The ultimate goal of replication is to keep the data on one server synchronized with the data on other servers, providing data redundancy or load balancing of services. A master server can feed multiple slave servers, and a slave can in turn act as a master for further servers. Master and slave can sit in different network topologies. Thanks to MySQL's powerful replication features, the replication target can be all databases, specific databases, or even specific tables within a database.

[Figure 1: example topology diagram]

MySQL supports two replication schemes: statement-based replication and row-based replication.
Both schemes work by recording, in the master's binary log, every SQL statement that could change data in the database, shipping those events to the slave's relay log, and then executing the statements from the relay log on the slave, so that its data stays in sync with the master. They differ when a statement on the master writes a non-deterministic value, such as the result of the now() function, into the database: statement-based replication records the SQL statement exactly as written, while row-based replication records the actual value that now() produced.
For example, execute the following statement on the master:
mysql>update user set createtime=now() where sid=16;
Suppose now() returns 2013-04-16 20:46:35 at that moment.
Statement-based replication records it as: update user set createtime=now() where
sid=16;
Row-based replication records it as: update user set createtime='2013-04-16
20:46:35' where sid=16;

DR1 and DR2 run keepalived and LVS in a master-backup or master-master architecture; RS1 and RS2 run nginx to serve the web site.

The three threads involved in master-slave replication:
Binlog dump thread: sends the contents of the binary log to the slave
Slave I/O thread: writes the received events to the relay log
Slave SQL thread: reads SQL statements from the relay log one by one and executes them on the slave
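A quick way to observe these threads is SHOW PROCESSLIST on each node (a sketch; the exact state strings vary by MySQL version):

mysql> show processlist\G          # on the Master
    ...
    Command: Binlog Dump
      State: Master has sent all binlog to slave; waiting for binlog to be updated
mysql> show processlist\G          # on the Slave
    ...
      State: Waiting for master to send event                                            # I/O thread
      State: Slave has read all relay log; waiting for the slave I/O thread to update it # SQL thread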

Note: the clocks on all nodes must be synchronized (ntpdate ntp1.aliyun.com); stop and disable firewalld (systemctl stop firewalld.service, systemctl disable firewalld.service) and set selinux to permissive (setenforce 0); also make sure every NIC supports MULTICAST communication.
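The preparation commands mentioned above, run on every node, summarized as a sketch:

#ntpdate ntp1.aliyun.com                  # sync the clock
#systemctl stop firewalld.service         # stop the firewall
#systemctl disable firewalld.service      # keep it from starting at boot
#setenforce 0                             # switch selinux to permissive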

1. Master-slave replication:
Preparation:
1. Modify the configuration file (server_id must be changed)
2. Create a replication user
3. Start the slave threads on the slave server

You can check whether MULTICAST is enabled with the ifconfig command:
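For example (a sketch; eno16777736 is the interface name used later in this setup, substitute your own):

[root@DR1 ~]# ifconfig eno16777736 | grep -o MULTICAST   # the flag should appear in the interface's flag list
    MULTICAST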

Plan:
Master: IP address: 172.16.4.11    Version: mysql-5.5.20
Slave:  IP address: 172.16.4.12    Version: mysql-5.5.20
Note that MySQL replication is mostly backward compatible, so the slave's version must be greater than or equal to the master's version.
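A quick check on each node (a sketch; the value shown is the version used in this plan):

mysql> select version();
    +-----------+
    | version() |
    +-----------+
    | 5.5.20    |
    +-----------+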
1. Master
Modify the configuration file to make it the MySQL master:
#vim /etc/my.cnf
server_id=11                #set server_id=11
log_bin=mysql-bin            #enable the binary log
sync_binlog=1
#flush the binary log to disk as soon as each transaction commits
innodb_flush_log_at_trx_commit=1
#flush the InnoDB transaction log to disk as soon as each transaction commits

     

Save and exit
#service mysql restart             # restart MySQL so the new settings take effect

The keepalived master-backup architecture

2. Create a user on the Master and grant it replication privileges
mysql>grant replication client,replication slave on *.* to
repl@172.16.4.12 identified by '135246';
mysql>flush privileges;

Setting up RS1:

[root@RS1 ~]# yum -y install nginx   # install nginx
[root@RS1 ~]# vim /usr/share/nginx/html/index.html   # edit the home page
    <h1> 192.168.4.118 RS1 server </h1>
[root@RS1 ~]# systemctl start nginx.service   # start the nginx service
[root@RS1 ~]# vim RS.sh   # script that configures the LVS-DR real server
    #!/bin/bash
    #
    vip=192.168.4.120
    mask=255.255.255.255
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig lo:0 $vip netmask $mask broadcast $vip up
        route add -host $vip dev lo:0
        ;;
    stop)
        ifconfig lo:0 down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ;;
    *) 
        echo "Usage $(basename $0) start|stop"
        exit 1
        ;;
    esac
[root@RS1 ~]# bash RS.sh start

3. Slave
Modify the configuration file to make it a MySQL slave:
#vim /etc/my.cnf
server_id=12                #set server_id=12
#log-bin
#comment out log-bin; the slave does not need a binary log, so disable it
relay-log=mysql-relay
#define the relay log name and enable the relay log on the slave
relay-log-index=mysql-relay.index
#define the relay log index name
read_only=1
#make the slave read-only so that ordinary users cannot write to it

Set up RS2 by following the same steps as RS1.

Save and exit
#service mysql restart             # restart MySQL so the new settings take effect

Setting up DR1:

[root@DR1 ~]# yum -y install ipvsadm keepalived   # install ipvsadm and keepalived
[root@DR1 ~]# vim /etc/keepalived/keepalived.conf   # edit the keepalived.conf configuration file
    global_defs {
       notification_email {
         root@localhost
       }
       notification_email_from keepalived@localhost
       smtp_server 127.0.0.1
       smtp_connect_timeout 30
       router_id 192.168.4.116
       vrrp_skip_check_adv_addr
       vrrp_mcast_group4 224.0.0.10
    }

    vrrp_instance VIP_1 {
        state MASTER
        interface eno16777736
        virtual_router_id 1
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass %&hhjj99
        }
        virtual_ipaddress {
          192.168.4.120/24 dev eno16777736 label eno16777736:0
        }
    }

    virtual_server 192.168.4.120 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        protocol TCP

        real_server 192.168.4.118 80 {
            weight 1
            HTTP_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
        real_server 192.168.4.119 80 {
            weight 1
            HTTP_GET {
                url {
                  path /index.html
                  status_code 200
                }
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
         }
    }
[root@DR1 ~]# systemctl start keepalived
[root@DR1 ~]# ifconfig
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
            RX packets 14604  bytes 1376647 (1.3 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 6722  bytes 653961 (638.6 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
[root@DR1 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0

4. Verify that the relay log and server_id have taken effect on the Slave
mysql>show variables like 'relay%';
+-----------------------+-----------------+
| Variable_name         | Value           |
+-----------------------+-----------------+
| relay_log             | relay-bin       |
| relay_log_index       | relay-bin.index |
| relay_log_info_file   | relay-log.info  |
| relay_log_purge       | ON              |
| relay_log_recovery    | OFF             |
| relay_log_space_limit | 0               |
+-----------------------+-----------------+
mysql>show variables like 'server_id';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id     | 12    |
+---------------+-------+

DR2 is set up almost the same way as DR1; the main difference is the state and priority values in /etc/keepalived/keepalived.conf: state BACKUP, priority 90. Note that DR2, acting as backup, does not bring up the eno16777736:0 interface:

[Figure 3: ifconfig output on DR2; the eno16777736:0 label is absent while DR2 is in BACKUP state]
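For reference, a minimal sketch of DR2's VIP_1 instance (only state and priority differ; all other values follow the DR1 configuration shown above):

    vrrp_instance VIP_1 {
        state BACKUP
        interface eno16777736
        virtual_router_id 1
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass %&hhjj99
        }
        virtual_ipaddress {
          192.168.4.120/24 dev eno16777736 label eno16777736:0
        }
    }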

5. Start the slave threads on the slave server
Scenario 1: If both the master and the slave are newly built and no data has been written yet, run the following commands:
mysql>change master to \
master_host='172.16.4.11',
master_user='repl',
master_password='135246';
mysql>show slave status\G
*************************** 1. row
***************************
Slave_IO_State:
Master_Host: 172.16.4.11
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 107
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 25520
Relay_Log_Space: 2565465
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
mysql>start slave;
mysql>show slave status\G
*************************** 1. row
***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: 172.16.4.11
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 107
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 360
Relay_Log_Space: 300
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 11

Testing from the client:

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # the client can reach the service normally
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>

[root@DR1 ~]# systemctl stop keepalived.service   # stop the keepalived service on DR1

[root@DR2 ~]# systemctl status keepalived.service   # check DR2: it has entered the MASTER state
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2018-09-04 11:33:04 CST; 7min ago
  Process: 12983 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 12985 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─12985 /usr/sbin/keepalived -D
           ├─12988 /usr/sbin/keepalived -D
           └─12989 /usr/sbin/keepalived -D

Sep 04 11:37:41 happiness Keepalived_healthcheckers[12988]: SMTP alert successfully sent.
Sep 04 11:40:22 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Transition to MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Entering MASTER STATE
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) setting protocol VIPs.
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Sending/queueing gratuitous ARPs on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   # the client still has normal access
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>
<h1> 192.168.4.119 RS2 server</h1>
<h1> 192.168.4.118 RS1 server </h1>

Scenario 2: If the master has already been running for a while and the slave is newly added, the master's existing data must first be imported into the slave:
Master:
#mysqldump -uroot -hlocalhost -p123456 --all-databases --lock-all-tables --flush-logs --master-data=2 > /backup/alldatabase.sql
mysql>flush tables with read lock;
mysql>show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      360 |              |                  |
+------------------+----------+--------------+------------------+
mysql>unlock tables;
#scp /backup/alldatabase.sql 172.16.4.12:/tmp

The keepalived master-master architecture

Slave:
#mysql -uroot -p123456 < /tmp/alldatabase.sql
mysql>change master to \
master_host='172.16.4.11',
master_user='repl',
master_password='135246',
master_log_file='mysql-bin.000004',
master_log_pos=360;
mysql>show slave status\G
*************************** 1. row
***************************
Slave_IO_State:
Master_Host: 172.16.4.11
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 360
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: No
Slave_SQL_Running: No
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 360
Relay_Log_Space: 107
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
mysql>start slave;

Modify RS1 and RS2 to add the new VIP:

[root@RS1 ~]# cp RS.sh RS_bak.sh
[root@RS1 ~]# vim RS_bak.sh   # add the new VIP
    #!/bin/bash
    #
    vip=192.168.4.121
    mask=255.255.255.255
    case $1 in
    start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig lo:1 $vip netmask $mask broadcast $vip up
        route add -host $vip dev lo:1
        ;;
    stop)
        ifconfig lo:1 down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ;;
    *)
        echo "Usage $(basename $0) start|stop"
        exit 1
        ;;
    esac
[root@RS1 ~]# bash RS_bak.sh start
[root@RS1 ~]# ifconfig
    ...
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.120  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

    lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.121  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback) 
[root@RS1 ~]# scp RS_bak.sh root@192.168.4.119:~
root@192.168.4.119's password: 
RS_bak.sh                100%  693     0.7KB/s   00:00

[root@RS2 ~]# bash RS_bak.sh start   # run the copied script to add the new VIP
[root@RS2 ~]# ifconfig
    ...
    lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.120  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

    lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 192.168.4.121  netmask 255.255.255.255
            loop  txqueuelen 0  (Local Loopback)

mysql>show slave status\G
*************************** 1. row
***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: 172.16.4.11
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000004
Read_Master_Log_Pos: 360
Relay_Log_File: relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mysql-bin.000004
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 360
Relay_Log_Space: 300
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 11

Modify DR1 and DR2:

[root@DR1 ~]# vim /etc/keepalived/keepalived.conf   # edit DR1's config: add a new VRRP instance and define a virtual server group
    ...
    vrrp_instance VIP_2 {
        state BACKUP
        interface eno16777736
        virtual_router_id 2
        priority 90
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass UU**99^^
        }
        virtual_ipaddress {
            192.168.4.121/24 dev eno16777736 label eno16777736:1
        }
    }

    virtual_server_group ngxsrvs {
        192.168.4.120 80
        192.168.4.121 80
    }
    virtual_server group ngxsrvs {
        ...
    }
[root@DR1 ~]# systemctl restart keepalived.service   # restart the service
[root@DR1 ~]# ifconfig   # eno16777736:1 appears here because DR2 is not configured yet, so DR1 holds VIP_2
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
            RX packets 54318  bytes 5480463 (5.2 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 38301  bytes 3274990 (3.1 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)

    eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
[root@DR1 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0         
    TCP  192.168.4.121:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0

[root@DR2 ~]# vim /etc/keepalived/keepalived.conf   # edit DR2's config: add the second instance and the server group
    ...
    vrrp_instance VIP_2 {
        state MASTER
        interface eno16777736
        virtual_router_id 2
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass UU**99^^
        }
        virtual_ipaddress {
            192.168.4.121/24 dev eno16777736 label eno16777736:1
        }
    }

    virtual_server_group ngxsrvs {
        192.168.4.120 80
        192.168.4.121 80
    }
    virtual_server group ngxsrvs {
        ...
    }
[root@DR2 ~]# systemctl restart keepalived.service   # restart the service
[root@DR2 ~]# ifconfig
    eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.117  netmask 255.255.255.0  broadcast 192.168.4.255
            inet6 fe80::20c:29ff:fe3d:a31b  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
            RX packets 67943  bytes 6314537 (6.0 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 23250  bytes 2153847 (2.0 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

    eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
            ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
[root@DR2 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.4.120:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0         
    TCP  192.168.4.121:80 rr
      -> 192.168.4.118:80             Route   1      0          0         
      -> 192.168.4.119:80             Route   1      0          0 

This confirms that the MySQL master-slave replication setup is working.
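A simple way to double-check replication itself (a sketch; repltest is a placeholder database name):

Master:
mysql> create database repltest;
Slave:
mysql> show databases like 'repltest';   # the database should appear on the slave within moments
mysql> show slave status\G               # Slave_IO_Running and Slave_SQL_Running should both be Yes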

Client test:

[root@client ~]# for i in {1..20};do curl http://192.168.4.120;done
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
[root@client ~]# for i in {1..20};do curl http://192.168.4.121;done
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>

 

Note 1: MySQL replication can be restricted to certain databases, or to certain tables within a database. To do this, add the following settings to the configuration file:
Master:
binlog-do-db=db_name        log (and therefore replicate) only the db_name database
binlog-ignore-db=db_name    do not log the db_name database

Note 2: Defining filter rules on the Master means that any write not related to the chosen database is never recorded in the binary log at all. For that reason it is not recommended to define filter rules on the Master, nor to define binlog-do-db and binlog-ignore-db at the same time.

Slave:
replicate_do_db=db_name            replicate only the db_name database
replicate_ignore_db=db_name        do not replicate the db_name database
replicate_do_table=tb_name         replicate only the tb_name table
replicate_ignore_table=tb_name     do not replicate the tb_name table
replicate_wild_do_table=test%
replicate only tables whose names start with test followed by any sequence of characters
replicate_wild_ignore_table=test_
do not replicate tables whose names start with test followed by any single character

Note 3: To specify several databases or tables, simply repeat the directive once per name, as sketched below.
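A hedged my.cnf sketch (db1, db2, and shop.order% are placeholder names):

Master:
[mysqld]
binlog-do-db=db1
binlog-do-db=db2

Slave:
[mysqld]
replicate_do_db=db1
replicate_do_db=db2
replicate_wild_do_table=shop.order%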

2. Semi-synchronous replication

Because MySQL replication is asynchronous, it cannot guarantee that data is successfully replicated under abnormal conditions. Starting with MySQL 5.5, a patch contributed by Google allows replication to run in semi-synchronous mode; this requires loading the corresponding plugins on the master and slave. The plugins semisync_master.so and semisync_slave.so ship in the lib/plugin/ directory under the MySQL installation directory.
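To confirm where the server looks for plugins and that the files are present (a sketch; the path shown is just an example installation):

mysql> show variables like 'plugin_dir';
    +---------------+------------------------------+
    | Variable_name | Value                        |
    +---------------+------------------------------+
    | plugin_dir    | /usr/local/mysql/lib/plugin/ |
    +---------------+------------------------------+
# ls /usr/local/mysql/lib/plugin/ | grep semisync
    semisync_master.so
    semisync_slave.so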

Run the following commands at the mysql command line on the Master and Slave:

Master:
mysql> install plugin rpl_semi_sync_master soname
'semisync_master.so';
mysql> set global rpl_semi_sync_master_enabled = 1;
mysql> set global rpl_semi_sync_master_timeout = 1000;
mysql> show variables like '%semi%';
+------------------------------------+-------+
| Variable_name                      | Value |
+------------------------------------+-------+
| rpl_semi_sync_master_enabled       | ON    |
| rpl_semi_sync_master_timeout       | 1000  |
| rpl_semi_sync_master_trace_level   | 32    |
| rpl_semi_sync_master_wait_no_slave | ON    |
+------------------------------------+-------+

Slave:
mysql> install plugin rpl_semi_sync_slave soname
'semisync_slave.so';
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> stop slave;
mysql> start slave;
mysql> show variables like '%semi%';
+---------------------------------+-------+
| Variable_name                   | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_enabled     | ON    |
| rpl_semi_sync_slave_trace_level | 32    |
+---------------------------------+-------+

Check whether semi-synchronous replication has taken effect:
Master:
mysql> show global status like 'rpl_semi%';
+--------------------------------------------+-------+
| Variable_name                              | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients               | 1     |
| Rpl_semi_sync_master_net_avg_wait_time     | 0     |
| Rpl_semi_sync_master_net_wait_time         | 0     |
| Rpl_semi_sync_master_net_waits             | 0     |
| Rpl_semi_sync_master_no_times              | 0     |
| Rpl_semi_sync_master_no_tx                 | 0     |
| Rpl_semi_sync_master_status                | ON    |
| Rpl_semi_sync_master_timefunc_failures     | 0     |
| Rpl_semi_sync_master_tx_avg_wait_time      | 0     |
| Rpl_semi_sync_master_tx_wait_time          | 0     |
| Rpl_semi_sync_master_tx_waits              | 0     |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0     |
| Rpl_semi_sync_master_wait_sessions         | 0     |
| Rpl_semi_sync_master_yes_tx                | 0     |
+--------------------------------------------+-------+
This shows that semi-synchronous replication is working.
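To confirm that transactions are actually acknowledged semi-synchronously, commit something on the Master and watch the counter (a sketch; semitest is a placeholder database name):

mysql> create database semitest;
mysql> show global status like 'Rpl_semi_sync_master_yes_tx';
    +-----------------------------+-------+
    | Variable_name               | Value |
    +-----------------------------+-------+
    | Rpl_semi_sync_master_yes_tx | 1     |
    +-----------------------------+-------+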

To have semi-synchronous replication take effect automatically every time MySQL starts, edit my.cnf on the Master and Slave:
Master:
[mysqld]
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_timeout=1000     # 1 second

Slave:
[mysqld]
rpl_semi_sync_slave_enabled=1

The semi-sync plugins can also be enabled or disabled by setting global variables:
Master:
mysql> set global rpl_semi_sync_master_enabled=1
To unload the plugin:
mysql> uninstall plugin rpl_semi_sync_master;

Slave:
mysql> set global rpl_semi_sync_slave_enabled = 1;
mysql> uninstall plugin rpl_semi_sync_slave;

3. Master-master replication architecture

1. Create a user with replication privileges on each of the two servers:
Master:
mysql>grant replication client,replication slave on *.* to
repl@172.16.4.12 identified by '135246';
mysql>flush privileges;

Slave:
mysql>grant replication client,replication slave on *.* to
repl@172.16.4.11 identified by '135246';
mysql>flush privileges;

2. Modify the configuration files:
Master:
[mysqld]
server-id = 11
log-bin = mysql-bin
auto-increment-increment = 2
auto-increment-offset = 1
relay-log=mysql-relay
relay-log-index=mysql-relay.index

Slave:
[mysqld]
server-id = 12
log-bin = mysql-bin
auto-increment-increment = 2
auto-increment-offset = 2
relay-log=mysql-relay
relay-log-index=mysql-relay.index
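The auto-increment-increment / auto-increment-offset pair keeps auto-increment keys from colliding between the two masters: both servers step by 2 but start at different offsets, so one generates odd ids and the other even ids. A minimal illustration (t is a placeholder table):

mysql> create table t (id int auto_increment primary key, v varchar(10));
mysql> insert into t (v) values ('a'),('b');   # on the offset=1 server: ids 1, 3
mysql> insert into t (v) values ('c'),('d');   # on the offset=2 server: ids 2, 4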

3. If both servers are newly built and no writes have occurred yet, each server only needs to record its own current binary log file and position, which the other server will use as the starting point for replication:
Master:
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      360 |              |                  |
+------------------+----------+--------------+------------------+

Slave:
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000005 |      107 |              |                  |
+------------------+----------+--------------+------------------+

4. Each server then points to the other server as its master:
Master:
mysql>change master to \
master_host='172.16.4.12',
master_user='repl',
master_password='135246',
master_log_file='mysql-bin.000005',
master_log_pos=107;

Slave:
mysql>change master to \
master_host='172.16.4.11',
master_user='repl',
master_password='135246',
master_log_file='mysql-bin.000004',
master_log_pos=360;

5. Start the slave threads:
Master:
mysql>start slave;

Slave:
mysql>start slave;

At this point the master-master architecture is up and running! A quick sanity check is sketched below.
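A hedged check of both replication directions (mmtest1 and mmtest2 are placeholder database names):

Master:
mysql> create database mmtest1;
Slave:
mysql> show databases like 'mmtest1';   # should appear, replicated from the Master
mysql> create database mmtest2;
Master:
mysql> show databases like 'mmtest2';   # should appear, replicated from the Slave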
