
Is Group Replication in MySQL 5.7.17 just unreliable right now? I've been following the official docs for days without success

I have three hosts, with iptables, SELinux, and IPv6 all turned off. The first node starts successfully and joins the group, but the second node absolutely refuses to join. The few walkthroughs available online are basically the same as the official docs, so somebody must have gotten this working; I just can't tell what I'm doing wrong.
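
For reference, the first node was brought up with the bootstrap sequence from the official docs. A minimal sketch of that step (run on the first node only):

SET GLOBAL group_replication_bootstrap_group = ON;
START GROUP_REPLICATION;
SET GLOBAL group_replication_bootstrap_group = OFF;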

Following the official docs for the second node, after installing the plugin all that should be needed is to start group replication:

START GROUP_REPLICATION;

The second node's error log is as follows:

2017-01-16T18:09:03.578252Z 0 [Note] Plugin group_replication reported: 'client connected to localhost 33062 fd 69'
2017-01-16T18:09:03.578427Z 0 [Note] Plugin group_replication reported: 'connecting to localhost 33062'
2017-01-16T18:09:03.578800Z 0 [Note] Plugin group_replication reported: 'client connected to localhost 33062 fd 72'
2017-01-16T18:09:03.578977Z 0 [Note] Plugin group_replication reported: 'connecting to localhost 33062'
2017-01-16T18:09:03.579081Z 0 [Note] Plugin group_replication reported: 'client connected to localhost 33062 fd 74'
2017-01-16T18:09:03.579236Z 0 [Note] Plugin group_replication reported: 'connecting to postgressql1.novalocal 33061'
2017-01-16T18:09:03.580815Z 0 [Note] Plugin group_replication reported: 'client connected to postgressql1.novalocal 33061 fd 76'
2017-01-16T18:09:33.581699Z 0 [ERROR] Plugin group_replication reported: '[GCS] Timeout while waiting for the group communication engine to be ready!'
2017-01-16T18:09:33.581774Z 0 [ERROR] Plugin group_replication reported: '[GCS] The group communication engine is not ready for the member to join. Local port: 33062'
2017-01-16T18:09:33.581955Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_terminate'
2017-01-16T18:09:33.582007Z 0 [Note] Plugin group_replication reported: 'new state x_start'
2017-01-16T18:09:33.582020Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_exit'
2017-01-16T18:09:33.582278Z 0 [Note] Plugin group_replication reported: 'Exiting xcom thread'
2017-01-16T18:09:33.582291Z 0 [Note] Plugin group_replication reported: 'new state x_start'
2017-01-16T18:09:33.582349Z 0 [Warning] Plugin group_replication reported: 'read failed'
2017-01-16T18:09:33.596918Z 0 [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 33062'
2017-01-16T18:10:03.553584Z 4 [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
2017-01-16T18:10:03.553730Z 4 [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
2017-01-16T18:10:03.554294Z 4 [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
2017-01-16T18:10:03.554485Z 4 [Note] Plugin group_replication reported: 'auto_increment_increment is reset to 1'
2017-01-16T18:10:03.554514Z 4 [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
2017-01-16T18:10:03.555996Z 9 [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
2017-01-16T18:10:03.610642Z 6 [Note] Plugin group_replication reported: 'The group replication applier thread was killed'

my.cnf is as follows:

[mysqld]

user=mysql
basedir=/usr/local/mysql
datadir=/data/mysql

port=3306
socket=/tmp/mysql.sock
character-set-server=utf8
explicit_defaults_for_timestamp


server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address="localhost:33062"
loose-group_replication_group_seeds="postgressql1.novalocal:33061,localhost:33062,postgressql3.novalocal:33063"
loose-group_replication_bootstrap_group=off
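
One detail worth double-checking in this config: group_replication_local_address is the address the *other* members use to reach this node's group communication (XCom) port, so advertising localhost:33062 gives peers an address only this node itself can reach, which would be consistent with the GCS timeout in the log above. A minimal sketch of a fix, assuming the second host follows the same naming pattern as the others (postgressql2.novalocal is a guess):

# advertise an address the other members can actually connect back to,
# and use the same reachable names in the seed list on every node
loose-group_replication_local_address="postgressql2.novalocal:33062"
loose-group_replication_group_seeds="postgressql1.novalocal:33061,postgressql2.novalocal:33062,postgressql3.novalocal:33063"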
大家讲道理 · 2783 days ago · 2268 views

All replies (3)

  • 怪我咯 2017-04-17 16:42:20

    What the official docs don't mention is that after the first node is up, each additional node needs one extra command before starting group replication:

    SET GLOBAL group_replication_allow_local_disjoint_gtids_join = ON;

    As for why this is needed, the log actually tells you, so don't read only the ERROR lines: the hint about this variable is buried in a Note-level message, which strikes me as wrong; it should be at least a Warning.
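
    Putting the whole joiner-side sequence together, a minimal sketch, assuming a replication account already exists and has REPLICATION SLAVE (rpl_user/rpl_pass below are placeholders, substitute your own):

    -- credentials for the distributed recovery channel
    CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass'
      FOR CHANNEL 'group_replication_recovery';
    -- the variable the Note-level message points at (5.7 only)
    SET GLOBAL group_replication_allow_local_disjoint_gtids_join = ON;
    START GROUP_REPLICATION;

    One caveat: group_replication_allow_local_disjoint_gtids_join only allows a joiner whose local GTIDs have diverged from the group; it does not address network-level failures like the GCS timeout in the log above.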

  • PHP中文网 2017-04-17 16:42:20

    Hi OP, I've run into the same problem. Did you ever solve it?
    Or is GR really only good for playing with multiple instances on a single machine?

  • 阿神 2017-04-17 16:42:20

    I hit the same problem: across machines it just won't work. I've tried every parameter and still can't see why switching to real IPs makes the join fail; something must not be set up right. I'm a newbie who doesn't know the ropes, hoping a veteran can show me the way… My log:

    2017-03-17T23:21:26.354663+08:00 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 73'
    2017-03-17T23:21:26.… 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
    2017-03-17T23:21:26.354854+08:00 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd …'
    2017-03-17T23:21:26.355006+08:00 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
    2017-03-17T23:21:26.… 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 77'
    2017-03-17T23:21:26.355243+08:00 0 [Note] Plugin group_replication reported: 'connecting to 172.19.58.11 24901'
    2017-03-17T23:21:26.355522+08:00 0 [Note] Plugin group_replication reported: 'client connected to 172.19.58.11 24901 fd 79'
    2017-03-17T23:21:56.… 0 [ERROR] Plugin group_replication reported: '[GCS] Timeout while waiting for the group communication engine to be ready!'
    2017-03-17T23:21:56.356064+08:00 0 [ERROR] Plugin group_replication reported: '[GCS] The group communication engine is not ready for the member to join. Local port: 24902'
    2017-03-17T23:21:56.356174+08:00 0 [Note] Plugin group_replication reported: 'state 4257 action xa_terminate'
    2017-03-17T23:21:56.… 0 [Note] Plugin group_replication reported: 'new state x_start'
    2017-03-17T23:21:56.356208+08:00 0 [Note] Plugin group_replication reported: 'state 4257 action xa_exit'
    2017-03-17T23:21:56.… 0 [Note] Plugin group_replication reported: 'Exiting xcom thread'
    2017-03-17T23:21:56.356849+08:00 0 [Note] Plugin group_replication reported: 'new state x_start'
    2017-03-17T23:21:56.356943+08:00 0 [Warning] Plugin group_replication reported: 'read failed'
    2017-03-17T23:21:56.373558+08:00 0 [ERROR] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 24902'
    2017-03-17T23:22:26.348810+08:00 4 [ERROR] Plugin group_replication reported: 'Timeout on wait for view after joining group'
    2017-03-17T23:22:26.… 4 [Note] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
    2017-03-17T23:22:26.348974+08:00 4 [ERROR] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
    2017-03-17T23:22:26.… 4 [Note] Plugin group_replication reported: 'auto_increment_offset is reset to 1'
    2017-03-17T23:22:26.349434+08:00 75 [Note] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
    2017-03-17T23:22:26.355725+08:00 72 [Note] Plugin group_replication reported: 'The group replication applier thread was killed'
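
    For anyone comparing notes: each node reports what it currently sees of the group in a standard performance_schema table, so you can check from both sides whether the joiner ever shows up. A minimal query:

    SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
      FROM performance_schema.replication_group_members;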
