Server instances that you want to use for Group Replication must satisfy the following requirements.
InnoDB Storage Engine. Data must be stored in the InnoDB transactional storage engine. Transactions are executed optimistically and then, at commit time, are checked for conflicts. If there are conflicts, in order to maintain consistency across the group, some transactions are rolled back. This means that a transactional storage engine is required. Moreover, InnoDB provides some additional functionality that enables better management and handling of conflicts when operating together with Group Replication. The use of other storage engines, including the temporary MEMORY storage engine, might cause errors in Group Replication. You can prevent the use of other storage engines by setting the disabled_storage_engines system variable on group members, for example:

disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
Primary Keys. Every table that is to be replicated by the group must have a defined primary key, or primary key equivalent where the equivalent is a non-null unique key. Such keys are required as a unique identifier for every row within a table, enabling the system to determine which transactions conflict by identifying exactly which rows each transaction has modified.
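As an illustration of the two accepted key layouts, the hypothetical tables below show an explicit primary key and a non-null unique key serving as a primary key equivalent (table and column names are examples only):

```sql
-- Explicit primary key: acceptable for Group Replication.
CREATE TABLE customers (
    id   BIGINT NOT NULL AUTO_INCREMENT,
    name VARCHAR(100),
    PRIMARY KEY (id)
) ENGINE=InnoDB;

-- Primary key equivalent: a UNIQUE key whose columns are all NOT NULL.
CREATE TABLE order_lines (
    order_id BIGINT NOT NULL,
    line_no  INT NOT NULL,
    sku      VARCHAR(32),
    UNIQUE KEY uk_order_line (order_id, line_no)
) ENGINE=InnoDB;
```

A table with only a nullable unique key, or no key at all, does not satisfy the requirement, because its rows cannot be uniquely identified for conflict detection.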
Network Performance. MySQL Group Replication is designed to be deployed in a cluster environment where server instances are very close to each other. The performance and stability of a group can be impacted by both network latency and network bandwidth. Bi-directional communication must be maintained at all times between all group members. If either inbound or outbound communication is blocked for a server instance (for example, by a firewall, or by connectivity issues), the member cannot function in the group, and the group members (including the member with issues) might not be able to report the correct member status for the affected server instance.
From MySQL 8.0.14, you can use an IPv4 or IPv6 network infrastructure, or a mix of the two, for TCP communication between remote Group Replication servers. There is also nothing preventing Group Replication from operating over a virtual private network (VPN).
Also from MySQL 8.0.14, where Group Replication server instances are co-located and share a local group communication engine (XCom) instance, a dedicated input channel with lower overhead is used for communication where possible instead of the TCP socket. For certain Group Replication tasks that require communication between remote XCom instances, such as joining a group, the TCP network is still used, so network performance influences the group's performance.
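As a sketch of a mixed IPv4/IPv6 deployment (the addresses are hypothetical), IPv6 addresses are written in square brackets in the Group Replication connection variables:

```ini
# Option-file fragment for one member; supported from MySQL 8.0.14.
# This member listens on an IPv6 address, while the seed list mixes
# IPv4 and IPv6 members of the same group.
group_replication_local_address = "[2001:db8::1]:33061"
group_replication_group_seeds   = "198.51.100.10:33061,[2001:db8::2]:33061"
```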
The following options must be configured on server instances that are members of a group.
Binary Log Active. Set --log-bin[=log_file_name]. MySQL Group Replication replicates binary log contents, therefore the binary log needs to be on for it to operate. This option is enabled by default. See Section 5.4.4, “The Binary Log”.

Slave Updates Logged. Set --log-slave-updates. Servers need to log binary logs that are applied through the replication applier. Servers in the group need to log all transactions that they receive and apply from the group. This is required because recovery is conducted by relying on the binary logs from participants in the group. Therefore, copies of each transaction need to exist on every server, even for those transactions that were not initiated on the server itself. This option is enabled by default.

Binary Log Row Format. Set --binlog-format=row. Group Replication relies on the row-based replication format to propagate changes consistently among the servers in the group, and on the row-based infrastructure to extract the information necessary to detect conflicts among transactions that execute concurrently on different servers in the group. See Section 17.2.1, “Replication Formats”.

Binary Log Checksums Off. Set --binlog-checksum=NONE. Due to a design limitation of replication event checksums, Group Replication cannot make use of them, and they must be disabled.

Global Transaction Identifiers On. Set --gtid-mode=ON. Group Replication uses global transaction identifiers to track exactly which transactions have been committed on every server instance, and thus to infer which servers have executed transactions that could conflict with transactions already committed elsewhere. In other words, explicit transaction identifiers are a fundamental part of the framework's ability to determine which transactions may conflict. See Section 17.1.3, “Replication with Global Transaction Identifiers”.

Replication Information Repositories. Set --master-info-repository=TABLE and --relay-log-info-repository=TABLE. The replication applier needs to have the master information and relay log metadata written to the mysql.slave_master_info and mysql.slave_relay_log_info system tables. This ensures that the Group Replication plugin has consistent recoverability and transactional management of the replication metadata. From MySQL 8.0.2, these options are set to TABLE by default, and from MySQL 8.0.3, the FILE setting is deprecated. See Section 17.2.4.2, “Slave Status Logs”.

Transaction Write Set Extraction. Set --transaction-write-set-extraction=XXHASH64 so that while collecting rows to log them to the binary log, the server also collects the write set. The write set is based on the primary keys of each row and is a simplified and compact view of a tag that uniquely identifies the row that was changed. This tag is then used for detecting conflicts. This option is enabled by default.

Multithreaded Appliers. Group Replication members can be configured as multithreaded slaves, enabling transactions to be applied in parallel. A nonzero value for slave_parallel_workers enables the multithreaded applier on the member, and up to 1024 parallel applier threads can be specified. Setting slave_preserve_commit_order=1 ensures that the final commit of parallel transactions is in the same order as the original transactions, as required for Group Replication, which relies on consistency mechanisms built around the guarantee that all participating members receive and apply committed transactions in the same order. Finally, the setting slave_parallel_type=LOGICAL_CLOCK, which specifies the policy used to decide which transactions are allowed to execute in parallel on the slave, is required with slave_preserve_commit_order=1. Setting slave_parallel_workers=0 disables parallel execution and gives the slave a single applier thread and no coordinator thread. With that setting, the slave_parallel_type and slave_preserve_commit_order options have no effect and are ignored.
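Taken together, the required settings above might appear in a member's option file as in the following sketch. The parallel applier values are illustrative rather than required, and enforce-gtid-consistency is included because setting gtid-mode=ON requires it, even though it is not part of the list above:

```ini
[mysqld]
# Storage engine and binary logging prerequisites
disabled_storage_engines = "MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
log-bin
log-slave-updates = ON
binlog-format    = ROW
binlog-checksum  = NONE

# Global transaction identifiers
gtid-mode = ON
enforce-gtid-consistency = ON   # prerequisite for gtid-mode=ON

# Replication metadata in system tables (the default from MySQL 8.0.2)
master-info-repository    = TABLE
relay-log-info-repository = TABLE

# Write sets for conflict detection
transaction-write-set-extraction = XXHASH64

# Optional: multithreaded applier (worker count is illustrative)
slave-parallel-workers      = 4
slave-parallel-type         = LOGICAL_CLOCK
slave-preserve-commit-order = 1
```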