MySQL migration is a routine task in a DBA's daily maintenance. Migration, at its core, simply means moving an existing object somewhere else while preserving its integrity and continuity — much like two children on a soft beach carrying a pile of sand to another spot to build the castle in their hearts.
In the production environment, migration work is required in the following situations:
Insufficient disk space. For example, in some older projects the chosen machine model may not suit the database, and as time goes by the disks run short;
Business bottlenecks. For example, a single machine handles all read and write traffic, and the business pressure becomes overwhelming. If the I/O pressure is still within an acceptable range, a read-write separation solution can be adopted instead;
Machine bottlenecks. The main machine bottlenecks are disk I/O, memory, and CPU. Besides tuning around the bottleneck, migration is also a good solution;
Project restructuring. Some projects' databases span computer rooms: nodes may need to be added in a different computer room, or machines moved from one computer room to another. Or, when different businesses share the same server, migration is done to relieve the pressure on that server and to simplify maintenance.
In a word, migration is a last resort. The purpose of implementing migration work is to keep the business running smoothly and continuously.
At bottom, MySQL migration is all about moving the data around; extend that a little and it is simply backup and recovery performed while keeping the business running smoothly and continuously. The real question is how to back up and recover quickly and safely.
On one side, backup. Every master node has a slave or standby node, and that node holds a backup, which may be full or incremental. The online backup method may be mysqldump, xtrabackup, or mydumper. For small databases (under 10GB) we can use mysqldump. For large databases (hundreds of GB or TB level), mysqldump is not suitable: on the one hand it takes locks, and on the other it takes far too long. In that case choose xtrabackup or copy the data directory directly. Copying the data directory can be done between machines with rsync, and the time spent depends mostly on the network. With xtrabackup, the time is spent mainly on the backup itself and on network transfer. If a full or per-database backup file already exists, that is the best source of a backup. If the standby database can tolerate stopping its service, copying the data directory directly is the fastest method; if it cannot, xtrabackup (which does not lock InnoDB tables) is the best compromise.
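As a rough illustration of the backup side, here is a minimal command sketch. The host addresses, credentials, database names, and paths are placeholders, not values from the original environment.

```bash
# Small database (under 10GB): logical backup with mysqldump.
# --single-transaction takes a consistent InnoDB snapshot without locking tables;
# --master-data=2 writes the binlog coordinates into the dump as a comment.
mysqldump -uroot -p --single-transaction --master-data=2 --routines --triggers \
    --databases app_db | gzip > /backup/app_db.sql.gz

# Large database (hundreds of GB and up): physical backup with xtrabackup
# (innobackupex), which does not lock InnoDB tables.
innobackupex --user=root --password=xxx --slave-info /backup/full/

# If the standby node may be stopped, copying the data directory is fastest;
# rsync handles the transfer between machines.
rsync -avzP /data/mysql/ root@10.0.0.103:/data/mysql/
```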
On the other side, recovery. Backups of small databases (under 10GB) can simply be imported. For large databases (hundreds of GB or TB level), recovery is not difficult once the backup files are on the target machine. For specific recovery methods, see Section 4.
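Correspondingly, a minimal sketch of the recovery side, again with placeholder paths:

```bash
# Small database: feed the mysqldump file straight into the target instance.
gunzip < /backup/app_db.sql.gz | mysql -uroot -p

# xtrabackup backup: apply the logs first, then copy the files back into an
# empty data directory while mysqld is stopped, and fix ownership.
BACKUP_DIR=/backup/full/xxxx-xx-xx_xx-xx-xx   # timestamped directory created by innobackupex
innobackupex --apply-log "$BACKUP_DIR"
innobackupex --copy-back "$BACKUP_DIR"
chown -R mysql:mysql /data/mysql
```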
Now that we understand why to migrate and how to do it, let's look at how this is handled in production. Different application scenarios call for different solutions.
Before reading the hands-on scenarios, please note the following conventions:
To protect privacy, the server IPs and other details in this article have been altered;
If servers are in the same computer room, the D segment (last octet) of the server IP is used to refer to the server; for specific IPs, see the architecture diagram;
If servers are in different computer rooms, the C and D segments of the server IP are used to refer to the server; for specific IPs, see the architecture diagram;
Each scenario gives the method, but not every command for every step: on the one hand that would make the article far too long, and on the other, once you know the approach the specific commands follow naturally — it only depends on how well the knowledge is mastered and how well information can be looked up;
See Section 5 for the precautions to take during the hands-on work.
3.1 Scenario 1: Migrating the slave library of a one-master-one-slave structure
Following the idea of going from easy to hard, we start with the simplest structure. Project A originally had a master-slave structure: 101 is the master node and 102 is the slave node. Due to business needs, the slave node 102 is to be migrated to the new node 103. The architecture diagram is shown in Figure 1. The data on the 102 slave node is too large to back up with mysqldump, so after discussion with R&D an agreed plan was formed.
Figure 1 Master-Slave structure migration slave library architecture diagram
The specific method is as follows; a command sketch for the key steps is given after the list:
R&D switches the read traffic of 102 over to the master database;
Confirm the MySQL status of 102 (mainly check the PROCESS LIST) and observe the machine traffic; once everything is confirmed, stop the slave node service on 102;
Create a new MySQL instance on 103; once it is built, stop the MySQL service and mv the entire data directory elsewhere as a backup;
Use rsync to copy the entire MySQL data directory of 102 to 103;
While the copy runs, grant privileges on 101 so that 103 has permission to pull the binlog (REPLICATION SLAVE, REPLICATION CLIENT);
After the copy completes, modify server_id in the configuration file on 103; be careful that it is not the same as the one on 102;
Start the MySQL instance on 103, paying attention to the data file path in the configuration file and the permissions of the data directory;
Log in to the MySQL instance on 103 and check the slave status with SHOW SLAVE STATUS; you should see Seconds_Behind_Master decreasing;
Once Seconds_Behind_Master reaches 0, synchronization is complete. At this point you can use pt-table-checksum to verify that the data on 101 and 103 is consistent, but it is time-consuming and puts load on the master node, so data consistency can be verified together with development;
Communicate with R&D: besides verifying data consistency, account permissions must also be verified to prevent access errors after the business is switched back;
After the above steps, coordinate with R&D to switch part of the read traffic from 101 over to 103 and observe the business;
If the business shows no problems, the migration is a success.
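The key commands for this scenario might look roughly like the sketch below. It assumes the nodes are reachable as 10.0.0.101/102/103 (stand-ins for 101/102/103), that the replication account is named repl, and that the data directory is /data/mysql; all of these are placeholders rather than values from the article.

```bash
# On 102 (slave service already stopped): push the whole data directory to 103.
rsync -avhW --progress /data/mysql/ root@10.0.0.103:/data/mysql/

# On 101: allow 103 to pull the binlog (account name and password are made up here).
mysql -uroot -p -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* \
  TO 'repl'@'10.0.0.103' IDENTIFIED BY 'repl_password';"

# On 103: server_id must differ from 101 and 102; fix data directory ownership, then start.
sed -i 's/^server_id.*/server_id = 103/' /etc/my.cnf
chown -R mysql:mysql /data/mysql
/etc/init.d/mysqld start

# On 103: replication should pick up from where 102 had stopped.
mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_.*_Running|Seconds_Behind_Master'

# Optional consistency check; it runs against the master, so schedule it carefully.
pt-table-checksum --host=10.0.0.101 --user=checksum --password=xxx --databases=app_db
```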
3.2 Scenario 2: Migrating specified libraries in a one-master-one-slave structure
Now that we know how to migrate only the slave library in a one-master-one-slave setup, let's look at how to migrate the master and slave nodes at the same time. Because different businesses access the same server, the pressure on the single instance is too high and it is inconvenient to manage, so the plan is to migrate master node 101 and slave node 102 to the new machines 103 and 104 at the same time, with 103 as the master node and 104 as the slave node. The architecture diagram is shown in Figure 2. Only specified libraries need to be migrated; their capacity is not large and real-time consistency is not required.
Figure 2 One master and one slave structure migration specified library architecture diagram
The specific method is as follows; a command sketch is given after the list:
Create new instances on 103 and 104 and set up the master-slave relationship between them; at this point both nodes carry no load;
Export the data from 102; the proper way is to configure a scheduled task and run the export during off-peak hours; mysqldump is chosen here;
On 102, collect the accounts and privileges required for the specified libraries;
After the export on 102 completes, use rsync to transfer the dump to 103, compressing it first if necessary;
Import the data on 103; it will be replicated to 104 automatically; monitor the server status and MySQL status;
Once the import on 103 and the synchronization on 104 are complete, grant privileges on 103 according to the accounts collected from 102; when done, notify R&D to check the data and account permissions;
After the above is done, coordinate with R&D to migrate the business of 101 and 102 over to 103 and 104, and observe the business status;
If there is no problem with the business, the migration is successful.
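For this scenario, the export/transfer/import steps might look like the following sketch; the database names (db1, db2), hosts, credentials, and paths are placeholders.

```bash
# On 102, during off-peak hours: dump only the databases being migrated.
mysqldump -uroot -p --single-transaction --routines --triggers \
    --databases db1 db2 | gzip > /backup/db1_db2.sql.gz

# Also capture the accounts and grants that touch those databases.
pt-show-grants --user=root --password=xxx > /backup/grants.sql

# Ship the dump to 103 and import it; 104 picks it up through replication.
rsync -avP /backup/db1_db2.sql.gz root@10.0.0.103:/backup/
gunzip < /backup/db1_db2.sql.gz | mysql -uroot -p
```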
3.3 Scenario 3: Bilateral migration of specified libraries in a one-master-one-slave structure
Next, let's look at how to do a bilateral migration of specified libraries in a one-master-one-slave structure. Again, because of shared services, the server is under heavy pressure and management is chaotic, so the plan is to migrate master node 101 and slave node 102 to the new machines 103, 104, 105, and 106 at the same time: 103 acts as the master of 104, 104 as the slave of 103, 105 as the master of 106, and 106 as the slave of 105. The architecture diagram is shown in Figure 3. Only specified libraries need to be migrated; their capacity is not large and real-time consistency is not required. As you can see, this migration is very similar to Scenario 2, except that it is done twice.
Figure 3 One-master-one-slave structure, bilateral migration of specified libraries architecture diagram
The specific method is as follows:
Create new instances on 103 and 104 and set up the master-slave relationship between them; at this point both nodes carry no load;
On 102, export the specified library data required by 103; the proper way is to configure a scheduled task and run the export during off-peak hours; mysqldump is chosen here;
On 102, collect the accounts and privileges required for the specified libraries needed by 103;
After exporting on 102 the specified library data required by 103, use rsync to transfer it to 103, compressing it first if necessary;
Import the data on 103; it will be replicated to 104 automatically; monitor the server status and MySQL status;
Once the import on 103 and the synchronization on 104 are complete, grant privileges on 103 according to the accounts collected from 102; when done, notify R&D to check the data and account permissions;
After the above is completed, collaborate with R&D to migrate the business of 101 and 102 to 103 and 104, and observe the business status;
Create new instances on 105 and 106 and set up the master-slave relationship between them; at this point both nodes carry no load;
On 102, export the specified library data required by 105; the proper way is to configure a scheduled task and run the export during off-peak hours; mysqldump is chosen here;
On 102, collect the accounts and privileges required for the specified libraries needed by 105;
After exporting on 102 the specified library data required by 105, use rsync to transfer it to 105, compressing it first if necessary;
Import the data on 105; it will be replicated to 106 automatically; monitor the server status and MySQL status;
Once the import on 105 and the synchronization on 106 are complete, grant privileges on 105 according to the accounts collected from 102; when done, notify R&D to check the data and account permissions;
After the above is completed, collaborate with R&D to migrate the businesses of 101 and 102 to 105 and 106, and observe the business status;
If there are no problems with all businesses, the migration is successful.
3.4 Scenario 4: Complete master-slave migration of a one-master-one-slave structure
Next, let's see how a complete master-slave migration of a one-master-one-slave structure is done. It is similar to Scenario 2, except that here all libraries are migrated. Because of an I/O bottleneck on master node 101, the plan is to migrate master node 101 and slave node 102 to the new machines 103 and 104 at the same time, with 103 as the master node and 104 as the slave node; after the migration the old master and slave nodes are decommissioned. The architecture diagram is shown in Figure 4. This is a full-database migration, the capacity is large, and the data must stay real-time. This migration is special in that the strategy is to replace the new slave library first and then the new master library, so the method is a little more complicated.
Figure 4 One-master-one-slave structure, complete master-slave migration architecture diagram
The specific method is as follows:
R&D switches the read traffic of 102 over to the master database;
Confirm the MySQL status of 102 (mainly check PROCESS LIST and MASTER STATUS) and observe the machine traffic; once everything is confirmed, stop the slave node service on 102;
Create a new MySQL instance on 104; once it is built, stop the MySQL service and mv the entire data directory elsewhere as a backup. Note that the operation here is on 104, the future slave library;
Use rsync to copy the entire MySQL data directory of 102 to 104;
While the copy runs, grant privileges on 101 so that 104 has permission to pull the binlog (REPLICATION SLAVE, REPLICATION CLIENT);
After the copy completes, modify server_id in the configuration file on 104; be careful that it is not the same as the one on 102;
Start the MySQL instance on 104, paying attention to the data file path in the configuration file and the permissions of the data directory;
Log in to the MySQL instance on 104 and check the slave status with SHOW SLAVE STATUS; you should see Seconds_Behind_Master decreasing;
Once Seconds_Behind_Master reaches 0, synchronization is complete. At this point you can use pt-table-checksum to verify that the data on 101 and 104 is consistent, but it is time-consuming and puts load on the master node, so data consistency can be verified together with development;
Besides verifying data consistency, account permissions must also be verified to prevent access errors after the business is migrated;
Collaborate with R&D to switch the read traffic that previously went to the 102 slave node over to 104;
Use the data of 102 to turn 103 into a slave node of 101, by the same method as above;
Now comes the key point: we need to turn 104 into a slave library of 103 (see the sketch below);
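The article does not spell this last step out, so here is one hedged way to do it without GTID. It assumes log-bin and log_slave_updates are enabled on 103, and that the binlog file names and positions shown are purely illustrative.

```bash
# On 104: stop replication and note how far it got in 101's binlog.
mysql -uroot -p -e "STOP SLAVE; SHOW SLAVE STATUS\G" \
  | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'

# On 103: replay up to exactly that point of 101's binlog, then stop there.
mysql -uroot -p -e "STOP SLAVE; START SLAVE UNTIL MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=456789;"
# ...wait until 103's Exec_Master_Log_Pos reaches 456789, then record 103's own coordinates:
mysql -uroot -p -e "STOP SLAVE; SHOW MASTER STATUS;"

# On 104: repoint replication to 103 using the File/Position that 103 just reported.
mysql -uroot -p -e "CHANGE MASTER TO MASTER_HOST='10.0.0.103', MASTER_USER='repl', \
  MASTER_PASSWORD='repl_password', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=107; START SLAVE;"
```

Once 104 follows 103, what remains is to switch the write traffic from 101 to 103, in line with the "replace the new slave first, then the new master" strategy described above.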
3.5 Scenario 5: Dual-master structure cross-computer-room migration
Let's look at how a cross-computer-room migration of a dual-master structure is done. For disaster-recovery reasons, a certain project spans computer rooms and uses a dual-master structure, so writes can happen on both sides. Because of disk space issues, the machines at location A need to be replaced, so the plan is to migrate master node 1.101 and slave node 1.102 to the new machines 1.103 and 1.104 at the same time, with 1.103 as the master node and 1.104 as the slave node; 2.101 and 2.102 at location B stay unchanged, but after the migration 1.103 and 2.101 become dual masters of each other. The architecture diagram is shown in Figure 5. Because it is a dual-master structure with writes on both sides, replacing a master node means one node must stop serving.
Figure 5 Dual-master structure cross-computer-room migration architecture diagram
Log in to the MySQL instance on 1.103 and check the slave status with SHOW SLAVE STATUS; you should see Seconds_Behind_Master decreasing;
Once Seconds_Behind_Master reaches 0, synchronization is complete. At this point you can use pt-table-checksum to verify that the data on 1.101 and 1.103 is consistent, but it is time-consuming and puts load on the master node, so data consistency can be verified together with development;
Use the same method to make 1.104 a slave library of 1.103;
Communicate with R&D: besides verifying data consistency, account permissions must also be verified to prevent access errors after the business is migrated;
At this point, what we have to do is make 1.103 a slave library of 2.101; for the specific method, refer to Scenario 4;
Note that the odd/even auto-increment configuration of 1.103 needs to be consistent with 1.101 (see the configuration sketch after this list);
After the above steps, coordinate with R&D to switch the read and write traffic of 1.101 over to 1.103, and the read traffic of 1.102 over to 1.104, and observe the business status;
If there are no problems with the business, the migration is successful.
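A plausible reading of the "odd and even" configuration above is the auto-increment settings used so that the two write nodes never generate colliding primary keys. A sketch with assumed values follows; check the actual settings on 1.101 before applying anything.

```bash
# On 1.103 (taking over 1.101's role): keep the same "odd" side as 1.101.
cat >> /etc/my.cnf <<'EOF'
auto_increment_increment = 2
auto_increment_offset    = 1   # assumed: 1.101 used the odd side
log-bin                  = mysql-bin
log_slave_updates        = 1
EOF

# 2.101 in the other computer room keeps the "even" side:
#   auto_increment_increment = 2
#   auto_increment_offset    = 2
```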
3.6 Scenario 6: Multi-instance cross-computer-room migration
Next, let's look at how a multi-instance cross-computer-room migration is done. The instance layout of each machine is shown in Figure 6. The purpose of this migration is data repair: instances 7938 and 7939 are created on 2.117 to replace earlier instances whose data had become abnormal. For business reasons, some libraries are written only at location A and some only at location B, so replication filtering is involved.
Figure 6 Multi-instance cross-machine room migration architecture diagram
The specific method is as follows; a command sketch is given after the list:
On 1.113, use innobackupex to back up the 7936 instance; note that you need to specify the databases and add the --slave-info parameter;
After the backup is completed, copy the compressed file to 2.117;
On 2.117, create the data directory and the related directories referenced in the configuration file;
On 2.117, use innobackupex to apply the logs;
On 2.117, use innobackupex to copy the data back;
On 2.117, modify the configuration file, paying attention to these parameters: replicate-ignore-db, innodb_file_per_table = 1, read_only = 1, server_id;
On 2.117, change the permissions of the data directory;
Grant privileges on 1.112 so that 2.117 has permission to pull the binlog (REPLICATION SLAVE, REPLICATION CLIENT);
On 2.117, run CHANGE MASTER TO pointing at 1.112, taking LOG FILE and LOG POS from xtrabackup_slave_info;
On 2.117, START SLAVE and check the slave library status;
Creating instance 7939 on 2.117 follows a similar method, except that the configuration file needs to specify replicate-wild-do-table;
Work with development to verify data consistency and account permissions to prevent access errors after the business is moved;
After the above steps, coordinate with R&D to migrate the corresponding business to the 7938 and 7939 instances on 2.117, and observe the business status;
If there are no problems with the business, the migration is successful.
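A rough command sketch for this scenario follows. The instance ports (7936/7938), config-file paths, socket path, database names, and binlog coordinates are placeholders, and the 1.x/2.x addresses are the article's shorthand rather than real IPs.

```bash
# On 1.113: back up only the needed databases from the 7936 instance, recording the
# slave position so the new instance can later be pointed at 1.112.
innobackupex --defaults-file=/etc/my_7936.cnf --user=root --password=xxx \
    --databases="db_a db_b" --slave-info /backup/7936/

# Compress and ship to 2.117.
tar -czf /backup/7936.tar.gz -C /backup 7936
rsync -avP /backup/7936.tar.gz root@2.117:/backup/

# On 2.117: unpack, apply the logs, copy the files back for the new 7938 instance.
tar -xzf /backup/7936.tar.gz -C /backup/
BACKUP_DIR=/backup/7936/xxxx-xx-xx_xx-xx-xx   # timestamped directory from innobackupex
innobackupex --apply-log "$BACKUP_DIR"
innobackupex --defaults-file=/etc/my_7938.cnf --copy-back "$BACKUP_DIR"
chown -R mysql:mysql /data/mysql_7938

# Point the new instance at 1.112 using the coordinates saved in xtrabackup_slave_info,
# then start replication; replicate-ignore-db / replicate-wild-do-table live in my_7938.cnf.
cat "$BACKUP_DIR/xtrabackup_slave_info"
mysql -S /tmp/mysql_7938.sock -uroot -p -e "CHANGE MASTER TO MASTER_HOST='1.112', \
  MASTER_USER='repl', MASTER_PASSWORD='xxx', MASTER_LOG_FILE='mysql-bin.000010', \
  MASTER_LOG_POS=4; START SLAVE; SHOW SLAVE STATUS\G"
```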
After introducing the migration solutions for different scenarios, you need to pay attention to the following points:
For a database migration, if events are involved, remember to turn on the event_scheduler parameter on the master node;
Whatever the migration scenario, always keep an eye on the server status, such as disk space and network jitter; continuous monitoring of the business is also essential;
The LOG FILE and LOG POS in CHANGE MASTER TO must not be wrong; if they are specified incorrectly, the consequence is inconsistent data or a failed master-slave setup;
Do not execute scripts in the $HOME directory; remember to run them in the data directory;
Migration can be automated with scripts, but don't overdo it; any script must be tested;
Think twice before executing any command, and understand the meaning of each of its parameters;
In a multi-instance environment, shut down MySQL with mysqladmin, and make sure you do not shut down an instance that is in use;
Remember to set read_only = 1 on the slave library; this avoids many problems;
The server_id of each machine must be different, otherwise synchronization anomalies will occur;
Correctly configure replicate-ignore-db and replicate-wild-do-table;
Remember to set innodb_file_per_table = 1 on newly created instances; in some of the scenarios above, this parameter was 0 on the old instance, so ibdata1 was huge and backup and transfer took a lot of time;
When compressing data with gzip, note that gzip deletes the source file after compression completes;
All operations must be performed on a slave or standby node; if performed on the master node, the master is very likely to go down;
xtrabackup does not lock InnoDB tables, but it does lock MyISAM tables. Before operating, remember to check whether the current database has tables using the MyISAM storage engine; if so, either handle them separately or change the table engine (a quick check is sketched below).
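Two of the precautions above can be turned into quick commands; a hedged sketch:

```bash
# Keep the slave read-only (precaution about read_only = 1).
mysql -uroot -p -e "SET GLOBAL read_only = 1;"

# List MyISAM tables before running xtrabackup, since MyISAM tables get locked.
mysql -uroot -p -N -e "
  SELECT table_schema, table_name
  FROM information_schema.tables
  WHERE engine = 'MyISAM'
    AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');"
```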
In actual MySQL migration, the following techniques can be used:
For any migration, the LOG FILE should be taken from Relay_Master_Log_File (the name of the master binlog currently being replayed) and the LOG POS from Exec_Master_Log_Pos (the executed position within that binlog), as shown in the sketch after this list;
When copying data with rsync, combining it with expect and nohup is a wonderful combination;
When using innobackupex to back up data, you can use gzip for compression;
When backing up data with innobackupex, you can add the --slave-info parameter to make it easy to set up the new slave library;
When backing up data with innobackupex, you can add the --throttle parameter to limit I/O and reduce the impact on the business; you can also add --parallel=n to speed up the backup, but note that when using tar stream compression, --parallel is ineffective;
For data backup and recovery, make a to-do list, draw up the process, and prepare the commands to be executed in advance;
A good way to quickly copy folders locally is to use rsync with the following parameters: -avhW --no-compress --progress;
To quickly copy data between different partitions, you can use dd, or use the more reliable method of backing up to a removable hard disk and carrying it to the server; better still, deliver the hard drive directly to the other location.
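To make the first and last few tips concrete, a short hedged sketch (the host, paths, and binlog coordinates are placeholders):

```bash
# Tip 1: read the coordinates that matter from the old slave before CHANGE MASTER TO:
# Relay_Master_Log_File -> MASTER_LOG_FILE, Exec_Master_Log_Pos -> MASTER_LOG_POS.
mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'

# Long-running rsync under nohup so a dropped SSH session does not kill the transfer.
nohup rsync -avzP /data/mysql/ root@10.0.0.103:/data/mysql/ > /tmp/rsync.log 2>&1 &

# Fast local copy of a folder on the same machine.
rsync -avhW --no-compress --progress /data/mysql/ /data2/mysql/
```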
First, the purpose of migration is to allow the business to run smoothly and continuously;
Second, the core of migration is how to keep master-slave synchronization going: we need to maintain master-slave synchronization across different servers and work out solutions between different businesses.
Third, business switching must consider permission issues between the different MySQL servers, the order of read-write separation and master-slave relationships across machines, and the impact of cross-computer-room calls on the business.
Readers can refer to the ideas provided in this article during the migration process. But how to ensure that each operation runs correctly requires careful thinking before proceeding.
As a digression, "The most important thing to prove that you are capable is to keep everything under your control."