A record of a MySQL million-row data migration
Suppose there is a scenario like this: a WeChat mini program is developed and, riding the popularity of the Double Eleven event, it quickly accumulates more than one million users within a month. Tracking code was added to the mini program pages to collect formIds, which are used to send template message notifications to WeChat users.
As the amount of data grew, the disk space on the old server started to run short. I had recently written a new framework specifically for mini program backend development, so I wanted to migrate the original data to the new system's database. I bought a 4-core, 8 GB machine and started the data migration. Below is a brief record of the process.
mysqldump migration
In everyday development, the usual way to back up and migrate data is to export the database as a SQL file with the mysqldump tool and then import that SQL file into the new database.
In practice, exporting a database with millions of rows to a SQL file with mysqldump takes a few minutes, and the exported file is about 1 GB. Copying that 1 GB file to the other server with scp takes another few minutes. On the new server I imported the data with the source command; it ran all night without finishing, and the CPU stayed maxed out.
Script migration
Exporting and importing data by operating the database directly from the command line is convenient, but when the amount of data is large it is often time-consuming and demands more from the server. If the time requirement for the migration is not very strict, you can try writing a script to migrate the data. Although I haven't actually tried it, I think there are roughly two scripting approaches.
The first method is to run a migration script on the migration target server, connect remotely to the database on the source server, read the source data in chunks by setting query conditions, and write each chunk to the target database after it is read. This migration method may be relatively inefficient: data export and import form a synchronous process, so writing has to wait until reading is complete. If the query conditions are designed reasonably, multiple migration scripts can also be started in a multi-threaded manner to migrate in parallel. A rough sketch of this approach follows.
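To make the idea concrete, here is a minimal sketch of the chunked approach in Go (the language suggested below for the migration scripts). The connection strings are placeholders, and the columns read from dc_mp_fans (id, openid, nickname) are assumed to match the wxa_fans fields from the example later in the post; this is an illustration, not the author's actual script.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL driver for database/sql (assumed dependency)
)

const batchSize = 1000

func main() {
	// Placeholder DSNs for the source and target databases.
	src, err := sql.Open("mysql", "user:pass@tcp(source-host:3306)/source_db")
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()
	dst, err := sql.Open("mysql", "user:pass@tcp(target-host:3306)/target_db")
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	lastID := 0 // chunk the reads on the auto-increment id
	for {
		rows, err := src.Query(
			"SELECT id, openid, nickname FROM dc_mp_fans WHERE id > ? ORDER BY id LIMIT ?",
			lastID, batchSize)
		if err != nil {
			log.Fatal(err)
		}
		count := 0
		for rows.Next() {
			var id int
			var openid, nickname string
			if err := rows.Scan(&id, &openid, &nickname); err != nil {
				log.Fatal(err)
			}
			// Write the row to the target table; a real script would batch these inserts.
			if _, err := dst.Exec(
				"INSERT INTO wxa_fans (id, openid, nickname) VALUES (?, ?, ?)",
				id, openid, nickname); err != nil {
				log.Fatal(err)
			}
			lastID = id
			count++
		}
		rows.Close()
		if count < batchSize {
			break // last (partial) chunk reached
		}
	}
	log.Println("migration finished")
}
```

Run with several non-overlapping id ranges, one goroutine per range, this becomes the multi-threaded parallel variant described above.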
The second method combines redis to build a "produce and consume" migration pipeline. The source server acts as the data producer: it runs a multi-threaded script that reads the database in parallel and pushes the data onto a redis queue. The target server acts as the consumer: it also runs a multi-threaded script that connects to redis remotely, reads the queue in parallel, and writes what it reads into the target database. Compared with the first method this is an asynchronous solution: data export and import can happen at the same time, and with redis as the transfer station the efficiency improves considerably. A rough sketch of the producer and consumer follows.
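Here is a rough sketch of the producer/consumer idea, again in Go, using a redis list as the queue. The queue key, the addresses, the simplified row structure, and the go-redis client library are all illustrative assumptions; in a real migration the producer would read rows from the source database and the consumer would insert them into the target database.

```go
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	"github.com/redis/go-redis/v9" // one possible redis client for Go
)

// fanRow is an assumed, simplified row structure.
type fanRow struct {
	ID       int    `json:"id"`
	Openid   string `json:"openid"`
	Nickname string `json:"nickname"`
}

const queueKey = "migrate:fans" // assumed queue name

// produce runs on the source server: it pushes rows onto a redis list.
// In a real script the rows would be read from the source database,
// in parallel, by several goroutines.
func produce(ctx context.Context, rdb *redis.Client, rows []fanRow) error {
	for _, r := range rows {
		data, err := json.Marshal(r)
		if err != nil {
			return err
		}
		if err := rdb.LPush(ctx, queueKey, data).Err(); err != nil {
			return err
		}
	}
	return nil
}

// consume runs on the target server: it pops rows off the queue and would
// write each one to the target database (here it only logs the insert).
func consume(ctx context.Context, rdb *redis.Client) {
	for {
		res, err := rdb.BRPop(ctx, 5*time.Second, queueKey).Result()
		if err == redis.Nil {
			return // nothing arrived for 5 seconds: assume the queue is drained
		}
		if err != nil {
			log.Fatal(err)
		}
		var r fanRow // res[0] is the key, res[1] is the payload
		if err := json.Unmarshal([]byte(res[1]), &r); err != nil {
			log.Fatal(err)
		}
		log.Printf("would INSERT INTO wxa_fans: %+v", r)
	}
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "redis-host:6379"}) // placeholder address

	if err := produce(ctx, rdb, []fanRow{{ID: 1, Openid: "o_xxx", Nickname: "demo"}}); err != nil {
		log.Fatal(err)
	}
	consume(ctx, rdb)
}
```

Running several producer and consumer goroutines against the same list gives the parallel, asynchronous behaviour described above.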
For either approach, the migration script can be written in Go, as in the sketches above: its native concurrency features make it easy to read and write data in parallel and improve migration efficiency.
File Migration
Migrating with mysqldump is too inefficient, and the scripting approach has a higher coding cost. After comparing the two and reading up online, I finally chose MySQL's
select ... into outfile 'file.txt' and load data infile 'file.txt' into table ...
commands, which complete the migration of millions of rows by exporting and importing data files.
Export the data file from the source database
select * from dc_mp_fans into outfile '/data/fans.txt';
Copy the data file to the target server
zip fans.zip /data/fans.txt
scp fans.zip root@ip:/data/
Import the file into the target database
unzip /data/fans.zip
load data infile '/data/fans.txt' into table wxa_fans(id,appid,openid,unionid,@dummy,created_at,@dummy,nickname,gender,avatar_url,@dummy,@dummy,@dummy,@dummy,language,country,province,city,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy);
Following these steps, the cross-server migration of a table with millions of rows is completed in a few minutes.
Notes
Executing the load data infile and into outfile commands in MySQL requires the secure_file_priv option to be enabled. You can check whether it is enabled with show global variables like '%secure%';. The default value of NULL means that the import and export commands are not allowed.
Edit the MySQL configuration with vim /etc/my.cnf and set secure_file_priv to an empty value:
[mysqld]
secure_file_priv=''
After restarting the MySQL service, the import and export commands can be executed.
In the above example, the data is migrated from the dc_mp_fans table of the source database to the wxa_fans table of the target database. The fields of the two tables are:
[dc_mp_fans table structure]
[wxa_fans table structure]
When importing, the column list maps the columns of the data file to the fields of the target table, and file columns that are not needed can be discarded by assigning them to @dummy.
Based on this data migration experience, the summary is: mysqldump export and import is convenient but far too slow once the table reaches millions of rows; migration scripts (possibly with redis as a transfer queue) are flexible and can run in parallel but cost more to write; and select ... into outfile combined with load data infile is the fastest way to move a large table across servers, provided secure_file_priv can be configured.