What kind of backup is a database logical backup?
As we know, when a database returns data to us, it presents that data according to the logical structure we originally designed, with the business attributes we expect. At the physical storage level, however, the database software stores the data in its own internal format after a certain amount of processing.
A database logical backup means the backup software takes the database's logical objects as its unit of work and, following the logical relationships we designed, writes out text files in a predefined format, one object after another, to produce the backup.
Commonly used logical backups
Logical backup is arguably the simplest and most commonly used backup method for small and medium-sized systems. In MySQL we generally use two kinds of logical backup. One generates INSERT statements that can completely reproduce the data currently in the database. The other uses a logical backup tool to write the table data out as plain text files, with the columns separated by specific delimiter characters.
1. Generate INSERT statement backup
Each kind of logical backup has its own advantages and disadvantages, and each targets slightly different scenarios. Let's first look at the logical backup that generates INSERT statements.
In MySQL, we generally use mysqldump, one of the utility programs shipped with the MySQL server, to produce a logical backup file consisting of INSERT statements. Its basic usage is as follows:
Dumping definition and data mysql database or table
Usage: mysqldump [OPTIONS] database [tables]
OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]
OR mysqldump [OPTIONS] --all-databases [OPTIONS]
Since mysqldump is relatively simple to use, most of the information you need can be obtained by running "mysqldump --help". Here I just want to draw on a few MySQL concepts and principles to discuss the techniques available, and the points that need attention, when using mysqldump for logical backup.
We all know that most software and websites backed by a database want that database to be as highly available as possible, rather than having to shut it down and stop serving requests from time to time, because once the database is unavailable the system can no longer provide any functionality that depends on its data. For most systems, therefore, shutting down for every backup is simply not acceptable. However, the way mysqldump works is to use the parameters we pass, plus the system table information in the database, to fetch the data table by table, generate INSERT statements, and write them to the backup file. This creates a problem: while the system is running normally, data change requests are very likely still being executed, which can leave the data backed up by mysqldump inconsistent. In other words, the backed-up data may not all come from the same point in time and may not even satisfy integrity constraints. Such a backup set may not matter much for some systems, but for systems with strict requirements on data consistency and integrity it is a serious problem, and the backup set is effectively useless.
What should we do in such a scenario? We know that if we want the data in the database to be consistent, there are only two situations where this can be achieved.
First, all data is taken out at the same time;
Second, the data in the database is in a static state.
For the first situation, you may well be wondering whether it is even possible: as soon as there are two or more tables, no matter how we write the program, we cannot fetch all of them at literally the same instant through ordinary means. That is true, but do not forget that within a single transaction the database can guarantee that all data read belongs to the same point in time. So for transactional storage engines such as InnoDB or BDB, we can wrap the whole backup in one transaction to achieve consistency and integrity of the backup data, and mysqldump provides an option for exactly this: "--single-transaction", which does not interfere with any normal service the database is providing.
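As a minimal sketch of this (the database name mydb, the credentials, and the output path are illustrative assumptions, not taken from this article), a consistent dump of an InnoDB database could look like:

mysqldump -uroot -p --single-transaction mydb > /backup/mydb.sql

Because the dump runs inside a single transaction, read consistency comes from the storage engine rather than from locks, so applications can keep reading and writing while the backup runs.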
For the second case, the first thing that comes to mind is probably locking the tables that need to be backed up so that they can be read but not written. And indeed that is about all we can do: a compromise in which, for the duration of the backup, the database only serves queries and blocks writes, so the data stays in a temporarily consistent, unmodified state until mysqldump finishes, after which the write lock is released and full service resumes. mysqldump itself provides the related options "--lock-tables" and "--lock-all-tables", which lock the tables before the dump and release the locks automatically afterwards. One thing to note is that "--lock-tables" does not lock all tables to be dumped at once; it locks the tables of one database at a time. If you need to dump tables from several different databases, you must use "--lock-all-tables" to guarantee consistency and integrity of the data, as in the sketch below.
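For instance, a sketch of a multi-database dump that relies on "--lock-all-tables" (the database names db1 and db2 and the output path are illustrative) might be:

mysqldump -uroot -p --lock-all-tables --databases db1 db2 > /backup/db1_db2.sql

Here the server's tables are read-locked for the duration of the dump, so writes are blocked, but the resulting backup is consistent across both databases.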
When generating a logical backup file of INSERT statements with mysqldump, there is one particularly useful option: "--master-data[=value]". With "--master-data=1", mysqldump records the name and position of the binlog currently in use by MySQL in the dump file, in the form of a CHANGE MASTER statement. With just "--master-data" or with "--master-data=2", the CHANGE MASTER statement is written as a comment instead. This option is very handy when building a slave online, and even when no slave is involved it lets you, in some cases, continue recovery further by applying the binlogs from the recorded position.
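As a sketch (the database name, binlog file name, and position are illustrative), the command and the commented line it places near the top of the dump file might look like:

mysqldump -uroot -p --single-transaction --master-data=2 mydb > /backup/mydb.sql

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=1234;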
In some scenarios we may only want to export a particular subset of data to another database, without first building a temporary table to hold it. For this we can use mysqldump's "--where='where_condition'" option, which can only be used when dumping a single table. Beyond the tips above, mysqldump offers many other useful options for different scenarios, such as "--no-data" to dump only the scripts that create the database structure, and "--no-create-info" to omit the table creation statements from the dump file. Interested readers can read the mysqldump documentation in detail and experiment for themselves; a few sketches follow.
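A few sketches of these options (the database, table, and WHERE condition are illustrative):

mysqldump -uroot -p mydb mytable --where='id < 10000' > mytable_part.sql
mysqldump -uroot -p --no-data mydb > mydb_schema_only.sql
mysqldump -uroot -p --no-create-info mydb > mydb_data_only.sql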
2. Generate plain-text backup data files in a specific format
Besides generating INSERT statements, we can produce a logical backup by writing the database's data to text files with the columns separated by specific delimiter characters. Compared with a file of INSERT statements, such backup data takes less storage space, has a clearer data format, and is easier to edit. The drawbacks are that the backup data of multiple tables cannot live in the same backup file and that there are no statements for recreating the database structure, so one backup set consists of many files. The impact on us is simply higher maintenance and recovery costs because of the larger number of files, but this can largely be handled by writing a few simple scripts.
So how do we generally produce such a backup set? In fact, MySQL already implements the corresponding functionality for us.
In MySQL, the following two methods are generally used to obtain plain text backup files with customizable delimiters.
1. Export by executing a SELECT ... INTO OUTFILE ... FROM ... statement.
MySQL provides a form of the SELECT syntax designed precisely to let users write a chosen set of data to a text file in a specified format through an SQL statement, and it also provides practical tools and statements to load the exported file back into the database as-is (see the LOAD DATA sketch at the end of this subsection). Isn't that exactly what we need for a backup?
This statement has several clauses worth noting:
The "FIELDS ESCAPED BY ['name']" that implements the character escape function escapes the characters that need to be escaped in the SQL statement;
You can change the field's "FIELDS [OPTIONALLY] ENCLOSED BY 'name'" whose content is "wrapped". If "OPTIONALLY" is not used, all types of data including numeric types will be "wrapped". After "OPTIONALLY" is used, numeric type data will not be Will be "wrapped" by the specified character.
You can set the separator between each two fields through "FIELDS TERMINATED BY";
And "LINES TERMINATED BY" will tell the MySQL output file what characters need to be added at the end of each record.
350024,24,"D","abcd"
350025,25,"A","abcd"
... ...
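To load such a file back into the database, as mentioned above, MySQL provides the LOAD DATA INFILE statement; a sketch with matching delimiters (table name and path as in the example above) would be:

LOAD DATA INFILE '/tmp/test_outfile.txt' INTO TABLE test_outfile
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n';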
2. Export through mysqldump
We probably all know that mysqldump can export the data in a database as backup files made of INSERT statements. In fact, besides generating INSERT statements, mysqldump can also do what "SELECT ... INTO OUTFILE ... FROM ..." does above, and at the same time generate the script that creates the corresponding database structure.
For example:
root@sky:~# ls -l /tmp/mysqldump
total 0
root@sky:~# mysqldump -uroot -T/tmp/mysqldump test test_outfile --fields-enclosed-by=\" --fields-terminated-by=,
root@sky:~# ls -l /tmp/mysqldump
total 8
-rw-r--r-- 1 root root 1346 2008-10-14 22:18 test_outfile.sql
-rw-rw-rw- 1 mysql mysql 2521 2008-10-14 22:18 test_outfile.txt
root@sky:~# cat /tmp/mysqldump/test_outfile.txt
350021, 21,"A","abcd"
350022,22,"B","abcd"
350023,23,"C","abcd"
350024,24,"D","abcd"
350025,25,"A","abcd"
... ...
root@sky:~# cat /tmp/mysqldump/test_outfile.sql
-- MySQL dump 10.11
--
-- Host: localhost    Database: test
-- ------------------------------------------------------
-- Server version       5.0.51a-log

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--
-- Table structure for table `test_outfile`
--

DROP TABLE IF EXISTS `test_outfile`;
SET @saved_cs_client     = @@character_set_client;
SET character_set_client = utf8;
CREATE TABLE `test_outfile` (
  `id` int(11) NOT NULL default '0',
  `t_id` int(11) default NULL,
  `a` char(1) default NULL,
  `mid` varchar(32) default NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
SET character_set_client = @saved_cs_client;

/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

-- Dump completed on 2008-10-14 14:18:23
This output structure is well suited to use as a backup. Of course, if multiple tables need to be dumped, two corresponding files are generated for each table.
This concludes the discussion of logical backup in MySQL data backup. For more related content, please follow the PHP Chinese website (www.php.cn)!