
What is a database logical backup?

Everyone knows that when a database returns data to us, it presents the data according to the logical format and business-logic relationships we originally designed and expected. At the physical storage level, however, the database software stores that same data in its own specific on-disk format after a certain amount of processing.

A database logical backup means that the backup software works in units of the database's logical objects, following the logical relationships we originally designed, and writes them out one by one as text files in a predefined logical format, thereby achieving the goal of backup.

Commonly used logical backup methods

Logical backup is arguably the simplest and most commonly used backup method for small and medium-sized systems. In MySQL, we commonly use two kinds of logical backup. One generates INSERT statements that can completely reproduce the data currently in the database. The other uses logical backup software to write the table data out to text files, with the columns separated by specific delimiter characters.

1. Backups that generate INSERT statements

Both kinds of logical backup have their own advantages and disadvantages, and the scenarios they suit differ slightly. Let's first look at the logical backup that generates INSERT statements.

In MySQL, we generally use mysqldump, one of the utility programs shipped with the MySQL database software, to produce a logical backup file containing INSERT statements. Its basic usage is as follows:

Dumping definition and data mysql database or table

Usage: mysqldump [OPTIONS] database [tables]

OR mysqldump [OPTIONS] --databases [OPTIONS] DB1 [DB2 DB3...]

OR mysqldump [OPTIONS] --all-databases [OPTIONS]

Since mysqldump is fairly simple to use, most of the information you need can be obtained by running "mysqldump --help". Here I just want to combine a few concepts and principles of the MySQL database and discuss the techniques available, and the points that deserve attention, when we use mysqldump for logical backups.

We all know that most software or websites that use a database want that database to provide the highest possible availability, rather than having to shut down and stop serving requests from time to time. Once the database cannot provide service, the system can no longer offer any dynamic functionality that depends on the data. Therefore, for most systems, shutting down for every backup is simply not acceptable. However, the way the mysqldump program works is to use the parameters we give it, plus the system table information in the database, to fetch the data table by table, generate INSERT statements, and write them to the backup file. This creates a problem: while the system is running normally, data-change requests are very likely still being executed, which may leave the data backed up by mysqldump inconsistent. In other words, the backed-up data may not all be from the same point in time, and may not even satisfy integrity constraints. Such a backup set may not be a big problem for some systems, but for systems with strict requirements on data consistency and integrity it is a serious problem, and the backup set is completely invalid.

What should we do in such a scenario? If we want the backed-up data to be consistent, there are only two situations in which this can be achieved.

First, all data is taken out at the same time;

Second, the data in the database is in a static state.

For the first situation, everyone is surely thinking: is this even possible? As long as there are more than two tables, no matter how we write the program, it is impossible to fetch the data from all of them at exactly the same instant. True, we cannot make the read time points exactly identical by conventional means, but don't forget that within the same transaction the database can guarantee that all data read is from the same point in time. Therefore, for storage engines that support transactions, such as InnoDB or BDB, we can run the entire backup inside a single transaction to achieve a consistent and complete backup, and the mysqldump program provides a parameter for exactly this: the "--single-transaction" option, which does not interfere with any normal service of the database.
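
For instance, a minimal sketch of such a consistent backup of an InnoDB database (the database name test and the output path are only placeholders):

mysqldump --single-transaction -uroot -p test > /tmp/test_innodb_backup.sql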

For the second case, I think the first thing everyone thinks of is locking the tables that need to be backed up, allowing reads but not writes. Yes, that is really all we can do: a compromise in which the database only serves query requests during the backup, with writes blocked, so that the data is temporarily in a consistent state that cannot be modified; once mysqldump finishes the backup, the write lock is released and full service resumes. The mysqldump program itself provides the related options "--lock-tables" and "--lock-all-tables", which lock the tables before the dump starts and release the locks automatically when it finishes. One thing to note here is that "--lock-tables" does not lock all the tables to be dumped at once; it only locks the tables of one database at a time. If the tables to be dumped span multiple databases, "--lock-all-tables" must be used to guarantee consistent and complete data.
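
As a rough illustration, locking all tables for a dump that spans two databases (db1 and db2 are hypothetical names) could look like this:

mysqldump --lock-all-tables -uroot -p --databases db1 db2 > /tmp/db1_db2_backup.sql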

When generating a logical backup file of INSERT statements with mysqldump, there is another very useful option: "--master-data[=value]". When "--master-data=1" is given, mysqldump records the name and position of the binlog currently in use by MySQL into the dump file, in the form of a CHANGE MASTER TO statement. With plain "--master-data" or "--master-data=2", the CHANGE MASTER TO statement is written as a comment instead. This option is very useful for building a slave online; even when no slave is being built, it allows further recovery from the backed-up binlogs in some recovery scenarios.
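
A hedged example combining the options discussed above; with "--master-data=2" the dump file records the binlog coordinates as a comment, roughly in the form shown below (the binlog file name and position here are invented for illustration):

mysqldump --single-transaction --master-data=2 -uroot -p test > /tmp/test_with_binlog_pos.sql

-- the resulting dump file would then contain a commented line similar to:
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=1056;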

In some scenarios, we may only want to export some specific data to another database, without going through a temporary table first. For that we can use mysqldump's "--where='where_condition'" option, although it can only be used when dumping a single table. Besides the tips above, mysqldump provides many other useful options for different scenarios, such as "--no-data" to dump only the scripts that create the database structure, and "--no-create-info" to omit the table-creation statements from the dump file. Interested readers can read the mysqldump documentation in detail and then test these options themselves.
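
For example, the following sketches (the table name and WHERE condition are hypothetical) dump only part of one table, only the structure, and only the data, respectively:

mysqldump -uroot -p --where="id > 350000" test test_outfile > /tmp/test_outfile_partial.sql
mysqldump -uroot -p --no-data test > /tmp/test_schema_only.sql
mysqldump -uroot -p --no-create-info test > /tmp/test_data_only.sql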

2. Backups as plain text data files in a specific format

Besides producing a logical backup as INSERT statements, we can also separate the data in the database with specific delimiter characters and record it in text files to achieve the same effect. Compared with a file of INSERT statements, such backup data takes less storage space, has a clearer data format, and is easier to edit. The disadvantages are that the backup data of multiple tables cannot live in the same backup file, and there are no statements to rebuild the database structure. A backup set therefore consists of multiple files, and the impact on us is mainly higher maintenance and recovery costs because of the larger number of files, but these can basically be handled by writing some simple scripts.

So what methods can we generally use to generate such a backup set? In fact, MySQL has already implemented the corresponding functionality for us.

In MySQL, the following two methods are generally used to obtain plain text backup files with customizable delimiters.

1. Implemented by executing the SELECT ... INTO OUTFILE FROM ... statement.

MySQL provides a SELECT syntax designed to let users output selected data to a text file in a specified format through a SQL statement, and it also provides practical tools and related commands to import the exported file back into the database as it is. Isn't that exactly what we need for a backup?

This statement has several clauses worth noting:

The "FIELDS ESCAPED BY ['name']" that implements the character escape function escapes the characters that need to be escaped in the SQL statement;

You can change the field's "FIELDS [OPTIONALLY] ENCLOSED BY 'name'" whose content is "wrapped". If "OPTIONALLY" is not used, all types of data including numeric types will be "wrapped". After "OPTIONALLY" is used, numeric type data will not be Will be "wrapped" by the specified character.

You can set the separator between each two fields through "FIELDS TERMINATED BY";

And "LINES TERMINATED BY" will tell the MySQL output file what characters need to be added at the end of each record.

For example:

root@localhost : test 10:02:02> SELECT * INTO OUTFILE '/tmp/dump.text'
    -> FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    -> LINES TERMINATED BY '\n'
    -> FROM test_outfile limit 100;
Query OK, 100 rows affected (0.00 sec)

root@localhost : test 10:02:11> exit
Bye
root@sky:/tmp# cat dump.text
350021,21,"A","abcd"
350022,22,"B","abcd"
350023,23,"C","abcd"
350024,24,"D","abcd"
350025,25,"A","abcd"
... ...
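
To load such a file back into the table, a LOAD DATA INFILE statement with the same delimiters can be used; a minimal sketch assuming the table and file from the example above:

LOAD DATA INFILE '/tmp/dump.text'
    INTO TABLE test_outfile
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n';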

2. Export through mysqldump

We probably all know that mysqldump can export the data in a database as backup files made up of INSERT statements. In fact, besides generating INSERT statements, mysqldump can also do what "SELECT ... INTO OUTFILE FROM ..." does above, and in addition it generates the corresponding script to recreate the database structure.

For example:

root@sky:~# ls -l /tmp/mysqldump
total 0
root@sky:~# mysqldump -uroot -T/tmp/mysqldump test test_outfile --fields-enclosed-by=\" --fields-terminated-by=,
root@sky:~# ls -l /tmp/mysqldump
total 8
-rw-r--r-- 1 root  root  1346 2008-10-14 22:18 test_outfile.sql
-rw-rw-rw- 1 mysql mysql 2521 2008-10-14 22:18 test_outfile.txt
root@sky:~# cat /tmp/mysqldump/test_outfile.txt
350021,21,"A","abcd"
350022,22,"B","abcd"
350023,23,"C","abcd"
350024,24,"D","abcd"
350025,25,"A","abcd"
... ...
root@sky:~# cat /tmp/mysqldump/test_outfile.sql
-- MySQL dump 10.11
--
-- Host: localhost    Database: test
-- ------------------------------------------------------
-- Server version       5.0.51a-log

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--
-- Table structure for table `test_outfile`
--

DROP TABLE IF EXISTS `test_outfile`;
SET @saved_cs_client     = @@character_set_client;
SET character_set_client = utf8;
CREATE TABLE `test_outfile` (
  `id` int(11) NOT NULL default '0',
  `t_id` int(11) default NULL,
  `a` char(1) default NULL,
  `mid` varchar(32) default NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
SET character_set_client = @saved_cs_client;

/*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */;
/*!40101 SET SQL_MODE=@OLD_SQL_MODE */;
/*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */;
/*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */;
/*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */;
/*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */;

-- Dump completed on 2008-10-14 14:18:23

This output structure is very suitable for us to use as a backup. Of course, if multiple tables need to be dumped, two corresponding files will be generated for each table.
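
When restoring a backup set produced with "mysqldump -T", one possible approach is to replay the .sql structure file first and then reload the .txt data file with mysqlimport using the same delimiters; a rough sketch based on the example above:

mysql -uroot -p test < /tmp/mysqldump/test_outfile.sql
mysqlimport -uroot -p --fields-enclosed-by=\" --fields-terminated-by=, test /tmp/mysqldump/test_outfile.txt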

The above covers logical backup for MySQL data backup. For more related content, please follow the PHP Chinese website (www.php.cn)!

