Script description
Back up all data every 7 days, and back up the binlog (an incremental backup) every day.
(If there is little data, you can simply do a full backup once a day; when that is not feasible, incremental backup becomes necessary.)
The author is not very familiar with shell scripts, so many parts are written rather clumsily :)
Enable the bin log
In MySQL 4.1, only the error log is enabled by default; there are no other logs. You can enable the bin log by changing the configuration. There are several ways; one is to add the following to the mysqld section of /etc/my.cnf:
[mysqld]
log-bin
The main use of this log is incremental backup or replication (there may be other uses).
If you want incremental backups, you must enable this log.
On a MySQL server with frequent database activity, this log grows very large, and there may be multiple log files.
After a flush-logs inside the database, or after mysqladmin or mysqldump calls flush-logs with the delete-master-logs option, the old log files disappear and a new log file is created (empty at first).
So if you never back up, enabling this log may be unnecessary.
A full backup can call flush-logs at the same time, and flushing logs before an incremental backup ensures the newest data ends up in a file that can be backed up.
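As a quick illustration of what a rotation looks like (a transcript sketch: the host name db1 and the log file names are assumptions; actual names depend on your machine name and settings):

```shell
# Before rotating, the index file lists the current binary logs:
# $ cat /var/lib/mysql/db1-bin.index
# ./db1-bin.001
# ./db1-bin.002

# Rotate: close the current log and start a new, empty one.
# $ mysqladmin flush-logs

# Afterwards a new log appears at the end of the index:
# $ cat /var/lib/mysql/db1-bin.index
# ./db1-bin.001
# ./db1-bin.002
# ./db1-bin.003
```

Everything up to and including the second-to-last file is now safe to copy; only the last (new, still-growing) log should be skipped.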
Full backup script
If the database holds a lot of data, we usually back it up every few days or once a week to avoid affecting the running application. If the amount of data is relatively small, then backing up once a day is no problem.
Assuming our data volume is relatively large, the backup script is as follows (adapted from a MySQL backup script found online, thanks :)):
#!/bin/sh
# mysql data backup script
# by scud http://www.jscud.com
# 2005-10-30
#
# use mysqldump --help to get more detail.
#

BakDir=/backup/mysql
LogFile=/backup/mysql/mysqlbak.log
DATE=`date +%Y%m%d`

echo " " >> $LogFile
echo " " >> $LogFile
echo "-------------------------------------------" >> $LogFile
echo $(date +"%y-%m-%d %H:%M:%S") >> $LogFile
echo "--------------------------" >> $LogFile

cd $BakDir
DumpFile=$DATE.sql
GZDumpFile=$DATE.sql.tgz

mysqldump --quick --all-databases --flush-logs --delete-master-logs --lock-all-tables > $DumpFile
echo "Dump Done" >> $LogFile

tar czvf $GZDumpFile $DumpFile >> $LogFile 2>&1
echo "[$GZDumpFile]Backup Success!" >> $LogFile
rm -f $DumpFile

# Delete the previous incremental (binlog) backup files: once a full
# backup has been made, the incremental files are no longer needed.
cd $BakDir/daily
rm -f *

cd $BakDir
echo "Backup Done!"
echo "Please check the $BakDir directory!"
echo "Copy it to your local disk or ftp it somewhere else!"
ls -al $BakDir
The above script backs up mysql to the local /backup/mysql directory, and the incremental backup files are placed in the /backup/mysql/daily directory.
Note: the above script neither transfers the backup files to another remote machine nor deletes backups from previous days; the user needs to add the relevant scripts or do this manually.
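As a sketch of such a follow-up script (not part of the original article; BakDir, the 30-day retention period, and the remote destination are all assumptions to adjust):

```shell
#!/bin/sh
# Hypothetical helper: prune old full-backup archives and copy the
# newest one to another machine.
BakDir=${BakDir:-/backup/mysql}
KEEP_DAYS=30

if [ -d "$BakDir" ]; then
    # Delete full-backup archives older than KEEP_DAYS days.
    find "$BakDir" -maxdepth 1 -name '*.sql.tgz' -mtime +$KEEP_DAYS -exec rm -f {} \;

    # Copy today's archive to another machine (enable and set a real host).
    TODAY=`date +%Y%m%d`
    # scp "$BakDir/$TODAY.sql.tgz" backupuser@remote-host:/backup/mysql/
fi
```

This could be appended to the end of the full backup script or run as its own cron job.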
Incremental Backup
Incremental backups involve a relatively small amount of data, but they can only be applied on top of a full backup. Users can weigh the time and cost and choose whichever method suits them best.
Incremental backup makes use of the bin log; the script is as follows:
#!/bin/sh
#
# mysql binlog backup script
#

/usr/bin/mysqladmin flush-logs

DATADIR=/var/lib/mysql
BAKDIR=/backup/mysql/daily

# If you use a non-default setup, change this line or the lines that use
# this variable: by default it is the machine name, which is also what
# mysql uses by default.
HOSTNAME=`uname -n`

cd $DATADIR

FILELIST=`cat $HOSTNAME-bin.index`

# Count the lines, i.e. the number of binlog files.
COUNTER=0
for file in $FILELIST
do
    COUNTER=`expr $COUNTER + 1`
done

NextNum=0
for file in $FILELIST
do
    base=`basename $file`
    NextNum=`expr $NextNum + 1`
    if [ $NextNum -eq $COUNTER ]
    then
        echo "skip latest"
    else
        dest=$BAKDIR/$base
        if (test -e $dest)
        then
            echo "skip exist $base"
        else
            echo "copying $base"
            cp $base $BAKDIR
        fi
    fi
done

echo "backup mysql binlog ok"
The incremental backup script runs flush-logs before backing up. MySQL writes the logs still in memory out to file and then starts a new log file, so we only need to back up the earlier files, i.e. everything except the last one.
Because several log files may have been generated between the last backup and this one, the script checks each file: if it has already been backed up, there is no need to back it up again.
Note: as before, users need to handle remote transfer themselves, but there is no need to delete these files; the full backup script clears them out automatically after a complete backup.
Access Settings
The scripts are finished. To make them run, the corresponding user name and password need to be set; both mysqladmin and mysqldump require them. They could of course be written into the scripts, but that is inconvenient to change. Assuming we use the system root user to run the scripts, we need to create a .my.cnf file in /root (the root user's home directory) with the following content:
[mysqladmin]
user = root
password = password

[mysqldump]
user = root
password = password
Note: make this file readable only by root (chmod 600 .my.cnf).
This file tells the programs to back up the data as MySQL's root user, with the password set accordingly, so the user name and password do not need to appear in the scripts.
Automatically run
In order to make the backup program run automatically, we need to add it to crontab.
There are two methods. One is to put the scripts into directories such as /etc/cron.daily and /etc/cron.weekly, as appropriate.
The other is to use crontab -e to add them to the root user's scheduled tasks; for example, the full backup runs every Sunday at 3 am, and the daily backup runs every Monday through Saturday at 3 am.
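For the crontab -e route, the entries might look like this (the script paths and file names are assumptions; substitute your own):

```shell
# min hour day month weekday  command
# Full backup: every Sunday at 3 am
0 3 * * 0  /backup/mysqlFullBackup.sh >> /backup/mysql/cron.log 2>&1
# Incremental binlog backup: Monday through Saturday at 3 am
0 3 * * 1-6  /backup/mysqlDailyBackup.sh >> /backup/mysql/cron.log 2>&1
```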