How to Efficiently Delete Millions of Duplicate Rows in MySQL Without Disrupting Normal Operations?
Optimizing Massive Row Deletion in MySQL
When dealing with millions of duplicate rows in a massive table, efficient deletion becomes paramount to avoid disrupting normal database operations. This article explores alternative approaches to minimize table lock times and maintain site uptime.
Option 1: Gradual Deletion
Rather than executing a single, exhaustive delete query, this method runs a loop of many smaller delete operations. Keeping each delete small shortens how long the table stays locked at any one time, but running the batches back to back can still push up database load and stretch out the total execution time.
Option 2: Rename and Restore
Renaming the affected table, creating an empty replacement, and swapping the tables back after cleanup is less intrusive for writers. However, it involves several steps, can interrupt reporting that reads from the table, and merging the rows that accumulate in the replacement back into the cleaned table can be tricky depending on the kind of data involved.
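A hedged sketch of that swap, reusing the article's placeholder table name `table` and assuming the schema can be copied with CREATE TABLE ... LIKE:

-- Swap the bloated table out and put an empty copy in its place
RENAME TABLE `table` TO `table_old`;
CREATE TABLE `table` LIKE `table_old`;

-- ... deduplicate `table_old` offline, at leisure ...

-- Swap the cleaned table back with one atomic rename, then merge the
-- rows written in the meantime (watch for key collisions if the table
-- has an AUTO_INCREMENT primary key)
RENAME TABLE `table` TO `table_new`, `table_old` TO `table`;
INSERT INTO `table` SELECT * FROM `table_new`;
DROP TABLE `table_new`;

The brief gap between the first RENAME and the CREATE, and the final merge step, are where this approach can bite, which is why the article treats it as the more disruptive option.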
Optimal Solution
The recommended technique is a script that runs the following query in a loop:
DELETE FROM `table` WHERE (whatever criteria) ORDER BY `id` LIMIT 1000
Each pass deletes at most 1,000 rows, so any lock it takes is held only briefly instead of for the duration of one huge delete. A pause between loop iterations keeps the extra load on the database modest, so users remain unaffected and the cleanup can be scheduled for whenever traffic is lightest.
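As a minimal sketch of that loop, reusing the placeholder names from the query above (`table`, `id`) and a hypothetical `dup_flag` = 1 condition standing in for the real duplicate-detection criteria, the batching and the pause can even live inside MySQL itself as a stored procedure:

DELIMITER //
CREATE PROCEDURE purge_duplicates()
BEGIN
  DECLARE batch INT DEFAULT 1;
  WHILE batch > 0 DO
    -- `dup_flag` = 1 is a stand-in for "whatever criteria" marks a duplicate
    DELETE FROM `table` WHERE `dup_flag` = 1 ORDER BY `id` LIMIT 1000;
    SET batch = ROW_COUNT();  -- rows removed by this pass; 0 means we are done
    DO SLEEP(1);              -- brief pause so normal traffic keeps flowing
  END WHILE;
END //
DELIMITER ;

CALL purge_duplicates();

An external cron or application script issuing the same DELETE ... LIMIT statement with a short sleep between calls works just as well and is easier to stop or throttle mid-run.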