Detailed explanation of sample code for optimizing paging in MySQL

This came up in an interview question: how do you paginate a MySQL table that holds a large amount of data? At the time I only knew that a large table could be split into smaller tables, but I had no idea what to do without splitting it. Alas, the projects I had worked on held only a handful of rows, and a simple LIMIT plus OFFSET had always been enough.

Many applications only display the latest or most popular records, but a paging navigation bar is still needed so that older records remain accessible. Implementing paging well in MySQL, however, has always been a headache. There is no off-the-shelf solution, but understanding how the database works underneath helps in optimizing paginated queries.

Let’s take a look at a commonly used query with poor performance.

SELECT *
FROM city
ORDER BY id DESC
LIMIT 0, 15

This query runs in 0.00 sec. So what's wrong with it? Nothing, in fact: the statement and its parameters are fine, because it uses the primary key of the table below and reads only 15 records.

CREATE TABLE city (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  city varchar(128) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

The real problem is when the offset (paging offset) is very large, like the following:

SELECT *
FROM city
ORDER BY id DESC
LIMIT 100000, 15;

The above query takes 0.22 sec when the table holds 2M rows. Viewing the execution plan with EXPLAIN shows that MySQL examined 100,015 rows but ultimately returned only 15. Large paging offsets increase the amount of data scanned: MySQL loads many rows into memory that are never used. Even if we assume most users only visit the first few pages, a small number of requests with large page offsets can harm the whole system. Facebook is aware of this; rather than optimizing the database to handle more requests per second, Facebook focuses on reducing the variance of request response times.

For paging requests, there is another piece of information that is also very important, which is the total number of records. We can easily get the total number of records through the following query.

SELECT COUNT(*)
FROM city;

However, the above SQL takes 9.28 sec when InnoDB is the storage engine. A tempting but incorrect optimization is SQL_CALC_FOUND_ROWS: it computes the number of matching rows during the paging query itself, so that a follow-up SELECT FOUND_ROWS(); returns the total. But in most cases a shorter query statement does not mean better performance. Unfortunately, this paging method is used by many mainstream frameworks. Let's look at the performance of such a query.

SELECT SQL_CALC_FOUND_ROWS *
FROM city
ORDER BY id DESC
LIMIT 100000, 15;

This statement takes 20.02 sec, roughly twice as long as the COUNT(*) query above. It turns out that using SQL_CALC_FOUND_ROWS for paging is a very bad idea.

Let's look at how to optimize. The discussion has two parts: first, how to get the total number of records; second, how to fetch the actual records.

Efficiently calculate the number of rows

If the engine is MyISAM, COUNT(*) returns immediately, because the row count is stored in the table's metadata; the same is true of heap (MEMORY) tables. But if the engine is InnoDB, the situation is more complicated, because InnoDB does not store an exact row count for the table.

We can cache the row count and refresh it periodically from a daemon process, or recompute it when certain user operations invalidate the cache, by executing the following statement:

SELECT COUNT(*)
FROM city
USE INDEX(PRIMARY);
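The caching strategy above can be sketched in application code. Below is a minimal Python sketch, a hypothetical illustration only: sqlite3 stands in for MySQL, and the function names and the 60-second TTL are assumptions, not part of the article.

```python
import sqlite3
import time

# sqlite3 stands in for MySQL purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (id INTEGER PRIMARY KEY, city TEXT NOT NULL)")
conn.executemany("INSERT INTO city (city) VALUES (?)",
                 [("city-%d" % i,) for i in range(1000)])

_cache = {"count": None, "ts": 0.0}
CACHE_TTL = 60.0  # seconds; illustrative value

def cached_row_count(conn):
    """Return the row count, recomputing at most once per CACHE_TTL."""
    now = time.time()
    if _cache["count"] is None or now - _cache["ts"] > CACHE_TTL:
        _cache["count"] = conn.execute(
            "SELECT COUNT(*) FROM city").fetchone()[0]
        _cache["ts"] = now
    return _cache["count"]

def invalidate_count_cache():
    """Call after writes whose effect must be visible immediately."""
    _cache["count"] = None
```

Until the cache is invalidated or expires, paging requests are served without touching COUNT(*) at all, which is the whole point: a slightly stale total is almost always acceptable for a page navigation bar.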

Get records

Now we come to the most important part of this article: fetching the records to display on each page. As mentioned above, large offsets hurt performance, so the query must be rewritten. For demonstration, we create a new table "news", sorted by recency (the latest release at the top), and implement high-performance paging over it. For simplicity, we assume the most recently published news item also has the largest ID.

CREATE TABLE news(
   id INT UNSIGNED PRIMARY KEY AUTO_INCREMENT,
   title VARCHAR(128) NOT NULL
) ENGINE=InnoDB;

A more efficient way is based on the last news ID displayed by the user. The statement to query the next page is as follows. You need to pass in the last ID displayed on the current page.

SELECT *
FROM news WHERE id < $last_id
ORDER BY id DESC
LIMIT $perpage

The statement for querying the previous page is similar, except that the first ID of the current page needs to be passed in, and the order must be reversed.

SELECT *
FROM news WHERE id > $last_id
ORDER BY id ASC
LIMIT $perpage
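To make the idea concrete, here is a minimal sketch of this keyset-style paging in Python. It is illustrative only: sqlite3 stands in for MySQL, and the table contents and page size are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT NOT NULL)")
conn.executemany("INSERT INTO news (title) VALUES (?)",
                 [("story %d" % i,) for i in range(1, 101)])  # ids 1..100

PER_PAGE = 15

def next_page(conn, last_id):
    """Rows older than last_id, newest first."""
    return conn.execute(
        "SELECT id, title FROM news WHERE id < ? ORDER BY id DESC LIMIT ?",
        (last_id, PER_PAGE)).fetchall()

def prev_page(conn, first_id):
    """Rows newer than first_id; reverse so the page reads newest-first."""
    rows = conn.execute(
        "SELECT id, title FROM news WHERE id > ? ORDER BY id ASC LIMIT ?",
        (first_id, PER_PAGE)).fetchall()
    return list(reversed(rows))

page = next_page(conn, 101)            # first page: ids 100..86
older = next_page(conn, page[-1][0])   # next page: ids 85..71
back = prev_page(conn, older[0][0])    # back to ids 100..86
```

Each query seeks directly to the boundary ID via the primary key, so the cost is the same no matter how deep the user pages, unlike LIMIT with a large offset.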

The above method suits simple paging where no page numbers are shown, only "previous page" and "next page" links, as in the footer of many blogs. But it still cannot produce a real page-navigation bar, so let's look at another approach.

SELECT id
FROM (
   SELECT id, ((@cnt:= @cnt + 1) + $perpage - 1) % $perpage cnt
   FROM news 
   JOIN (SELECT @cnt:= 0)T
   WHERE id < $last_id
   ORDER BY id DESC
   LIMIT $perpage * $buttons
)C
WHERE cnt = 0;
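The user-variable query above keeps every $perpage-th ID among the next $perpage * $buttons rows. The same computation can be sketched in Python, as a hypothetical illustration (sqlite3 stands in for MySQL; the helper name and data are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT NOT NULL)")
conn.executemany("INSERT INTO news (title) VALUES (?)",
                 [("story %d" % i,) for i in range(1, 101)])  # ids 1..100

def button_anchor_ids(conn, last_id, perpage, buttons):
    """One anchor id per paging button.

    Mirrors the user-variable query: scan at most perpage * buttons ids
    below last_id and keep the first id of each would-be page.
    """
    ids = [row[0] for row in conn.execute(
        "SELECT id FROM news WHERE id < ? ORDER BY id DESC LIMIT ?",
        (last_id, perpage * buttons))]
    return ids[::perpage]  # rows 1, perpage+1, 2*perpage+1, ...
```

Because each button carries a fixed anchor ID rather than a numeric offset, newly published articles cannot shift the page boundaries under the user.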

The above statement computes, for each paging button, the ID corresponding to that offset. This approach has another benefit. Suppose a new article is published while the user is paging: every article shifts back one position, so if the user changes pages at that moment he will see the same article twice. If each button carries a fixed offset ID, this problem disappears. Mark Callaghan has published a similar blog post, using a composite index and two position variables, but the basic idea is the same.

If records in the table are rarely deleted or modified, you can also store each record's page number in the table itself and create a suitable index on that column. With this approach, whenever a record is added, the following query must be run to regenerate the page numbers.

SET @p := 0;
UPDATE news SET page=CEIL((@p := @p + 1) / $perpage) ORDER BY id DESC;

Of course, you can also add a separate table dedicated to pagination and maintain it with a background process.

SET @p := 0;
UPDATE pagination T
JOIN (
   SELECT id, CEIL((@p := @p + 1) / $perpage) page
   FROM news
   ORDER BY id
)C
ON C.id = T.id
SET T.page = C.page;

Now fetching any page becomes trivial:

SELECT *
FROM news A
JOIN pagination B ON A.id=B.ID
WHERE page=$offset;
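The pagination-table idea (precompute an id-to-page mapping, then join) can be sketched end to end in Python. This is an illustrative assumption-laden sketch: sqlite3 stands in for MySQL, and the function names and page size are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT NOT NULL);
CREATE TABLE pagination (id INTEGER PRIMARY KEY, page INTEGER NOT NULL);
""")
conn.executemany("INSERT INTO news (title) VALUES (?)",
                 [("story %d" % i,) for i in range(1, 46)])  # ids 1..45

PER_PAGE = 15

def rebuild_pagination(conn):
    """Recompute the id -> page mapping (run from a background job)."""
    conn.execute("DELETE FROM pagination")
    rows = conn.execute("SELECT id FROM news ORDER BY id").fetchall()
    conn.executemany(
        "INSERT INTO pagination (id, page) VALUES (?, ?)",
        [(row[0], i // PER_PAGE + 1) for i, row in enumerate(rows)])

def fetch_page(conn, page):
    """Join news against the precomputed page numbers."""
    return conn.execute(
        "SELECT A.id, A.title FROM news A "
        "JOIN pagination B ON A.id = B.id "
        "WHERE B.page = ? ORDER BY A.id",
        (page,)).fetchall()

rebuild_pagination(conn)
```

The trade-off is that the mapping goes stale on every insert or delete, which is why the article recommends this only for tables that change rarely, with a background job doing the rebuilds.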

There is one more approach, similar to the previous one, that is better suited to relatively small data sets with no usable index, such as when paging through search results. On an ordinary server, with 2M records, the query below takes about 2 sec. The approach itself is simple: create a temporary table that stores all the IDs (this is also the most expensive step).

CREATE TEMPORARY TABLE _tmp (KEY SORT(random))
SELECT id, FLOOR(RAND() * 0x8000000) random
FROM city;

ALTER TABLE _tmp ADD OFFSET INT UNSIGNED PRIMARY KEY AUTO_INCREMENT, DROP INDEX SORT, ORDER BY random;

Paging queries can then be executed like this:

SELECT *
FROM _tmp
WHERE OFFSET >= $offset
ORDER BY OFFSET
LIMIT $perpage;
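The temp-table trick can also be sketched in Python, again as an illustrative assumption: sqlite3 stands in for MySQL, a seeded Python shuffle stands in for RAND(), and the `pos` column plays the role of the article's auto-increment OFFSET column.

```python
import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (id INTEGER PRIMARY KEY, city TEXT NOT NULL)")
conn.executemany("INSERT INTO city (city) VALUES (?)",
                 [("city-%d" % i,) for i in range(50)])  # ids 1..50

rng = random.Random(42)  # fixed seed so the example is deterministic

# Materialize all ids in random order with a dense 1..N position column --
# the analogue of the _tmp table in the article.
conn.execute("CREATE TEMP TABLE _tmp (pos INTEGER PRIMARY KEY, id INTEGER)")
ids = [row[0] for row in conn.execute("SELECT id FROM city")]
rng.shuffle(ids)
conn.executemany("INSERT INTO _tmp (pos, id) VALUES (?, ?)",
                 [(i + 1, id_) for i, id_ in enumerate(ids)])

def page(conn, offset, perpage):
    """Page through the frozen ordering by dense position."""
    return [row[0] for row in conn.execute(
        "SELECT id FROM _tmp WHERE pos >= ? ORDER BY pos LIMIT ?",
        (offset, perpage))]
```

Once the ordering is frozen into the dense position column, every page is a cheap primary-key range scan, and concurrent inserts into the base table cannot shift the pages.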

In short, optimizing pagination comes down to one thing: avoid scanning too many records when the data set is large.


The above is the detailed content of Detailed explanation of sample code for optimizing paging in MySQL. For more information, please follow other related articles on the PHP Chinese website!
