An interview question: how do you do paging when a MySQL table holds a large amount of data? At the time, I only knew that with lots of data you could split it across tables, but I had no idea what to do without splitting... Well, that's what happens when the project you work on has only a handful of rows and a simple LIMIT with OFFSET handles everything (facepalm)...
Many applications tend to display only the latest or most popular records, but a paging navigation bar is still needed so that old records remain accessible. However, implementing paging well in MySQL has always been a headache. There is no off-the-shelf solution, but understanding the database's internals can help you optimize paginated queries.
Let’s take a look at a commonly used query with poor performance.
SELECT * FROM city ORDER BY id DESC LIMIT 0, 15
This query takes 0.00 sec. So, what's wrong with it? In fact, nothing is wrong with the statement or its parameters, because it uses the primary key of the table below and reads only 15 records.
CREATE TABLE city ( id int(10) unsigned NOT NULL AUTO_INCREMENT, city varchar(128) NOT NULL, PRIMARY KEY (id) ) ENGINE=InnoDB;
The real problem is when the offset (paging offset) is very large, like the following:
SELECT * FROM city ORDER BY id DESC LIMIT 100000, 15;
The above query takes 0.22 sec when the table holds 2 million rows. Viewing the execution plan with EXPLAIN shows that the SQL examined 100,015 rows, of which only 15 were ultimately needed. A large paging offset inflates the working set: MySQL loads a lot of data into memory that is never used. Even if we assume that most users only visit the first few pages, a small number of requests with large offsets can harm the whole system. Facebook is aware of this, but instead of optimizing the database to handle more requests per second, Facebook focuses on reducing the variance of request response times.
For paging requests there is another very important piece of information: the total number of records. We can easily get it with the following query.
SELECT COUNT(*) FROM city;
However, the above SQL takes 9.28 sec when InnoDB is the storage engine. A tempting but incorrect optimization is SQL_CALC_FOUND_ROWS: it computes the number of matching records during the paging query itself, after which a simple SELECT FOUND_ROWS(); returns the total. But in most cases, a shorter query does not mean better performance. Unfortunately, many mainstream frameworks use this paging approach. Let's look at how this statement performs.
SELECT SQL_CALC_FOUND_ROWS * FROM city ORDER BY id DESC LIMIT 100000, 15;
This statement takes 20.02 sec, roughly twice as long as running the paging query and the COUNT query separately. It turns out that using SQL_CALC_FOUND_ROWS for paging is a very bad idea.
Let’s take a look at how to optimize. The article is divided into two parts. The first part is how to get the total number of records, and the second part is to get the real records.
If the engine is MyISAM, you can execute COUNT(*) directly to get the row count. Similarly, in a heap (MEMORY) table the row count is stored in the table's metadata. But if the engine is InnoDB, things get more complicated, because InnoDB does not store an exact row count for the table.
We can cache the row count and update it periodically through a daemon process, or, when some user operation invalidates the cache, refresh it by executing the following statement:
SELECT COUNT(*) FROM city USE INDEX(PRIMARY);
Now for the most important part of this article: fetching the records to display on a page. As mentioned above, large offsets hurt performance, so we need to rewrite the query. For demonstration, we create a new table "news", sorted by topicality (the latest release first), and implement high-performance paging over it. For simplicity, we assume the ID of the most recently published news item is also the largest.
CREATE TABLE news( id INT UNSIGNED PRIMARY KEY AUTO_INCREMENT, title VARCHAR(128) NOT NULL ) ENGINE=InnoDB;
A more efficient approach is based on the last news ID the user saw. The statement to query the next page is as follows; you need to pass in the last ID displayed on the current page.
SELECT * FROM news WHERE id < $last_id ORDER BY id DESC LIMIT $perpage
The statement for querying the previous page is similar, except that the first ID of the current page is passed in and the sort order is reversed (so the returned rows must be reversed again before display).
SELECT * FROM news WHERE id > $last_id ORDER BY id ASC LIMIT $perpage
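Both queries can be exercised end to end. The sketch below uses SQLite in place of MySQL (the WHERE/ORDER BY/LIMIT logic is identical), and the `next_page`/`prev_page` helper names are hypothetical:

```python
import sqlite3

def next_page(conn, last_id, perpage):
    """Rows older than last_id, newest first."""
    return conn.execute(
        "SELECT id, title FROM news WHERE id < ? ORDER BY id DESC LIMIT ?",
        (last_id, perpage)).fetchall()

def prev_page(conn, first_id, perpage):
    """Rows newer than first_id; fetched ascending, then reversed for display."""
    rows = conn.execute(
        "SELECT id, title FROM news WHERE id > ? ORDER BY id ASC LIMIT ?",
        (first_id, perpage)).fetchall()
    return rows[::-1]  # restore newest-first order for the page

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO news (id, title) VALUES (?, ?)",
                 [(i, f"story {i}") for i in range(1, 101)])

# Page 1 shows ids 100..86; navigating from it in either direction:
page2 = next_page(conn, last_id=86, perpage=15)   # ids 85..71
page1 = prev_page(conn, first_id=85, perpage=15)  # ids 100..86
```

Because both queries seek directly on the primary key, the server touches only the 15 rows it returns, regardless of how deep the user has paged.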
The above approach works for simple paging that shows no specific page navigation, only "previous page" and "next page" — for example, the "previous page" and "next page" buttons in a blog's footer. But it makes a real page-navigation bar hard to build, so let's look at another method.
SELECT id FROM ( SELECT id, ((@cnt:= @cnt + 1) + $perpage - 1) % $perpage cnt FROM news JOIN (SELECT @cnt:= 0)T WHERE id < $last_id ORDER BY id DESC LIMIT $perpage * $buttons )C WHERE cnt = 0;
With the statement above, an anchor ID corresponding to the offset can be computed for each paging button. This approach has another benefit: suppose a new article is published while the user is paging; then every article shifts back one position, and if the user changes pages at that moment he will see one article twice. If each button's anchor ID is fixed, this problem disappears. Mark Callaghan has published a similar blog post using a composite index and two position variables, but the basic idea is the same.
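The anchor-per-button idea can also be emulated in application code without user variables: fetch the first ID of each of the next few pages with one indexed query, then keep every $perpage-th row. A minimal sketch follows, with SQLite standing in for MySQL and `page_anchors` as a hypothetical helper:

```python
import sqlite3

def page_anchors(conn, last_id, perpage, buttons):
    """First (newest) id of each of the next `buttons` pages after last_id."""
    ids = [r[0] for r in conn.execute(
        "SELECT id FROM news WHERE id < ? ORDER BY id DESC LIMIT ?",
        (last_id, perpage * buttons))]
    return ids[::perpage]  # every perpage-th id is a page boundary

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO news VALUES (?, ?)",
                 [(i, f"story {i}") for i in range(1, 101)])

# Pass an id above the current maximum to get the anchors for pages 1..5.
anchors = page_anchors(conn, last_id=101, perpage=15, buttons=5)
```

Each navigation button then links to its anchor ID rather than to a numeric offset, so newly published articles cannot shift the page boundaries under the user.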
If the records in the table are rarely deleted or modified, you can also store each record's page number in the table and create a suitable index on that column. With this approach, when a new record is added, the following query must be executed to regenerate the page numbers.
SET @p:= 0; UPDATE news SET page=CEIL((@p:= @p + 1) / $perpage) ORDER BY id DESC;
Of course, you can also add a table dedicated to pagination, maintained by a background program.
SET @p:= 0; UPDATE pagination T JOIN ( SELECT id, CEIL((@p:= @p + 1) / $perpage) page FROM news ORDER BY id )C ON C.id = T.id SET T.page = C.page;
Now fetching any page becomes trivial:
SELECT * FROM news A JOIN pagination B ON A.id=B.id WHERE page=$page;
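The page-column approach can be sketched as follows. SQLite stands in for MySQL here, and the CEIL over a user variable becomes plain integer arithmetic over a descending scan (newest article on page 1, as in the UPDATE above):

```python
import sqlite3

PERPAGE = 15

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE pagination (id INTEGER PRIMARY KEY, page INTEGER);
CREATE INDEX idx_page ON pagination(page);
""")
conn.executemany("INSERT INTO news VALUES (?, ?)",
                 [(i, f"story {i}") for i in range(1, 101)])

# Rebuild the page map: the newest article lands on page 1.
rows = conn.execute("SELECT id FROM news ORDER BY id DESC").fetchall()
conn.executemany("INSERT OR REPLACE INTO pagination VALUES (?, ?)",
                 [(r[0], i // PERPAGE + 1) for i, r in enumerate(rows)])

# Fetching any page is now a single indexed lookup.
page3 = [r[0] for r in conn.execute(
    "SELECT A.id FROM news A JOIN pagination B ON A.id = B.id "
    "WHERE B.page = ? ORDER BY A.id DESC", (3,))]
```

The rebuild step is the price of this design, which is why it only pays off when records are rarely deleted or modified.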
There is another method, similar to the previous one, for doing pagination. It suits cases where the data set is relatively small and no usable index exists — for example, when processing search results. On an ordinary server, the following query takes about 2 sec with 2 million records. The method is simple: create a temporary table that stores all the IDs (this is also the most expensive step).
CREATE TEMPORARY TABLE _tmp (KEY SORT(random)) SELECT id, FLOOR(RAND() * 0x8000000) random FROM city; ALTER TABLE _tmp ADD OFFSET INT UNSIGNED PRIMARY KEY AUTO_INCREMENT, DROP INDEX SORT, ORDER BY random;
Then paging queries can be executed like this:
SELECT * FROM _tmp WHERE OFFSET >= $offset ORDER BY OFFSET LIMIT $perpage;
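The temporary-table technique can be sketched like this, with SQLite again standing in for MySQL: SQLite's implicit, sequentially assigned rowid plays the role of the AUTO_INCREMENT OFFSET column, and ORDER BY random() stands in for whatever ordering the real result set has.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (id INTEGER PRIMARY KEY, city TEXT)")
conn.executemany("INSERT INTO city VALUES (?, ?)",
                 [(i, f"city {i}") for i in range(1, 51)])

# Materialize the result set once; rows are inserted in the SELECT's
# order, so rowid 1, 2, 3... is a dense, gapless offset column.
conn.execute("""
CREATE TEMPORARY TABLE _tmp AS
SELECT id FROM city ORDER BY random()
""")

def fetch_page(offset, perpage):
    # rowid is 1-based, so rowid > offset skips exactly `offset` rows.
    return [r[0] for r in conn.execute(
        "SELECT id FROM _tmp WHERE rowid > ? ORDER BY rowid LIMIT ?",
        (offset, perpage))]

page1 = fetch_page(0, 15)
page2 = fetch_page(15, 15)
```

Only the one-time materialization is expensive; every subsequent page is a cheap range scan on the offset column.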
In short, optimizing pagination comes down to... avoiding scanning too many records when the data volume is large.
The original post is fairly long, so the translation is a bit rough... I will go over it again carefully later. In my own tests, some query times differed slightly from the author's, but the author wrote the original post in 2011, so don't dwell on the exact numbers — grasp the spirit~~