How to optimize MySQL tens of millions of fast paging

This article analyzes how MySQL can be optimized to page quickly through tens of millions of rows. Let's take a look.
The table collect(id, title, info, vtype) has four fields: title is fixed length, info is text, id is an auto-increment primary key, and vtype is a tinyint with an index on it. This is a simple model of a basic news system. Now fill it with data: 100,000 news rows.
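The setup above can be sketched end to end. This is a minimal, self-contained reconstruction using SQLite's Python driver as a stand-in for MySQL (schema and table name follow the article; the column types, sample data, and row count are scaled-down assumptions):

```python
import sqlite3

# Rebuild the article's collect table in SQLite (a stand-in for MySQL/MyISAM;
# sizes are scaled down so the demo stays self-contained).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE collect (
        id    INTEGER PRIMARY KEY,   -- auto-increment primary key
        title TEXT    NOT NULL,      -- fixed-length CHAR in the original
        info  TEXT    NOT NULL,
        vtype INTEGER NOT NULL       -- TINYINT in the original
    )
""")
conn.execute("CREATE INDEX idx_vtype ON collect(vtype)")

# Fill it with sample "news" rows (the article fills 100,000).
conn.executemany(
    "INSERT INTO collect(title, info, vtype) VALUES (?, ?, ?)",
    [(f"news {i}", f"body {i}", i % 5) for i in range(1000)],
)
conn.commit()

# MySQL's "LIMIT offset, count" form also works in SQLite:
rows = conn.execute("SELECT id, title FROM collect LIMIT 100, 10").fetchall()
print(rows[0])  # -> (101, 'news 100')
```

Only the query shape matters here; the timings quoted in the article come from MySQL at far larger scale.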
When filled, collect holds 100,000 records and the table occupies 1.6 GB on disk. OK, look at the following SQL statement:
select id,title from collect limit 1000,10; — very fast, basically done in 0.01 seconds. Now look at this one:
select id,title from collect limit 90000,10; — paging from row 90,000. The result?
It completes in 8-9 seconds. What on earth is wrong? If you search online for how to optimize this, the answers point at the following statement:
select id from collect order by id limit 90000,10; — very fast, done in 0.04 seconds. Why? Because it pages over the id primary key index. The fix you will find online builds on that:
select id,title from collect where id>=(select id from collect order by id limit 90000,1) limit 10;
This is the result of letting the id index do the positioning. But make the problem only slightly more complicated and the trick falls apart. Look at the following statement:
select id from collect where vtype=1 order by id limit 90000,10; — very slow again, 8-9 seconds!
At this point many readers will share my sense of breakdown: isn't vtype indexed? Why is it this slow? The vtype index itself is fine: select id from collect where vtype=1 limit 1000,10; runs in about 0.05 seconds. Scaling that by 90 to reach row 90,000 predicts roughly 0.05*90 = 4.5 seconds, and the measured 8-9 seconds is the same order of magnitude. At this point people start proposing split tables, the same idea the Discuz forum uses:
Build an index table t(id, title, vtype), make its rows fixed length, do the paging against it, and only then go back to collect for info for the rows on the current page. Does it work? Experiment and see.
With the 100,000 records stored in t(id, title, vtype), the table is about 20 MB. Now run
select id from t where vtype=1 order by id limit 90000,10; — fast, basically 0.1-0.2 seconds. Why? My guess: collect simply has too much data to scan, so its paging takes long. LIMIT cost scales directly with how deep into the table you page; this is still essentially a full scan, and it is only faster here because 100,000 rows is small. OK, let's run a crazy experiment: grow it tenfold to 1 million rows and test the performance.
After adding 10x the data, the t table immediately passed 200 MB, still fixed length, and the query above still completed in 0.1-0.2 seconds. So the split-table performance is fine? Wrong! Our LIMIT offset was still 90,000, which is why it stayed fast. Give it a big one and start at 900,000:
select id from t where vtype=1 order by id limit 900000,10; — the time is now 1-2 seconds!
Why? Even with the separate index table the time is still this long, which is very frustrating. Some say fixed-length rows improve LIMIT performance, and at first I too assumed that with a fixed record length MySQL could compute the position of row 900,000 arithmetically. But that overestimates MySQL's cleverness; it is not that kind of engine, and in practice fixed versus variable length makes little difference to LIMIT. No wonder people say Discuz becomes very slow once it reaches 1 million records; I believe it now, and it comes down to database design.
Can't MySQL break through the 1 million barrier? Is 1 million rows really the hard limit for paging?
The answer is: NO! The reason people cannot get past 1 million rows is that they do not know how to design MySQL indexes. So let's drop the split-table approach and run a crazy test: one table handling 1 million records in a 10 GB database, paged quickly!
Our test now returns to the collect table. The conclusion so far: at 300,000 rows the split-table method is workable, but beyond 300,000 the speed becomes unbearable. Combine the split table with the method below and it is of course even better; but the method below solves the problem on its own, with no table splitting at all!
The answer is: a composite index! While designing a MySQL index once, I noticed in passing that an index can be named freely and can include several columns. What is that good for? The initial select id from collect order by id limit 90000,10; is fast because it pages over an index, but once a WHERE clause is added that index is no longer used. On a hunch I added an index search(vtype, id), then tested:
select id from collect where vtype=1 limit 90000,10; — very fast, completed in 0.04 seconds!
Test again: select id,title from collect where vtype=1 limit 90000,10; — very sorry, 8-9 seconds, because selecting title means the search index no longer covers the query!
Test again with the column order reversed, search(id, vtype): even selecting only id, a regrettable 0.5 seconds.
To sum up: when there is a WHERE condition and you want LIMIT to use an index, you must design a composite index that puts the WHERE column first and the primary key used by LIMIT second, and the query may select only the primary key!
That solves the paging problem perfectly. If the IDs come back fast, there is hope of optimizing any LIMIT; by this logic even a million-row LIMIT should finish in 0.0x seconds. Statement tuning and index design clearly matter enormously in MySQL.
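The summary above can be checked mechanically. The sketch below rebuilds the composite index in SQLite (again a stand-in for MySQL; the table and index names follow the article, the data is illustrative) and asks the query planner to confirm that an id-only query is answered from the index alone — a covering-index read:

```python
import sqlite3

# Recreate collect with the article's composite index search(vtype, id).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE collect (id INTEGER PRIMARY KEY, title TEXT, info TEXT, vtype INTEGER)"
)
conn.executemany(
    "INSERT INTO collect(title, info, vtype) VALUES (?, ?, ?)",
    [(f"t{i}", f"b{i}", i % 3) for i in range(10000)],
)
# WHERE column first, primary key second -- the order the article arrives at.
conn.execute("CREATE INDEX search ON collect(vtype, id)")

# Selecting only id lets the engine satisfy the query from the index itself.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT id FROM collect WHERE vtype=1 ORDER BY id LIMIT 1000, 10"
).fetchall()
print(plan[0][-1])  # SQLite reports a COVERING INDEX search
```

Reversing the column order to (id, vtype), as in the article's failed second test, forces the engine to walk the index in id order and filter, which is why that variant is slower.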
Okay, back to the original question: how do we apply this research quickly and profitably in development? If every query has to be this compound two-step, my lightweight framework becomes useless and I have to hand-write the paging string, which is tedious. One more experiment produced the idea:
select * from collect where id in (9000,12,50,7000); — returns in effectively 0 seconds!
My god, MySQL's index works for the IN list too! It seems the claim circulating online that IN cannot use an index is wrong.
With this conclusion the technique drops easily into a lightweight framework:
The code is as follows:
$db = dblink();
$db->pagesize = 20;
$sql = "select id from collect where vtype=$vtype";
$db->execute($sql);
$strpage = $db->strpage(); // save the paging string in a temporary variable for output later
$strid = '';
while ($rs = $db->fetch_array()) {
    $strid .= $rs['id'] . ',';
}
$strid = substr($strid, 0, strlen($strid) - 1); // construct the id string, trimming the trailing comma
$db->pagesize = 0; // very important: clear paging without destroying the object, so the one database connection is reused
$db->execute("select id,title,url,sTime,gTime,vtype,tag from collect where id in ($strid)");
while ($rs = $db->fetch_array()) {
    // render one result row
    echo $rs['id'], ' ', $rs['title'], "<br />\n";
}
echo $strpage;
The transformation is simple, and the idea behind it even simpler: 1) use the optimized index to fetch the ids and join them into a string like "123,90000,12000"; 2) run a second query to fetch the full rows for those ids.
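The same two-query pattern as the PHP snippet can be sketched in Python, with SQLite standing in for MySQL (the dblink()/strpage() helpers belong to the article's framework; everything here besides the collect table and search index names is illustrative). Bound parameters replace the string concatenation of the PHP version:

```python
import sqlite3

# Two-step pagination: id-only query via the composite index, then IN-list fetch.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE collect (id INTEGER PRIMARY KEY, title TEXT, info TEXT, vtype INTEGER)"
)
conn.execute("CREATE INDEX search ON collect(vtype, id)")
conn.executemany(
    "INSERT INTO collect(title, info, vtype) VALUES (?, ?, ?)",
    [(f"title {i}", f"info {i}", i % 2) for i in range(200)],
)

def fetch_page(vtype, offset, pagesize=20):
    # Step 1: cheap id-only query that the (vtype, id) index can cover.
    ids = [r[0] for r in conn.execute(
        "SELECT id FROM collect WHERE vtype=? ORDER BY id LIMIT ? OFFSET ?",
        (vtype, pagesize, offset))]
    if not ids:
        return []
    # Step 2: pull the wide rows by primary key via an IN list.
    marks = ",".join("?" * len(ids))
    return conn.execute(
        f"SELECT id, title, info FROM collect WHERE id IN ({marks}) ORDER BY id",
        ids).fetchall()

page = fetch_page(vtype=1, offset=40)
print(page[0])  # -> (82, 'title 81', 'info 81')
```

Step 1 touches only the index; step 2 touches only the 20 rows on the current page, which is why the combination stays fast at depth.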
A small index plus a small code change lets MySQL support efficient paging over millions or even tens of millions of rows!
Through these examples I came away reflecting: for large systems, PHP must not hide behind frameworks, especially frameworks where you cannot even see the SQL statements! My lightweight framework nearly collapsed at first; it is only suited to rapid development of small applications. For ERP, OA, and large websites, the data layer, and even the logic layer, cannot rely on a framework. If programmers lose control over the SQL statements, the risk of the project grows exponentially. With MySQL in particular, a professional DBA is needed to get the best performance out of it: the difference one index makes can be a factor of thousands.
PS: In further testing on 1.6 million rows, a 15 GB table with a 190 MB index, a deep LIMIT took 0.49 seconds even with the index. So when paging, it is best not to let anyone browse past the first 100,000 rows, or it will be very slow even with the index. With this optimization, MySQL reaches the practical limit of million-page paging, and that result is already very good; SQL Server would most likely choke here. Meanwhile, id in (str) on the 1.6 million rows remains essentially 0 seconds, so MySQL should handle tens of millions of rows with ease.