How does MySQL optimize a SQL statement when a query spans two tables and one of them holds a large amount of data?
Two tables: A has 2,000 rows and B has 20 million rows.
select a.xxx, b.xxx, b.xxxx from A as a left join B as b on a.id = b.bid where a.xx = 0 and a.xxx != 0;
Query in pages instead of fetching a large amount of data at once.
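A minimal sketch of the paging idea, using the placeholder column names from the question. The starting id of 1000000 is a hypothetical value carried over from the previous page:

```sql
-- Keyset ("seek") pagination: resume from the last id seen on the
-- previous page rather than using a large OFFSET, which MySQL would
-- have to scan past row by row.
SELECT a.xxx, b.xxx, b.xxxx
FROM A AS a
LEFT JOIN B AS b ON a.id = b.bid
WHERE a.xx = 0 AND a.xxx != 0
  AND b.id > 1000000   -- last b.id from the previous page (hypothetical)
ORDER BY b.id
LIMIT 100;
```

Note that filtering on `b.id` effectively turns the left join into an inner join for paged rows; that is usually acceptable when you are paging through matches.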
Store redundant copies of the two b columns you need in table A; then the join becomes unnecessary.
Cache the results; otherwise the database certainly won't keep up.
Once a table reaches tens of millions of rows, you need to consider horizontal sharding.
First of all, I'm not a back-end developer, so I can only share what I know.
I don't know whether your tool can show the execution plan. If it can, look at which step of the plan processes the most rows; that step is where optimization will pay off.
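Even without a GUI tool, MySQL itself can show the plan: prefix the query with `EXPLAIN`. A sketch using the question's query:

```sql
-- EXPLAIN prints one row per table access, showing how MySQL will run it.
EXPLAIN SELECT a.xxx, b.xxx, b.xxxx
FROM A AS a
LEFT JOIN B AS b ON a.id = b.bid
WHERE a.xx = 0 AND a.xxx != 0;
-- Things to look for in the output:
--   type = ALL on table B  -> a full scan of all 20 million rows
--   key  = NULL            -> no index was used for that table
--   rows                   -> MySQL's estimate of rows examined per step
```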
I also know that adding indexes should help.
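For this query the index that matters most is on the join column of the big table. A sketch, with an assumed index name:

```sql
-- An index on b.bid lets MySQL look up the matching rows for each of
-- A's 2,000 ids instead of scanning all 20 million rows of B.
CREATE INDEX idx_b_bid ON B (bid);
-- a.xx and a.xxx are placeholders from the question; with only 2,000
-- rows in A, indexing those filter columns matters far less.
```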