This article shares some knowledge about MySQL, mainly covering common query optimization strategies. Let's take a look at them together; I hope it will be helpful to everyone.
After a program has been online and running for a while, as the amount of data grows, the system will more or less start to show delays, freezes, and similar symptoms. When such problems appear, programmers or architects need to tune the system. A great deal of practical experience shows that, although there are many tuning approaches, SQL tuning is still a very important part of the work. This article summarizes, with examples, some SQL optimization strategies that may come up in practice;
It is fair to say that most large systems read far more than they write, which means the SQL used for querying is executed at very high frequency;
Preparation: add 100,000 rows of test data to a table
Use the following stored procedure to generate a batch of data for a single table; just replace the table with your own
create procedure addMyData()
begin
    declare num int;
    set num = 1;
    while num <= 100000 do
        insert into XXX_table values(
            replace(uuid(),'-',''), concat('测试',num), concat('cs',num), '123456'
        );
        set num = num + 1;
    end while;
end;
Then call the stored procedure
call addMyData();
This article prepares three tables for testing: a student table, a class table, and an account table, containing 500,000, 10,000, and 100,000 rows of data respectively;
1. Paging query optimization
Paging queries come up often in development. When the page offset is very large, the query can become very time-consuming. For example, querying the student table with the following SQL takes about 0.2 seconds;
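The statement itself appears only as a screenshot in the original article; based on the offset referenced below, it is presumably of this form:
select * from student limit 400000,10;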
Practical experience tells us that the further back the page is, the slower the paging query becomes. The reason is that for a query such as limit 400000,10, MySQL has to read the first 400010 records, return only records 400001 to 400010, and discard the other 400000; the cost of reading and sorting all of those rows is very high
1) Build a covering index and use a subquery. When paging, performance can be improved by first locating the target ids through a covering index in a subquery and then joining back to fetch the full rows;
SELECT * FROM student t1,(SELECT id FROM student ORDER BY id LIMIT 400000,10) t2 WHERE t1.id =t2.id;
Running the SQL above, you can see a noticeable improvement in response time;
2) For a table with an auto-increment primary key, the limit query can be converted into a query that starts from a known position
select * from student where id > 400000 limit 10;
Running the SQL above, you can again see a noticeable improvement in response time;
2. Join query optimization
Join queries are everywhere in real business development. The core idea of optimizing them is to index the join columns; that is the key. Beyond that, each scenario needs its own analysis, because the plan the MySQL engine chooses when optimizing also plays a role;
2.1 Left join or right join
Below is a query that uses left join; you can expect its result set to be very large
select t.* from student t left join class cs on t.classId = cs.id;
To check the execution efficiency of this SQL, analyze it with explain. You can see that the first table, student (the left side of the left join), does a full table scan, while the class table uses its primary key index; even though the result set is large, the index is still used;
For queries like this, the ideas are as follows:
- Try to keep the queried columns within the primary key index or a covering index;
- Use paging whenever possible when querying;
Additional notes on the explain output for left (and right) joins
- For a left join, the table on the left is generally the driving table and the table on the right is the driven table;
- Whenever possible, let the table with the smaller data set be the driving table, to reduce the number of MySQL's internal loop iterations;
- When two tables are joined, the first row of the explain output is generally the driving table;
2.2 Create indexes on the join columns
Look at the following SQL; the join column is not the table's primary key but an ordinary column;
explain select u.* from tenant t left join `user` u on u.account = t.tenant_name where t.removed is null and u.removed is null;
The explain analysis shows that the left table does a full table scan, so consider creating indexes on tenant.tenant_name and user.account;
create index idx_name on tenant(tenant_name);
create index idx_account on `user`(account);
Running explain again gives the following result
You can see that type in the second row has become ref and the number of examined rows has dropped noticeably. This follows from how a left join works: the LEFT JOIN condition determines how rows are looked up in the right table, and every row of the left table is returned regardless, so the right table is the key point and its join column must be indexed.
2.3 Create indexes on the columns of an inner join
We know that a left join always includes all rows of the left table and a right join all rows of the right table, while an inner join (inner join or join) returns the intersection, i.e. the rows both tables share. In that case, the driving table is chosen automatically by the MySQL optimizer;
Building on the example above, first drop the indexes on both tables
ALTER TABLE `user` DROP INDEX idx_account;
ALTER TABLE `tenant` DROP INDEX idx_name;
Then analyze the query with explain
Next, add an index on the account column of the user table and run explain again; we find that the user table is now treated as the driven table;
Now, if we instead add an index on tenant.tenant_name and drop the index on user.account, the resulting plan uses neither index. This again shows that with an inner join, the query optimizer chooses the driving table based on its own judgment;
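The original shows these steps only as screenshots; a minimal sketch of the sequence, assuming the inner join mirrors the earlier left join query, would be:
-- step 1: index only user.account, then check the plan
create index idx_account on `user`(account);
explain select u.* from tenant t join `user` u on u.account = t.tenant_name where t.removed is null and u.removed is null;
-- step 2: swap the indexes and check the plan again
ALTER TABLE `user` DROP INDEX idx_account;
create index idx_name on tenant(tenant_name);
explain select u.* from tenant t join `user` u on u.account = t.tenant_name where t.removed is null and u.removed is null;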
3. Subquery optimization
Subqueries are also written very frequently in everyday business SQL. It is not that subqueries must be avoided, but once the data volume grows beyond a certain range, their performance drops noticeably; I have experienced this first-hand in my daily work;
For example, the following SQL takes a very long time to run because the student table is large; it takes close to 3 seconds;
select st.* from student st where st.classId in ( select id from class where id > 100 );
Explain shows that although the inner query id > 100 uses the primary key index, its result set is too large; when it is fed into the outer query as the in condition, the query optimizer still chooses a full table scan;
For the situation above, consider the following optimization
select st.id from student st join class cl on st.classId = cl.id where cl.id > 100;
Why subqueries perform poorly
- To run a subquery, MySQL has to build a temporary table for the result of the inner query; the outer query then reads from that temporary table, and the temporary table is dropped when the query finishes. This consumes a lot of CPU and IO resources and produces many slow queries;
- The temporary table that holds the subquery result, whether it lives in memory or on disk, has no indexes, so query performance suffers;
- The larger the result set returned by the subquery, the bigger its impact on query performance;
When querying in MySQL, you can replace subqueries with join (JOIN) queries. A join does not need a temporary table and is faster than a subquery; if indexes are used, performance is even better. Also try to avoid NOT IN and NOT EXISTS; replace them with LEFT JOIN xxx ON xx WHERE xx IS NULL;
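As an illustration of that last point, using the student and class tables from this article (a hypothetical query, not one from the original):
-- subquery form; note that NOT IN behaves unexpectedly if class.id can contain NULL
select st.* from student st where st.classId not in (select id from class);
-- join form: keep only the students whose classId has no match in class
select st.* from student st left join class cl on st.classId = cl.id where cl.id is null;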
A real case
The SQL below used a subquery before optimization. While analyzing a production performance problem, we found that a single tenant_id had more than 350,000 rows, which pushed the query behind one of the list pages to around 5 seconds;
After finding the root cause, we applied the optimization idea described above; the optimized SQL looked roughly as follows.
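The production SQL is not reproduced in the text; purely to illustrate the pattern (order_record is a hypothetical table name and 'xxx' a placeholder value, not taken from the original case), the rewrite turns the IN subquery into a join:
-- before (illustrative): filter through an IN subquery on tenant
select o.* from order_record o where o.tenant_id in (select id from tenant where tenant_name = 'xxx');
-- after (illustrative): join the tenant table directly so its index can be used
select o.* from order_record o join tenant t on o.tenant_id = t.id where t.tenant_name = 'xxx';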
4. Sorting (order by) optimization
In MySQL, there are two main ways of sorting
- Using filesort: rows that satisfy the condition are read via an index or a full table scan and then sorted in the sort buffer; any sort whose ordered result is not returned directly from an index is called a FileSort;
- Using index: ordered data is returned directly by scanning an ordered index; no extra sorting is needed, so this is highly efficient;
Of these two, Using index performs well and Using filesort performs poorly, so when optimizing a sort, aim for Using index whenever possible
4.1 Sorting by the age column
Because the age column has no index, explain shows filesort when the results are ordered by age, and sort performance is poor;
After adding an index on age, order by uses the index;
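The statements behind these two screenshots are not included in the text; a minimal sketch, assuming an index named idx_age and a covering column list, would be:
explain select id, age from student order by age limit 10;   -- shows Using filesort before the index exists
create index idx_age on student(age);
explain select id, age from student order by age limit 10;   -- should now show Using index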
4.2 Sorting by multiple columns
In real business, more than one column is often involved in sorting; in that case you can create a composite index on the sorted columns;
For example, sort by stuno and age
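The sort statement is again shown only as a screenshot in the original; a representative form would be:
explain select id, stuno, age from student order by stuno, age limit 10;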
Add a composite index on stuno and age
create index idx_stuno_age on `student`(stuno,age);
Analyzing again gives the following result; the sort now uses the index
Notes on sorting by multiple columns
1) The sort must follow the leftmost prefix rule, otherwise filesort appears;
The composite index created above is ordered as stuno then age, i.e. stuno comes first and age second. What happens if the query sorts in the reverse column order? Analysis shows that filesort is used;
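A sketch of the reversed ordering described above:
explain select id, stuno, age from student order by age, stuno limit 10;  -- the leading column of idx_stuno_age is not first, so filesort appears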
2) Keep the sort directions consistent
With the column order unchanged, order by can use the index by default when both columns are sorted ascending or both descending. What happens if one column is ascending and the other descending? Analysis shows that filesort is used in that case as well;
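A sketch of the mixed-direction case:
explain select id, stuno, age from student order by stuno asc, age desc limit 10;  -- with the all-ascending idx_stuno_age this falls back to filesort; MySQL 8.0 can avoid it only with a descending index key part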
5. Group by optimization
The optimization strategy for group by is very similar to that for order by; the main points are as follows:
- group by can use an index even when there is no filter condition that uses the index;
- group by sorts first and then groups, following the leftmost prefix rule of the index it uses;
- When indexed columns cannot be used, increase the max_length_for_sort_data and sort_buffer_size settings;
- where is more efficient than having; any condition that can be written in where should not be written in having;
- Reduce the use of order by: do not sort if you can avoid it, or move the sorting into the application. Order by, group by, and distinct all consume a lot of CPU, and the database's CPU resources are extremely precious;
- If the SQL contains order by, group by, or distinct, try to keep the result set filtered by the where condition within 1000 rows, otherwise the SQL will be very slow;
5.1 Add an index to the group by column
If the column has no index, the explain result is as follows; this is clearly very inefficient
After adding an index on stuno
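The statements behind these screenshots are not in the text; a plausible reconstruction (the grouping column and index name are assumed, and this assumes the composite index from section 4 is not present) is:
explain select stuno, count(*) from student group by stuno;  -- without an index: full scan plus a temporary table (and, before MySQL 8.0, an implicit sort)
create index idx_stuno on student(stuno);
explain select stuno, count(*) from student group by stuno;  -- with the index, the grouping can walk the index in order instead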
5.2 Add a composite index on stuno and age
If the leftmost prefix is not followed, group by performance is relatively poor; when the leftmost prefix is followed, the result is as follows
6. Count optimization
count() is an aggregate function. It evaluates the returned result set row by row: if the argument of the count function is not NULL, the running total is incremented by 1, otherwise it is not, and the final total is returned;
Usage: count(*), count(primary key), count(field), count(number)
The several ways of writing count are described in detail below
Usage notes
- count(primary key): InnoDB traverses the whole table, takes the primary key id of each row, and returns it to the service layer; since a primary key cannot be null, the service layer simply accumulates the count row by row;
- count(*): InnoDB does not fetch any column values; this case is specially optimized, and the service layer accumulates the count row by row without taking any values;
- count(field): without a not null constraint, the InnoDB engine traverses the whole table, takes the field value of each row, and returns it to the service layer, which checks whether it is null and counts only the non-null values; with a not null constraint, the InnoDB engine traverses the whole table, takes the field value of each row, returns it to the service layer, and the count is accumulated row by row directly;
- count(number): the InnoDB engine traverses the whole table without fetching values; for each row returned, the service layer puts the number "1" into it and accumulates the count row by row;
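A quick way to compare these forms on the test table (just a sketch; actual timings depend on your data and hardware):
select count(*) from student;
select count(1) from student;
select count(id) from student;
select count(age) from student;  -- counts only the rows where age is not null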
Summary of rules of thumb
In order of efficiency: count(field) < count(primary key id) < count(1) ≈ count(*), so try to use count(*).