This article is reproduced from http://www.libazh.com/?p=18 (a blog guiding newbies on the road).
Some time ago I set up MySQL to log SQL statements that took longer than 1 second to run. Just now the setup came to mind again and I couldn't recall the names of several parameters, so I am reorganizing them here as notes for myself.
When troubleshooting and hunting for performance bottlenecks, the easiest problems to find and fix are MySQL's slow queries and queries that don't use indexes.
OK, let's start finding the SQL statements in MySQL that are not behaving "nicely" when executed.
Method 1: this is the method I use, haha. I prefer this kind of immediacy, since it takes effect on the running server.
MySQL 5.0 and above can record SQL statements that execute slowly. First check the current threshold. Note: long_query_time defines how many seconds a statement must take before it counts as a "slow query".

mysql> show variables like 'long%';
+-----------------+-----------+
| Variable_name   | Value     |
+-----------------+-----------+
| long_query_time | 10.000000 |
+-----------------+-----------+
1 row in set (0.00 sec)
mysql> set long_query_time=1;
Query OK, 0 rows affected (0.00 sec)

Note: I set it to 1, meaning any statement that takes more than 1 second to execute counts as a slow query.
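A caveat from me, not in the original post: a plain SET only changes long_query_time for the current session. To apply the new threshold to newly opened connections as well, set the global value too (the variable has both session and global scope):

mysql> set global long_query_time=1;
Query OK, 0 rows affected (0.00 sec)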
Next, check the slow-log settings themselves:

mysql> show variables like 'slow%';
+---------------------+---------------+
| Variable_name       | Value         |
+---------------------+---------------+
| slow_launch_time    | 2             |
| slow_query_log      | ON            |
| slow_query_log_file | /tmp/slow.log |
+---------------------+---------------+
3 rows in set (0.00 sec)

Note: slow_query_log controls whether logging is turned on; slow_query_log_file sets where the log is written.
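Two side notes from me rather than the original post: slow_launch_time is unrelated to slow queries despite matching the pattern (it is the threshold, in seconds, above which a thread creation is counted as slow). And on MySQL 5.1 and later, the log_output variable controls whether the slow log goes to the file above, to the mysql.slow_log table, or is discarded; on a typical server it would read:

mysql> show variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+
1 row in set (0.00 sec)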
If logging is not on yet, enable it. Note: once the slow_query_log variable is set to ON, MySQL starts logging immediately.

mysql> set global slow_query_log='ON';
Query OK, 0 rows affected (0.00 sec)
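To check that the log is actually catching anything, you can force an artificially slow query; SLEEP() simply blocks for the given number of seconds, so with long_query_time=1 this statement should show up in /tmp/slow.log:

mysql> select sleep(2);
+----------+
| sleep(2) |
+----------+
|        0 |
+----------+
1 row in set (2.00 sec)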
The initial values of the above MySQL global variables can also be set in /etc/my.cnf, so that they survive a server restart:

long_query_time=1
slow_query_log_file=/tmp/slow.log
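One thing the snippet above leaves out: these lines belong in the [mysqld] section, and the log itself still has to be enabled at startup. A minimal sketch; the slow_query_log=1 line is my addition (on MySQL 5.1+ this option enables the log at startup, while old 5.0 builds used the log-slow-queries option instead):

[mysqld]
slow_query_log=1
slow_query_log_file=/tmp/slow.log
long_query_time=1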
======================================================
Method 2: the mysqldumpslow command.

/path/mysqldumpslow -s c -t 10 /tmp/slow-log

This outputs the 10 SQL statements that appear most often in the log. The options:
-s sets the sort order: c, t, l, r sort by record count, query time, lock time, and rows returned respectively; at, al, ar sort by the corresponding averages, and adding -r reverses the order;
-t means "top n", i.e. how many entries to return;
-g takes a regular-expression pattern to match against, case-insensitively.
For example:
/path/mysqldumpslow -s r -t 10 /tmp/slow-log
returns the 10 statements that returned the most rows.
/path/mysqldumpslow -s t -t 10 -g "left join" /tmp/slow-log
returns the top 10 statements containing a left join, sorted by time.
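For reference, mysqldumpslow groups similar statements together, abstracting number literals to N and string literals to 'S' (pass -a to disable this). A roughly illustrative entry, not taken from a real log:

Count: 3  Time=2.13s (6s)  Lock=0.00s (0s)  Rows=1.0 (3), root[root]@localhost
  select sleep(N)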
Done!