
MySQL performance inspection and optimization methods

WBOY (Original)
2016-07-29 08:40:12

MySQL performance problems usually come down to four causes:

1. No index has been built;
2. The SQL is written in an overly complicated way;
3. The configuration is wrong;
4. The machine cannot bear the load.

1. No index has been built
If you see that mysqld is consuming a lot of CPU, you can investigate with the mysql client tool.
Under Linux, execute:
/usr/local/mysql/bin/mysql -hlocalhost -uroot -p
Enter the password. If there is no password, omit the -p parameter and you will enter the client directly.
Look at the current running status:
show full processlist;
You can run it several times.
This command shows the SQL statements currently being executed, along with the database name, execution state, client IP, account used, running time and other information.
On my caching back end, most of the time no SQL statement is displayed, which I consider normal. If you see many SQL statements listed, that MySQL server almost certainly has a performance problem. In that case you can analyze further:
1. Is any SQL statement stuck?
This is a common situation. If the database uses MyISAM, a writing thread may lock the data table; until that statement finishes, other statements cannot run.
Check the time column in the processlist for statements that have been running a long time, and pay attention to them.
2. Are many copies of the same SQL statement being executed?
If so, that statement's execution efficiency may simply be low. Pay attention to these statements as well.
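As a quick check for the MyISAM lock situation described above, MySQL can also list tables that are currently open and in use (a sketch of standard statements, not from the original text):

```sql
-- Tables currently open with at least one thread using them;
-- a locked table here often explains a pile-up of waiting statements
SHOW OPEN TABLES WHERE In_use > 0;

-- The Time column of the processlist output is the number of seconds
-- each statement has been running - watch for large values
SHOW FULL PROCESSLIST;
```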
Then collect all the statements you suspect and analyze them with desc (explain).
First look at a normal desc output:
mysql> desc select * from imgs where imgid=1651768337;
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
| id | select_type | table | type  | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | imgs  | const | PRIMARY       | PRIMARY | 8       | const |    1 |       |
+----+-------------+-------+-------+---------------+---------+---------+-------+------+-------+
1 row in set (0.00 sec)
Pay attention to the key, rows and Extra columns. This result shows that the SQL will use the PRIMARY key index for the query, the result set has 1 row, and Extra is empty, proving that no sorting or other operation is involved. From this you can infer that MySQL will look up the record with imgid=1651768337 in the index, then fetch all fields from the actual table — a very simple operation.
key indicates the index the current SQL will use. When executing a simple statement, MySQL can use only one index per table — pay attention to this restriction. rows is the size of the result set returned, that is, all the results matched by a search using this index. Extra generally shows how the query filters and sorts.
If no key is used, or rows is very large and filesort appears, efficiency generally suffers. For example:
mysql> desc select * from imgs where userid="7mini" order by clicks desc limit 10;
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows  | Extra                       |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
|  1 | SIMPLE      | imgs  | ALL  | NULL          | NULL | NULL    | NULL | 12506 | Using where; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+-------+-----------------------------+
1 row in set (0.00 sec)
This SQL's result set has 12506 rows and uses filesort, so it executes very inefficiently: MySQL scans the whole table, finds the records matching userid="7mini" one by one, then sorts those records by clicks. You can imagine the efficiency. If it actually runs fairly fast, that is only because the server has enough memory to read all 12506 fairly short records into memory; once concurrency rises or the table grows, the efficiency problem becomes serious.
At this point I added an index on userid:
create index userid on imgs (userid);
Then check again:
mysql> desc select * from imgs where userid="7mini" order by clicks desc limit 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid        | userid | 51      | const |    8 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
You can see that MySQL now searches using the userid index, and that search yields a result set of 8 rows. filesort is still used to sort them one by one, but since the result set has only 8 rows, the efficiency problem is alleviated.
However, querying with a different userid gives a different picture:
mysql> desc select * from imgs where userid="admin" order by clicks desc limit 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid        | userid | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
The plan is basically the same as for userid="7mini", but after the userid index search the result set is 2944 rows, and all 2944 records have to be loaded into memory for filesort — much less efficient than the 7mini case. There are two ways to solve this. The first is to add another index plus a filtering condition: since I only need the 10 records with the most clicks, a large amount of data never needs to enter the sort at all. For example, records with fewer than 10 clicks may account for a large share of the table.
So I add an index on clicks, then add a where condition and query again:
create index clicks on imgs(clicks);
mysql> desc select * from imgs where userid="admin" order by clicks desc limit 10;
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
| id | select_type | table | type | possible_keys | key    | key_len | ref   | rows | Extra                       |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
|  1 | SIMPLE      | imgs  | ref  | userid,clicks | userid | 51      | const | 2944 | Using where; Using filesort |
+----+-------------+-------+------+---------------+--------+---------+-------+------+-----------------------------+
1 row in set (0.00 sec)
Now possible_keys has become userid,clicks. possible_keys lists all the indexes that could match; MySQL judges and picks one of them to execute the statement. It is worth noting that the index MySQL picks is not necessarily optimal. This time it queried using the userid index, which was not what I wanted, so the result did not change. So I change the SQL, adding use index to force MySQL to use the clicks index:
mysql> desc select * from imgs use index (clicks) where userid='admin' and clicks>10 order by clicks desc limit 10;
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | imgs  | range | clicks        | clicks | 4       | NULL | 5455 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
1 row in set (0.00 sec)
Now MySQL queries using the clicks index, but the result set is even larger than with userid! It seems a further restriction is needed:
mysql> desc select * from imgs use index (clicks) where userid='admin' and clicks>1000 order by clicks desc limit 10;
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
| id | select_type | table | type  | possible_keys | key    | key_len | ref  | rows | Extra       |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
|  1 | SIMPLE      | imgs  | range | clicks        | clicks | 4       | NULL |  312 | Using where |
+----+-------------+-------+-------+---------------+--------+---------+------+------+-------------+
1 row in set (0.00 sec)
With the threshold raised to 1000, the result set shrinks to 312 rows, and sorting efficiency should be acceptable.
However, this index-switching optimization requires choosing a sampling threshold — the number 1000 in this example. A separate threshold would have to be found for every value of userid, which is very hard for a program to do. And if 1000 were used across the board, then in the userid='7mini' example the query would return 2 rows instead of 8, confusing the user.
Of course, there is another way: add a composite index:
create index userid_clicks on imgs (userid, clicks);
mysql> desc select * from imgs where userid="admin" order by clicks desc limit 10;
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
| id | select_type | table | type | possible_keys        | key           | key_len | ref   | rows | Extra       |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
|  1 | SIMPLE      | imgs  | ref  | userid,userid_clicks | userid_clicks | 51      | const | 2944 | Using where |
+----+-------------+-------+------+----------------------+---------------+---------+-------+------+-------------+
1 row in set (0.00 sec)
You can see that the result set is still 2944 rows, but the filesort in Extra is gone. MySQL now queries with the userid_clicks index, which not only finds all records with userid="admin" quickly, but also returns the results already sorted by clicks! So the result set no longer needs to be read into memory and sorted row by row, and efficiency is much higher.
But multi-field indexes have a catch: if you run many kinds of SQL queries, you have to plan the indexes carefully. Otherwise you end up building a lot of indexes, which not only slows data insert and update, but also makes the data table more easily damaged.
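As a side note (these queries are illustrative, not from the original text), a composite index such as userid_clicks only serves queries that constrain its leftmost column(s) — the leftmost-prefix rule — which is one reason careless multi-field indexing leads to index sprawl:

```sql
-- The (userid, clicks) index follows the leftmost-prefix rule:
SELECT * FROM imgs WHERE userid = 'admin';           -- can use userid_clicks
SELECT * FROM imgs WHERE userid = 'admin'
  ORDER BY clicks DESC;                              -- index also supplies the sort order
SELECT * FROM imgs WHERE clicks > 1000;              -- cannot use userid_clicks
```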
The above is the index optimization method; since the causes can be complicated, this part took longer to write. Generally, after index optimization, MySQL's efficiency improves by several levels, so there is no need to consider adding machines to solve the problem.
However, MySQL — perhaps every database — may not be able to solve the LIMIT problem. In MySQL, LIMIT 0,10 is no problem as long as the index is appropriate, but LIMIT 100000,10 will be very slow, because MySQL scans through the sorted results, advances to position 100000, then takes 10 records and returns them. Reaching position 100000 means scanning past 100000 records, which is time-consuming. I don't know of a good algorithm for the scanning engine that optimizes this, and after much thought I cannot come up with one. For LIMIT, from now until the fairly distant future, I think it can only be optimized through the business, the program, and data-table planning. None of the optimizations I have thought of is a cure-all; we can discuss it later.
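One widely used workaround — a standard "keyset pagination" sketch, not from the original text, assuming the imgs table above with an index covering (clicks, imgid) — is to remember where the previous page ended and seek past it instead of using a large offset:

```sql
-- Slow deep pagination: MySQL must walk past 100000 sorted rows first
SELECT * FROM imgs ORDER BY clicks DESC, imgid DESC LIMIT 100000, 10;

-- Keyset pagination: the client remembers the (clicks, imgid) of the
-- last row of the previous page and seeks directly to the next rows,
-- so no offset scan is needed (how well the optimizer uses the index
-- for a row comparison varies by MySQL version)
SELECT * FROM imgs
WHERE (clicks, imgid) < (@last_clicks, @last_imgid)
ORDER BY clicks DESC, imgid DESC
LIMIT 10;
```

The trade-off is that pages can only be stepped through sequentially; jumping straight to page N is no longer possible.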
2. The SQL is written in an overly complicated way
When SQL uses special constructs such as GROUP BY or multi-table joins, you can likewise use desc to analyze how MySQL executes the query. I don't use much complex SQL here, so I don't analyze it often and have no good suggestions for the time being.
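As a sketch (the users table and its columns here are hypothetical, not from the original), desc works on joins the same way — it prints one row of output per joined table, each showing which index that table is accessed by:

```sql
-- One EXPLAIN row per joined table; check that neither shows type=ALL,
-- which would mean a full table scan inside the join
desc SELECT i.imgid, u.username
FROM imgs i
JOIN users u ON u.userid = i.userid
WHERE i.clicks > 1000
ORDER BY i.clicks DESC
LIMIT 10;
```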
3. Configuration error
The main configuration parameters are key_buffer and sort_buffer_size/myisam_sort_buffer_size. They mean:
key_buffer=128M: all table indexes are kept in this memory area as far as possible. If your indexes are relatively large, set it somewhat larger; I usually use 128M. A good suggestion is to move rarely-used, relatively large tables elsewhere — this noticeably reduces MySQL's memory usage.
sort_buffer_size=1M: the memory a single thread uses for sorting; the query result set is placed in this memory. If it is too small, MySQL sorts in several passes, so making it a little larger helps — but what matters most is optimizing the indexes and queries so they do not produce overly large result sets.
Some other configurations:
thread_c
interactive_timeout=30
wait_timeout=30: these two can be set to 10-30 seconds so that memory resources are released as soon as possible. Note that they do not disconnect connections that are in use; they only disconnect connections that have been idle for a long time.
query_cache: do not use this feature. Many people now treat anything labeled "cache" as treasure, which is not realistic. Every time a table's data changes, MySQL's query_cache clears every cached result involving that table. If updates are frequent, query_cache not only fails to help but badly hurts efficiency. The parameter only suits databases that are mostly read-only. If you must use it, set query_cache_type=2 and mark specific SQL for caching with SQL_CACHE.
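A minimal sketch of that selective mode (the query is illustrative, reusing the imgs table from earlier):

```sql
-- With query_cache_type=2 ("DEMAND") in my.cnf, only statements that
-- explicitly ask for it are cached:
SELECT SQL_CACHE * FROM imgs WHERE imgid = 1651768337;
```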
max_connections: the default is 100, which is usually sufficient, but it generally needs to be set higher; 400-600 is enough. Beyond 600 there are usually efficiency problems and you have to find other countermeasures — simply raising this number is not the answer.
Other configurations can be left at their defaults; personally I don't think they matter much. Two reminders: 1. although configuration is important, in most cases it is not the culprit of efficiency problems; 2. MySQL is a database, and for a database the first consideration should not be efficiency but stability and data accuracy.
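Pulling the values discussed above into one place, a my.cnf fragment might look like this (these are the article's suggested starting points, not universal recommendations):

```
# my.cnf -- [mysqld] section (illustrative starting points)
[mysqld]
key_buffer          = 128M   # MyISAM index cache
sort_buffer_size    = 1M     # per-thread sort memory
interactive_timeout = 30     # drop long-idle interactive connections
wait_timeout        = 30     # drop long-idle connections
query_cache_type    = 2      # cache only statements marked SQL_CACHE
max_connections     = 400
```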
4. The machine really cannot bear the load
If the server still cannot cope after the adjustments above, the only option is architecture-level optimization.
1. MySQL replication
Use MySQL's replication feature to synchronize data to several slave databases: write to the master, read from the slaves.
Personally I am not keen on MySQL replication, because it adds complexity to the program and often causes data errors. In a high-load service, a crash can be fixed with a quick restart, but recovering from wrong data is much more troublesome.
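For reference, a minimal master/slave setup sketch (hostnames, credentials and log coordinates are placeholders; the statements follow the classic pre-GTID style of MySQL 5.x, this article's era):

```sql
-- master my.cnf:  server-id=1  log-bin=mysql-bin
-- slave  my.cnf:  server-id=2

-- On the slave, point at the master and start replicating
-- (MASTER_LOG_FILE/POS come from SHOW MASTER STATUS on the master):
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='***',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;

-- Verify both Slave_IO_Running and Slave_SQL_Running say Yes:
SHOW SLAVE STATUS\G
```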
2. Add a cache
Adding a cache layer solves the concurrency problem, and the effect is obvious. For a real-time system, consider refreshing the cache on every update so it stays current.
Putting Squid at the front of the architecture is recommended; in applications with a high hit rate it basically solves the problem.
Caching in the program's logic layer adds a lot of complexity and makes problems harder to solve; adjusting at that level is not recommended.
3. Adjust the program architecture to support multiple databases
If the problem is still serious after the web tier has a cache, the only way left is to adjust the program architecture: split the application and serve it from several machines at once.
Splitting has a slight impact on the business. If some business functions must see all the data, you can use an architecture of one complete library plus n split libraries: every modification goes to both the complete library and the relevant split library, or the complete library is rebuilt periodically.
Of course, there is also the crudest method: make full copies of the database, have the program execute every statement against all of the copies, and round-robin reads across them. I think this is safer than MySQL replication.
4. Use mysql proxy
mysql proxy can spread a database's tables across several servers through a proxy, but it cannot solve the hot-table problem. If the hot content is scattered across multiple tables, this method solves the problem fairly easily.
I have neither used this software nor studied it closely, but I have a small doubt about it: how does it implement joins across tables on different servers, and if it can, how efficient is that?
5. Use memcachedb
Switching the database to memcachedb, which is said to support MySQL, is an idea worth trying. Judging by memcachedb's implementation approach and level, it should have no impact on the data and cause no trouble for users.
Since my databases give me few problems, I have not tried it yet. But as long as it supports most of MySQL's main syntax and is stable, its usability is beyond doubt.

The above introduces methods for inspecting and optimizing MySQL performance. I hope it is helpful to friends interested in PHP tutorials.
