How to optimize your database
Why the database runs too slowly
First point: The hardware is too old
By hardware I mainly mean three things: CPU, memory, and disk. Other factors such as network cards and data-center networking also matter, but for reasons of length I will not cover them one by one here; there will be chances to discuss them later.
First of all, let’s take a look at MySQL’s CPU utilization characteristics:
MySQL 5.1 can use at most 4 cores; 5.5 can use about 24 cores; 5.6 can use 48 cores or more.
For MySQL 5.6, the official figure is 48 cores; in my own testing it ran well up to 64 cores.
Typical production servers today have 32 cores or more.
So I recommend using MySQL 5.5 or 5.6 whenever possible, unless your company is stuck on very old servers with only 1 or 4 cores.
Before 5.1 (and likewise in 5.0), the server was hard-coded internally around the old innobase storage engine code, so the database made poor use of the hardware; after the evolution to the InnoDB engine, this improved considerably.
Each connection is a thread (there is no thread pool by default), and each query can use only one core.
In other words, a single query in MySQL can never run on more than one CPU at a time.
Oracle offers parallel SQL and parallel query; MySQL has no such capability.
No execution plan cache (no SQL execution plan precompilation)
Second, MySQL does no SQL precompilation, so there is nothing like the library cache found in Oracle's memory architecture. MySQL only hard-parses; there is no soft parse at all.
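Even without a shared plan cache, an application can at least avoid re-parsing the same statement within a session by using MySQL's server-side prepared statements. A minimal sketch (the table and column names are made up for illustration):

```sql
-- Parsed once per session; subsequent EXECUTEs skip the parse step.
-- (users / id / name are hypothetical names.)
PREPARE get_user FROM 'SELECT name FROM users WHERE id = ?';
SET @uid = 42;
EXECUTE get_user USING @uid;
EXECUTE get_user USING @uid;   -- reuses the already-parsed statement
DEALLOCATE PREPARE get_user;
```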
MySQL will experience performance degradation as the number of connections increases
This is also a flaw of MySQL, but with the evolution of MySQL versions, many solutions have emerged.
For example, the officially released Thread Pool (TP for short) addresses the problem of too many concurrent connections, but it is an add-on component, and the official TP requires a paid license.
In addition, in China, Lou Fangxin developed the OneSQL middleware to solve similar problems.
There is a result cache, but it is of little use
MySQL also has a result cache similar to Oracle's, called the Query Cache, but it is a fairly useless feature and is rarely enabled.
Most real production environments are OLTP systems with frequent updates and modifications. Enabling the Query Cache where data changes frequently seriously degrades MySQL's performance, so it is generally used very sparingly, if at all.
Nowadays MySQL is almost always used with the InnoDB storage engine; the older MyISAM engine is rarely used. (If you don't know what a storage engine is, look it up.)
There is no need to enable the Query Cache with the InnoDB engine: InnoDB is a transactional storage engine chosen precisely for high transaction throughput, which implies frequent data updates and modifications.
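If you follow this advice, the Query Cache can be switched off explicitly in the server configuration. A minimal my.cnf sketch (these values reflect the common recommendation for write-heavy InnoDB workloads, not a universal default):

```ini
[mysqld]
# Disable the Query Cache entirely on write-heavy InnoDB systems
query_cache_type = 0
query_cache_size = 0
```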
Let’s look at the memory utilization characteristics of MySQL again
A server running a 64-bit operating system can theoretically address ((2^64-1)/1024/1024/1024) GB of memory; in practice, memory is limited by the hardware installed.
In a high-concurrency environment, memory caching is essentially what absorbs reads and reduces the IO pressure on disk.
Memory is usually planned at 15%-20% of the actual data size; if the data is especially hot, a larger ratio must be considered for caching.
This 15%-20% of the data is usually called hot data. (This is a common rule of thumb.)
For example, if you estimate your total MySQL data volume at about 500 GB, the memory given to MySQL might be 75 GB (500 × 0.15), so you would need a server with roughly 128 GB of RAM.
In addition, some businesses have unusually hot data in large volumes (possibly far exceeding the 15%-20% range), such as QQ Farm.
I am sure everyone has played the crop-stealing games: QQ Farm, Happy Farm and the like. (The 12306 ticket-booking site is a similar case.)
Businesses of this type draw a lot of attention in our industry. Their hallmark is that when the data is hot, essentially 100% of it is hot: when QQ Farm was popular, players logged in every day, came back every little while to steal some crops, and many even got up in the middle of the night to do so.
So for this type of business, the MySQL memory configuration must be increased; 15%-20% is not enough.
Summary: plan for 15%-20% hot data in ordinary businesses such as user centers and orders; other, special businesses must be analyzed case by case.
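In MySQL, the hot-data cache is mainly the InnoDB buffer pool. A my.cnf sketch of the 15%-20% rule for the 500 GB example above, on a 128 GB server (sizes are illustrative; tune them to your own workload):

```ini
[mysqld]
# ~15% of a 500 GB dataset cached in memory
innodb_buffer_pool_size = 75G
```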
Query response time can guide capacity planning
When we plan and design databases for this kind of large-scale online architecture,
the response time of SQL queries is a very important indicator.
A system of this scale must carry millions or even tens of millions of users doing business online at the same time, so query response time must be strictly controlled within a target limit.
For example, for our core database I require the average query response time to stay below 30 ms; if it exceeds 30 ms, we assume the database may have reached its load limit and needs to be scaled out.
In addition, query response time needs to be monitored as a long-term metric.
That is for the core database. For less important auxiliary databases, such as log stores or other systems with modest performance requirements, the query response time target can be relaxed to within 1 or 2 seconds.
Determine the threshold of this Query response time according to the importance of the business.
This is a very important guiding principle. Plan your performance capacity based on Query response time.
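On MySQL 5.6 and later, one way to watch average query response time is the performance_schema statement digest table (this assumes performance_schema is enabled; its timer columns are in picoseconds):

```sql
-- Top 10 statement patterns by average latency, converted to milliseconds
SELECT DIGEST_TEXT,
       COUNT_STAR                  AS executions,
       AVG_TIMER_WAIT / 1000000000 AS avg_ms
  FROM performance_schema.events_statements_summary_by_digest
 ORDER BY AVG_TIMER_WAIT DESC
 LIMIT 10;
```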
There are two kinds of capacity: performance capacity and space capacity. Space capacity is simple: how much data, in gigabytes or terabytes, you can store.
Performance capacity is more important and determines whether it can handle your business pressure and load.
Remember: if the business you are supporting has millions of active users rather than hundreds, performance is king, and meeting the business's performance requirements is what matters most.
No matter how impressive your features are or how good your product is, if the performance cannot keep up, everything else is empty talk. A few hundred users can bring down your entire system and project within seconds, leaving the company stunned.
Hard-won users will then leave in droves, and the losses will be heavy.
Performance is the foundation. The entire architecture only makes sense if the performance can withstand it. If the performance is unsatisfactory, it will be useless to consider high availability later.
MySQL's disk utilization characteristics
Binlog, redo log, undo log sequential IO
MySQL produces several types of IO.
The binlog, redo log, and undo log are all written with sequential IO.
There is little need to put these logs on SSD: sequential writes are fast even on traditional mechanical disks, so using SSD for them is a waste of money, and SSDs have write-amplification and write-endurance concerns. Putting them on traditional SAS disks is enough; no SSD is needed.
SSDs are for data files, because most of the IO against data files is random, and SSDs excel at random IO. Mixing SSDs (for data) with traditional SAS disks (for logs) works well. Also, do not use SSDs for backup disks.
Datafile combines random IO and sequential IO
Sequential IO is always faster. In database design, what separates a great DBA or architect from the rest is the ability to shape a workload so it does as much sequential IO and as little random IO as possible. For example, when designing a friend-relationship feature, I want a single query to fetch the friend list with sequential IO. So how do we design it?
In MySQL's InnoDB, we can take advantage of a feature of InnoDB: clustered index tables. (Similar to Oracle's IOT).
Using this feature, the user's friend data can be gathered in one page or multiple adjacent pages as much as possible. When reading, a sequential read IO can be done, and the performance is greatly improved.
The friend-relationship table structure is as follows (assuming the table uses the InnoDB engine):
owner_id, friend_id
These two columns together form the primary key. InnoDB's primary key is the clustered index, so reading by these two columns is guaranteed to be sequential IO.
Database-design books have always insisted that every table must have an auto-increment primary key. But rules are rigid while practice is flexible: the friend-relationship table above uses two frequently-read business columns as a composite primary key instead of an auto-increment key, and gets better performance as a result.
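As a sketch, the friend-relationship table described above might look like this (all names are illustrative; the point is the composite primary key, which is InnoDB's clustered index):

```sql
CREATE TABLE friend_relation (
    owner_id   BIGINT UNSIGNED NOT NULL,  -- the user who owns the list
    friend_id  BIGINT UNSIGNED NOT NULL,  -- one of that user's friends
    PRIMARY KEY (owner_id, friend_id)     -- clustered: one user's friends sit together
) ENGINE = InnoDB;

-- One user's friend rows occupy adjacent pages, so this reads sequentially:
SELECT friend_id FROM friend_relation WHERE owner_id = 1001;
```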
So when you study, do not just memorize the rules in these books. Really understand the principles behind things, such as InnoDB internals, and then in real work you can reason from those principles and extrapolate.
InnoDB internals are a huge body of knowledge that takes time to learn. You can follow my official account, where articles on InnoDB will be released over time.
OLTP business involves more random IO
Memory caching can absorb much of it, reducing the random IO that reaches disk
OLAP business involves more sequential IO
Memory caching is of little use here
Before MySQL 5.6, the page size could not be changed; the default is 16 KB.
From MySQL 5.6 it can be changed via the innodb_page_size parameter, but only down to 8 KB or 4 KB, not up; only from MySQL 5.7 can it be raised to 32 KB or 64 KB.
For OLAP systems, larger pages will help improve performance, because OLAP systems have relatively large queries and scan a lot of data.
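For an OLAP-oriented instance on MySQL 5.7+, a larger page could be configured like this (a sketch only; innodb_page_size must be set before the instance's data files are first created and cannot be changed afterwards):

```ini
[mysqld]
# 32 KB pages for large scans; valid only when initializing a new instance
innodb_page_size = 32768
```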
Second point: Poor database design
For example, heavy use of database features: triggers, partitioning, lots of stored procedures and functions, and so on.
We often say small is beautiful, meaning simple is best. If you use every feature the database offers, its performance naturally suffers, and the odds of hitting bugs and low-level failures increase.
So understand this: a well-designed database project is small, beautiful, and simple. The database is only one part of the overall project; things like triggers and stored procedures can perfectly well be implemented in application code instead.
So, when we use MySQL, we just use its powerful features, such as tables, indexes, and transactions, rather than using all its functions.
Another point: before MySQL 5.6, subqueries should not be run on production master databases.
The performance of subqueries before MySQL5.6 was particularly poor. (Syntax is supported, but SQL performance is very poor).
If you are currently on Oracle and want to migrate to MySQL, it is recommended that you use MySQL 5.6: its subquery support and performance are greatly improved.
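On the older versions (5.1/5.5), a common workaround was to rewrite the subquery as a join by hand. A sketch with made-up table names:

```sql
-- A dependent IN subquery that often performed terribly before 5.6:
SELECT * FROM orders
 WHERE user_id IN (SELECT id FROM users WHERE city = 'Beijing');

-- Hand-rewritten as a join, the usual fix on 5.1/5.5
-- (equivalent here because users.id is the primary key):
SELECT o.*
  FROM orders o
  JOIN users  u ON u.id = o.user_id
 WHERE u.city = 'Beijing';
```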
Third point: The program code is badly written
I suspect every DBA has experienced this: in small and medium-sized companies, programmer skill levels vary widely.
Especially when many of the programmers are new to the industry (fresh graduates) and are handed very urgent requirements on top of that, you can imagine what programs developed in that environment look like.
Of course, it’s not our programmers’ fault, we can’t blame them.
The phenomenon above is mainly down to the domestic development environment; there is no way around it. Requirements are urgent (the product ships features every day), programmers are rushed (chronic overtime) and can only keep up with implementing business logic, with no time left to optimize their code.
Of course, this environment is an opportunity for us DBAs. When bad or overly complex SQL written by programmers slows the system down or even crashes it, the DBA steps in to optimize and rework that bad, slow SQL; the system returns to normal and grows ever more stable. That is satisfying work, and it earns the respect of colleagues and leadership.
At the same time, DBAs can train programmers and strengthen their ability to write good SQL quickly, so they spend less time and produce better-performing statements. That in turn lightens the DBA's own load.
I personally like running training sessions with programmers. First, everyone gains from the technical exchange; second, it builds relationships, which makes any future work negotiations easier. It works better than taking them out to dinner.
We mainly have the following solutions for poorly written programs:
Make applications use database connection pools; in large, high-concurrency applications, especially those built in Java, a connection pool is mandatory.
A connection pool caps the number of connections the application can open and avoids creating a fresh connection for every request. Creating a connection is expensive for MySQL, because each new connection means creating a new thread.
I also mentioned just now that MySQL will experience performance degradation as the number of connections increases.
Anyone who has written threaded code knows this: on an ordinary 4-core laptop, create 400 threads that each do some trivial arithmetic and then sleep, and watch whether your PC freezes. You will find the CPU nearly saturated. Dare to create 600 threads and the machine may soon need a reboot, because thread overhead alone fills the CPU.
Complex SQL statements
As I said just now, SQL written by programmers usually has plenty of problems; they are simply too busy to consider how the SQL will perform. In some cases, a single pieced-together SQL statement can bring down the entire system.
A simple example: an application opens 10 connections to the database (maximum connections = 10). All 10 connections run the same complex SQL at the same time, and that SQL takes at least 10 minutes to execute. For those 10 minutes, all 10 connections are occupied by this one statement, and every subsequent SQL request is blocked.
The application is effectively unavailable for those 10 minutes, and the pile-up can avalanche into a full system collapse.
Optimizing complex SQL is also a very important part of the DBA's job: use monitoring to find the complex, slow, and bad SQL, then give the programmers optimization suggestions (with before-and-after performance tests done by the DBA) and have them change the code, so the system truly runs smoothly, like a highway without traffic jams.
Some will ask: our company's programmers are simply untouchable; they will not change their SQL no matter what, and there is no reasoning with them. What then?
There is still a way: build a dedicated slave (replica) library to handle it, and point those queries at the replica.
For example, taking our company as an example, our background system that generates reports is connected to the slave database for query, and does not connect to the main database.
Invalid logic
Full table scan
For example: update t set a = a + 1; with the where condition forgotten.
If you want your system to support millions of online users, you must add an SQL review process (SQL Review) to weed out SQL with broken logic and SQL that does full table scans.
SQL can only be released online after it has been reviewed and approved by the DBA.
In addition, this kind of large update SQL should be updated in batches, and the large SQL task should be divided into small tasks to run. In MySQL, this requires special attention.
Why update in batches?
Reason 1: as mentioned above, a MySQL query can use only one core. An overly large, complex SQL transaction runs for a long time and easily causes congestion.
Reason 2: in production, MySQL generally runs a master/slave architecture. A single update transaction touching a million rows on the master can easily stall the slave, because the slave applies changes in a single thread, causing replication lag.
Write small-transaction SQL in MySQL: it executes quickly and commits quickly, each query finishes sooner, and connections are released sooner.
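A sketch of splitting one huge update into small batches by primary-key range (table and column names are illustrative, and in practice a script would advance the ranges):

```sql
-- Instead of one transaction touching every row:
--   UPDATE t SET a = a + 1;   -- huge transaction, stalls the slave

-- commit in chunks of 10,000 rows at a time:
UPDATE t SET a = a + 1 WHERE id BETWEEN 1     AND 10000;
COMMIT;
UPDATE t SET a = a + 1 WHERE id BETWEEN 10001 AND 20000;
COMMIT;
-- ...keep advancing the id range until the whole table is covered.
```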
After optimizing based on the above sharing, has your database become faster?