There is a big data table with 30 fields of int, varchar, and text types; it holds 10 million+ rows, more are added every day, and about 10 of the fields are searched frequently. How can this be optimized?
I asked someone for advice and got an answer similar to the one below. I feel this kind of question is a bit contrived; a problem like this would basically never come up in reality, right?
Optimization plan:
Master-slave synchronization + read-write separation:
When the hardware conditions allow, reads and writes on this table can be separated, which relieves a lot of pressure and also improves data stability; a sketch of the replica-side setup follows.
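As a rough illustration only (the host name, account, password, and binlog coordinates below are placeholders, not values from the original post), classic MySQL replication is configured on the replica roughly like this; the application then sends writes to the primary and reads to the replica:

```sql
-- Run on the replica; all connection values are placeholders.
-- (On MySQL 8.0.22+ the CHANGE REPLICATION SOURCE TO syntax is preferred.)
CHANGE MASTER TO
  MASTER_HOST = 'primary.example.com',   -- placeholder host
  MASTER_USER = 'repl',                  -- replication account
  MASTER_PASSWORD = '***',               -- placeholder password
  MASTER_LOG_FILE = 'binlog.000001',     -- taken from SHOW MASTER STATUS on the primary
  MASTER_LOG_POS = 4;

START SLAVE;

-- Verify that the replication threads are running.
SHOW SLAVE STATUS\G
```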
Vertical table splitting:
As a rule of thumb, each table should carry no more than 5 indexes; split the fields vertically and move some of them into a new table.
Usually we perform the vertical split according to the following principles (first distinguish the hot and cold data fields in the table):
Put the infrequently used fields in a separate table;
Split text, blob, and other large fields out into an attached table;
Keep columns that are often queried together in the same table.
The disadvantage: a lot of query logic has to be rewritten, which brings a lot of work. A sketch of such a split is shown below.
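As a hedged sketch (the table and column names here are invented for illustration, not taken from the original post), the wide table could be split into a hot table holding the frequently searched columns and an attached table, sharing the same primary key, for the large and rarely used ones:

```sql
-- Hot table: the ~10 frequently searched columns stay here.
CREATE TABLE article (
  id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  title      VARCHAR(200)    NOT NULL,
  status     TINYINT         NOT NULL,
  created_at DATETIME        NOT NULL,
  PRIMARY KEY (id),
  KEY idx_status_created (status, created_at)
) ENGINE = InnoDB;

-- Attached table: text/blob and other cold columns, sharing the primary key.
CREATE TABLE article_detail (
  article_id BIGINT UNSIGNED NOT NULL,
  body       TEXT,
  attachment BLOB,
  PRIMARY KEY (article_id)
) ENGINE = InnoDB;

-- Queries that need the large fields join the two tables back together.
SELECT a.id, a.title, d.body
FROM article a
JOIN article_detail d ON d.article_id = a.id
WHERE a.id = 42;
```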
Use table partitioning:
This is a recommended solution: it does not require rewriting any logic. Partitioning can be done by time, which is roughly equivalent to storing the table's data in different files on the same disk, and it can greatly improve query speed. A sketch follows.
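For illustration only (the table and column names are assumptions), time-based RANGE partitioning could look like the following; note that MySQL requires the partitioning column to be part of every unique key, hence the composite primary key:

```sql
-- Partition by year of created_at; each partition is stored separately on disk.
CREATE TABLE article_log (
  id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
  created_at DATETIME        NOT NULL,
  payload    VARCHAR(255),
  PRIMARY KEY (id, created_at)          -- partition key must be in every unique key
)
PARTITION BY RANGE (YEAR(created_at)) (
  PARTITION p2022 VALUES LESS THAN (2023),
  PARTITION p2023 VALUES LESS THAN (2024),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- A query that filters on created_at lets MySQL prune the untouched partitions.
EXPLAIN SELECT COUNT(*)
FROM article_log
WHERE created_at >= '2023-01-01' AND created_at < '2024-01-01';
```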
Horizontal table sharding:
10 million rows is quite a lot and brings operational pressure: backing up a single table of that size takes a long time. So the table can be split horizontally according to the server's hardware conditions and how many rows each table should hold; a sketch follows.
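As a rough sketch (the shard count and table names are invented for illustration), horizontal sharding keeps the same schema across several physical tables, with the application routing each row by, say, id modulo the shard count:

```sql
-- Same schema, several physical tables (here: 2 shards for brevity).
CREATE TABLE article_0 LIKE article;
CREATE TABLE article_1 LIKE article;

-- The application routes by id % 2: id = 42 -> 42 % 2 = 0 -> article_0.
INSERT INTO article_0 (id, title, status, created_at)
VALUES (42, 'example', 1, NOW());

-- Reads apply the same routing rule to pick the shard.
SELECT * FROM article_0 WHERE id = 42;
```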
If there is something wrong, please point it out, thank you.