
Which database is better for PHPCMS?

Guanhui (Original) · 2020-07-21


It is better to use the MySQL database for PHPCMS. PHPCMS is written in PHP, and PHP has excellent support for MySQL, so MySQL, an open-source relational database management system, is the recommended choice.

MySQL optimization

Select InnoDB as the storage engine

Large-scale production databases have high requirements for reliability and concurrency, and InnoDB, the default MySQL storage engine, is a better choice than MyISAM.
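As a minimal sketch (the table name phpcms_content is hypothetical), the following statements show how to check which engine a table uses, convert a MyISAM table to InnoDB, and make InnoDB the default for new tables:

    -- Check which storage engine a table currently uses
    SHOW TABLE STATUS LIKE 'phpcms_content';

    -- Convert an existing MyISAM table to InnoDB
    ALTER TABLE phpcms_content ENGINE = InnoDB;

    -- Make InnoDB the default engine for new tables in the current session
    SET default_storage_engine = InnoDB;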

Optimize database structure

Organize the database's schema, tables, and fields to reduce I/O overhead, keep related items together, and plan ahead so that performance stays high as the amount of data grows.

Data tables should be designed to take up as little space as possible, and the primary key of a table should be as short as possible. For InnoDB tables, the primary key column is duplicated in every secondary index entry, so if there are many secondary indexes, a short primary key saves a lot of space.

Create only the indexes you need to improve query performance. Indexes speed up retrieval, but they increase the execution time of insert and update operations.
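A minimal sketch illustrating both points (table and column names are hypothetical): a short integer primary key and a single secondary index that queries actually need.

    -- A short INT primary key keeps every secondary index entry small,
    -- because InnoDB appends the primary key to each secondary index record
    CREATE TABLE article (
        id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        category_id INT UNSIGNED NOT NULL,
        title VARCHAR(200) NOT NULL,
        body MEDIUMTEXT,
        created_at DATETIME NOT NULL,
        PRIMARY KEY (id),
        -- Only index columns that queries actually filter or sort on
        KEY idx_category_created (category_id, created_at)
    ) ENGINE = InnoDB;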

Change buffering feature of InnoDB

InnoDB provides a change buffering configuration to reduce the disk I/O required to maintain secondary indexes. Large-scale databases may see heavy table activity and heavy I/O to keep secondary indexes up to date. When the relevant page is not in the buffer pool, InnoDB's change buffer caches changes to secondary index entries, avoiding the time-consuming I/O that would otherwise be needed to read the page from disk immediately. The buffered changes are merged when the page is later loaded into the buffer pool, and the updated page is then flushed to disk. This improves performance and is available in MySQL 5.5 and higher.
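Change buffering is controlled through the innodb_change_buffering system variable; a hedged sketch of inspecting and adjusting it (the values chosen here are only examples):

    -- Show the current change buffering setting (the default is 'all')
    SHOW VARIABLES LIKE 'innodb_change_buffering';

    -- Buffer all kinds of secondary-index changes: inserts, delete-marks and purges
    SET GLOBAL innodb_change_buffering = 'all';

    -- Limit the share of the buffer pool the change buffer may use (percentage)
    SET GLOBAL innodb_change_buffer_max_size = 25;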

InnoDB page compression

InnoDB supports page-level compression of tables. When a data page is written, it is compressed with the chosen compression algorithm and the compressed data is written to disk, where hole punching releases the empty space at the end of the page. If compression fails, the data is written out unchanged. Both tables and indexes are compressed, and indexes usually account for a large proportion of total database size. Compression can significantly save memory, I/O, and processing time, improving performance and scalability, and it reduces the amount of data transferred between memory and disk. Transparent page compression is supported in MySQL 5.7 and later.

Note that page compression is not supported for tables that reside in shared tablespaces, which include the system tablespace, temporary tablespaces, and general tablespaces.
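A hedged sketch of enabling transparent page compression on a table in a file-per-table tablespace (the table name log_archive is hypothetical, and the underlying file system must support hole punching):

    -- Page compression requires file-per-table tablespaces and a file system
    -- that supports hole punching (e.g. ext4 or XFS on Linux)
    SET GLOBAL innodb_file_per_table = ON;

    -- Pages of this table are compressed with zlib before being written to disk
    CREATE TABLE log_archive (
        id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        payload TEXT,
        PRIMARY KEY (id)
    ) ENGINE = InnoDB COMPRESSION = 'zlib';

    -- An existing table can be switched off again and rebuilt
    ALTER TABLE log_archive COMPRESSION = 'none';
    OPTIMIZE TABLE log_archive;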

Using batch data import

Importing batch data from a source that is sorted on the primary key can speed up data insertion. Otherwise, rows may have to be inserted between existing rows to maintain the ordering, which causes high disk I/O, hurts performance, and increases page splits. It also helps to turn off autocommit mode, since autocommit performs a log flush to disk for every insert. Temporarily turning off unique key and foreign key checks during a bulk insert can also significantly reduce disk I/O. For newly created tables, the best practice is to create foreign key and unique key constraints after the bulk import.

Once your data reaches a stable size, or a growing table has increased by tens or hundreds of megabytes, consider using the OPTIMIZE TABLE statement to reorganize the table and compact wasted space. A full table scan of the reorganized table requires less I/O.
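A hedged sketch of a bulk import session built on these points (the file path and the article table are hypothetical; the data file is assumed to be pre-sorted by primary key, and LOAD DATA INFILE requires the FILE privilege and a permitted secure_file_priv location):

    -- Disable per-statement flushing and constraint checks for the duration of the import
    SET autocommit = 0;
    SET unique_checks = 0;
    SET foreign_key_checks = 0;

    -- Load data that is already sorted by primary key
    LOAD DATA INFILE '/tmp/articles_sorted.csv'
    INTO TABLE article
    FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

    COMMIT;

    -- Restore normal settings once the import has finished
    SET unique_checks = 1;
    SET foreign_key_checks = 1;
    SET autocommit = 1;

    -- Reorganize the table and reclaim wasted space after large changes
    OPTIMIZE TABLE article;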

Optimize InnoDB disk I/O

Increasing the InnoDB buffer pool size allows more queries to be served from the buffer pool instead of through disk I/O. The way InnoDB flushes data to disk can be tuned to an optimal setting through the innodb_flush_method system variable.
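A hedged sketch of the two settings mentioned above (the size is a placeholder to be adjusted for the actual server; innodb_flush_method is not dynamic, so it is shown as a configuration-file entry):

    -- The buffer pool can be resized at runtime in MySQL 5.7 and later
    SET GLOBAL innodb_buffer_pool_size = 8589934592;  -- 8 GB, example value only

    -- innodb_flush_method must be set in my.cnf and requires a restart, e.g.:
    --   [mysqld]
    --   innodb_flush_method = O_DIRECT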

Memory allocation for MySQL

Before allocating memory to MySQL, consider its memory requirements in different areas. The key area to consider is concurrent connections: with a large number of concurrent connections, sorting and temporary tables can require a lot of memory. At the time of writing, 16 GB to 32 GB of RAM should be sufficient for a database handling 3000 concurrent connections.

Memory fragmentation can consume approximately 10% or more of memory. Caches and buffers like innodb_buffer_pool_size, key_buffer_size, query_cache_size, etc. consume about 80% of the allocated memory.
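A hedged sketch of how these memory-related variables might be divided up on, say, a 32 GB server (the figures are illustrative only, and query_cache_size exists only up to MySQL 5.7):

    -- Example my.cnf-style allocation for an InnoDB-dominated workload (illustrative values)
    --   [mysqld]
    --   innodb_buffer_pool_size = 24G    -- the bulk of available memory
    --   key_buffer_size         = 256M   -- only relevant for MyISAM indexes
    --   query_cache_size        = 64M    -- removed in MySQL 8.0
    --   tmp_table_size          = 64M    -- cap per in-memory temporary table
    --   sort_buffer_size        = 2M     -- allocated per connection that sorts

    -- The effective values can be inspected at runtime:
    SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
    SHOW VARIABLES LIKE 'key_buffer_size';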

Daily maintenance

Regularly check the slow query log and optimize the queries it reports so that they make effective use of caches and reduce disk I/O. Optimize them to scan the minimum number of rows rather than performing full table scans.

Other logs that can help the DBA check and analyze performance include: error logs, general query logs, binary logs, and DDL logs (metadata logs).

Regularly flush caches and buffers to reduce fragmentation. Use the OPTIMIZE TABLE statement to reorganize tables and compact any potentially wasted space.
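A hedged sketch of the routine checks described above (the table name and the one-second threshold are illustrative):

    -- Enable the slow query log and record anything slower than one second
    SET GLOBAL slow_query_log = ON;
    SET GLOBAL long_query_time = 1;

    -- Inspect a suspicious query's execution plan to avoid full table scans
    EXPLAIN SELECT id, title FROM article
    WHERE category_id = 5 ORDER BY created_at DESC LIMIT 20;

    -- Periodically reorganize heavily updated tables
    OPTIMIZE TABLE article;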

Recommended tutorials: "PHP" "PHPCMS Tutorial"

