How to compress large text storage in MySQL
As mentioned earlier, our cloud document project stores snapshot content directly in the database as large text. Most document snapshot content fields are at the KB level, and some even reach the MB level. CDN caching has already been applied to data reads (see: Static resource caching tool - CDN), but data writes and storage still need to be optimized. If the large text can be compressed before it is stored, we can save a significant amount of DB storage space and relieve DB I/O pressure. You can first check how much space each table currently occupies with the following query:
select table_name as 'Table', table_rows as 'Rows', truncate(data_length/1024/1024, 2) as 'Data size (MB)', truncate(index_length/1024/1024, 2) as 'Index size (MB)', truncate(data_free/1024/1024, 2) as 'Fragmentation (MB)' from information_schema.tables where table_schema = ${database_name} order by data_length desc, index_length desc;
We all know that the default InnoDB page size is 16 KB. If a row is too long to fit in a page, row overflow occurs and the overflowed data is stored elsewhere, in an uncompressed BLOB page. Because InnoDB stores data in a clustered index, i.e. a B+ tree, each page must hold at least two rows, otherwise the B+ tree loses its meaning; as a result, the maximum length of a row is limited to about 8 KB. (A large field keeps only a 768-byte prefix in the data page, the remaining data overflows to another page, and the data page stores a 20-byte pointer recording the address of the overflow page.)
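As a quick sanity check, you can confirm the page size and the row format of the table in question. This is only a sketch: the schema and table names below are placeholders, not names from the project described above.
SHOW VARIABLES LIKE 'innodb_page_size';
-- ROW_FORMAT determines how large fields are stored off-page:
-- COMPACT/REDUNDANT keep a 768-byte prefix in the data page,
-- DYNAMIC/COMPRESSED store only a 20-byte pointer to the overflow page.
SELECT table_name, row_format, create_options FROM information_schema.tables WHERE table_schema = 'your_db' AND table_name = 'your_table';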
InnoDB transparent page compression relies on the file system's hole punching (sparse file) capability: the hole portions of a file do not occupy disk space, while the file's logical address space remains contiguous.
Applicable scenarios: the data volume is large and disk space is tight, the load is mainly I/O-bound, and the server's CPU has plenty of headroom.
Related documents: dev.mysql.com/doc/refman/…
Transparent page compression (TPC) requires a file system with the hole punching feature. It is enabled per table with: ALTER TABLE xxx COMPRESSION = 'zlib'
However, this only applies to pages written from that point on. If you want to compress the data already in the table, you also need to execute: OPTIMIZE TABLE xxx
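The attribute can also be set when a table is created. A minimal sketch, with placeholder table and column names that are not from the original project:
-- Transparent page compression only works for file-per-table tablespaces
-- (innodb_file_per_table enabled).
CREATE TABLE t_doc_snapshot (
    id BIGINT PRIMARY KEY,
    content LONGTEXT
) COMPRESSION = 'zlib';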
Implementation process: in the buffer pool, the page remains an ordinary uncompressed 16 KB page; it is compressed only when it is flushed to disk. After compression, the remaining space of the 16 KB page is filled with 0x00, and the file system's hole punching is then used to trim the file, releasing the sparse space occupied by the 0x00 padding.
MySQL currently has no built-in solution for column compression, but there is a workaround: use the compression and decompression functions provided by MySQL at the business layer to compress and decompress individual columns. That is, if a certain column needs to be compressed, call the COMPRESS() function on its content when writing, and call the UNCOMPRESS() function on the stored data when reading to get the original content back. The relevant functions are:
COMPRESS(): compresses a string and returns the result as a binary string.
UNCOMPRESS(): decompresses a string previously compressed by COMPRESS().
LENGTH(): returns the length of a string in bytes (here, the size of the compressed data actually stored).
UNCOMPRESSED_LENGTH(): returns the length the compressed data had before compression.
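A quick self-contained check of these functions; REPEAT() is only used here to generate a highly compressible test string and is not part of the original project:
SELECT LENGTH(COMPRESS(REPEAT('a', 1000))) AS compressed_len, UNCOMPRESSED_LENGTH(COMPRESS(REPEAT('a', 1000))) AS original_len;
-- compressed_len comes back as a few dozen bytes, original_len as 1000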
Write compressed data: insert into xxx (content) values (compress('xxx....'))
Compare compressed and original lengths: select c_id, uncompressed_length(c_content) as uncompressed_len, length(c_content) as compressed_len from xxx
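To read the original text back, wrap the column in UNCOMPRESS(); the table and column names follow the placeholders used above:
select c_id, uncompress(c_content) as c_content from xxx;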
To verify how much disk space a tablespace actually occupies after page compression, query information_schema.INNODB_TABLESPACES:
SELECT NAME, FS_BLOCK_SIZE, FILE_SIZE, ALLOCATED_SIZE FROM information_schema.INNODB_TABLESPACES WHERE NAME like 'test_compress%';
FS_BLOCK_SIZE: the file system block size, i.e. the unit size used for hole punching.
FILE_SIZE: the apparent (maximum) size of the file, as if uncompressed.
ALLOCATED_SIZE: the actual size of the file, i.e. the amount of space allocated on disk.
Compression ratio: can be estimated as ALLOCATED_SIZE / FILE_SIZE.
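Putting these columns together, a sketch of computing that ratio directly in SQL; the LIKE pattern simply reuses the test_compress tables from the query above:
SELECT NAME, FILE_SIZE, ALLOCATED_SIZE, ROUND(ALLOCATED_SIZE / FILE_SIZE * 100, 2) AS allocated_pct FROM information_schema.INNODB_TABLESPACES WHERE NAME LIKE 'test_compress%';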