How to optimize Redis cache space

Scenario setup

1. We need to store a POJO in the cache. The class is defined as follows:

import java.io.Serializable;
import java.math.BigDecimal;
import java.util.Date;

public class TestPOJO implements Serializable {
    private String testStatus;
    private String userPin;
    private String investor;
    private Date testQueryTime;
    private Date createTime;
    private String bizInfo;
    private Date otherTime;
    private BigDecimal userAmount;
    private BigDecimal userRate;
    private BigDecimal applyAmount;
    private String type;
    private String checkTime;
    private String preTestStatus;
    
    public Object[] toValueArray(){
        Object[] array = {testStatus, userPin, investor, testQueryTime,
                createTime, bizInfo, otherTime, userAmount,
                userRate, applyAmount, type, checkTime, preTestStatus};
        return array;
    }
    
    public TestPOJO fromValueArray(Object[] valueArray){
        // The concrete data types are lost here and need to be handled explicitly; see the sketch after this class
    }
}
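
As noted in the stub, reading everything back from an Object[] loses the concrete types. A minimal sketch of one way to restore them inside TestPOJO, assuming the array was produced by toValueArray and that dates and amounts come back as strings (as in the JSON output shown below), could look like this:

public TestPOJO fromValueArray(Object[] valueArray) throws ParseException {
    // Assumption: dates were serialized as "yyyy-MM-dd HH:mm:ss.SSS" strings
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
    TestPOJO pojo = new TestPOJO();
    pojo.setTestStatus((String) valueArray[0]);
    pojo.setUserPin((String) valueArray[1]);
    pojo.setInvestor((String) valueArray[2]);
    pojo.setTestQueryTime(valueArray[3] == null ? null : fmt.parse(valueArray[3].toString()));
    pojo.setCreateTime(valueArray[4] == null ? null : fmt.parse(valueArray[4].toString()));
    pojo.setBizInfo((String) valueArray[5]);
    pojo.setOtherTime(valueArray[6] == null ? null : fmt.parse(valueArray[6].toString()));
    pojo.setUserAmount(valueArray[7] == null ? null : new BigDecimal(valueArray[7].toString()));
    pojo.setUserRate(valueArray[8] == null ? null : new BigDecimal(valueArray[8].toString()));
    pojo.setApplyAmount(valueArray[9] == null ? null : new BigDecimal(valueArray[9].toString()));
    pojo.setType((String) valueArray[10]);
    pojo.setCheckTime((String) valueArray[11]);
    pojo.setPreTestStatus((String) valueArray[12]);
    return pojo;
}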

2. Use the following example as test data

TestPOJO pojo = new TestPOJO();
pojo.setApplyAmount(new BigDecimal("200.11"));
pojo.setBizInfo("XX");
pojo.setUserAmount(new BigDecimal("1000.00"));
pojo.setTestStatus("SUCCESS");
pojo.setCheckTime("2023-02-02");
pojo.setInvestor("ABCD");
pojo.setUserRate(new BigDecimal("0.002"));
pojo.setTestQueryTime(new Date());
pojo.setOtherTime(new Date());
pojo.setPreTestStatus("PROCESSING");
pojo.setUserPin("ABCDEFGHIJ");
pojo.setType("Y");

General practice

System.out.println(JSON.toJSONString(pojo).length());

Serializing directly to JSON prints length=284. This is the simplest and most commonly used approach. The resulting data is as follows:

{"applyAmount":200.11,"bizInfo":"XX","checkTime":"2023-02-02","investor":"ABCD ","otherTime":"2023-04-10 17:45:17.717","preCheckStatus":"PROCESSING","testQueryTime":"2023-04-10 17:45:17.717","testStatus":"SUCCESS ","type":"Y","userAmount":1000.00,"userPin":"ABCDEFGHIJ","userRate":0.002}

The output above contains a lot of redundant data; in particular, there is no need to store the attribute names.

Improvement 1: Remove the attribute names

System.out.println(JSON.toJSONString(pojo.toValueArray()).length());

By choosing an array structure instead of an object structure, the attribute names are removed and length=144 is printed, reducing the data size by roughly 50%. The data is as follows:

["SUCCESS","ABCDEFGHIJ","ABCD","2023-04-10 17:45:17.717",null,"XX"," 2023-04-10 17:45:17.717",1000.00,0.002,200.11,"Y","2023-02-02","PROCESSING"]

We can see that there is no need to store null, and that dates are serialized into long strings. Such a wasteful serialization format inflates the data, so we should choose a better serialization tool.

Improvement 2: Use a better serialization tool

// We keep the same array structure but switch to a third-party serialization tool
System.out.println(new ObjectMapper(new MessagePackFactory()).writeValueAsBytes(pojo.toValueArray()).length);

A better serialization tool compresses the fields and chooses more compact data formats; this prints length=92, roughly a further 36% reduction compared with the previous step.
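
The ObjectMapper/MessagePackFactory combination above relies on the msgpack adapter for Jackson. If it is not already on the classpath, the dependency and imports typically look like this (the version is just an example):

<dependency>
    <groupId>org.msgpack</groupId>
    <artifactId>jackson-dataformat-msgpack</artifactId>
    <version>0.9.3</version>
</dependency>

import com.fasterxml.jackson.databind.ObjectMapper;
import org.msgpack.jackson.dataformat.MessagePackFactory;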

The result is binary data, so Redis must be accessed through its binary (byte[]) API. Converted to a string for display, it prints as follows:

��SUCCESS�ABCDEFGHIJ�ABCD� �j�6� ��XX� �j�6�� ��?`bM����@i � �Q�Y�2023-02-02�PROCESSING
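
As a minimal sketch of how such a binary payload is written to and read from Redis, assuming the Jedis client and a made-up key name (any client with a byte[] API works the same way):

// Assumptions: Jedis client, hypothetical key name; run inside a method that declares throws Exception
try (Jedis jedis = new Jedis("localhost", 6379)) {
    byte[] key = "test:pojo:ABCDEFGHIJ".getBytes(StandardCharsets.UTF_8);
    ObjectMapper mapper = new ObjectMapper(new MessagePackFactory());
    byte[] value = mapper.writeValueAsBytes(pojo.toValueArray());
    jedis.set(key, value);                                                    // store the binary payload
    Object[] valueArray = mapper.readValue(jedis.get(key), Object[].class);   // read it back as bytes
}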

Digging further along this line, we can get even better results by choosing data types manually: using smaller data types brings further savings.

Improvement 3: Optimize the data types

In the use case above, the three fields testStatus, preTestStatus, and investor are effectively enumerated string values. Replacing them with simpler data types (such as byte or int) saves further space. Likewise, checkTime can be represented as a long timestamp instead of a string, so the serialization tool emits fewer bytes.

public Object[] toValueArray(){
    Object[] array = {toInt(testStatus), userPin, toInt(investor), testQueryTime,
            createTime, bizInfo, otherTime, userAmount,
            userRate, applyAmount, type, toLong(checkTime), toInt(preTestStatus)};
    return array;
}
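
The toInt and toLong helpers are not defined in the original text. A minimal sketch, assuming a fixed status-to-code mapping and a "yyyy-MM-dd" format for checkTime (both of which are illustrative assumptions), could look like this:

// Assumption: enum-like strings map to small integer codes agreed on by both writer and reader
private static final Map<String, Integer> STATUS_CODES =
        Map.of("SUCCESS", 1, "PROCESSING", 2, "FAILED", 3, "ABCD", 10);

private Integer toInt(String value) {
    return value == null ? null : STATUS_CODES.get(value);
}

// Assumption: checkTime is a "yyyy-MM-dd" string; represent it as epoch milliseconds
private Long toLong(String date) throws ParseException {
    return date == null ? null : new SimpleDateFormat("yyyy-MM-dd").parse(date).getTime();
}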

After this manual adjustment, smaller data types are used instead of strings, and it prints length=69.

Improvement 4: Consider ZIP compression

Beyond the points above, you can also consider ZIP compression to shrink the payload further. When the content is large or repetitive, compression is very effective; if the stored content were an array of TestPOJO instances, for example, ZIP compression would probably be worthwhile.

For payloads smaller than about 30 bytes, ZIP compression may actually increase the size rather than reduce it. With content that has little repetition, the gain is insignificant, and compression always adds CPU overhead.

After the optimizations above, ZIP compression is no longer a must-have; test against real data to determine whether it pays off.
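
If you do decide to try it, a minimal sketch using the JDK's GZIP streams (one common stand-in for "ZIP compression"; whether it actually helps depends on your data) looks like this:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Compress a serialized payload before writing it to Redis
public static byte[] gzip(byte[] data) throws Exception {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream out = new GZIPOutputStream(bos)) {
        out.write(data);
    }
    return bos.toByteArray();
}

// Decompress a payload read back from Redis
public static byte[] gunzip(byte[] data) throws Exception {
    try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(data))) {
        return in.readAllBytes();
    }
}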

Final implementation

The improvement steps above illustrate the optimization ideas, but deserialization loses the concrete types, which is cumbersome to handle, so we also need to take deserialization into account.

When the cached object is predefined, we can handle every field manually. In practice, manual serialization is therefore recommended to achieve the goals above: it gives fine-grained control, the best compression, and minimal performance overhead.

The msgpack-based implementation below can serve as a reference. It is test code; please wrap the Packer and Unpacker handling in better utilities yourself:

<dependency>    
    <groupId>org.msgpack</groupId>    
    <artifactId>msgpack-core</artifactId>    
    <version>0.9.3</version>
</dependency>

    public byte[] toByteArray() throws Exception {
        MessageBufferPacker packer = MessagePack.newDefaultBufferPacker();
        toByteArray(packer);
        packer.close();
        return packer.toByteArray();
    }

    public void toByteArray(MessageBufferPacker packer) throws Exception {
        if (testStatus == null) {
            packer.packNil();
        }else{
            packer.packString(testStatus);
        }

        if (userPin == null) {
            packer.packNil();
        }else{
            packer.packString(userPin);
        }

        if (investor == null) {
            packer.packNil();
        }else{
            packer.packString(investor);
        }

        if (testQueryTime == null) {
            packer.packNil();
        }else{
            packer.packLong(testQueryTime.getTime());
        }

        if (createTime == null) {
            packer.packNil();
        }else{
            packer.packLong(createTime.getTime());
        }

        if (bizInfo == null) {
            packer.packNil();
        }else{
            packer.packString(bizInfo);
        }

        if (otherTime == null) {
            packer.packNil();
        }else{
            packer.packLong(otherTime.getTime());
        }

        if (userAmount == null) {
            packer.packNil();
        }else{
            packer.packString(userAmount.toString());
        }

        if (userRate == null) {
            packer.packNil();
        }else{
            packer.packString(userRate.toString());
        }

        if (applyAmount == null) {
            packer.packNil();
        }else{
            packer.packString(applyAmount.toString());
        }

        if (type == null) {
            packer.packNil();
        }else{
            packer.packString(type);
        }

        if (checkTime == null) {
            packer.packNil();
        }else{
            packer.packString(checkTime);
        }

        if (preTestStatus == null) {
            packer.packNil();
        }else{
            packer.packString(preTestStatus);
        }
    }


    public void fromByteArray(byte[] byteArray) throws Exception {
        MessageUnpacker unpacker = MessagePack.newDefaultUnpacker(byteArray);
        fromByteArray(unpacker);
        unpacker.close();
    }

    public void fromByteArray(MessageUnpacker unpacker) throws Exception {
        if (!unpacker.tryUnpackNil()){
            this.setTestStatus(unpacker.unpackString());
        }
        if (!unpacker.tryUnpackNil()){
            this.setUserPin(unpacker.unpackString());
        }
        if (!unpacker.tryUnpackNil()){
            this.setInvestor(unpacker.unpackString());
        }
        if (!unpacker.tryUnpackNil()){
            this.setTestQueryTime(new Date(unpacker.unpackLong()));
        }
        if (!unpacker.tryUnpackNil()){
            this.setCreateTime(new Date(unpacker.unpackLong()));
        }
        if (!unpacker.tryUnpackNil()){
            this.setBizInfo(unpacker.unpackString());
        }
        if (!unpacker.tryUnpackNil()){
            this.setOtherTime(new Date(unpacker.unpackLong()));
        }
        if (!unpacker.tryUnpackNil()){
            this.setUserAmount(new BigDecimal(unpacker.unpackString()));
        }
        if (!unpacker.tryUnpackNil()){
            this.setUserRate(new BigDecimal(unpacker.unpackString()));
        }
        if (!unpacker.tryUnpackNil()){
            this.setApplyAmount(new BigDecimal(unpacker.unpackString()));
        }
        if (!unpacker.tryUnpackNil()){
            this.setType(unpacker.unpackString());
        }
        if (!unpacker.tryUnpackNil()){
            this.setCheckTime(unpacker.unpackString());
        }
        if (!unpacker.tryUnpackNil()){
            this.setPreTestStatus(unpacker.unpackString());
        }
    }
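
A minimal usage sketch of the round trip (the pojo variable is the test object built earlier; the Redis write and read themselves go through any binary API, as shown further above):

byte[] bytes = pojo.toByteArray();   // compact msgpack payload, stored in Redis as raw bytes
TestPOJO restored = new TestPOJO();
restored.fromByteArray(bytes);       // concrete types are rebuilt field by field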

Scenario extension

Assume that we store data for 200 million users, each with 40 fields, each field key 6 bytes long, and the fields need to be read and written individually.

The natural choice is the hash structure, but a hash also stores the field keys, which takes extra space; those field keys are unnecessary data. Following the ideas above, a list can be used instead of a hash, with each field addressed by its index.

Tested with the official Redis tooling, the list structure needs about 144 GB of space, while the hash structure needs about 245 GB (when more than 50% of the attributes are empty, you need to test whether this still holds).
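
As a minimal sketch of the two layouts, assuming the Jedis client and hypothetical key names (the list field order is a fixed convention shared by writers and readers):

// Hash layout: every field name is stored alongside its value
jedis.hset("user:hash:123", "status", "SUCCESS");
jedis.hset("user:hash:123", "amount", "1000.00");
String status = jedis.hget("user:hash:123", "status");

// List layout: positions replace field names, saving the per-field key overhead
// (convention: index 0 = status, index 1 = amount, ...)
jedis.rpush("user:list:123", "SUCCESS", "1000.00");
String statusFromList = jedis.lindex("user:list:123", 0);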


In the case above, a few very simple measures, just a few lines of code, reduce the space by more than 70%. They are highly recommended in scenarios with large data volumes and high performance requirements:

• Use arrays instead of objects (if a large number of fields are empty, you need to use serialization tools to compress nulls)

• Use better serialization tools

• Use smaller data types

• Consider using ZIP compression

• Use list instead of hash structure (if a large number of fields are empty, testing and comparison are required)
