
A Major Incident Caused by a Redis Distributed Lock, and How to Avoid the Same Pitfalls!

Java学习指南 · 2023-07-26

Preface

Redis-based distributed locks are nothing new these days. This article analyzes an incident caused by a Redis distributed lock in one of our production projects and walks through the fix.

Background: flash-sale (rush-purchase) orders in our project are protected by a distributed lock.

During one campaign, the operations team ran a flash sale for Feitian Moutai with 100 bottles in stock, and we oversold it. You know how scarce Feitian Moutai is on this earth! The incident was classified as a P0 major accident. All we could do was accept it calmly; the whole project team had its performance bonus docked.

After the incident, the CTO asked me by name to take the lead in handling it. Fine, charge!

The accident scene

After some investigation, I learned that this flash-sale endpoint had never had this problem before. So why did it oversell this time?

The reason: previous flash-sale items were not scarce goods, but this event sold Feitian Moutai. Analysis of the tracking data showed that almost every metric had roughly doubled, so you can imagine how hot the event was. Without further ado, here is the core code (the confidential parts have been replaced with pseudocode):

public SeckillActivityRequestVO seckillHandle(SeckillActivityRequestVO request) {
    SeckillActivityRequestVO response;
    String key = "key:" + request.getSeckillId();
    try {
        Boolean lockFlag = redisTemplate.opsForValue().setIfAbsent(key, "val", 10, TimeUnit.SECONDS);
        if (lockFlag) {
            // Call the user service over HTTP for user-related checks
            // Validate the user against the activity

            // Inventory check
            Object stock = redisTemplate.opsForHash().get(key + ":info", "stock");
            assert stock != null;
            if (Integer.parseInt(stock.toString()) <= 0) {
                // business exception
            } else {
                redisTemplate.opsForHash().increment(key + ":info", "stock", -1);
                // Create the order
                // Publish the order-created event
                // Build the response VO
            }
        }
    } finally {
        // Release the lock
        stringRedisTemplate.delete(key);
        // Build the response VO
    }
    return response;
}

The code above relies on a 10-second lock expiration to give the business logic enough time to run, and uses a try-finally block to make sure the lock is always released. The inventory is also checked inside the business code. It looks safe enough, right? Hold that thought and keep reading.


Cause of the accident

The Feitian Moutai flash sale attracted a large number of new users to download and register for our app, among them plenty of bargain hunters using professional tooling to register new accounts and farm the promotion. Fortunately, our user system was prepared: Alibaba Cloud human-machine verification, three-factor identity checks, and a home-grown risk-control system blocked a large number of illegitimate users. Credit where it is due.

But precisely because of this, the user service was running under sustained high load.

The moment the flash sale started, a flood of user-verification requests hit the user service, causing brief response delays at the user-service gateway; some requests took more than 10 seconds to return. Because we had set the HTTP request timeout to 30 seconds, the flash-sale endpoint stayed blocked in user verification. After 10 seconds the distributed lock had already expired, so new requests could acquire the lock; in other words, the lock was overwritten. When the blocked requests finally finished, they still ran the lock-release logic, which released locks belonging to other threads, letting yet more new requests compete for the lock. A truly vicious cycle.

At that point the only safeguard left was the inventory check, but the inventory check was not atomic: it used a get-and-compare approach. And so the oversell tragedy happened.

Accident Analysis

Careful analysis shows that this flash-sale endpoint carries serious safety risks under high concurrency, concentrated in three areas:

No fault tolerance for failures in dependent systems

When the user service came under pressure and the gateway responses were delayed, there was no mitigation in place. This was the trigger for the oversell.

The seemingly safe distributed lock is actually not safe at all

Although the lock is acquired with SET key value [EX seconds] [PX milliseconds] [NX|XX], so the set and the expiration happen in a single command, if thread A runs for too long and does not release the lock in time, the lock expires. Thread B can then acquire the lock, and when thread A finally finishes and releases it, it is actually thread B's lock that gets deleted.

Thread C can then acquire the lock in turn, and when thread B finishes and releases it, it is actually the lock set by thread C that gets deleted. This is the direct cause of the oversell.

Non-atomic inventory verification

The non-atomic inventory check produces inaccurate results under concurrency. This is the root cause of the oversell.

The analysis above shows that the fundamental problem is that the inventory check depended heavily on the distributed lock: as long as the lock's set and del behave correctly, the inventory check is fine, but the moment the lock becomes unreliable, the inventory check is useless.

Solutions

Knowing the causes, we can prescribe the right remedies.

Implement a relatively safe distributed lock

Definition of "relatively safe": set and del map one-to-one, so one thread never deletes a lock held by another thread. Realistically, though, even a perfect one-to-one mapping of set and del cannot guarantee absolute business safety.

That is because the lock's expiration time is always bounded. The only way around it is to set no expiration at all, or an extremely long one, and that introduces other problems, so chasing absolute safety here is pointless.

To implement a relatively safe distributed lock, we must rely on the key's value: when releasing the lock, the uniqueness of the value guarantees we never delete someone else's lock by mistake. We implement an atomic get-and-compare with a Lua script, as follows:

public void safedUnLock(String key, String val) {
    // Atomically delete the key only if its current value matches the value we set
    String luaScript = "local expected = ARGV[1] "
            + "local curr = redis.call('get', KEYS[1]) "
            + "if expected == curr then redis.call('del', KEYS[1]) end "
            + "return 'OK'";
    RedisScript<String> redisScript = RedisScript.of(luaScript, String.class);
    redisTemplate.execute(redisScript, Collections.singletonList(key), val);
}

With this Lua script, unlocking is safe.

Implement a safe inventory check

If you have looked at concurrency in any depth, you will know that operations such as get-and-compare or read-and-save are not atomic. To make them atomic, we can again turn to a Lua script.
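For the general case (for example, deducting several units in one order), a hedged sketch of what an atomic check-and-decrement via Lua might look like; the method name, key layout, and the injected stringRedisTemplate below are illustrative assumptions, not the project's actual code:

// Returns the remaining stock, or -1 if there was not enough stock to deduct.
// KEYS[1] = hash key holding the stock field, ARGV[1] = quantity to deduct.
public long tryDeductStock(String stockKey, int quantity) {
    String lua = "local stock = tonumber(redis.call('hget', KEYS[1], 'stock')) "
            + "if stock == nil or stock < tonumber(ARGV[1]) then return -1 end "
            + "return redis.call('hincrby', KEYS[1], 'stock', -tonumber(ARGV[1]))";
    RedisScript<Long> script = RedisScript.of(lua, Long.class);
    Long remaining = stringRedisTemplate.execute(script,
            Collections.singletonList(stockKey), String.valueOf(quantity));
    return remaining == null ? -1 : remaining;
}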

In our case, however, each flash-sale order can buy only one bottle, so we do not even need a Lua script; we can rely on Redis's own atomicity. The reason:

// Redis returns the value after the operation, and the operation itself is atomic
Long currStock = redisTemplate.opsForHash().increment("key", "stock", -1);

Notice that the explicit inventory check in the original code was entirely redundant.

The improved code

After the analysis above, we decided to create a dedicated DistributedLocker class to handle distributed locking.
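The article does not show that class, so here is a minimal sketch of what such a DistributedLocker might look like, built on StringRedisTemplate; the constructor injection and field names are assumptions for illustration. The improved handler below then uses this locker.

import java.util.Collections;
import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.RedisScript;
import org.springframework.stereotype.Component;

@Component
public class DistributedLocker {

    private final StringRedisTemplate redisTemplate;

    public DistributedLocker(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Equivalent to SET key val NX EX <timeout>: acquire only if the key does not exist yet
    public boolean lock(String key, String val, long timeout, TimeUnit unit) {
        Boolean ok = redisTemplate.opsForValue().setIfAbsent(key, val, timeout, unit);
        return Boolean.TRUE.equals(ok);
    }

    // Delete the key only if it still holds the value we set (atomic via Lua)
    public void safedUnLock(String key, String val) {
        String lua = "if redis.call('get', KEYS[1]) == ARGV[1] "
                + "then redis.call('del', KEYS[1]) end return 'OK'";
        redisTemplate.execute(RedisScript.of(lua, String.class),
                Collections.singletonList(key), val);
    }
}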

public SeckillActivityRequestVO seckillHandle(SeckillActivityRequestVO request) {
    SeckillActivityRequestVO response;
    String key = "key:" + request.getSeckillId();
    String val = UUID.randomUUID().toString();
    try {
        Boolean lockFlag = distributedLocker.lock(key, val, 10, TimeUnit.SECONDS);
        if (!lockFlag) {
            // business exception
        }

        // Validate the user against the activity
        // Inventory check, guaranteed by Redis's own atomicity
        Long currStock = stringRedisTemplate.opsForHash().increment(key + ":info", "stock", -1);
        if (currStock < 0) { // stock is already exhausted
            // business exception
            log.error("[flash-sale order] out of stock");
        } else {
            // Create the order
            // Publish the order-created event
            // Build the response
        }
    } finally {
        distributedLocker.safedUnLock(key, val);
        // Build the response
    }
    return response;
}

Deeper Thoughts

Is the distributed lock necessary at all?

After the improvements, you may notice that deducting stock via Redis's own atomic operation is already enough to prevent overselling on its own. That is true. But without the lock in front, every incoming request would run through the full business logic, and since that logic depends on other systems, it would put extra pressure on them. The added performance cost and instability would not be worth it. The distributed lock acts as a throttle that intercepts part of the traffic.

Choosing a distributed lock

Some suggested implementing the distributed lock with RedLock. RedLock is more reliable, but the price is some performance. In our scenario, that small gain in reliability is not worth the performance cost. For scenarios with extremely high reliability requirements, RedLock is a reasonable choice.
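For reference, a rough sketch of what RedLock usage can look like with the Redisson client, assuming Redisson is on the classpath and redisson1/2/3 are RedissonClient instances pointing at three independent Redis nodes (all names here are illustrative assumptions):

// One RLock per independent Redis node, grouped into a RedLock
RLock lock1 = redisson1.getLock("seckill:" + seckillId);
RLock lock2 = redisson2.getLock("seckill:" + seckillId);
RLock lock3 = redisson3.getLock("seckill:" + seckillId);

RedissonRedLock redLock = new RedissonRedLock(lock1, lock2, lock3);
try {
    // wait up to 3s to acquire, auto-release after 10s
    if (redLock.tryLock(3, 10, TimeUnit.SECONDS)) {
        try {
            // business logic
        } finally {
            redLock.unlock();
        }
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}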

Revisiting: is the distributed lock necessary?

Because the bug needed an urgent fix, we optimized the code, load-tested it in the test environment, and hot-deployed it to production right away. The optimization proved successful: performance improved slightly, and no overselling occurred even when the distributed lock expired prematurely.

But is there still room for optimization? There is!

Since the service is deployed as a cluster, we can split the stock evenly across the servers in the cluster and notify each server via a broadcast message. The gateway then uses a hash of the user ID to decide which server a request goes to, so stock deduction and checking can be done entirely in the application-level cache. Performance improves yet again! (A rough sketch of the gateway-side routing follows the code below.)

// Pre-initialized via a broadcast message; ConcurrentHashMap gives efficient thread safety
private static ConcurrentHashMap<Long, Boolean> SECKILL_FLAG_MAP = new ConcurrentHashMap<>();
// Also pre-populated via a message; since AtomicInteger is itself atomic, a plain HashMap suffices here
private static Map<Long, AtomicInteger> SECKILL_STOCK_MAP = new HashMap<>();

...

public SeckillActivityRequestVO seckillHandle(SeckillActivityRequestVO request) {
    SeckillActivityRequestVO response;

    Long seckillId = request.getSeckillId();
    if (!Boolean.TRUE.equals(SECKILL_FLAG_MAP.get(seckillId))) {
        // business exception
    }
    // Validate the user against the activity
    // Inventory check
    if (SECKILL_STOCK_MAP.get(seckillId).decrementAndGet() < 0) {
        SECKILL_FLAG_MAP.put(seckillId, false);
        // business exception
    }
    // Create the order
    // Publish the order-created event
    // Build the response
    return response;
}
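The gateway-side routing mentioned earlier could look roughly like this; a hypothetical sketch that assumes a fixed, ordered list of upstream servers and does not handle rebalancing when the cluster scales in or out (see the note below):

import java.util.List;

public class UserHashRouter {
    // Pick an upstream server for a user by hashing the user ID onto the server list.
    public String pickServer(long userId, List<String> servers) {
        int idx = (int) Math.floorMod(userId, (long) servers.size());
        return servers.get(idx);
    }
}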

With these changes we no longer depend on Redis at all, and both performance and safety improve further!

Of course, this scheme does not account for complex scenarios such as dynamically scaling machines in or out; if those must be handled, the distributed-lock approach is the more straightforward choice.

Summary

Overselling a scarce commodity is always a major incident. If the oversold quantity is large, it can have serious operational and even social consequences for the platform. This accident taught me that no line of code in a project can be taken lightly; otherwise, in certain scenarios, code that normally works fine can become a fatal killer.

As developers, we must think our designs through carefully when planning a solution. And how do we learn to think them through? By never stopping learning.

