<!-- Guava, used for rate limiting -->
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>25.1-jre</version>
</dependency>
@Target({ElementType.PARAMETER, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface ServiceLimit {
    String description() default "";
}
@Component
@Scope
@Aspect
public class LimitAspect {

    // Issue only 5 tokens per second. This limits a single-process service; internally it uses the token bucket algorithm.
    private static RateLimiter rateLimiter = RateLimiter.create(5.0);

    // Service-layer pointcut for rate limiting
    @Pointcut("@annotation(com.itstyle.seckill.common.aop.ServiceLimit)")
    public void ServiceAspect() {
    }

    @Around("ServiceAspect()")
    public Object around(ProceedingJoinPoint joinPoint) {
        boolean flag = rateLimiter.tryAcquire();
        Object obj = null;
        try {
            if (flag) {
                obj = joinPoint.proceed();
            }
        } catch (Throwable e) {
            e.printStackTrace();
        }
        return obj;
    }
}
@Override
@ServiceLimit
@Transactional
public Result startSeckil(long seckillId, long userId) {
    // TODO: business logic
}
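Note that when tryAcquire() fails, around() simply returns null instead of calling proceed(), so callers of startSeckil have to treat a null result as "request was rate-limited"; alternatively the aspect could throw a dedicated exception at that point.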
Token bucket and leaky bucket
The implementation of the leaky bucket algorithm often relies on queues. When a request arrives, if the queue is not full, it is directly put into the queue, and then a processor takes out the request from the head of the queue at a fixed frequency for processing. If the request volume is large, the queue will be full, and new requests will be discarded.
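To make the queue-based description concrete, here is a minimal sketch (my own illustration, not code from the article; the class name, the capacity of 100 and the 200 ms drain interval are arbitrary): a bounded queue holds incoming requests and a scheduled worker drains it at a fixed rate.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative leaky bucket: a bounded queue drained at a constant rate.
public class LeakyBucket {

    private final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(100); // bucket capacity
    private final ScheduledExecutorService drainer = Executors.newSingleThreadScheduledExecutor();

    public LeakyBucket() {
        // Leak one request every 200 ms, i.e. a constant outflow of 5 requests per second.
        drainer.scheduleAtFixedRate(() -> {
            Runnable task = queue.poll();
            if (task != null) {
                task.run();
            }
        }, 0, 200, TimeUnit.MILLISECONDS);
    }

    // offer() returns false when the bucket is full, so the caller can reject the request.
    public boolean submit(Runnable task) {
        return queue.offer(task);
    }
}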
The token bucket algorithm uses a bucket of fixed capacity that stores tokens, with tokens added to the bucket at a fixed rate. The number of tokens in the bucket has an upper limit; once it is exceeded, extra tokens are discarded. When a request arrives, it must obtain a token: if a token is available, the request is processed directly and one token is removed from the bucket; if not, the request is rate-limited and either discarded or left to wait in a buffer.
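A matching sketch of a token bucket (again my own illustration, not the article's code): tokens are refilled lazily based on elapsed time, up to the bucket's capacity, and a request is served only if a token can be taken.

// Illustrative token bucket: tokens refill at a fixed rate up to a maximum capacity.
public class TokenBucket {

    private final long capacity;        // maximum number of tokens the bucket can hold
    private final double refillPerNano; // tokens added per nanosecond
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, double tokensPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = tokensPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    // Try to take one token; returns false if none are available, i.e. the request should be limited.
    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false;
    }
}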
Comparison between token bucket and leaky bucket
The token bucket adds tokens to the bucket at a fixed rate; whether a request is processed depends on whether enough tokens remain in the bucket. When the number of tokens drops to zero, new requests are rejected. The leaky bucket lets requests flow out at a constant, fixed rate while the incoming rate is arbitrary; once the accumulated requests reach the bucket's capacity, new incoming requests are rejected.
The token bucket limits the average inflow rate and allows bursts: as long as tokens are available, requests are processed, and several tokens (for example 3 or 4) can be taken at once. The leaky bucket limits the constant outflow rate: requests flow out at a fixed rate, for example always 1 per tick rather than 1 this time and 2 the next, thereby smoothing a bursty inflow.
The token bucket allows a certain degree of burst, while the main purpose of the leaky bucket is to smooth the outflow rate.
1. Dependency
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>28.1-jre</version>
    <optional>true</optional>
</dependency>
2. Sample code
@Slf4j
@Configuration
public class RequestInterceptor implements HandlerInterceptor {

    // One token bucket per key string; idle entries are evicted from the cache automatically.
    private static LoadingCache<String, RateLimiter> cachesRateLimiter = CacheBuilder.newBuilder()
            .maximumSize(1000) // maximum number of cached buckets
            /**
             * expireAfterWrite: if an entry has not been created/overwritten within the given time, the key is removed
             *   and the next read loads it again.
             * expireAfterAccess: if an entry has not been read or written within the given time, the key is removed
             *   and the next read loads it again.
             * refreshAfterWrite: if an entry has not been created/overwritten within the given time, the next access
             *   triggers a refresh; until the new value arrives, the old value is returned.
             *   The difference from expire: expire removes the key and the next access loads the new value
             *   synchronously, while refresh keeps the key and returns the old value until the refresh completes.
             */
            .expireAfterAccess(1, TimeUnit.HOURS)
            .build(new CacheLoader<String, RateLimiter>() {
                @Override
                public RateLimiter load(String key) throws Exception {
                    // Initialize a new bucket for this key (2 tokens per second)
                    return RateLimiter.create(2);
                }
            });

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        log.info("request path [{}] uri [{}]", request.getServletPath(), request.getRequestURI());
        try {
            String str = "hello";
            // Look up (or create) the token bucket for this key
            RateLimiter rateLimiter = cachesRateLimiter.get(str);
            if (!rateLimiter.tryAcquire()) {
                System.out.println("too many requests.");
                return false;
            }
        } catch (Exception e) {
            // Forward to /error so the global exception handler can see exceptions thrown inside the interceptor
            request.setAttribute("exception", e);
            request.getRequestDispatcher("/error").forward(request, response);
        }
        return true;
    }
}
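The interceptor only takes effect once it is registered with Spring MVC; the article does not show that step, so the following is a sketch assuming a standard WebMvcConfigurer setup (the WebConfig class name and the "/**" path pattern are my assumptions).

import javax.annotation.Resource;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebConfig implements WebMvcConfigurer {

    @Resource
    private RequestInterceptor requestInterceptor;

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Apply the rate-limiting interceptor to all request paths
        registry.addInterceptor(requestInterceptor).addPathPatterns("/**");
    }
}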
3. Test
@RestController
@RequestMapping(value = "user")
public class UserController {

    @GetMapping
    public Result test2() {
        System.out.println("1111");
        return new Result(true, 200, "");
    }
}
http://localhost:8080/user/
If you do not have a Result class, you can simply return a String instead.
4. Test results
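With each key's bucket configured for 2 permits per second, requesting http://localhost:8080/user/ faster than that makes tryAcquire() fail: the interceptor prints "too many requests." and returns false, so the excess requests never reach the controller.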
Creating a RateLimiter
RateLimiter provides two factory methods:
One is smooth bursty rate limiting:
RateLimiter r = RateLimiter.create(5); // 5 permits per second, available at the full rate immediately after startup
The other is smooth warm-up rate limiting:
RateLimiter r = RateLimiter.create(2, 3, TimeUnit.SECONDS); // the rate ramps up over a 3-second warm-up period after startup before reaching the configured 2 permits per second
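A small demo (my own illustration, not from the article) of how the two factory methods behave: the bursty limiter hands out the initial permits immediately, while the warm-up limiter makes the first few acquire() calls wait longer until the warm-up period has passed.

import com.google.common.util.concurrent.RateLimiter;
import java.util.concurrent.TimeUnit;

public class RateLimiterDemo {
    public static void main(String[] args) {
        // Smooth bursty: permits are available at the full rate right away.
        RateLimiter bursty = RateLimiter.create(5);
        // Smooth warm-up: the rate ramps up to 2 permits/second over a 3-second warm-up period.
        RateLimiter warmup = RateLimiter.create(2, 3, TimeUnit.SECONDS);

        for (int i = 0; i < 5; i++) {
            // acquire() blocks until a permit is available and returns the time spent waiting, in seconds.
            System.out.println("bursty waited " + bursty.acquire() + "s, warmup waited " + warmup.acquire() + "s");
        }
    }
}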
Disadvantages
RateLimiter can only be used for single-machine rate limiting. If you need cluster-wide rate limiting, you have to introduce Redis or Alibaba's open-source Sentinel middleware.
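As a rough illustration of the Redis approach mentioned above (my own sketch, not part of the article; the key prefix and limit are assumptions), a fixed-window counter can be built with INCR and EXPIRE via Spring Data Redis:

import java.util.concurrent.TimeUnit;
import javax.annotation.Resource;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class RedisRateLimiter {

    @Resource
    private StringRedisTemplate stringRedisTemplate;

    // Fixed-window counter: allow at most `limit` requests per second for the given key across the cluster.
    public boolean tryAcquire(String key, long limit) {
        String windowKey = "rate:" + key + ":" + (System.currentTimeMillis() / 1000);
        Long count = stringRedisTemplate.opsForValue().increment(windowKey, 1);
        if (count != null && count == 1) {
            // First request in this window: let the key expire shortly after the window ends.
            stringRedisTemplate.expire(windowKey, 2, TimeUnit.SECONDS);
        }
        return count != null && count <= limit;
    }
}

In production the increment-and-expire pair would normally be wrapped in a Lua script so the two steps execute atomically.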