How to use @KafkaListener to concurrently receive messages in batches in Spring Boot


The first step: concurrent consumption

Look at the code first. The key point is that we use ConcurrentKafkaListenerContainerFactory and call factory.setConcurrency(4). (My topic has 4 partitions; to speed up consumption I set the concurrency to 4, which means there are 4 KafkaMessageListenerContainer instances.)

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // the topic has 4 partitions, so run 4 KafkaMessageListenerContainers
        factory.setConcurrency(4);
        // deliver records to the listener as a List (batch) instead of one at a time
        factory.setBatchListener(true);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

Note that you can also add spring.kafka.listener.concurrency=3 directly to application.properties, and then use @KafkaListener for concurrent consumption.
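As a rough sketch of the property-based route (the exact keys available depend on your Spring Boot version, and the values here are only examples mirroring the Java config above):

    spring.kafka.listener.concurrency=4
    spring.kafka.listener.type=batch
    spring.kafka.consumer.max-poll-records=50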


The second step: batch consumption

Next is batch consumption. The key points are factory.setBatchListener(true); and propsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);. The first enables batch consumption, and the second sets the maximum number of records that can be consumed in each batch.

It is important to note that setting ConsumerConfig.MAX_POLL_RECORDS_CONFIG to 50 does not mean the consumer waits until 50 messages have accumulated. The official description is "The maximum number of records returned in a single call to poll()." In other words, 50 is the upper bound on the number of records returned by one poll.

You can see from the startup log that max.poll.interval.ms = 300000, i.e. the consumer must call poll() at least once within each max.poll.interval.ms window, and each poll returns at most 50 records.

The official explanation of max.poll.interval.ms is: "The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member." Here is the complete configuration used for concurrent batch consumption:

    @Bean
    KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setConcurrency(4);
        factory.setBatchListener(true);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, propsConfig.getBroker());
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, propsConfig.getEnableAutoCommit());
        propsMap.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "100");
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "15000");
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.GROUP_ID_CONFIG, propsConfig.getGroupId());
        propsMap.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, propsConfig.getAutoOffsetReset());
        // upper bound on the number of records returned by a single poll()
        propsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 50);
        return propsMap;
    }
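The consumerFactory() bean referenced by the container factory is not shown in the original snippet. A minimal sketch, assuming the consumerConfigs() map above and Spring Kafka's DefaultKafkaConsumerFactory:

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        // builds Kafka consumers from the property map defined in consumerConfigs()
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }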

[Startup log screenshot: the effective consumer configuration shows max.poll.records = 50 and max.poll.interval.ms = 300000.]

[Screenshot of the official documentation entries for max.poll.records and max.poll.interval.ms; the relevant text is quoted above.]
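Putting the two steps together, a plain batch listener (not tied to specific partitions) could look like the following sketch. The topic name comes from the example below; the containerFactory reference and the Lombok-provided log field are assumptions:

    @KafkaListener(topics = "topic02", containerFactory = "kafkaListenerContainerFactory")
    public void listenBatch(List<ConsumerRecord<String, String>> records) {
        // each invocation receives one polled batch: at most max.poll.records (50) records
        log.info("Batch size: " + records.size());
        for (ConsumerRecord<String, String> record : records) {
            log.info("Received message={}", record.value());
        }
    }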


The third step: partition consumption

For a topic with only one partition, per-partition consumption is unnecessary. The following example assumes 2 partitions (my complete code has 4 listenPartitionX methods, since my topic has 4 partitions); adjust it to your own setup.
// Lombok's @Slf4j supplies the 'log' logger; @Component lets Spring detect the listeners
@Slf4j
@Component
public class MyListener {
    private static final String TOPIC = "topic02";

    // One listener per partition: each @KafkaListener gets its own consumer,
    // statically assigned to a single partition of the topic.
    @KafkaListener(id = "id0", topicPartitions = { @TopicPartition(topic = TOPIC, partitions = { "0" }) })
    public void listenPartition0(List<ConsumerRecord<?, ?>> records) {
        log.info("Id0 Listener, Thread ID: " + Thread.currentThread().getId());
        log.info("Id0 records size " + records.size());

        for (ConsumerRecord<?, ?> record : records) {
            Optional<?> kafkaMessage = Optional.ofNullable(record.value());
            log.info("Received: " + record);
            if (kafkaMessage.isPresent()) {
                Object message = record.value();
                String topic = record.topic();
                log.info("p0 Received message={}", message);
            }
        }
    }

    @KafkaListener(id = "id1", topicPartitions = { @TopicPartition(topic = TOPIC, partitions = { "1" }) })
    public void listenPartition1(List<ConsumerRecord<?, ?>> records) {
        log.info("Id1 Listener, Thread ID: " + Thread.currentThread().getId());
        log.info("Id1 records size " + records.size());

        for (ConsumerRecord<?, ?> record : records) {
            Optional<?> kafkaMessage = Optional.ofNullable(record.value());
            log.info("Received: " + record);
            if (kafkaMessage.isPresent()) {
                Object message = record.value();
                String topic = record.topic();
                log.info("p1 Received message={}", message);
            }
        }
    }
}
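If you prefer fewer methods, a single listener can also be assigned several partitions via the partitions attribute of @TopicPartition. A minimal sketch (inside the same class; the id and partition numbers are just examples):

    @KafkaListener(id = "id23", topicPartitions = { @TopicPartition(topic = TOPIC, partitions = { "2", "3" }) })
    public void listenPartitions2And3(List<ConsumerRecord<?, ?>> records) {
        // this consumer is statically assigned partitions 2 and 3 of the topic
        log.info("Id23 records size " + records.size());
    }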

The above is the detailed content of how to use @KafkaListener to concurrently receive messages in batches in Spring Boot.
