
Apache Kafka Consumer Group Example

This is an in-depth article about Apache Kafka consumer groups.

1. Introduction

Apache Kafka is an Apache open-source project that was initially created at LinkedIn. The Kafka framework is written in Java and Scala. It supports publish-subscribe messaging, is fault-tolerant, and scales to high-volume messaging workloads. ZooKeeper is the component that manages the Apache Kafka server. Kafka provides reliability, scalability, performance, distributed logging, and durability.

2. Apache Kafka – Consumer Group

2.1 Prerequisites

Java 7 or 8 is required on a Linux, Windows, or Mac operating system. Maven 3.6.1 is required for building the Kafka application.

2.2 Download

Java 8 can be downloaded from the Oracle website. Apache Maven 3.6.1 can be downloaded from the Apache site. The latest Kafka releases are available from the Apache Kafka website.

2.3 Setup

You can set the environment variables for JAVA_HOME and PATH as shown below:

Setup

JAVA_HOME="/desktop/jdk1.8.0_73"
export JAVA_HOME
PATH=$JAVA_HOME/bin:$PATH
export PATH

The environment variables for Maven are set as below:

Maven Environment

JAVA_HOME="/jboss/jdk1.8.0_73"
export M2_HOME=/users/bhagvan.kommadi/Desktop/apache-maven-3.6.1
export M2=$M2_HOME/bin
export PATH=$M2:$PATH
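
To confirm that the JDK and Maven picked up from these paths match the required versions, you can print their versions. This is just a quick sanity check; the exact output depends on your installation.

Version Check

java -version
mvn --version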

2.4 How to download and install Apache Kafka

Apache Kafka’s latest releases are available from the Apache Kafka website. After downloading, the archive can be extracted to a folder.
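
As a minimal sketch, assuming the kafka_2.12-2.8.0 binary release used throughout this article, the download and extraction can look like the commands below. The archive name and download URL are assumptions here; use the link and version you actually picked from the Kafka website.

Download and Extract

# assumes the Kafka 2.8.0 binary built for Scala 2.12; adjust to your chosen release
curl -O https://archive.apache.org/dist/kafka/2.8.0/kafka_2.12-2.8.0.tgz
tar -xzf kafka_2.12-2.8.0.tgz
cd kafka_2.12-2.8.0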

To start ZooKeeper, you can use the command below:

ZooKeeper

bin/zookeeper-server-start.sh config/zookeeper.properties

The output of the above command is shown below:

ZooKeeper Output

apples-MacBook-Air:kafka_2.12-2.8.0 bhagvan.kommadi$ bin/zookeeper-server-start.sh config/zookeeper.properties
[2021-05-18 14:29:30,953] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-18 14:29:30,961] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-18 14:29:30,986] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-18 14:29:30,987] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-18 14:29:30,993] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
[2021-05-18 14:29:30,993] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
[2021-05-18 14:29:30,993] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
[2021-05-18 14:29:30,993] WARN Either no config or no quorum defined in config, running  in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
[2021-05-18 14:29:31,004] INFO Log4j 1.2 jmx support found and enabled. (org.apache.zookeeper.jmx.ManagedUtil)
[2021-05-18 14:29:31,028] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-18 14:29:31,028] WARN config/zookeeper.properties is relative. Prepend ./ to indicate that you're sure! (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-18 14:29:31,029] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-18 14:29:31,030] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
[2021-05-18 14:29:31,030] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
[2021-05-18 14:29:31,040] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2021-05-18 14:29:31,076] INFO Server environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,077] INFO Server environment:host.name=localhost (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,077] INFO Server environment:java.version=1.8.0_101 (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,077] INFO Server environment:java.vendor=Oracle Corporation (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,077] INFO Server environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/jre (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,077] INFO Server environment:java.class.path=/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/activation-1.1.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/argparse4j-0.7.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/audience-annotations-0.5.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/commons-cli-1.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/commons-lang3-3.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-api-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-basic-auth-extension-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-file-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-json-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-mirror-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-mirror-client-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-runtime-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-transforms-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-api-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-locator-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-utils-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-annotations-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-core-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-databind-2.10.5.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-paranamer-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-scala_2.12-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.inject-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javassist-3.27.0-GA.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jaxb-api-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-client-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-common-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../li
bs/jersey-container-servlet-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-container-servlet-core-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-hk2-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-media-jaxb-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-server-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-client-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-continuation-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-http-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-io-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-security-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-server-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-servlet-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-servlets-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-util-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-util-ajax-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jline-3.12.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jopt-simple-5.0.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-clients-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-log4j-appender-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-metadata-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-raft-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-shell-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-examples-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-scala_2.12-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-test-utils-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-tools-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka_2.12-2.8.0-sources.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka_2.12-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/log4j-1.2.17.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/lz4-java-1.7.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/maven-artifact-3.6.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/metrics-core-2.2.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-buffer-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-codec-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-handler-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-resolver-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-native-epoll-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0
/bin/../libs/netty-transport-native-unix-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/paranamer-2.8.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/reflections-0.9.12.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/rocksdbjni-5.18.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-collection-compat_2.12-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-java8-compat_2.12-0.9.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-library-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-logging_2.12-3.9.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-reflect-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/snappy-java-1.1.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-jute-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zstd-jni-1.4.9-1.jar (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,079] INFO Server environment:java.library.path=/Users/bhagvan.kommadi/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,079] INFO Server environment:java.io.tmpdir=/var/folders/cr/0y892lq14qv7r24yl0gh0_dm0000gp/T/ (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,079] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,079] INFO Server environment:os.name=Mac OS X (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,079] INFO Server environment:os.arch=x86_64 (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,079] INFO Server environment:os.version=10.16 (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,080] INFO Server environment:user.name=bhagvan.kommadi (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,080] INFO Server environment:user.home=/Users/bhagvan.kommadi (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,080] INFO Server environment:user.dir=/Users/bhagvan.kommadi/Desktop/kafka_2.12-2.8.0 (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,080] INFO Server environment:os.memory.free=498MB (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,080] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,080] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,087] INFO minSessionTimeout set to 6000 (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,087] INFO maxSessionTimeout set to 60000 (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,088] INFO Created server with tickTime 3000 minSessionTimeout 6000 maxSessionTimeout 60000 datadir /tmp/zookeeper/version-2 snapdir /tmp/zookeeper/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 14:29:31,121] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
[2021-05-18 14:29:31,146] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 1 selector thread(s), 8 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2021-05-18 14:29:31,173] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
[2021-05-18 14:29:31,207] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
[2021-05-18 14:29:31,312] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2021-05-18 14:29:31,318] INFO Snapshotting: 0x0 to /tmp/zookeeper/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
[2021-05-18 14:29:31,346] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
[2021-05-18 14:29:31,359] INFO Using checkIntervalMs=60000 maxPerMinute=10000 (org.apache.zookeeper.server.ContainerManager)
[2021-05-18 14:30:02,543] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
[2021-05-18 17:08:56,457] INFO Expiring session 0x100002219360000, timeout of 18000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 17:10:12,347] INFO Invalid session 0x100002219360000 for client /0:0:0:0:0:0:0:1:54613, probably expired (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 17:12:35,764] INFO Expiring session 0x100002219360001, timeout of 18000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 17:13:10,572] INFO Invalid session 0x100002219360001 for client /0:0:0:0:0:0:0:1:54748, probably expired (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 17:14:13,880] WARN Unable to read additional data from client sessionid 0x100002219360002, likely client has closed socket (org.apache.zookeeper.server.NIOServerCnxn)
[2021-05-18 22:26:42,509] INFO Expiring session 0x100002219360002, timeout of 18000ms exceeded (org.apache.zookeeper.server.ZooKeeperServer)
[2021-05-18 22:26:44,457] INFO Invalid session 0x100002219360002 for client /127.0.0.1:55120, probably expired (org.apache.zookeeper.server.ZooKeeperServer)

You can now start the Apache Kafka server using the command below:

Apache Kafka Server Run Command

bin/kafka-server-start.sh config/server.properties

The output of the above command is shown below:

Apache Kafka Server Output

apples-MacBook-Air:kafka_2.12-2.8.0 bhagvan.kommadi$ bin/kafka-server-start.sh config/server.properties
[2021-05-18 14:30:00,649] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-05-18 14:30:02,048] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-05-18 14:30:02,239] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2021-05-18 14:30:02,263] INFO starting (kafka.server.KafkaServer)
[2021-05-18 14:30:02,264] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2021-05-18 14:30:02,327] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 14:30:02,347] INFO Client environment:zookeeper.version=3.5.9-83df9301aa5c2a5d284a9940177808c01bc35cef, built on 01/06/2021 20:03 GMT (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,348] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,348] INFO Client environment:java.version=1.8.0_101 (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,348] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,348] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_101.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,348] INFO Client environment:java.class.path=/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/activation-1.1.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/aopalliance-repackaged-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/argparse4j-0.7.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/audience-annotations-0.5.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/commons-cli-1.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/commons-lang3-3.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-api-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-basic-auth-extension-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-file-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-json-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-mirror-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-mirror-client-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-runtime-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/connect-transforms-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-api-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-locator-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/hk2-utils-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-annotations-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-core-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-databind-2.10.5.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-jaxb-annotations-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-paranamer-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jackson-module-scala_2.12-2.10.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.activation-api-1.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.annotation-api-1.3.5.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.inject-2.6.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.validation-api-2.0.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javassist-3.27.0-GA.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javax.servlet-api-3.1.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/javax.ws.rs-api-2.1.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jaxb-api-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-client-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-common-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../li
bs/jersey-container-servlet-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-container-servlet-core-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-hk2-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-media-jaxb-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jersey-server-2.31.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-client-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-continuation-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-http-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-io-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-security-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-server-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-servlet-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-servlets-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-util-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jetty-util-ajax-9.4.39.v20210325.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jline-3.12.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/jopt-simple-5.0.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-clients-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-log4j-appender-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-metadata-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-raft-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-shell-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-examples-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-scala_2.12-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-streams-test-utils-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka-tools-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka_2.12-2.8.0-sources.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/kafka_2.12-2.8.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/log4j-1.2.17.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/lz4-java-1.7.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/maven-artifact-3.6.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/metrics-core-2.2.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-buffer-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-codec-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-handler-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-resolver-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/netty-transport-native-epoll-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0
/bin/../libs/netty-transport-native-unix-common-4.1.62.Final.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/osgi-resource-locator-1.0.3.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/paranamer-2.8.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/plexus-utils-3.2.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/reflections-0.9.12.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/rocksdbjni-5.18.4.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-collection-compat_2.12-2.3.0.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-java8-compat_2.12-0.9.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-library-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-logging_2.12-3.9.2.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/scala-reflect-2.12.13.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-api-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/slf4j-log4j12-1.7.30.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/snappy-java-1.1.8.1.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zookeeper-jute-3.5.9.jar:/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/bin/../libs/zstd-jni-1.4.9-1.jar (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,351] INFO Client environment:java.library.path=/Users/bhagvan.kommadi/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,351] INFO Client environment:java.io.tmpdir=/var/folders/cr/0y892lq14qv7r24yl0gh0_dm0000gp/T/ (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,352] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,352] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,353] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,353] INFO Client environment:os.version=10.16 (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,354] INFO Client environment:user.name=bhagvan.kommadi (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,355] INFO Client environment:user.home=/Users/bhagvan.kommadi (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,356] INFO Client environment:user.dir=/Users/bhagvan.kommadi/Desktop/kafka_2.12-2.8.0 (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,356] INFO Client environment:os.memory.free=973MB (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,356] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,358] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,410] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@799f10e1 (org.apache.zookeeper.ZooKeeper)
[2021-05-18 14:30:02,424] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2021-05-18 14:30:02,439] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2021-05-18 14:30:02,453] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 14:30:02,468] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 14:30:02,493] INFO Socket connection established, initiating session, client: /127.0.0.1:50851, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 14:30:02,616] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x100002219360000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 14:30:02,624] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 14:30:02,916] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
[2021-05-18 14:30:02,956] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
[2021-05-18 14:30:02,958] INFO Cleared cache (kafka.server.FinalizedFeatureCache)
[2021-05-18 14:30:03,367] INFO Cluster ID = -5njRDKVSPmbuX__hPltCQ (kafka.server.KafkaServer)
[2021-05-18 14:30:03,375] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2021-05-18 14:30:03,539] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.heartbeat.interval.ms = 2000
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	broker.session.timeout.ms = 9000
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	connections.max.reauth.ms = 0
	control.plane.listener.name = null
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.listener.names = null
	controller.quorum.append.linger.ms = 25
	controller.quorum.election.backoff.max.ms = 1000
	controller.quorum.election.timeout.ms = 1000
	controller.quorum.fetch.timeout.ms = 2000
	controller.quorum.request.timeout.ms = 2000
	controller.quorum.retry.backoff.ms = 20
	controller.quorum.voters = []
	controller.quota.window.num = 11
	controller.quota.window.size.seconds = 1
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delegation.token.secret.key = null
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.max.bytes = 57671680
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 0
	group.max.session.timeout.ms = 1800000
	group.max.size = 2147483647
	group.min.session.timeout.ms = 6000
	host.name = 
	initial.broker.registration.timeout.ms = 60000
	inter.broker.listener.name = null
	inter.broker.protocol.version = 2.8-IV1
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.max.compaction.lag.ms = 9223372036854775807
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /tmp/kafka-logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.8-IV1
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connection.creation.rate = 2147483647
	max.connections = 2147483647
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1048588
	metadata.log.dir = null
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	node.id = -1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	process.roles = []
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 30000
	replica.selector.class = null
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.controller.protocol = GSSAPI
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	security.providers = null
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.principal.mapping.rules = DEFAULT
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 1
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 1
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.clientCnxnSocket = null
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 18000
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 18000
	zookeeper.set.acl = false
	zookeeper.ssl.cipher.suites = null
	zookeeper.ssl.client.enable = false
	zookeeper.ssl.crl.enable = false
	zookeeper.ssl.enabled.protocols = null
	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
	zookeeper.ssl.keystore.location = null
	zookeeper.ssl.keystore.password = null
	zookeeper.ssl.keystore.type = null
	zookeeper.ssl.ocsp.enable = false
	zookeeper.ssl.protocol = TLSv1.2
	zookeeper.ssl.truststore.location = null
	zookeeper.ssl.truststore.password = null
	zookeeper.ssl.truststore.type = null
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2021-05-18 14:30:03,551] INFO KafkaConfig values: 
	advertised.host.name = null
	advertised.listeners = null
	advertised.port = null
	alter.config.policy.class.name = null
	alter.log.dirs.replication.quota.window.num = 11
	alter.log.dirs.replication.quota.window.size.seconds = 1
	authorizer.class.name = 
	auto.create.topics.enable = true
	auto.leader.rebalance.enable = true
	background.threads = 10
	broker.heartbeat.interval.ms = 2000
	broker.id = 0
	broker.id.generation.enable = true
	broker.rack = null
	broker.session.timeout.ms = 9000
	client.quota.callback.class = null
	compression.type = producer
	connection.failed.authentication.delay.ms = 100
	connections.max.idle.ms = 600000
	connections.max.reauth.ms = 0
	control.plane.listener.name = null
	controlled.shutdown.enable = true
	controlled.shutdown.max.retries = 3
	controlled.shutdown.retry.backoff.ms = 5000
	controller.listener.names = null
	controller.quorum.append.linger.ms = 25
	controller.quorum.election.backoff.max.ms = 1000
	controller.quorum.election.timeout.ms = 1000
	controller.quorum.fetch.timeout.ms = 2000
	controller.quorum.request.timeout.ms = 2000
	controller.quorum.retry.backoff.ms = 20
	controller.quorum.voters = []
	controller.quota.window.num = 11
	controller.quota.window.size.seconds = 1
	controller.socket.timeout.ms = 30000
	create.topic.policy.class.name = null
	default.replication.factor = 1
	delegation.token.expiry.check.interval.ms = 3600000
	delegation.token.expiry.time.ms = 86400000
	delegation.token.master.key = null
	delegation.token.max.lifetime.ms = 604800000
	delegation.token.secret.key = null
	delete.records.purgatory.purge.interval.requests = 1
	delete.topic.enable = true
	fetch.max.bytes = 57671680
	fetch.purgatory.purge.interval.requests = 1000
	group.initial.rebalance.delay.ms = 0
	group.max.session.timeout.ms = 1800000
	group.max.size = 2147483647
	group.min.session.timeout.ms = 6000
	host.name = 
	initial.broker.registration.timeout.ms = 60000
	inter.broker.listener.name = null
	inter.broker.protocol.version = 2.8-IV1
	kafka.metrics.polling.interval.secs = 10
	kafka.metrics.reporters = []
	leader.imbalance.check.interval.seconds = 300
	leader.imbalance.per.broker.percentage = 10
	listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
	listeners = null
	log.cleaner.backoff.ms = 15000
	log.cleaner.dedupe.buffer.size = 134217728
	log.cleaner.delete.retention.ms = 86400000
	log.cleaner.enable = true
	log.cleaner.io.buffer.load.factor = 0.9
	log.cleaner.io.buffer.size = 524288
	log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
	log.cleaner.max.compaction.lag.ms = 9223372036854775807
	log.cleaner.min.cleanable.ratio = 0.5
	log.cleaner.min.compaction.lag.ms = 0
	log.cleaner.threads = 1
	log.cleanup.policy = [delete]
	log.dir = /tmp/kafka-logs
	log.dirs = /tmp/kafka-logs
	log.flush.interval.messages = 9223372036854775807
	log.flush.interval.ms = null
	log.flush.offset.checkpoint.interval.ms = 60000
	log.flush.scheduler.interval.ms = 9223372036854775807
	log.flush.start.offset.checkpoint.interval.ms = 60000
	log.index.interval.bytes = 4096
	log.index.size.max.bytes = 10485760
	log.message.downconversion.enable = true
	log.message.format.version = 2.8-IV1
	log.message.timestamp.difference.max.ms = 9223372036854775807
	log.message.timestamp.type = CreateTime
	log.preallocate = false
	log.retention.bytes = -1
	log.retention.check.interval.ms = 300000
	log.retention.hours = 168
	log.retention.minutes = null
	log.retention.ms = null
	log.roll.hours = 168
	log.roll.jitter.hours = 0
	log.roll.jitter.ms = null
	log.roll.ms = null
	log.segment.bytes = 1073741824
	log.segment.delete.delay.ms = 60000
	max.connection.creation.rate = 2147483647
	max.connections = 2147483647
	max.connections.per.ip = 2147483647
	max.connections.per.ip.overrides = 
	max.incremental.fetch.session.cache.slots = 1000
	message.max.bytes = 1048588
	metadata.log.dir = null
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	min.insync.replicas = 1
	node.id = -1
	num.io.threads = 8
	num.network.threads = 3
	num.partitions = 1
	num.recovery.threads.per.data.dir = 1
	num.replica.alter.log.dirs.threads = null
	num.replica.fetchers = 1
	offset.metadata.max.bytes = 4096
	offsets.commit.required.acks = -1
	offsets.commit.timeout.ms = 5000
	offsets.load.buffer.size = 5242880
	offsets.retention.check.interval.ms = 600000
	offsets.retention.minutes = 10080
	offsets.topic.compression.codec = 0
	offsets.topic.num.partitions = 50
	offsets.topic.replication.factor = 1
	offsets.topic.segment.bytes = 104857600
	password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
	password.encoder.iterations = 4096
	password.encoder.key.length = 128
	password.encoder.keyfactory.algorithm = null
	password.encoder.old.secret = null
	password.encoder.secret = null
	port = 9092
	principal.builder.class = null
	process.roles = []
	producer.purgatory.purge.interval.requests = 1000
	queued.max.request.bytes = -1
	queued.max.requests = 500
	quota.consumer.default = 9223372036854775807
	quota.producer.default = 9223372036854775807
	quota.window.num = 11
	quota.window.size.seconds = 1
	replica.fetch.backoff.ms = 1000
	replica.fetch.max.bytes = 1048576
	replica.fetch.min.bytes = 1
	replica.fetch.response.max.bytes = 10485760
	replica.fetch.wait.max.ms = 500
	replica.high.watermark.checkpoint.interval.ms = 5000
	replica.lag.time.max.ms = 30000
	replica.selector.class = null
	replica.socket.receive.buffer.bytes = 65536
	replica.socket.timeout.ms = 30000
	replication.quota.window.num = 11
	replication.quota.window.size.seconds = 1
	request.timeout.ms = 30000
	reserved.broker.max.id = 1000
	sasl.client.callback.handler.class = null
	sasl.enabled.mechanisms = [GSSAPI]
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.principal.to.local.rules = [DEFAULT]
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism.controller.protocol = GSSAPI
	sasl.mechanism.inter.broker.protocol = GSSAPI
	sasl.server.callback.handler.class = null
	security.inter.broker.protocol = PLAINTEXT
	security.providers = null
	socket.connection.setup.timeout.max.ms = 30000
	socket.connection.setup.timeout.ms = 10000
	socket.receive.buffer.bytes = 102400
	socket.request.max.bytes = 104857600
	socket.send.buffer.bytes = 102400
	ssl.cipher.suites = []
	ssl.client.auth = none
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.engine.factory.class = null
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.certificate.chain = null
	ssl.keystore.key = null
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.principal.mapping.rules = DEFAULT
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.certificates = null
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
	transaction.max.timeout.ms = 900000
	transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
	transaction.state.log.load.buffer.size = 5242880
	transaction.state.log.min.isr = 1
	transaction.state.log.num.partitions = 50
	transaction.state.log.replication.factor = 1
	transaction.state.log.segment.bytes = 104857600
	transactional.id.expiration.ms = 604800000
	unclean.leader.election.enable = false
	zookeeper.clientCnxnSocket = null
	zookeeper.connect = localhost:2181
	zookeeper.connection.timeout.ms = 18000
	zookeeper.max.in.flight.requests = 10
	zookeeper.session.timeout.ms = 18000
	zookeeper.set.acl = false
	zookeeper.ssl.cipher.suites = null
	zookeeper.ssl.client.enable = false
	zookeeper.ssl.crl.enable = false
	zookeeper.ssl.enabled.protocols = null
	zookeeper.ssl.endpoint.identification.algorithm = HTTPS
	zookeeper.ssl.keystore.location = null
	zookeeper.ssl.keystore.password = null
	zookeeper.ssl.keystore.type = null
	zookeeper.ssl.ocsp.enable = false
	zookeeper.ssl.protocol = TLSv1.2
	zookeeper.ssl.truststore.location = null
	zookeeper.ssl.truststore.password = null
	zookeeper.ssl.truststore.type = null
	zookeeper.sync.time.ms = 2000
 (kafka.server.KafkaConfig)
[2021-05-18 14:30:03,651] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-18 14:30:03,652] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-18 14:30:03,654] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-18 14:30:03,657] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2021-05-18 14:30:03,699] INFO Log directory /tmp/kafka-logs not found, creating it. (kafka.log.LogManager)
[2021-05-18 14:30:03,740] INFO Loading logs from log dirs ArrayBuffer(/tmp/kafka-logs) (kafka.log.LogManager)
[2021-05-18 14:30:03,744] INFO Attempting recovery for all logs in /tmp/kafka-logs since no clean shutdown file was found (kafka.log.LogManager)
[2021-05-18 14:30:03,770] INFO Loaded 0 logs in 31ms. (kafka.log.LogManager)
[2021-05-18 14:30:03,772] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2021-05-18 14:30:03,791] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2021-05-18 14:30:05,315] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
[2021-05-18 14:30:05,327] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2021-05-18 14:30:05,398] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2021-05-18 14:30:05,493] INFO [broker-0-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
[2021-05-18 14:30:05,540] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-18 14:30:05,543] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-18 14:30:05,545] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-18 14:30:05,548] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-18 14:30:05,587] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2021-05-18 14:30:05,649] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-05-18 14:30:05,713] INFO Stat of the created znode at /brokers/ids/0 is: 25,25,1621328405684,1621328405684,1,0,0,72057740489785344,202,0,25
 (kafka.zk.KafkaZkClient)
[2021-05-18 14:30:05,716] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
[2021-05-18 14:30:05,853] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-18 14:30:05,870] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-18 14:30:05,871] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-18 14:30:05,875] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2021-05-18 14:30:05,930] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
[2021-05-18 14:30:05,939] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 14:30:05,948] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 14:30:05,996] INFO Updated cache from existing  to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2021-05-18 14:30:06,030] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2021-05-18 14:30:06,124] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-05-18 14:30:06,136] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2021-05-18 14:30:06,137] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-05-18 14:30:06,218] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-05-18 14:30:06,306] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2021-05-18 14:30:06,328] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2021-05-18 14:30:06,343] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2021-05-18 14:30:06,345] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2021-05-18 14:30:06,356] INFO Kafka version: 2.8.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-18 14:30:06,357] INFO Kafka commitId: ebb1d6e21cc92130 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-18 14:30:06,357] INFO Kafka startTimeMs: 1621328406345 (org.apache.kafka.common.utils.AppInfoParser)
[2021-05-18 14:30:06,360] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2021-05-18 14:30:06,473] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker localhost:9092 (id: 0 rack: null) (kafka.server.BrokerToControllerRequestThread)
[2021-05-18 14:52:04,815] INFO Creating topic ExampleTopic with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2021-05-18 14:52:05,107] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(ExampleTopic-0) (kafka.server.ReplicaFetcherManager)
[2021-05-18 14:52:05,285] INFO [Log partition=ExampleTopic-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 14:52:05,294] INFO Created log for partition ExampleTopic-0 in /tmp/kafka-logs/ExampleTopic-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 14:52:05,296] INFO [Partition ExampleTopic-0 broker=0] No checkpointed highwatermark is found for partition ExampleTopic-0 (kafka.cluster.Partition)
[2021-05-18 14:52:05,298] INFO [Partition ExampleTopic-0 broker=0] Log loaded for partition ExampleTopic-0 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 14:58:05,213] INFO Creating topic example-topic with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2021-05-18 14:58:05,330] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(example-topic-0) (kafka.server.ReplicaFetcherManager)
[2021-05-18 14:58:05,366] INFO [Log partition=example-topic-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 14:58:05,371] INFO Created log for partition example-topic-0 in /tmp/kafka-logs/example-topic-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 14:58:05,375] INFO [Partition example-topic-0 broker=0] No checkpointed highwatermark is found for partition example-topic-0 (kafka.cluster.Partition)
[2021-05-18 14:58:05,376] INFO [Partition example-topic-0 broker=0] Log loaded for partition example-topic-0 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:09:28,855] INFO Creating topic exatopickafka with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2021-05-18 15:09:29,101] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(exatopickafka-0) (kafka.server.ReplicaFetcherManager)
[2021-05-18 15:09:29,123] INFO [Log partition=exatopickafka-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:09:29,150] INFO Created log for partition exatopickafka-0 in /tmp/kafka-logs/exatopickafka-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:09:29,153] INFO [Partition exatopickafka-0 broker=0] No checkpointed highwatermark is found for partition exatopickafka-0 (kafka.cluster.Partition)
[2021-05-18 15:09:29,153] INFO [Partition exatopickafka-0 broker=0] Log loaded for partition exatopickafka-0 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:13:44,531] INFO Creating topic topickafka with configuration {} and initial partition assignment Map(0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2021-05-18 15:13:44,781] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(topickafka-0) (kafka.server.ReplicaFetcherManager)
[2021-05-18 15:13:44,813] INFO [Log partition=topickafka-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:13:44,830] INFO Created log for partition topickafka-0 in /tmp/kafka-logs/topickafka-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:13:44,841] INFO [Partition topickafka-0 broker=0] No checkpointed highwatermark is found for partition topickafka-0 (kafka.cluster.Partition)
[2021-05-18 15:13:44,841] INFO [Partition topickafka-0 broker=0] Log loaded for partition topickafka-0 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:28,989] INFO Creating topic __consumer_offsets with configuration {segment.bytes=104857600, cleanup.policy=compact, compression.type=producer} and initial partition assignment Map(23 -> ArrayBuffer(0), 32 -> ArrayBuffer(0), 41 -> ArrayBuffer(0), 17 -> ArrayBuffer(0), 8 -> ArrayBuffer(0), 35 -> ArrayBuffer(0), 44 -> ArrayBuffer(0), 26 -> ArrayBuffer(0), 11 -> ArrayBuffer(0), 29 -> ArrayBuffer(0), 38 -> ArrayBuffer(0), 47 -> ArrayBuffer(0), 20 -> ArrayBuffer(0), 2 -> ArrayBuffer(0), 5 -> ArrayBuffer(0), 14 -> ArrayBuffer(0), 46 -> ArrayBuffer(0), 49 -> ArrayBuffer(0), 40 -> ArrayBuffer(0), 13 -> ArrayBuffer(0), 4 -> ArrayBuffer(0), 22 -> ArrayBuffer(0), 31 -> ArrayBuffer(0), 16 -> ArrayBuffer(0), 7 -> ArrayBuffer(0), 43 -> ArrayBuffer(0), 25 -> ArrayBuffer(0), 34 -> ArrayBuffer(0), 10 -> ArrayBuffer(0), 37 -> ArrayBuffer(0), 1 -> ArrayBuffer(0), 19 -> ArrayBuffer(0), 28 -> ArrayBuffer(0), 45 -> ArrayBuffer(0), 27 -> ArrayBuffer(0), 36 -> ArrayBuffer(0), 18 -> ArrayBuffer(0), 9 -> ArrayBuffer(0), 21 -> ArrayBuffer(0), 48 -> ArrayBuffer(0), 3 -> ArrayBuffer(0), 12 -> ArrayBuffer(0), 30 -> ArrayBuffer(0), 39 -> ArrayBuffer(0), 15 -> ArrayBuffer(0), 42 -> ArrayBuffer(0), 24 -> ArrayBuffer(0), 6 -> ArrayBuffer(0), 33 -> ArrayBuffer(0), 0 -> ArrayBuffer(0)) (kafka.zk.AdminZkClient)
[2021-05-18 15:16:30,966] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-38, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-13, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
[2021-05-18 15:16:31,045] INFO [Log partition=__consumer_offsets-0, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,051] INFO Created log for partition __consumer_offsets-0 in /tmp/kafka-logs/__consumer_offsets-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,061] INFO [Partition __consumer_offsets-0 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,064] INFO [Partition __consumer_offsets-0 broker=0] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,112] INFO [Log partition=__consumer_offsets-29, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,154] INFO Created log for partition __consumer_offsets-29 in /tmp/kafka-logs/__consumer_offsets-29 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,158] INFO [Partition __consumer_offsets-29 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition)
[2021-05-18 15:16:31,159] INFO [Partition __consumer_offsets-29 broker=0] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,212] INFO [Log partition=__consumer_offsets-48, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,220] INFO Created log for partition __consumer_offsets-48 in /tmp/kafka-logs/__consumer_offsets-48 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,221] INFO [Partition __consumer_offsets-48 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition)
[2021-05-18 15:16:31,222] INFO [Partition __consumer_offsets-48 broker=0] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,326] INFO [Log partition=__consumer_offsets-10, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,370] INFO Created log for partition __consumer_offsets-10 in /tmp/kafka-logs/__consumer_offsets-10 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,373] INFO [Partition __consumer_offsets-10 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition)
[2021-05-18 15:16:31,375] INFO [Partition __consumer_offsets-10 broker=0] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,443] INFO [Log partition=__consumer_offsets-45, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,465] INFO Created log for partition __consumer_offsets-45 in /tmp/kafka-logs/__consumer_offsets-45 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,470] INFO [Partition __consumer_offsets-45 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
[2021-05-18 15:16:31,470] INFO [Partition __consumer_offsets-45 broker=0] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,492] INFO [Log partition=__consumer_offsets-26, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,509] INFO Created log for partition __consumer_offsets-26 in /tmp/kafka-logs/__consumer_offsets-26 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,517] INFO [Partition __consumer_offsets-26 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
[2021-05-18 15:16:31,517] INFO [Partition __consumer_offsets-26 broker=0] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,541] INFO [Log partition=__consumer_offsets-7, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,568] INFO Created log for partition __consumer_offsets-7 in /tmp/kafka-logs/__consumer_offsets-7 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,574] INFO [Partition __consumer_offsets-7 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition)
[2021-05-18 15:16:31,577] INFO [Partition __consumer_offsets-7 broker=0] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,632] INFO [Log partition=__consumer_offsets-42, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,654] INFO Created log for partition __consumer_offsets-42 in /tmp/kafka-logs/__consumer_offsets-42 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,657] INFO [Partition __consumer_offsets-42 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
[2021-05-18 15:16:31,658] INFO [Partition __consumer_offsets-42 broker=0] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,677] INFO [Log partition=__consumer_offsets-4, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,680] INFO Created log for partition __consumer_offsets-4 in /tmp/kafka-logs/__consumer_offsets-4 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,682] INFO [Partition __consumer_offsets-4 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
[2021-05-18 15:16:31,682] INFO [Partition __consumer_offsets-4 broker=0] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,732] INFO [Log partition=__consumer_offsets-23, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,753] INFO Created log for partition __consumer_offsets-23 in /tmp/kafka-logs/__consumer_offsets-23 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,755] INFO [Partition __consumer_offsets-23 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
[2021-05-18 15:16:31,767] INFO [Partition __consumer_offsets-23 broker=0] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,813] INFO [Log partition=__consumer_offsets-1, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,816] INFO Created log for partition __consumer_offsets-1 in /tmp/kafka-logs/__consumer_offsets-1 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,825] INFO [Partition __consumer_offsets-1 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
[2021-05-18 15:16:31,827] INFO [Partition __consumer_offsets-1 broker=0] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,849] INFO [Log partition=__consumer_offsets-20, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,854] INFO Created log for partition __consumer_offsets-20 in /tmp/kafka-logs/__consumer_offsets-20 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,857] INFO [Partition __consumer_offsets-20 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
[2021-05-18 15:16:31,858] INFO [Partition __consumer_offsets-20 broker=0] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,875] INFO [Log partition=__consumer_offsets-39, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,882] INFO Created log for partition __consumer_offsets-39 in /tmp/kafka-logs/__consumer_offsets-39 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,885] INFO [Partition __consumer_offsets-39 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
[2021-05-18 15:16:31,885] INFO [Partition __consumer_offsets-39 broker=0] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,909] INFO [Log partition=__consumer_offsets-17, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,911] INFO Created log for partition __consumer_offsets-17 in /tmp/kafka-logs/__consumer_offsets-17 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,913] INFO [Partition __consumer_offsets-17 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition)
[2021-05-18 15:16:31,913] INFO [Partition __consumer_offsets-17 broker=0] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:31,933] INFO [Log partition=__consumer_offsets-36, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:31,955] INFO Created log for partition __consumer_offsets-36 in /tmp/kafka-logs/__consumer_offsets-36 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:31,956] INFO [Partition __consumer_offsets-36 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
[2021-05-18 15:16:31,956] INFO [Partition __consumer_offsets-36 broker=0] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,013] INFO [Log partition=__consumer_offsets-14, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,029] INFO Created log for partition __consumer_offsets-14 in /tmp/kafka-logs/__consumer_offsets-14 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,031] INFO [Partition __consumer_offsets-14 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
[2021-05-18 15:16:32,033] INFO [Partition __consumer_offsets-14 broker=0] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,054] INFO [Log partition=__consumer_offsets-33, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,062] INFO Created log for partition __consumer_offsets-33 in /tmp/kafka-logs/__consumer_offsets-33 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,063] INFO [Partition __consumer_offsets-33 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition)
[2021-05-18 15:16:32,065] INFO [Partition __consumer_offsets-33 broker=0] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,080] INFO [Log partition=__consumer_offsets-49, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,097] INFO Created log for partition __consumer_offsets-49 in /tmp/kafka-logs/__consumer_offsets-49 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,098] INFO [Partition __consumer_offsets-49 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
[2021-05-18 15:16:32,107] INFO [Partition __consumer_offsets-49 broker=0] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,182] INFO [Log partition=__consumer_offsets-11, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,265] INFO Created log for partition __consumer_offsets-11 in /tmp/kafka-logs/__consumer_offsets-11 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,325] INFO [Partition __consumer_offsets-11 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
[2021-05-18 15:16:32,325] INFO [Partition __consumer_offsets-11 broker=0] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,452] INFO [Log partition=__consumer_offsets-30, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,479] INFO Created log for partition __consumer_offsets-30 in /tmp/kafka-logs/__consumer_offsets-30 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,481] INFO [Partition __consumer_offsets-30 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
[2021-05-18 15:16:32,482] INFO [Partition __consumer_offsets-30 broker=0] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,493] INFO [Log partition=__consumer_offsets-46, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,500] INFO Created log for partition __consumer_offsets-46 in /tmp/kafka-logs/__consumer_offsets-46 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,500] INFO [Partition __consumer_offsets-46 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
[2021-05-18 15:16:32,501] INFO [Partition __consumer_offsets-46 broker=0] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,521] INFO [Log partition=__consumer_offsets-27, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,530] INFO Created log for partition __consumer_offsets-27 in /tmp/kafka-logs/__consumer_offsets-27 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,531] INFO [Partition __consumer_offsets-27 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
[2021-05-18 15:16:32,531] INFO [Partition __consumer_offsets-27 broker=0] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,566] INFO [Log partition=__consumer_offsets-8, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,568] INFO Created log for partition __consumer_offsets-8 in /tmp/kafka-logs/__consumer_offsets-8 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,569] INFO [Partition __consumer_offsets-8 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
[2021-05-18 15:16:32,569] INFO [Partition __consumer_offsets-8 broker=0] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,692] INFO [Log partition=__consumer_offsets-24, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,698] INFO Created log for partition __consumer_offsets-24 in /tmp/kafka-logs/__consumer_offsets-24 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,698] INFO [Partition __consumer_offsets-24 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
[2021-05-18 15:16:32,702] INFO [Partition __consumer_offsets-24 broker=0] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,784] INFO [Log partition=__consumer_offsets-43, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,793] INFO Created log for partition __consumer_offsets-43 in /tmp/kafka-logs/__consumer_offsets-43 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,794] INFO [Partition __consumer_offsets-43 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
[2021-05-18 15:16:32,797] INFO [Partition __consumer_offsets-43 broker=0] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,814] INFO [Log partition=__consumer_offsets-5, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,834] INFO Created log for partition __consumer_offsets-5 in /tmp/kafka-logs/__consumer_offsets-5 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,835] INFO [Partition __consumer_offsets-5 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
[2021-05-18 15:16:32,840] INFO [Partition __consumer_offsets-5 broker=0] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:32,925] INFO [Log partition=__consumer_offsets-21, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:32,952] INFO Created log for partition __consumer_offsets-21 in /tmp/kafka-logs/__consumer_offsets-21 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:32,952] INFO [Partition __consumer_offsets-21 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
[2021-05-18 15:16:32,977] INFO [Partition __consumer_offsets-21 broker=0] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,007] INFO [Log partition=__consumer_offsets-40, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,010] INFO Created log for partition __consumer_offsets-40 in /tmp/kafka-logs/__consumer_offsets-40 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,011] INFO [Partition __consumer_offsets-40 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition)
[2021-05-18 15:16:33,011] INFO [Partition __consumer_offsets-40 broker=0] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,030] INFO [Log partition=__consumer_offsets-2, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,032] INFO Created log for partition __consumer_offsets-2 in /tmp/kafka-logs/__consumer_offsets-2 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,032] INFO [Partition __consumer_offsets-2 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
[2021-05-18 15:16:33,032] INFO [Partition __consumer_offsets-2 broker=0] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,048] INFO [Log partition=__consumer_offsets-37, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,057] INFO Created log for partition __consumer_offsets-37 in /tmp/kafka-logs/__consumer_offsets-37 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,059] INFO [Partition __consumer_offsets-37 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition)
[2021-05-18 15:16:33,060] INFO [Partition __consumer_offsets-37 broker=0] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,150] INFO [Log partition=__consumer_offsets-18, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,207] INFO Created log for partition __consumer_offsets-18 in /tmp/kafka-logs/__consumer_offsets-18 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,211] INFO [Partition __consumer_offsets-18 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition)
[2021-05-18 15:16:33,212] INFO [Partition __consumer_offsets-18 broker=0] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,304] INFO [Log partition=__consumer_offsets-34, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,313] INFO Created log for partition __consumer_offsets-34 in /tmp/kafka-logs/__consumer_offsets-34 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,313] INFO [Partition __consumer_offsets-34 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition)
[2021-05-18 15:16:33,314] INFO [Partition __consumer_offsets-34 broker=0] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,334] INFO [Log partition=__consumer_offsets-15, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,335] INFO Created log for partition __consumer_offsets-15 in /tmp/kafka-logs/__consumer_offsets-15 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,336] INFO [Partition __consumer_offsets-15 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
[2021-05-18 15:16:33,337] INFO [Partition __consumer_offsets-15 broker=0] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,508] INFO [Log partition=__consumer_offsets-12, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,519] INFO Created log for partition __consumer_offsets-12 in /tmp/kafka-logs/__consumer_offsets-12 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,520] INFO [Partition __consumer_offsets-12 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
[2021-05-18 15:16:33,521] INFO [Partition __consumer_offsets-12 broker=0] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,605] INFO [Log partition=__consumer_offsets-31, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,611] INFO Created log for partition __consumer_offsets-31 in /tmp/kafka-logs/__consumer_offsets-31 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,611] INFO [Partition __consumer_offsets-31 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
[2021-05-18 15:16:33,611] INFO [Partition __consumer_offsets-31 broker=0] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,651] INFO [Log partition=__consumer_offsets-9, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,654] INFO Created log for partition __consumer_offsets-9 in /tmp/kafka-logs/__consumer_offsets-9 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,654] INFO [Partition __consumer_offsets-9 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
[2021-05-18 15:16:33,654] INFO [Partition __consumer_offsets-9 broker=0] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,817] INFO [Log partition=__consumer_offsets-47, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,821] INFO Created log for partition __consumer_offsets-47 in /tmp/kafka-logs/__consumer_offsets-47 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,821] INFO [Partition __consumer_offsets-47 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition)
[2021-05-18 15:16:33,822] INFO [Partition __consumer_offsets-47 broker=0] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,840] INFO [Log partition=__consumer_offsets-19, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,846] INFO Created log for partition __consumer_offsets-19 in /tmp/kafka-logs/__consumer_offsets-19 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,848] INFO [Partition __consumer_offsets-19 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition)
[2021-05-18 15:16:33,849] INFO [Partition __consumer_offsets-19 broker=0] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:33,920] INFO [Log partition=__consumer_offsets-28, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:33,942] INFO Created log for partition __consumer_offsets-28 in /tmp/kafka-logs/__consumer_offsets-28 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:33,942] INFO [Partition __consumer_offsets-28 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
[2021-05-18 15:16:33,942] INFO [Partition __consumer_offsets-28 broker=0] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,077] INFO [Log partition=__consumer_offsets-38, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,082] INFO Created log for partition __consumer_offsets-38 in /tmp/kafka-logs/__consumer_offsets-38 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,082] INFO [Partition __consumer_offsets-38 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
[2021-05-18 15:16:34,082] INFO [Partition __consumer_offsets-38 broker=0] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,102] INFO [Log partition=__consumer_offsets-35, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,106] INFO Created log for partition __consumer_offsets-35 in /tmp/kafka-logs/__consumer_offsets-35 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,167] INFO [Partition __consumer_offsets-35 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
[2021-05-18 15:16:34,167] INFO [Partition __consumer_offsets-35 broker=0] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,285] INFO [Log partition=__consumer_offsets-6, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,289] INFO Created log for partition __consumer_offsets-6 in /tmp/kafka-logs/__consumer_offsets-6 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,290] INFO [Partition __consumer_offsets-6 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
[2021-05-18 15:16:34,291] INFO [Partition __consumer_offsets-6 broker=0] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,301] INFO [Log partition=__consumer_offsets-44, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,304] INFO Created log for partition __consumer_offsets-44 in /tmp/kafka-logs/__consumer_offsets-44 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,305] INFO [Partition __consumer_offsets-44 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition)
[2021-05-18 15:16:34,305] INFO [Partition __consumer_offsets-44 broker=0] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,328] INFO [Log partition=__consumer_offsets-25, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,329] INFO Created log for partition __consumer_offsets-25 in /tmp/kafka-logs/__consumer_offsets-25 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,330] INFO [Partition __consumer_offsets-25 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
[2021-05-18 15:16:34,331] INFO [Partition __consumer_offsets-25 broker=0] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,345] INFO [Log partition=__consumer_offsets-16, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,347] INFO Created log for partition __consumer_offsets-16 in /tmp/kafka-logs/__consumer_offsets-16 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,347] INFO [Partition __consumer_offsets-16 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
[2021-05-18 15:16:34,347] INFO [Partition __consumer_offsets-16 broker=0] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,365] INFO [Log partition=__consumer_offsets-22, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,368] INFO Created log for partition __consumer_offsets-22 in /tmp/kafka-logs/__consumer_offsets-22 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,368] INFO [Partition __consumer_offsets-22 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition)
[2021-05-18 15:16:34,368] INFO [Partition __consumer_offsets-22 broker=0] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,380] INFO [Log partition=__consumer_offsets-41, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,382] INFO Created log for partition __consumer_offsets-41 in /tmp/kafka-logs/__consumer_offsets-41 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,384] INFO [Partition __consumer_offsets-41 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition)
[2021-05-18 15:16:34,385] INFO [Partition __consumer_offsets-41 broker=0] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,405] INFO [Log partition=__consumer_offsets-32, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,408] INFO Created log for partition __consumer_offsets-32 in /tmp/kafka-logs/__consumer_offsets-32 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,409] INFO [Partition __consumer_offsets-32 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition)
[2021-05-18 15:16:34,409] INFO [Partition __consumer_offsets-32 broker=0] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,423] INFO [Log partition=__consumer_offsets-3, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,424] INFO Created log for partition __consumer_offsets-3 in /tmp/kafka-logs/__consumer_offsets-3 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,425] INFO [Partition __consumer_offsets-3 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition)
[2021-05-18 15:16:34,425] INFO [Partition __consumer_offsets-3 broker=0] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,530] INFO [Log partition=__consumer_offsets-13, dir=/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-05-18 15:16:34,532] INFO Created log for partition __consumer_offsets-13 in /tmp/kafka-logs/__consumer_offsets-13 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> compact, flush.ms -> 9223372036854775807, segment.bytes -> 104857600, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.8-IV1, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
[2021-05-18 15:16:34,534] INFO [Partition __consumer_offsets-13 broker=0] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
[2021-05-18 15:16:34,535] INFO [Partition __consumer_offsets-13 broker=0] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
[2021-05-18 15:16:34,583] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 22 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,595] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,632] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 25 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,633] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,650] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 28 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,650] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,651] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 31 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,654] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,655] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 34 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,655] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,656] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 37 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,657] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,661] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 40 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,771] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,721] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-22 in 81 milliseconds, of which 35 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,794] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-25 in 145 milliseconds, of which 145 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,795] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-28 in 144 milliseconds, of which 143 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,795] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-31 in 140 milliseconds, of which 140 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,795] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-34 in 139 milliseconds, of which 139 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,796] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-37 in 135 milliseconds, of which 135 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,798] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-40 in 27 milliseconds, of which 27 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,801] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 43 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,801] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,801] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 46 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,801] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,801] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 49 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,801] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,801] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 41 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,802] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-43 in 1 milliseconds, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,813] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,813] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 44 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,813] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,813] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 47 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,813] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,813] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 1 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,832] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,832] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 4 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,833] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,833] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 7 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,833] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,833] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 10 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,833] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,833] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 13 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,833] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,833] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 16 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,833] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,833] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 19 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,833] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,834] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 2 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,840] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,840] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 5 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,834] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-46 in 33 milliseconds, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,841] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,841] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 8 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,841] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,842] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 11 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,842] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,842] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 14 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,844] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-49 in 41 milliseconds, of which 41 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,958] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-41 in 145 milliseconds, of which 145 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,962] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-44 in 149 milliseconds, of which 148 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,962] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-47 in 149 milliseconds, of which 149 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,962] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-1 in 130 milliseconds, of which 130 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,963] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-4 in 130 milliseconds, of which 129 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,963] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-7 in 130 milliseconds, of which 130 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,964] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-10 in 131 milliseconds, of which 130 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,964] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-13 in 131 milliseconds, of which 131 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,964] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-16 in 131 milliseconds, of which 131 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,964] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-19 in 130 milliseconds, of which 130 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,965] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-2 in 125 milliseconds, of which 124 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,965] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-5 in 124 milliseconds, of which 124 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,969] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-8 in 128 milliseconds, of which 128 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,970] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-11 in 128 milliseconds, of which 127 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,844] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,971] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 17 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,971] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,972] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 20 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,971] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,972] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,973] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-17 in 2 milliseconds, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,973] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 23 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,973] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,973] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,974] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 26 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,974] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,974] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 29 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,974] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,974] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 32 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,974] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-23 in 0 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,974] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-26 in 0 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,975] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-29 in 1 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,974] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,975] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 35 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,975] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,975] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-32 in 0 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,976] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 38 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,976] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,976] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 0 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,976] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,976] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 3 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,976] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,976] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 6 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,979] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,979] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 9 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,979] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,979] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 12 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,979] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,979] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 15 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,979] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,979] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 18 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,979] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:34,980] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 21 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:34,977] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-35 in 2 milliseconds, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,029] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-38 in 53 milliseconds, of which 53 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,030] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-0 in 54 milliseconds, of which 54 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,030] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-3 in 54 milliseconds, of which 54 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,030] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-6 in 51 milliseconds, of which 51 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,030] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-9 in 51 milliseconds, of which 51 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,030] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-12 in 51 milliseconds, of which 51 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,031] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-15 in 52 milliseconds, of which 51 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,032] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-18 in 52 milliseconds, of which 51 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,033] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,033] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 24 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,034] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,034] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,036] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 27 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,036] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,036] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-24 in 0 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,037] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-27 in 1 milliseconds, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,036] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 30 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,037] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,038] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 33 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,038] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,038] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 36 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,038] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,038] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 39 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,038] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,038] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 42 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,038] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,039] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 45 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,038] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-30 in 1 milliseconds, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,040] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,041] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-33 in 3 milliseconds, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,041] INFO [GroupCoordinator 0]: Elected as the group coordinator for partition 48 (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:35,042] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,042] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-36 in 4 milliseconds, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,042] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-39 in 4 milliseconds, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,042] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-42 in 3 milliseconds, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,043] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-45 in 2 milliseconds, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,043] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-48 in 1 milliseconds, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:16:35,752] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-56058 in state PreparingRebalance with old generation 0 (__consumer_offsets-5) (reason: Adding new member consumer-console-consumer-56058-1-c29fae38-5096-4548-bddf-75047f5d7555 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:36,334] INFO [GroupCoordinator 0]: Stabilized group console-consumer-56058 generation 1 (__consumer_offsets-5) with 1 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:16:36,449] INFO [GroupCoordinator 0]: Assignment received from leader for group console-consumer-56058 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:21:35,358] INFO [GroupCoordinator 0]: Preparing to rebalance group my-group in state PreparingRebalance with old generation 0 (__consumer_offsets-12) (reason: Adding new member consumer-my-group-1-87999654-627c-454f-b215-2a502b803dd5 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:21:35,363] INFO [GroupCoordinator 0]: Stabilized group my-group generation 1 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:21:35,378] INFO [GroupCoordinator 0]: Assignment received from leader for group my-group for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:22:58,585] INFO [GroupCoordinator 0]: Preparing to rebalance group my-group in state PreparingRebalance with old generation 1 (__consumer_offsets-12) (reason: Adding new member consumer-my-group-1-703ad4fb-994f-4d4a-8568-9b359548df93 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:22:59,511] INFO [GroupCoordinator 0]: Stabilized group my-group generation 2 (__consumer_offsets-12) with 2 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:22:59,516] INFO [GroupCoordinator 0]: Assignment received from leader for group my-group for generation 2. The group has 2 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:26:30,514] INFO [GroupCoordinator 0]: Preparing to rebalance group my-group in state PreparingRebalance with old generation 2 (__consumer_offsets-12) (reason: Adding new member consumer-my-group-1-bd81f809-7591-49a5-924f-35c7aab48c4c with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:26:51,326] INFO [GroupCoordinator 0]: Member[group.instance.id None, member.id consumer-console-consumer-56058-1-c29fae38-5096-4548-bddf-75047f5d7555] in group console-consumer-56058 has left, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:26:51,347] INFO [GroupCoordinator 0]: Preparing to rebalance group console-consumer-56058 in state PreparingRebalance with old generation 1 (__consumer_offsets-5) (reason: removing member consumer-console-consumer-56058-1-c29fae38-5096-4548-bddf-75047f5d7555 on LeaveGroup) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:26:51,352] INFO [GroupCoordinator 0]: Group console-consumer-56058 with generation 2 is now empty (__consumer_offsets-5) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:26:53,943] INFO [GroupCoordinator 0]: Member consumer-my-group-1-87999654-627c-454f-b215-2a502b803dd5 in group my-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:26:53,978] INFO [GroupCoordinator 0]: Stabilized group my-group generation 3 (__consumer_offsets-12) with 2 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:26:54,230] INFO [GroupCoordinator 0]: Assignment received from leader for group my-group for generation 3. The group has 2 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:27:20,264] INFO [GroupCoordinator 0]: Preparing to rebalance group my-group in state PreparingRebalance with old generation 3 (__consumer_offsets-12) (reason: Adding new member consumer-my-group-1-ea8f51b0-c233-4e0c-b7eb-9090a7cfe70a with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:27:39,056] INFO [GroupCoordinator 0]: Member consumer-my-group-1-bd81f809-7591-49a5-924f-35c7aab48c4c in group my-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:27:44,337] INFO [GroupCoordinator 0]: Member consumer-my-group-1-703ad4fb-994f-4d4a-8568-9b359548df93 in group my-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:27:44,345] INFO [GroupCoordinator 0]: Stabilized group my-group generation 4 (__consumer_offsets-12) with 2 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:27:44,413] INFO [GroupCoordinator 0]: Assignment received from leader for group my-group for generation 4. The group has 2 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:29:08,568] INFO [GroupCoordinator 0]: Preparing to rebalance group my-group in state PreparingRebalance with old generation 4 (__consumer_offsets-12) (reason: Adding new member consumer-my-group-1-1cd7edb8-d121-45a2-bf73-a86b7e17da97 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:29:32,493] INFO [GroupCoordinator 0]: Member consumer-my-group-1-ea8f51b0-c233-4e0c-b7eb-9090a7cfe70a in group my-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:29:32,509] INFO [GroupCoordinator 0]: Stabilized group my-group generation 5 (__consumer_offsets-12) with 2 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:29:32,533] INFO [GroupCoordinator 0]: Assignment received from leader for group my-group for generation 5. The group has 2 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:30:06,003] INFO [GroupMetadataManager brokerId=0] Group console-consumer-56058 transitioned to Dead in generation 2 (kafka.coordinator.group.GroupMetadataManager)
[2021-05-18 15:31:35,661] INFO [GroupCoordinator 0]: Member consumer-my-group-1-1cd7edb8-d121-45a2-bf73-a86b7e17da97 in group my-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:31:35,668] INFO [GroupCoordinator 0]: Preparing to rebalance group my-group in state PreparingRebalance with old generation 5 (__consumer_offsets-12) (reason: removing member consumer-my-group-1-1cd7edb8-d121-45a2-bf73-a86b7e17da97 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:31:35,712] INFO [GroupCoordinator 0]: Stabilized group my-group generation 6 (__consumer_offsets-12) with 1 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:31:35,721] INFO [GroupCoordinator 0]: Assignment received from leader for group my-group for generation 6. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:42:01,395] INFO [GroupCoordinator 0]: Preparing to rebalance group example-group in state PreparingRebalance with old generation 0 (__consumer_offsets-28) (reason: Adding new member consumer-example-group-1-6e88be0c-87ec-4ba1-9d1b-75098511d635 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:42:01,413] INFO [GroupCoordinator 0]: Stabilized group example-group generation 1 (__consumer_offsets-28) with 1 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:42:01,428] INFO [GroupCoordinator 0]: Assignment received from leader for group example-group for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:42:19,532] INFO [GroupCoordinator 0]: Member consumer-my-group-1-08186c8b-7c24-4b78-825c-94bdb576742e in group my-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:42:19,539] INFO [GroupCoordinator 0]: Preparing to rebalance group my-group in state PreparingRebalance with old generation 6 (__consumer_offsets-12) (reason: removing member consumer-my-group-1-08186c8b-7c24-4b78-825c-94bdb576742e on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:42:19,541] INFO [GroupCoordinator 0]: Group my-group with generation 7 is now empty (__consumer_offsets-12) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:44:02,257] INFO [GroupCoordinator 0]: Preparing to rebalance group example-group in state PreparingRebalance with old generation 1 (__consumer_offsets-28) (reason: Adding new member consumer-example-group-1-f33e0106-be5b-42c0-8e8e-1f8dff3f28af with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:44:04,594] INFO [GroupCoordinator 0]: Stabilized group example-group generation 2 (__consumer_offsets-28) with 2 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:44:04,601] INFO [GroupCoordinator 0]: Assignment received from leader for group example-group for generation 2. The group has 2 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:44:50,230] INFO [GroupCoordinator 0]: Preparing to rebalance group example-group in state PreparingRebalance with old generation 2 (__consumer_offsets-28) (reason: Adding new member consumer-example-group-1-8bf20ab7-724a-4a3a-b81e-323f53d5758d with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:44:52,675] INFO [GroupCoordinator 0]: Member consumer-example-group-1-6e88be0c-87ec-4ba1-9d1b-75098511d635 in group example-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:45:16,653] INFO [GroupCoordinator 0]: Member consumer-example-group-1-f33e0106-be5b-42c0-8e8e-1f8dff3f28af in group example-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:45:16,662] INFO [GroupCoordinator 0]: Stabilized group example-group generation 3 (__consumer_offsets-28) with 1 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 15:45:16,718] INFO [GroupCoordinator 0]: Assignment received from leader for group example-group for generation 3. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 17:08:51,232] WARN Client session timed out, have not heard from server in 13799ms for sessionid 0x100002219360000 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:09:02,578] INFO Client session timed out, have not heard from server in 13799ms for sessionid 0x100002219360000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:09:05,941] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:09:05,956] INFO Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:54613, server: localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:10:12,679] WARN Unable to reconnect to ZooKeeper service, session 0x100002219360000 has expired (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:10:12,898] INFO Unable to reconnect to ZooKeeper service, session 0x100002219360000 has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:10:13,167] INFO EventThread shut down for session: 0x100002219360000 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:10:27,835] INFO [ZooKeeperClient Kafka server] Session expired. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:10:34,150] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:11:49,234] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@799f10e1 (org.apache.zookeeper.ZooKeeper)
[2021-05-18 17:11:49,689] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2021-05-18 17:11:58,272] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:11:59,732] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:12:00,505] INFO Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:54685, server: localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:12:04,772] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-05-18 17:12:04,885] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x100002219360001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:12:05,202] INFO Updated cache from existing FinalizedFeaturesAndEpoch(features=Features{}, epoch=0) to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2021-05-18 17:12:05,885] INFO Stat of the created znode at /brokers/ids/0 is: 161,161,1621338125151,1621338125151,1,0,0,72057740489785345,202,0,161
 (kafka.zk.KafkaZkClient)
[2021-05-18 17:12:16,730] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 161 (kafka.zk.KafkaZkClient)
[2021-05-18 17:13:02,547] WARN Client session timed out, have not heard from server in 45378ms for sessionid 0x100002219360001 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:02,558] INFO [GroupCoordinator 0]: Member consumer-example-group-1-8bf20ab7-724a-4a3a-b81e-323f53d5758d in group example-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 17:13:02,562] INFO [GroupCoordinator 0]: Preparing to rebalance group example-group in state PreparingRebalance with old generation 3 (__consumer_offsets-28) (reason: removing member consumer-example-group-1-8bf20ab7-724a-4a3a-b81e-323f53d5758d on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 17:13:02,564] INFO [GroupCoordinator 0]: Group example-group with generation 4 is now empty (__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 17:13:02,597] INFO Client session timed out, have not heard from server in 45378ms for sessionid 0x100002219360001, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:09,572] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:09,575] INFO [GroupCoordinator 0]: Preparing to rebalance group example-group in state PreparingRebalance with old generation 4 (__consumer_offsets-28) (reason: Adding new member consumer-example-group-1-dabf5219-573b-4da3-8b8c-2b33e67de1d4 with group instance id None) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 17:13:09,587] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:09,667] INFO [GroupCoordinator 0]: Stabilized group example-group generation 5 (__consumer_offsets-28) with 1 members (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 17:13:09,779] INFO [GroupCoordinator 0]: Assignment received from leader for group example-group for generation 5. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 17:13:10,487] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:10,515] INFO Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:54748, server: localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:10,587] WARN Unable to reconnect to ZooKeeper service, session 0x100002219360001 has expired (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:10,590] INFO EventThread shut down for session: 0x100002219360001 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:10,594] INFO [ZooKeeperClient Kafka server] Session expired. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:10,600] INFO Unable to reconnect to ZooKeeper service, session 0x100002219360001 has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:12,905] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:12,906] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@799f10e1 (org.apache.zookeeper.ZooKeeper)
[2021-05-18 17:13:12,909] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2021-05-18 17:13:12,913] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:12,923] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-05-18 17:13:13,293] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:13,303] INFO Socket connection established, initiating session, client: /127.0.0.1:54758, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:55,557] INFO Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:55,675] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:55,676] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:55,677] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:57,070] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:57,072] INFO Socket connection established, initiating session, client: /127.0.0.1:54778, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:57,100] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x100002219360002, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:13:57,100] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:57,100] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:57,100] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:13:57,116] INFO Updated cache from existing FinalizedFeaturesAndEpoch(features=Features{}, epoch=0) to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2021-05-18 17:13:57,872] INFO Stat of the created znode at /brokers/ids/0 is: 165,165,1621338237106,1621338237106,1,0,0,72057740489785346,202,0,165
 (kafka.zk.KafkaZkClient)
[2021-05-18 17:13:57,873] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 165 (kafka.zk.KafkaZkClient)
[2021-05-18 17:14:11,664] WARN Client session timed out, have not heard from server in 12008ms for sessionid 0x100002219360002 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:14:11,671] INFO Client session timed out, have not heard from server in 12008ms for sessionid 0x100002219360002, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:14:13,330] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:14:13,494] INFO Socket connection established, initiating session, client: /127.0.0.1:54800, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:14:14,047] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x100002219360002, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 17:14:17,273] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 17:14:17,273] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 22:26:43,029] WARN Client session timed out, have not heard from server in 19260ms for sessionid 0x100002219360002 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:42,322] INFO Client session timed out, have not heard from server in 19260ms for sessionid 0x100002219360002, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:44,267] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:44,393] INFO Socket connection established, initiating session, client: /127.0.0.1:55120, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:44,458] WARN Unable to reconnect to ZooKeeper service, session 0x100002219360002 has expired (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:44,459] INFO [ZooKeeperClient Kafka server] Session expired. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 22:26:44,558] INFO Unable to reconnect to ZooKeeper service, session 0x100002219360002 has expired, closing socket connection (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:44,559] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2021-05-18 22:26:44,558] INFO EventThread shut down for session: 0x100002219360002 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:51,105] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@799f10e1 (org.apache.zookeeper.ZooKeeper)
[2021-05-18 22:26:51,841] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2021-05-18 22:26:51,858] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:51,950] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-05-18 22:26:52,175] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:52,176] INFO Socket connection established, initiating session, client: /127.0.0.1:55126, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:52,256] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x100002219360003, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
[2021-05-18 22:26:52,673] INFO Stat of the created znode at /brokers/ids/0 is: 170,170,1621357012649,1621357012649,1,0,0,72057740489785347,202,0,170
 (kafka.zk.KafkaZkClient)
[2021-05-18 22:26:52,675] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://localhost:9092, czxid (broker epoch): 170 (kafka.zk.KafkaZkClient)
[2021-05-18 22:26:52,689] INFO Updated cache from existing FinalizedFeaturesAndEpoch(features=Features{}, epoch=0) to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2021-05-18 22:58:22,799] INFO [GroupCoordinator 0]: Member consumer-example-group-1-dabf5219-573b-4da3-8b8c-2b33e67de1d4 in group example-group has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 22:58:22,827] INFO [GroupCoordinator 0]: Preparing to rebalance group example-group in state PreparingRebalance with old generation 5 (__consumer_offsets-28) (reason: removing member consumer-example-group-1-dabf5219-573b-4da3-8b8c-2b33e67de1d4 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
[2021-05-18 22:58:22,834] INFO [GroupCoordinator 0]: Group example-group with generation 6 is now empty (__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)

2.5 Consumer Group

In Kafka, a consumer group is a set of consumers, possibly running in multiple threads or processes, that consume from the same Kafka topics. Consumers join a group by using the same group.id. The parallelism of the group is bounded by the number of consumers in it, because each partition of a topic is assigned to exactly one consumer in the group. A message published to the topic is therefore read by only one consumer in the group, and within a partition the messages are delivered in the order in which they were sent. When consumers join or leave the group, or when a consumer stops sending heartbeats to the group coordinator, Kafka rebalances the group and reassigns the available partitions to the remaining consumers, shifting partitions from the failed process to the others, as the sketch below illustrates.
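
To make this partition assignment visible while experimenting, you can optionally pass a ConsumerRebalanceListener to subscribe(). The following is a minimal sketch, not part of the article's main example: the class name RebalanceLoggingConsumer is an illustrative assumption, while the group my-group and the topic topickafka are taken from the run shown in this article. The listener simply logs which partitions the consumer receives or loses during each rebalance.

Apache Kafka Rebalance Listener Sketch

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RebalanceLoggingConsumer {

   public static void main(String[] args) {
      // Minimal consumer configuration; the group.id places this consumer in my-group.
      Properties properties = new Properties();
      properties.put("bootstrap.servers", "localhost:9092");
      properties.put("group.id", "my-group");
      properties.put("key.deserializer", StringDeserializer.class.getName());
      properties.put("value.deserializer", StringDeserializer.class.getName());

      KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);

      // The listener is invoked by the consumer during every rebalance.
      consumer.subscribe(Collections.singletonList("topickafka"), new ConsumerRebalanceListener() {
         @Override
         public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
            System.out.println("Partitions revoked: " + partitions);
         }

         @Override
         public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
            System.out.println("Partitions assigned: " + partitions);
         }
      });

      // Poll a few times so the group join and the assignment actually happen.
      for (int i = 0; i < 10; i++) {
         consumer.poll(Duration.ofMillis(500));
      }
      consumer.close();
   }
}

Starting and stopping a second copy of this sketch triggers the "Preparing to rebalance group my-group" and "Stabilized group my-group" messages seen in the broker log above.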

Let us now look at the Kafka consumer group example.

Apache Kafka Consumer Group Example

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerGroupExample {

   public static void main(String[] args) {
      if (args.length < 2) {
         System.out.println("Usage: ConsumerGroupExample <topic> <group-id>");
         return;
      }

      String topic = args[0];
      String consumerGroup = args[1];

      // Consumer configuration: every process started with the same group.id
      // joins the same consumer group and shares the topic's partitions.
      Properties properties = new Properties();
      properties.put("bootstrap.servers", "localhost:9092");
      properties.put("group.id", consumerGroup);
      properties.put("enable.auto.commit", "true");
      properties.put("auto.commit.interval.ms", "1000");
      properties.put("session.timeout.ms", "30000");
      properties.put("key.deserializer",
         "org.apache.kafka.common.serialization.StringDeserializer");
      properties.put("value.deserializer",
         "org.apache.kafka.common.serialization.StringDeserializer");

      KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);

      consumer.subscribe(Arrays.asList(topic));
      System.out.println("Subscription to topic complete " + topic);

      // Poll in a loop and print every record received from the assigned partitions.
      while (true) {
         ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
         for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset value = %d, key = %s, value = %s\n",
               record.offset(), record.key(), record.value());
      }
   }
}

The command to compile the above code is shown below:

Apache Kafka ConsumerGroup Compilation Command

javac -cp "/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*" ConsumerGroupExample.java

The output of the above command is shown as below:

Apache Kafka ConsumerGroup Compilation Output

apples-MacBook-Air:apachekafkaproducer bhagvan.kommadi$ javac -cp "/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*" ConsumerGroupExample.java
apples-MacBook-Air:apachekafkaproducer bhagvan.kommadi$

You can use the command-line producer script to create the topic and publish test messages; a sketch for creating the topic explicitly beforehand is shown below.
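
With default broker settings the console producer auto-creates the topic on first use, but you can also create it explicitly with the kafka-topics.sh script in the same bin directory. This is a minimal sketch, assuming the broker runs on localhost:9092; two partitions are chosen here only so that both consumers in the group can receive an assignment.

Apache Kafka Topic Creation Command

./kafka-topics.sh --create --topic topickafka --bootstrap-server localhost:9092 --partitions 2 --replication-factor 1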

Apache Kafka Producer Command Line

./kafka-console-producer.sh --topic topickafka --bootstrap-server localhost:9092

The output of the above command is shown as below:

Apache Kafka Consumer Group Execution Output

apples-MacBook-Air:bin bhagvan.kommadi$ ./kafka-console-producer.sh --topic topickafka --bootstrap-server localhost:9092
>Test consumer group 01
>Test consumer group 02

The command for executing the sample consumer as process 1 is shown below:

Apache Kafka Consumer Group Execution Command

java -cp "/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*:." ConsumerGroupExample  example-group

The output of the above command, showing the message from the topic consumed by process 1, is shown below:

Apache Kafka Consumer 1 Output

apples-MacBook-Air:apachekafkaconsumergroup bhagvan.kommadi$ java -cp "/Users/agvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*:." ConsumerGroupExample topickafka example-group
Subscription to topic complete topickafka
offset value = 15, key = null, value = Test consumer group 01

The command for executing the sample consumer as process 2 is shown below:

Apache Kafka Consumer Group Execution Command

java -cp "/Users/bhagvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*:." ConsumerGroupExample  example-group

The output of the above command, showing the message from the topic consumed by process 2, is shown below:

Apache Kafka Consumer 2 Output

apples-MacBook-Air:apachekafkaconsumergroup bhagvan.kommadi$ java -cp "/Users/agvan.kommadi/desktop/kafka_2.12-2.8.0/libs/*:." ConsumerGroupExample topickafka example-group
Subscription to topic complete topickafka
offset value = 17, key = null, value = Test consumer group 02
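
Each of the two messages is delivered to only one of the two processes, which is exactly the consumer-group behaviour described above. To inspect how the partitions of topickafka are currently assigned within example-group, you can optionally describe the group with the kafka-consumer-groups.sh script shipped with Kafka (a sketch, assuming the broker runs on localhost:9092):

Apache Kafka Consumer Group Describe Command

./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group example-group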

3. Download the Source Code

That was an in-depth article related to the Apache Kafka Consumer group.

Download
You can download the full source code of this example here: Apache Kafka Consumer Group Example

Bhagvan Kommadi

Bhagvan Kommadi is the Founder of Architect Corner and has around 20 years' experience in the industry, ranging from large-scale enterprise development to helping incubate software product start-ups. He holds a Masters in Industrial Systems Engineering from Georgia Institute of Technology (1997) and a Bachelors in Aerospace Engineering from the Indian Institute of Technology, Madras (1993). He is a member of the IFX forum and Oracle JCP, a participant in the Java Community Process, and a member of the MIT Technology Review Global Panel. He founded Quantica Computacao, the first quantum computing startup in India; Markets and Markets has positioned Quantica Computacao in the 'Emerging Companies' section of its Quantum Computing quadrants. Bhagvan has engineered and developed simulators and tools in the area of quantum technology using IBM Q, Microsoft Q# and Google QScript. He has reviewed the Manning book "Machine Learning with TensorFlow" and is the author of the Packt Publishing book "Hands-On Data Structures and Algorithms with Go".