A Guide to Kafka Log Management on CentOS
1. Core Concepts and Directory Planning
Kafka has two unrelated kinds of "logs": the broker's application logs, written by log4j (here /var/log/kafka), and the message logs that hold topic data on disk (log.dirs, here /var/lib/kafka/data). They grow and are cleaned up by different mechanisms, so plan separate directories, ideally separate volumes, for each.
2. Broker Log Configuration: log4j and Rotation under systemd
The broker's own logging is controlled by log4j (config/log4j.properties in the Kafka distribution):
log4j.rootLogger=INFO, stdout, R
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=/var/log/kafka/server.log
log4j.appender.R.MaxFileSize=100MB
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=[%d] %p %m (%c)%n
# Per-package level tuning
log4j.logger.org.apache.kafka=INFO
# log4j.logger.org.apache.kafka=DEBUG  # uncomment for verbose troubleshooting
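If you keep a customized copy of the file elsewhere, the broker can be pointed at it through the KAFKA_LOG4J_OPTS environment variable, which kafka-run-class.sh honors. A minimal sketch (the path is illustrative):

```shell
# Point the broker at an alternate log4j file without editing the shipped
# default; kafka-run-class.sh passes this variable to the JVM.
export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/opt/kafka_2.13-3.6.1/config/log4j.properties"
echo "$KAFKA_LOG4J_OPTS"
```

Set it in the systemd unit (an Environment= line) so it survives restarts.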
Run the broker under systemd (unit file, e.g. /etc/systemd/system/kafka.service):
[Unit]
Description=Apache Kafka
After=network.target
[Service]
Type=simple
User=kafka
Group=kafka
Environment="JAVA_HOME=/usr/lib/jvm/java-11-openjdk"
ExecStart=/opt/kafka_2.13-3.6.1/bin/kafka-server-start.sh /opt/kafka_2.13-3.6.1/config/kraft/server.properties
ExecStop=/opt/kafka_2.13-3.6.1/bin/kafka-server-stop.sh
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Rotate the files with logrotate (e.g. /etc/logrotate.d/kafka). copytruncate lets logrotate handle files the broker keeps open without a restart, at the cost of possibly losing a few lines written during truncation:
/var/log/kafka/*.log {
daily
rotate 30
missingok
compress
delaycompress
copytruncate
notifempty
su kafka kafka
}
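Before forcing a real rotation, the rules can be rehearsed: -d runs logrotate in debug mode, printing what it would do without touching any files.

```shell
# Dry-run: print the planned rotation actions, change nothing on disk.
logrotate -d /etc/logrotate.d/kafka
```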
systemctl daemon-reload
systemctl enable --now kafka
logrotate -f /etc/logrotate.d/kafka # force a rotation to verify the rules
To raise a log level at runtime without restarting the broker, use the dynamic broker loggers entity (Kafka 2.4+). Note the entity type is broker-loggers, not broker, and the config key is the logger name itself:
/opt/kafka_2.13-3.6.1/bin/kafka-configs.sh \
  --bootstrap-server localhost:9092 \
  --alter --entity-type broker-loggers --entity-name 1 \
  --add-config org.apache.kafka=DEBUG
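When troubleshooting ends, the temporary override can be removed again through the same dynamic broker-loggers API (Kafka 2.4+), so the logger falls back to whatever log4j.properties defines. A sketch:

```shell
# Drop the runtime override; the logger reverts to its configured level.
/opt/kafka_2.13-3.6.1/bin/kafka-configs.sh \
  --bootstrap-server localhost:9092 \
  --alter --entity-type broker-loggers --entity-name 1 \
  --delete-config org.apache.kafka
```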
Note: the log4j configuration plus server-side rotation above is the most common and dependable way to run Kafka logging on CentOS; dynamic level changes are best kept for temporary troubleshooting.
3. Message Log Retention and Cleanup Policy
Retention of the actual topic data is configured in server.properties:
log.dirs=/var/lib/kafka/data
log.retention.hours=168 # 7 days
log.retention.bytes=1099511627776 # 1 TiB, enforced per partition
log.segment.bytes=1073741824 # 1 GiB
log.retention.check.interval.ms=300000
log.cleanup.policy=delete
log.cleaner.enable=true
log.delete.delay.ms=60000
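As a sanity check on the numbers above, retention.bytes divided by segment.bytes gives the rough number of closed segments a single partition can accumulate before deletion kicks in:

```shell
# With the exact values from the config above: how many 1 GiB segments
# fit under the 1 TiB per-partition retention cap?
retention_bytes=1099511627776   # log.retention.bytes
segment_bytes=1073741824        # log.segment.bytes
echo $((retention_bytes / segment_bytes))   # 1024
```

Only whole segments are deleted, so actual usage can briefly exceed the cap by up to one segment per partition.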
# Per-topic override: applies only to the specified topic, takes precedence over the broker defaults
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
--alter --entity-type topics --entity-name __consumer_offsets \
--add-config cleanup.policy=compact
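To confirm an override took effect, the same tool can read the stored dynamic topic configs back:

```shell
# Show the dynamic overrides recorded for this topic.
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --describe --entity-type topics --entity-name __consumer_offsets
```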
4. Quick Verification and Everyday Operations Commands
# Inspect
bin/kafka-topics.sh --bootstrap-server localhost:9092 \
--describe --topic __consumer_offsets
# Adjust (example: enable log compaction)
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
--alter --entity-type topics --entity-name __consumer_offsets \
--add-config cleanup.policy=compact
tail -f /var/log/kafka/server.log
ls -lh /var/log/kafka/ | head
logrotate -f /etc/logrotate.d/kafka
/opt/kafka_2.13-3.6.1/bin/kafka-configs.sh \
  --bootstrap-server localhost:9092 \
  --alter --entity-type broker-loggers --entity-name 1 \
  --add-config org.apache.kafka=DEBUG
df -h /var/lib/kafka/data
du -sh /var/lib/kafka/data
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list | wc -l
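The df/du checks above can be wrapped into a small watchdog; a sketch in which the 80% threshold and the DATADIR variable are assumptions (DATADIR defaults to the current directory so the snippet runs anywhere, point it at your log.dirs path in practice):

```shell
# Warn when the message-log volume passes the (assumed) 80% threshold.
DATADIR="${DATADIR:-.}"
usage=$(df -P "$DATADIR" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')
if [ "$usage" -ge 80 ]; then
  echo "WARN: $DATADIR at ${usage}% - lower log.retention.hours or add disk"
else
  echo "OK: $DATADIR at ${usage}%"
fi
```

Run it from cron and pipe WARN lines to your alerting channel of choice.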
Tip: the configuration above applies unchanged in KRaft mode; just make sure controller.quorum.voters and listeners are set correctly, and start the broker with kafka-server-start.sh config/kraft/server.properties.