Install the JDK (sudo yum install -y java-11-openjdk-devel).

Zookeeper: edit the zoo.cfg configuration file and set dataDir (the data directory), clientPort (2181), initLimit (5), syncLimit (2) and the server list (server.x=ip:2888:3888); then start Zookeeper (./zkServer.sh start) and verify its status (./zkServer.sh status). An example zoo.cfg sketch follows the JAAS file below.

Kafka broker settings (server.properties): broker.id=0 (increment it for each additional node); listeners=SASL_PLAINTEXT://your_server_ip:9092 (enables SASL authentication); advertised.listeners=SASL_PLAINTEXT://your_public_ip:9092 (so clients do not fail to connect); zookeeper.connect=zk1_ip:2181,zk2_ip:2181,zk3_ip:2181 (comma-separate multiple Zookeeper nodes).

Performance-related parameters:
- num.partitions: default partition count for new topics (size it to the expected concurrency, e.g. 16 or 32, taking the CPU core count into account).
- default.replication.factor: replicas per topic (3 is recommended in production for data reliability).
- num.network.threads: network processing threads (about 2-3x the CPU core count, e.g. 16 on an 8-core machine).
- num.io.threads: disk IO threads (about 5-8x the CPU core count, e.g. 40 on an 8-core machine).
- log.dirs: log storage directories (comma-separate multiple directories, e.g. /data/kafka/logs1,/data/kafka/logs2, to improve IO throughput).
- log.retention.hours: log retention period (7-168 hours is typical; 168 hours = 7 days).
- log.segment.bytes: log segment size (1 GB, i.e. 1073741824, balances disk IO against lookup efficiency).
- compression.type: message compression codec (lz4 is recommended as a good trade-off between throughput and CPU overhead).
- batch.size: producer batch size (about 1 MB, i.e. 1048576, to reduce the number of network requests).
- linger.ms: how long the producer waits to fill a batch (100-500 ms, balancing latency against throughput).

SASL settings: security.inter.broker.protocol=SASL_PLAINTEXT (inter-broker protocol); sasl.enabled.mechanisms=PLAIN (authentication mechanism); sasl.mechanism.inter.broker.protocol=PLAIN (inter-broker authentication mechanism). A consolidated server.properties sketch also follows below. Then create the JAAS file (kafka_server_jaas.conf) that defines the broker's own credentials and the client accounts:
KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_producer="producer-secret"
    user_consumer="consumer-secret";
};
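For reference, a minimal three-node zoo.cfg sketch; the dataDir path and the zkN_ip addresses are placeholders, and tickTime=2000 is the usual default that initLimit/syncLimit are measured against:
# zoo.cfg (sketch; adjust paths and addresses to your environment)
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/data/zookeeper
clientPort=2181
server.1=zk1_ip:2888:3888
server.2=zk2_ip:2888:3888
server.3=zk3_ip:2888:3888
Each node also needs a myid file under dataDir whose content matches its server.N id, e.g. echo 1 > /data/zookeeper/myid on the first node.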
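Pulling the broker, performance and SASL settings above together, a server.properties sketch for one broker (the IPs, paths and sizing values are illustrative for an 8-core machine, not prescriptive defaults):
# server.properties (sketch; adjust IPs, paths and sizes to your hardware)
broker.id=0
listeners=SASL_PLAINTEXT://your_server_ip:9092
advertised.listeners=SASL_PLAINTEXT://your_public_ip:9092
zookeeper.connect=zk1_ip:2181,zk2_ip:2181,zk3_ip:2181
num.partitions=16
default.replication.factor=3
num.network.threads=16
num.io.threads=40
log.dirs=/data/kafka/logs1,/data/kafka/logs2
log.retention.hours=168
log.segment.bytes=1073741824
compression.type=lz4
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
Note that batch.size and linger.ms are producer-side settings; they belong in the producer's configuration (see the client sketch below), not in server.properties.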
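The user_producer and user_consumer entries in the JAAS file are the accounts clients authenticate with. A minimal client-side properties sketch (the file name, addresses and tuning values are assumptions):
# client.properties (sketch for a producer; a consumer omits the batch settings)
bootstrap.servers=your_public_ip:9092
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="producer" \
  password="producer-secret";
compression.type=lz4
batch.size=1048576
linger.ms=100
For a quick test it can be passed to the console tools, e.g. kafka-console-producer.sh --bootstrap-server your_public_ip:9092 --producer.config client.properties --topic test.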
When starting Kafka, point it at the JAAS file: export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf".

OS-level tuning: raise the file-handle limit (sudo sysctl -w fs.file-max=1000000); increase the default socket buffers (sudo sysctl -w net.core.rmem_default=262144 and net.core.wmem_default=262144); tune the TCP buffers (sudo sysctl -w net.ipv4.tcp_wmem="4096 16384 131072" and net.ipv4.tcp_rmem="4096 65536 1048576"); and minimize swapping (sudo sysctl -w vm.swappiness=1) to avoid disk IO bottlenecks. Format the data disk with XFS and mount it with the noatime option (reducing the overhead of access-time updates), for example:
sudo mkfs.xfs /dev/sdb
sudo mount -o noatime /dev/sdb /data/kafka
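The sysctl -w changes and the mount above do not survive a reboot; one way to persist them is a sysctl drop-in plus an fstab entry (the file name /etc/sysctl.d/99-kafka.conf, the device /dev/sdb and the mount point are assumptions taken from the commands above):
# /etc/sysctl.d/99-kafka.conf (sketch) -- apply with: sudo sysctl --system
fs.file-max = 1000000
net.core.rmem_default = 262144
net.core.wmem_default = 262144
net.ipv4.tcp_wmem = 4096 16384 131072
net.ipv4.tcp_rmem = 4096 65536 1048576
vm.swappiness = 1
# /etc/fstab entry for the Kafka data disk
/dev/sdb  /data/kafka  xfs  noatime  0 0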
Add the corresponding entry to /etc/fstab (as in the sketch above) so the disk is mounted automatically at boot.

Run Kafka as a systemd service (/etc/systemd/system/kafka.service):
[Unit]
Description=Apache Kafka Server
After=network.target zookeeper.service
[Service]
Type=simple
User=kafka
Group=kafka
Environment="KAFKA_OPTS=-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Create a dedicated kafka user, prepare the directories, then enable and start the service:
sudo useradd kafka
sudo mkdir -p /opt/kafka/{logs,data}
sudo chown -R kafka:kafka /opt/kafka
sudo systemctl daemon-reload
sudo systemctl enable kafka
sudo systemctl start kafka
Verify the status: sudo systemctl status kafka.

Operations: monitor the broker's JVM and metrics (viewable via jconsole or kafka-run-class.sh; a JmxTool sketch follows below). Watch the disk usage of the log directories (log.dirs) so they do not fill up; old logs can be cleaned up automatically via log.retention.bytes (maximum log size per partition) and log.retention.check.interval.ms (how often retention is checked), as in the retention sketch below. Regularly back up the Zookeeper data directory (dataDir) and the Kafka log directories (log.dirs); for disaster recovery, the Zookeeper ensemble and the Kafka brokers can be restored from these backups (see the backup sketch below).
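For command-line checks, Kafka ships a JmxTool that reads broker metrics over JMX; a sketch that prints the incoming-message rate (the broker must have been started with JMX enabled, e.g. JMX_PORT=9999 in its environment, and newer Kafka releases ship the class as org.apache.kafka.tools.JmxTool, so verify against your version):
# Sketch: print the broker's MessagesInPerSec metric every 5 seconds
/opt/kafka/bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec \
  --reporting-interval 5000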
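A size-based cleanup sketch for server.properties, complementing log.retention.hours (the 10 GB cap is illustrative; size it to the disks behind log.dirs):
# Cap each partition's log at ~10 GB and check retention every 5 minutes
log.retention.bytes=10737418240
log.retention.check.interval.ms=300000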
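A minimal cold-backup sketch following the approach above (the paths and backup target are assumptions; stop the services or use filesystem snapshots first so the copies are consistent):
# Sketch: archive the Zookeeper data directory and the Kafka log directories
tar czf /backup/zookeeper-$(date +%F).tar.gz /data/zookeeper
tar czf /backup/kafka-logs-$(date +%F).tar.gz /data/kafka/logs1 /data/kafka/logs2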