Kafka Security Hardening Guide for Linux
Create a dedicated user (e.g., kafka) and group (e.g., kafka) to run the Kafka process, instead of running it as root:

```shell
sudo groupadd kafka
sudo useradd -g kafka kafka
```
Give the kafka user and group ownership of the Kafka installation directory (e.g., /usr/local/kafka) and the log directory (e.g., /usr/local/kafka/kafka-logs), and set sensible permissions (750 on directories, 644 on configuration files):

```shell
sudo chown -R kafka:kafka /usr/local/kafka
sudo chmod -R 750 /usr/local/kafka
sudo chmod 644 /usr/local/kafka/config/server.properties
```
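For readers unsure what the numeric modes grant, this illustrative sketch reproduces the 750/644 scheme on a scratch directory (on a real host the targets are /usr/local/kafka and its config files):

```shell
# Illustrative only: demonstrate the 750/644 permission scheme on a temp dir
tmpdir=$(mktemp -d)
mkdir "$tmpdir/kafka"
touch "$tmpdir/kafka/server.properties"
chmod 750 "$tmpdir/kafka"                    # owner rwx, group r-x, others none
chmod 644 "$tmpdir/kafka/server.properties"  # owner rw-, group and others read-only
stat -c '%a %n' "$tmpdir/kafka" "$tmpdir/kafka/server.properties"
```

With 750, only the kafka user can modify the tree and only group members can traverse it; 644 lets the broker read its config while blocking writes by anyone but the owner.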
Start Kafka and ZooKeeper as the kafka user to prevent privilege escalation:

```shell
#!/bin/bash
# Start ZooKeeper first, give it a moment to come up, then start the broker
sudo -u kafka /usr/local/kafka/bin/zookeeper-server-start.sh /usr/local/kafka/config/zookeeper.properties &
sleep 5
sudo -u kafka /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
```
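As a more robust alternative to the ad-hoc script, a systemd unit can pin the service to the kafka user and restart it on failure. This is a sketch: the unit name and the paths are assumptions carried over from the examples above.

```ini
# /etc/systemd/system/kafka.service -- hypothetical unit name and paths
[Unit]
Description=Apache Kafka broker
After=network.target zookeeper.service

[Service]
User=kafka
Group=kafka
ExecStart=/usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
ExecStop=/usr/local/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now kafka`; systemd then guarantees the process never runs as root.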
Use firewalld or iptables to restrict access to the Kafka ports (9092 and 2181 by default) so that only trusted IP addresses or network segments can connect. Simply opening the ports in the default zone would expose them to everyone, so bind them to a zone limited to a trusted source network (10.0.0.0/24 below is an example; substitute your own):

```shell
sudo firewall-cmd --permanent --new-zone=kafka
sudo firewall-cmd --permanent --zone=kafka --add-source=10.0.0.0/24
sudo firewall-cmd --permanent --zone=kafka --add-port=9092/tcp --add-port=2181/tcp
sudo firewall-cmd --reload
```
If SELinux blocks Kafka and you choose not to write a policy for it, disable it (note that this weakens host-level security):

```shell
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
sudo setenforce 0
```
For SASL authentication, prefer the SCRAM-SHA-256 or SCRAM-SHA-512 mechanisms, which are stronger than PLAIN.
Edit server.properties to specify the inter-broker security protocol and the SASL mechanism:

```properties
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256
```
Create a JAAS configuration file (e.g., kafka_server_jaas.conf) defining the broker's own credentials. (With SCRAM, client credentials live in ZooKeeper, so PLAIN-style `user_<name>` entries are not needed here.)

```
KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};
```
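The SCRAM credentials referenced by the JAAS file must also exist in ZooKeeper before the broker starts. A sketch using kafka-configs.sh (requires a running ZooKeeper; the user names and passwords match the examples in this guide):

```shell
# Register SCRAM-SHA-256 credentials for the broker (admin) and a client user
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'SCRAM-SHA-256=[password=admin-secret]' \
  --entity-type users --entity-name admin
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'SCRAM-SHA-256=[password=producer-secret]' \
  --entity-type users --entity-name producer
```

If the admin credentials are missing, inter-broker SCRAM authentication fails and the broker will not join the cluster.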
Point the broker JVM at the JAAS file when starting it. This is done through the java.security.auth.login.config system property (not via --override, which only overrides broker configuration keys):

```shell
export KAFKA_OPTS="-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties
```
On the client side, set security.protocol (e.g., SASL_SSL) and sasl.mechanism (e.g., SCRAM-SHA-256), and supply the user's credentials:

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="producer" password="producer-secret";
```
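To smoke-test the SASL setup end to end, the console producer can run with these client properties saved to a file. The file name client.properties is hypothetical, and the port must match whatever SASL_SSL listener your broker exposes (9093 here is an assumption); a running broker is required:

```shell
# Send a test record as the authenticated "producer" user
bin/kafka-console-producer.sh --broker-list localhost:9093 \
  --topic test-topic --producer.config client.properties
```

An authentication failure here (rather than a produced record) usually means the SCRAM credentials were never created in ZooKeeper or the mechanism names disagree between broker and client.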
To enable authorization, set authorizer.class.name to kafka.security.authorizer.AclAuthorizer in server.properties and deny access to resources for which no ACL has been defined:

```properties
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
allow.everyone.if.no.acl.found=false
```
Use the kafka-acls.sh tool to grant users or groups permissions on resources (such as read, write, and create on topics):

```shell
# Allow user "producer" to write to "test-topic"
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:producer --operation Write --topic test-topic

# Allow user "consumer" to read from "test-topic"
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:consumer --operation Read --topic test-topic
```
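After granting permissions, it is worth confirming what is actually in effect; kafka-acls.sh can list the ACLs on a resource (requires the same running ZooKeeper as above):

```shell
# Inspect the ACLs now attached to the topic
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --list --topic test-topic
```

The output should show the Write grant for User:producer and the Read grant for User:consumer; anything else indicates a typo in a principal or topic name.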
Use keytool to generate the keystore and truststore:

```shell
# Generate the broker's key pair in a keystore
keytool -genkeypair -alias kafka -keyalg RSA -keystore kafka.keystore.jks -validity 365 -storepass password -keypass key-password
# Export the broker certificate...
keytool -exportcert -alias kafka -file kafka.crt -keystore kafka.keystore.jks -storepass password
# ...and import it into the truststore that clients will use
keytool -importcert -alias kafka -file kafka.crt -keystore kafka.truststore.jks -storepass truststore-password -noprompt
```
Edit server.properties to define the SSL listener and the keystore and truststore locations:

```properties
listeners=SSL://:9093
security.inter.broker.protocol=SSL
ssl.keystore.location=/path/to/kafka.keystore.jks
ssl.keystore.password=password
ssl.key.password=key-password
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=truststore-password
```
Clients then connect over SSL with a matching truststore:

```properties
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=truststore-password
```
Set appropriate log levels (e.g., INFO) so that key operations, such as authentication, authorization, and ACL changes, are recorded for auditing; the authorizer logger can be raised to DEBUG for per-request detail:

```properties
log4j.logger.kafka=INFO
log4j.logger.org.apache.zookeeper=INFO
log4j.logger.kafka.authorizer.logger=DEBUG
```
Finally, keep the operating system patched (e.g., with yum update), and use the authorizer log configured above as your audit trail of client access (produce, consume, topic creation); Kafka has no standalone audit-log switch, so authorization logging is the practical way to record these events.
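The kafka.authorizer.logger output can then be mined for denied requests. A sketch over a hypothetical log excerpt (the line format here approximates what the authorizer logger writes; treat it as an assumption and adapt the pattern to your broker version):

```shell
# Hypothetical authorizer-log excerpt for illustration
cat > /tmp/authorizer-sample.log <<'EOF'
[2024-01-01 10:00:00,000] DEBUG Principal = User:producer is Allowed Operation = Write from host = 10.0.0.5 on resource = Topic:LITERAL:test-topic (kafka.authorizer.logger)
[2024-01-01 10:00:01,000] DEBUG Principal = User:intruder is Denied Operation = Read from host = 10.0.0.9 on resource = Topic:LITERAL:test-topic (kafka.authorizer.logger)
[2024-01-01 10:00:02,000] DEBUG Principal = User:consumer is Allowed Operation = Read from host = 10.0.0.6 on resource = Topic:LITERAL:test-topic (kafka.authorizer.logger)
EOF
# Surface denied requests -- a quick audit signal worth alerting on
grep 'is Denied' /tmp/authorizer-sample.log
```

A denial count that suddenly rises is a cheap early-warning signal; wiring this grep into a cron job or log shipper is a natural next step.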