Practical Approaches to Compressed Storage of Filebeat Logs
1. Transport-layer compression
# Filebeat runs a single output at a time; keep only the block that matches your pipeline.

# Elasticsearch output: request bodies are gzip-compressed. The option is
# compression_level (0 disables, 1-9 sets the gzip level), not a boolean flag.
output.elasticsearch:
  hosts: ["http://localhost:9200"]
  compression_level: 1

# Logstash output: likewise controlled by compression_level (default 3).
output.logstash:
  hosts: ["localhost:5044"]
  compression_level: 3

# Kafka output: batches are compressed on the producer side with the gzip codec.
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "filebeat-logs"
  compression: gzip
  required_acks: 1
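Gzip usually gives the smallest payloads of the codecs the Kafka output accepts, at the highest CPU cost on the Filebeat host. If producer CPU is a bigger concern than message size, lz4 or snappy are lighter alternatives; a minimal sketch, reusing the hosts and topic from above (the codec choice is illustrative and also depends on broker support):

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "filebeat-logs"
  compression: lz4        # lower CPU cost than gzip, somewhat larger messages
  required_acks: 1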
2. Storage-layer compression
PUT _template/filebeat_best_compression
{
  "index_patterns": ["filebeat-*"],
  "settings": {
    "index.codec": "best_compression",
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}
Once the template is applied, newly created indices use the higher-ratio storage codec, which suits data kept for long retention periods. Note that it only takes effect for indices that are reindexed or newly created from the template.
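For data that already exists, one option is to create a new index with the codec set explicitly and reindex into it. A minimal sketch, with illustrative index names (reindexing large indices is best scheduled off-peak):

PUT filebeat-2024.01.01-compressed
{
  "settings": { "index.codec": "best_compression" }
}

POST _reindex
{
  "source": { "index": "filebeat-2024.01.01" },
  "dest": { "index": "filebeat-2024.01.01-compressed" }
}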
3. Compressing and archiving source log files
/var/log/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    create 0640 root root
}
This policy rotates logs daily, keeps seven days of history, and gzip-compresses the older files, which helps keep disk usage under control.
4. Caveats and selection advice
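One caveat that follows from the logrotate setup above: if a file is gzipped immediately after rotation, any lines Filebeat has not yet read in it are effectively lost to the pipeline. Adding delaycompress postpones compression by one rotation cycle, so the most recent rotated file stays readable; a minimal sketch extending the policy from section 3:

/var/log/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 0640 root root
}

If the Filebeat input paths are broad enough to match rotated files, also excluding names ending in .gz on the input side avoids re-ingesting the archives.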