Feasible Approaches and Steps for Configuring Filebeat Alert Notifications
Core Idea

Filebeat itself only collects and ships logs; it has no built-in alerting. Alerts are therefore produced downstream: either on the log data Filebeat delivers (Elasticsearch Watcher / Kibana Alerting, ElastAlert) or on the health metrics of the Filebeat process itself (Prometheus + Alertmanager).
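Options 1 and 2 below assume Filebeat is already shipping logs into Elasticsearch; a minimal sketch of that baseline (the input path and Elasticsearch address are placeholders):

# filebeat.yml – baseline log shipping into Elasticsearch
filebeat.inputs:
  - type: filestream
    id: app-logs
    paths:
      - /var/log/app/*.log

output.elasticsearch:
  hosts: ["http://localhost:9200"]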
Option 1: Kibana Alerting or Elasticsearch Watcher

Alert rules can be created directly in the Kibana UI (Stack Management → Rules), or, with an appropriate license, through Elasticsearch Watcher. The watch below queries the filebeat-* indices every minute and emails an administrator when more than five error messages were logged in the last minute:
PUT _watcher/watch/error_alert
{
  "trigger": { "schedule": { "interval": "1m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["filebeat-*"],
        "body": {
          "query": {
            "bool": {
              "must": [{ "match": { "message": "error" } }],
              "filter": [{ "range": { "@timestamp": { "gte": "now-1m", "lte": "now" } } }]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 5 } }
  },
  "actions": {
    "email_admin": {
      "email": {
        "to": "admin@example.com",
        "subject": "Elasticsearch Alert: High Error Logs",
        "body": "Errors detected:\n{{#ctx.payload.hits.hits}}{{_source.message}}\n{{/ctx.payload.hits.hits}}"
      }
    }
  }
}
# elasticsearch.yml – SMTP account used by the watch's email action
xpack.notification.email.account:
  default:
    email_defaults:
      from: "alert@example.com"
    smtp:
      auth: true
      starttls.enable: true
      host: "smtp.example.com"
      port: 587
      user: "your_email@example.com"
      # Recent versions store the password in the Elasticsearch keystore instead:
      #   bin/elasticsearch-keystore add xpack.notification.email.account.default.smtp.secure_password
      password: "your_email_password"
Once these steps are complete, an alert is sent through the configured channel whenever the condition is met.

Option 2: ElastAlert or ElastAlert 2

ElastAlert runs as a separate service that periodically queries Elasticsearch and sends a notification when a rule matches. The global configuration (config.yaml) looks like this:
# /opt/elastalert/config.yaml – global ElastAlert configuration
rules_folder: /opt/elastalert/rules
run_every:
  minutes: 1
buffer_time:
  minutes: 15
es_host: localhost
es_port: 9200
writeback_index: elastalert_status
alert_time_limit:
  days: 1
Each alert is defined in its own rule file inside rules_folder rather than under a rules key in config.yaml, for example:

# /opt/elastalert/rules/high_error_rate.yaml
name: HighErrorRate
type: frequency                  # the rule type keyword is "type"
index: filebeat-*
num_events: 10                   # fire when at least 10 matching events...
timeframe:
  minutes: 5                     # ...arrive within a 5-minute window
filter:
  - term:
      log.level: "ERROR"
alert:
  - email
email:
  - "your_email@example.com"     # recipients are a plain list of addresses
smtp_host: "smtp.example.com"
smtp_port: 587
smtp_ssl: false
from_addr: "elastalert@example.com"
smtp_auth_file: /opt/elastalert/smtp_auth.yaml   # SMTP credentials live in a separate file
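The credentials file referenced by smtp_auth_file is a small YAML file of its own (the path used here is an assumption about your layout):

# /opt/elastalert/smtp_auth.yaml
user: "your_email@example.com"
password: "your_email_password"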
Option 3: Monitoring Filebeat Metrics with Prometheus + Alertmanager

This option alerts on the health of the Filebeat process itself rather than on log contents. A Prometheus alerting rule might look like this:
# filebeat_rules.yml – Prometheus alerting rules
groups:
  - name: filebeat_rules
    rules:
      - alert: FilebeatDown
        expr: up{job="filebeat_metrics"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Filebeat is down"
          description: "Filebeat has been down for more than 1 minute."
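The expression up{job="filebeat_metrics"} assumes a Prometheus scrape job with that name. Filebeat does not expose Prometheus metrics natively, so a common setup enables Filebeat's HTTP stats endpoint and scrapes it through a sidecar exporter; the exporter address and port below are assumptions to adapt to your environment. A minimal sketch of the wiring:

# filebeat.yml – expose the Beats HTTP stats endpoint (read by an exporter)
http.enabled: true
http.host: "0.0.0.0"
http.port: 5066

# prometheus.yml – the job name must match the alert expression above
scrape_configs:
  - job_name: "filebeat_metrics"
    static_configs:
      - targets: ["filebeat-host:9479"]   # exporter address (assumption)

rule_files:
  - "filebeat_rules.yml"

alerting:
  alertmanagers:
    - static_configs:
        - targets: ["alertmanager:9093"]

Alertmanager then routes the firing alert to a notification channel, for example email (receiver name and SMTP details are placeholders):

# alertmanager.yml
route:
  receiver: "mail"
receivers:
  - name: "mail"
    email_configs:
      - to: "admin@example.com"
        from: "alert@example.com"
        smarthost: "smtp.example.com:587"
        auth_username: "your_email@example.com"
        auth_password: "your_email_password"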
Verification and Troubleshooting
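A few checks that help when an alert does not fire (a sketch; file paths and names follow the examples above):

# Filebeat: validate the configuration and connectivity to its output
filebeat test config
filebeat test output

# ElastAlert: dry-run a rule against live data
elastalert-test-rule /opt/elastalert/rules/high_error_rate.yaml

# Prometheus: validate the alerting rules file
promtool check rules filebeat_rules.yml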