Integrating Filebeat with ELK Stack on Ubuntu: A Step-by-Step Guide
Integrating Filebeat (a lightweight log shipper) with ELK (Elasticsearch, Logstash, Kibana) enables centralized log collection, processing, and visualization. Below is a structured guide to setting up this integration on Ubuntu.
Before starting, make sure your Ubuntu system has sudo privileges, enough memory for Elasticsearch (4 GB of RAM or more is a safe starting point), and internet access to reach the Elastic package repositories.
First, deploy the core ELK components (Elasticsearch, Logstash, Kibana) on your Ubuntu server.
Elasticsearch stores and indexes log data. Run these commands to install it:
# Import Elasticsearch’s GPG key and add its APT repository
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
# Update package lists and install Elasticsearch
sudo apt update && sudo apt install elasticsearch -y
# Configure Elasticsearch (security is enabled by default in 8.x; relax it only for local testing and keep it enabled in production)
sudo nano /etc/elasticsearch/elasticsearch.yml
# Set `network.host: localhost` (change to the server IP for remote access)
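For a quick local test, the relevant elasticsearch.yml settings might look like the sketch below (assumed values for a single-node lab setup; keep security enabled and use proper certificates in production):
network.host: localhost
http.port: 9200
xpack.security.enabled: false   # Local testing only; leave enabled in production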
# Start and enable Elasticsearch
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch
Verify Elasticsearch is running:
curl -X GET "localhost:9200" # Should return cluster name and version info
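The response should look roughly like this (abridged and illustrative; your values will differ):
{
  "name" : "ubuntu-server",
  "cluster_name" : "elasticsearch",
  "version" : { "number" : "8.x.x", ... },
  "tagline" : "You Know, for Search"
}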
Kibana visualizes log data. Install it with:
sudo apt install kibana -y
# Configure Kibana to allow remote access (edit /etc/kibana/kibana.yml)
sudo nano /etc/kibana/kibana.yml
# Set `server.host: "0.0.0.0"` to listen on all interfaces (restrict this in production and use firewall rules)
# Start and enable Kibana
sudo systemctl start kibana
sudo systemctl enable kibana
Access Kibana in a browser at http://<server-ip>:5601.
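Before opening the browser, you can confirm the service is up (if security is enabled, the status endpoint may require credentials):
sudo systemctl status kibana
curl -s localhost:5601/api/status | head -c 200   # Should return JSON status output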
Logstash processes logs before sending them to Elasticsearch. Install it if you need advanced parsing:
sudo apt install logstash -y
# Create a basic Logstash config (e.g., /etc/logstash/conf.d/filebeat.conf)
sudo nano /etc/logstash/conf.d/filebeat.conf
# Add this config (replace with your Elasticsearch details):
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }  # For debugging (remove in production)
}
Start and enable Logstash:
sudo systemctl start logstash
sudo systemctl enable logstash
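You can also check the pipeline configuration syntax before (re)starting Logstash using its built-in config test (this can take a minute because of JVM startup):
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit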
Filebeat collects logs from your system/applications and sends them to Logstash/Elasticsearch.
sudo apt install filebeat -y
Edit the main config file (/etc/filebeat/filebeat.yml) to define log sources and output:
Uncomment and modify the filebeat.inputs section to monitor logs (e.g., system logs, Nginx logs):
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/syslog          # System logs
      - /var/log/auth.log        # Authentication logs
      - /var/log/nginx/*.log     # Nginx logs (adjust path as needed)
    tags: ["ubuntu", "system"]   # Optional: add tags for filtering
    fields:
      log_source: "ubuntu-server"  # Custom field for context
    fields_under_root: true        # Include custom fields at the root of the event
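Note that recent Filebeat releases deprecate the log input type in favor of filestream; an equivalent input would look roughly like this (the id value below is a label you choose yourself):
filebeat.inputs:
  - type: filestream
    id: ubuntu-system-logs       # Arbitrary, but should be unique per filestream input
    enabled: true
    paths:
      - /var/log/syslog
      - /var/log/auth.log
      - /var/log/nginx/*.log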
Send logs either to Logstash (recommended when you need processing) or directly to Elasticsearch (simpler but less flexible); Filebeat allows only one output to be enabled at a time:
Option 1: Output to Logstash (Recommended)
output.logstash:
  hosts: ["localhost:5044"]  # Must match Logstash's beats input port
Option 2: Output to Elasticsearch (Direct)
output.elasticsearch:
  hosts: ["localhost:9200"]
  index: "filebeat-%{+YYYY.MM.dd}"  # Dynamic daily index name
Filebeat modules simplify log collection for common applications (e.g., Nginx, MySQL). Enable the Nginx module as an example:
sudo filebeat modules enable nginx
This activates the pre-configured module file at /etc/filebeat/modules.d/nginx.yml (it is renamed from nginx.yml.disabled).
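To confirm which modules are enabled and review the module's settings:
sudo filebeat modules list
sudo cat /etc/filebeat/modules.d/nginx.yml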
Filebeat includes an index template to optimize Elasticsearch performance. Load it with:
sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'
--index-management: Applies the default index template.
-E output.logstash.enabled=false: Temporarily disables the Logstash output during setup so the template is loaded into Elasticsearch directly.
Start and enable Filebeat:
sudo systemctl start filebeat
sudo systemctl enable filebeat
Check Filebeat status:
sudo systemctl status filebeat
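Filebeat also ships with built-in self-tests for its configuration and output connectivity:
sudo filebeat test config
sudo filebeat test output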
Ensure logs are flowing from Filebeat to Elasticsearch/Kibana.
Query Elasticsearch for Filebeat-indexed logs:
curl -XGET 'localhost:9200/filebeat-*/_search?pretty' # Replace with your index name
Look for documents with your configured tags (e.g., "tags": ["ubuntu", "system"]).
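To narrow the search to events carrying one of your tags, you can send a small query body (standard Elasticsearch query DSL; adjust the tag value to whatever you configured):
curl -XGET 'localhost:9200/filebeat-*/_search?pretty' -H 'Content-Type: application/json' -d '{"query": {"match": {"tags": "ubuntu"}}}'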
Open Kibana in your browser (http://<server-ip>:5601), go to Stack Management, create a data view (called an index pattern in older versions) matching filebeat-*, and select @timestamp as the time field; then use Discover to browse the incoming logs. If logs don't appear, check Filebeat logs for errors:
sudo journalctl -u filebeat -f # Follow real-time logs
For complex log formats (e.g., JSON, multi-line), use Logstash’s grok filter. Edit /etc/logstash/conf.d/filebeat.conf:
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:log_message}" }
  }
  date {
    match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601" ]  # Cover single- and double-digit days
  }
}
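For example, a syslog line like the following (sample line invented for illustration):
Mar 10 14:32:01 ubuntu-server sshd[1234]: Accepted publickey for deploy from 203.0.113.5
would be split into timestamp ("Mar 10 14:32:01"), hostname ("ubuntu-server"), program ("sshd"), pid ("1234"), and log_message ("Accepted publickey for deploy from 203.0.113.5").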
Enable TLS for Filebeat-Logstash and Elasticsearch-Kibana communication. Refer to the Elastic Security Guide.
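As a rough sketch, securing the Filebeat-to-Logstash connection involves pointing both sides at your certificates. The host name and file paths below are placeholders, not defaults, and assume you have already generated a CA and per-host certificates:
# In /etc/filebeat/filebeat.yml
output.logstash:
  hosts: ["your-logstash-host:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/filebeat.crt"
  ssl.key: "/etc/filebeat/certs/filebeat.key"

# In the Logstash beats input (/etc/logstash/conf.d/filebeat.conf)
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/logstash/certs/logstash.crt"
    ssl_key => "/etc/logstash/certs/logstash.key"   # The beats input expects the key in PKCS#8 format
  }
}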
By following these steps, you’ll have a fully functional ELK integration with Filebeat on Ubuntu, enabling scalable log collection and analysis. Adjust configurations based on your log volume, application requirements, and security policies.