
How to Integrate Filebeat with the ELK Stack on Ubuntu

小樊
2025-11-09 09:25:27
Category: Intelligent Operations

Integrating Filebeat with ELK Stack on Ubuntu: A Step-by-Step Guide
Integrating Filebeat (a lightweight log shipper) with ELK (Elasticsearch, Logstash, Kibana) enables centralized log collection, processing, and visualization. Below is a structured guide to setting up this integration on Ubuntu.


1. Prerequisites

Before starting, ensure your Ubuntu system meets these requirements:

  • Ubuntu 20.04/22.04 (or later) with root/sudo access.
  • Internet connectivity to download packages.
  • At least 2 CPU cores, 4GB RAM (for production, scale up based on log volume).
  • Basic familiarity with Linux command-line operations.
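A quick way to confirm the CPU and memory requirements from the shell (a suggested sanity check, not an official Elastic script):

```shell
# Report cores and RAM, and warn if below the suggested minimums
cores=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "CPU cores: $cores, RAM: $((mem_kb / 1024)) MB"
[ "$cores" -ge 2 ] || echo "warning: fewer than 2 CPU cores"
[ "$mem_kb" -ge $((4 * 1024 * 1024)) ] || echo "warning: less than 4GB RAM"
```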

2. Install ELK Stack Components

First, deploy the core ELK components (Elasticsearch, Logstash, Kibana) on your Ubuntu server.

2.1 Install Elasticsearch

Elasticsearch stores and indexes log data. Run these commands to install it:

# Import Elasticsearch’s GPG key and add its APT repository
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list

# Update package lists and install Elasticsearch
sudo apt update && sudo apt install elasticsearch -y

# Configure Elasticsearch
sudo nano /etc/elasticsearch/elasticsearch.yml
# Set `network.host: localhost` (change to the server IP for remote access)
# For local testing only, also set `xpack.security.enabled: false`;
# keep security enabled in production (it is on by default in 8.x)

# Start and enable Elasticsearch
sudo systemctl start elasticsearch
sudo systemctl enable elasticsearch

Verify Elasticsearch is running:

curl -X GET "localhost:9200"  # Should return cluster name and version info

2.2 Install Kibana

Kibana visualizes log data. Install it with:

sudo apt install kibana -y

# Configure Kibana to allow remote access (edit /etc/kibana/kibana.yml)
sudo nano /etc/kibana/kibana.yml
# Set `server.host: "0.0.0.0"` (restrict access in production, e.g. with firewall rules)

# Start and enable Kibana
sudo systemctl start kibana
sudo systemctl enable kibana

Access Kibana in a browser at http://<server-ip>:5601.
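From the server itself you can also confirm Kibana is up via its status API (a suggested check; Kibana may take a minute to finish starting):

```shell
# Probe Kibana's status endpoint; "reachable" once the service is up
if curl -s --max-time 5 http://localhost:5601/api/status >/dev/null; then
  kibana=reachable
else
  kibana=unreachable
fi
echo "Kibana: $kibana"
```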

2.3 (Optional) Install Logstash

Logstash processes logs before sending them to Elasticsearch. Install it if you need advanced parsing:

sudo apt install logstash -y

# Create a basic Logstash config (e.g., /etc/logstash/conf.d/filebeat.conf)
sudo nano /etc/logstash/conf.d/filebeat.conf
# Add this config (replace with your Elasticsearch details):
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "filebeat-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }  # For debugging (remove in production)
}

Start and enable Logstash:

sudo systemctl start logstash
sudo systemctl enable logstash
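After any config change, you can validate the pipeline syntax with Logstash's built-in `--config.test_and_exit` flag before restarting the service (the binary path assumes the APT package layout):

```shell
# Parse the pipeline config and exit without processing any events
LS=/usr/share/logstash/bin/logstash
if [ -x "$LS" ]; then
  sudo "$LS" --path.settings /etc/logstash \
    --config.test_and_exit -f /etc/logstash/conf.d/filebeat.conf
else
  echo "logstash binary not found at $LS"
fi
```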

3. Install and Configure Filebeat

Filebeat collects logs from your system/applications and sends them to Logstash/Elasticsearch.

3.1 Install Filebeat

sudo apt install filebeat -y

3.2 Configure Filebeat

Edit the main config file (/etc/filebeat/filebeat.yml) to define log sources and output:

3.2.1 Enable Log Inputs

Uncomment and modify the filebeat.inputs section to monitor logs (e.g., system logs, Nginx logs):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog  # System logs
    - /var/log/auth.log  # Authentication logs
    - /var/log/nginx/*.log  # Nginx logs (adjust path as needed)
  tags: ["ubuntu", "system"]  # Optional: Add tags for filtering
  fields:
    log_source: "ubuntu-server"  # Custom field for context
  fields_under_root: true  # Include fields in the root of the event
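Note that the `log` input type shown above still works but has been deprecated since Filebeat 7.16 in favor of `filestream`. An equivalent `filestream` input looks like this (the `id` must be unique per input):

```yaml
filebeat.inputs:
- type: filestream
  id: ubuntu-system-logs  # each filestream input needs a unique id
  enabled: true
  paths:
    - /var/log/syslog
    - /var/log/auth.log
    - /var/log/nginx/*.log
```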

3.2.2 Configure Output

Send logs either to Logstash (recommended when you need processing) or directly to Elasticsearch (simpler but less flexible). Filebeat allows only one output to be enabled at a time, so configure exactly one of the following:

  • Option 1: Output to Logstash (Recommended)

    output.logstash:
      hosts: ["localhost:5044"]  # Match Logstash’s input port
    
  • Option 2: Output to Elasticsearch (Direct)

    output.elasticsearch:
      hosts: ["localhost:9200"]
      index: "filebeat-%{+YYYY.MM.dd}"  # Dynamic index name (daily)
    
3.2.3 Load Filebeat Modules (Optional)

Filebeat modules simplify log collection for common applications (e.g., Nginx, MySQL). Enable the Nginx module as an example:

sudo filebeat modules enable nginx

This creates a pre-configured config file at /etc/filebeat/modules.d/nginx.yml.
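The generated file enables collection of Nginx access and error logs; a typical `nginx.yml` looks roughly like this (uncomment `var.paths` only if your logs are in a non-default location):

```yaml
- module: nginx
  access:
    enabled: true
    # var.paths: ["/var/log/nginx/access.log*"]
  error:
    enabled: true
    # var.paths: ["/var/log/nginx/error.log*"]
```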

3.3 Load Index Template

Filebeat includes an index template to optimize Elasticsearch performance. Load it with:

sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

  • --index-management: Applies the default index template.
  • -E output.logstash.enabled=false: Temporarily disables the Logstash output during setup.
  • -E 'output.elasticsearch.hosts=…': Points setup directly at Elasticsearch, which is where the template must be installed.

3.4 Start Filebeat

sudo systemctl start filebeat
sudo systemctl enable filebeat

Check Filebeat status:

sudo systemctl status filebeat
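Filebeat also ships two built-in self-checks, `test config` and `test output`, which are useful before startup and when troubleshooting connectivity:

```shell
# Run Filebeat's built-in self-checks if the binary is available
if command -v filebeat >/dev/null 2>&1; then
  sudo filebeat test config   # validates filebeat.yml syntax
  sudo filebeat test output   # checks connectivity to Logstash/Elasticsearch
  fb=installed
else
  fb=missing
fi
echo "filebeat: $fb"
```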

4. Verify Integration

Ensure logs are flowing from Filebeat to Elasticsearch/Kibana.

4.1 Check Elasticsearch Logs

Query Elasticsearch for Filebeat-indexed logs:

curl -XGET 'localhost:9200/filebeat-*/_search?pretty'  # Replace with your index name

Look for documents with your configured tags (e.g., "tags": ["ubuntu", "system"]).

4.2 View Logs in Kibana

  1. Open Kibana (http://<server-ip>:5601).
  2. Go to Stack Management > Data Views (called Index Patterns in older Kibana versions).
  3. Click Create data view, enter filebeat-* as the index pattern, and select @timestamp as the time field.
  4. Go to Discover to view and analyze logs.

If logs don’t appear, check Filebeat logs for errors:

sudo journalctl -u filebeat -f  # Follow real-time logs

5. Optional: Advanced Configurations

5.1 Parse Logs with Logstash

For complex log formats (e.g., JSON, multi-line), use Logstash’s grok filter. Edit /etc/logstash/conf.d/filebeat.conf:

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:log_message}" }
  }
  date {
    match => [ "timestamp", "MMM dd HH:mm:ss", "ISO8601" ]
  }
}
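To see which fields this pattern extracts, here is a rough shell equivalent applied to a sample syslog line (illustration only; real grok patterns such as `SYSLOGTIMESTAMP` are more permissive than this simplified regex):

```shell
# Extract the same fields the grok pattern names, from a sample line
line='Nov  9 09:25:27 web01 sshd[1234]: Accepted publickey for ubuntu'
echo "$line" | sed -E \
  's/^([A-Z][a-z]{2} +[0-9]{1,2} [0-9:]{8}) ([^ ]+) ([^:[]+)(\[([0-9]+)\])?: (.*)$/timestamp=\1 hostname=\2 program=\3 pid=\5 message=\6/'
```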

5.2 Secure Communication

Enable TLS for Filebeat-Logstash and Elasticsearch-Kibana communication. Refer to the Elastic Security Guide.
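As a sketch of the Filebeat side (hypothetical hostname and certificate paths; generate your own CA and certificates first), the Logstash output accepts the standard `ssl.*` settings in filebeat.yml:

```yaml
output.logstash:
  hosts: ["logstash.example.com:5044"]  # hypothetical host
  ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
  ssl.certificate: "/etc/filebeat/certs/filebeat.crt"
  ssl.key: "/etc/filebeat/certs/filebeat.key"
```

The Logstash beats input must be configured with matching TLS settings for the handshake to succeed.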

5.3 Scale Horizontally

  • Deploy multiple Elasticsearch nodes in a cluster.
  • Run Filebeat on every server you collect from, and add Logstash instances behind a load balancer as log volume grows.

By following these steps, you’ll have a fully functional ELK integration with Filebeat on Ubuntu, enabling scalable log collection and analysis. Adjust configurations based on your log volume, application requirements, and security policies.
