Configuring Logstash on Droplets to Forward Nginx Logs to Managed OpenSearch

Oct 10, 2024

Introduction

Keeping track of web server logs is essential for running your website smoothly, troubleshooting problems, and understanding user behavior. If you’re using Nginx, it produces access and error logs full of valuable information. To manage and analyze these logs, you can use Logstash to process and forward them and DigitalOcean’s Managed OpenSearch to scale and visualize the data.

In this tutorial, we will walk you through installing Logstash on a Droplet, setting it up to collect your Nginx logs, and sending them to DigitalOcean Managed OpenSearch.

Prerequisites

  • Nginx should be set up, and logs should be generated on your Droplet. To install Nginx on a Droplet, refer to this tutorial on How to Install Nginx on Ubuntu.

  • An OpenSearch cluster should be running, and you should have access to it. Visit How to Create OpenSearch Clusters for more details.

  • Familiarity with Nginx, Logstash, and OpenSearch is beneficial.

Use Case

You might need this setup if you want to:

  • Monitor and Troubleshoot: Track web server performance and errors by analyzing real-time logs.
  • Analyze Performance: Gain insights into web traffic patterns and server metrics.
  • Centralize Logging: Aggregate logs from multiple Nginx servers into a single OpenSearch instance for easier management.

Note: The setup should take about 30 minutes.

Step 1 - Installing Logstash on Droplets

Logstash can be installed using binary files available here or package repositories tailored for your operating system. For easier management and updates, using package repositories is generally recommended. You can use the APT package manager on Debian-based systems such as Ubuntu, while on Red Hat-based systems such as CentOS or RHEL, you can use yum. Both methods ensure Logstash is properly integrated into your system’s package management infrastructure, simplifying installation and maintenance.

In this section, we will walk you through the installation of Logstash using both the apt and yum package managers, ensuring that you can configure Logstash on your Droplet regardless of your Linux distribution.

To identify your operating system, run the following command:

cat /etc/os-release
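On an Ubuntu Droplet, for example, the output includes lines similar to the following (exact values vary by distribution and release):

NAME="Ubuntu"
VERSION_ID="22.04"
ID=ubuntu
ID_LIKE=debian

If ID or ID_LIKE is debian, follow the APT instructions below; for rhel, centos, or fedora, follow the YUM instructions.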

For APT-Based Systems (Ubuntu/Debian)

1. Download and install the Public Signing Key:

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic-keyring.gpg

2. Install apt-transport-https if it is not already installed:

sudo apt-get install apt-transport-https

3. Add and save the Logstash repository definition to your apt sources list:

echo "deb [signed-by=/usr/share/keyrings/elastic-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt unchangeable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list Note: Ensure that you do not usage add-apt-repository arsenic it whitethorn adhd a deb-src introduction that is not supported. If you brushwood an correction related to a deb-src entry, delete it from the /etc/apt/sources.list file. If you person added the deb-src entry, you will spot an correction for illustration the following: Unable to find expected introduction 'main/source/Sources' successful Release record (Wrong sources.list introduction aliases malformed file) Just delete the deb-src introduction from the /etc/apt/sources.list record and the installation should activity arsenic expected.

4. Update the package index to include the new repository:

sudo apt-get update

5. Install Logstash using the apt package manager:

sudo apt-get install logstash

6. Start Logstash and enable it to start automatically on boot:

sudo systemctl start logstash
sudo systemctl enable logstash
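You can optionally confirm that the service started correctly:

sudo systemctl status logstash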

Logstash is now installed and running on your system.

For YUM-Based Systems (CentOS/RHEL)

1. Download and install the Public Signing Key for the Logstash repository:

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

2. Create a repository file for Logstash in /etc/yum.repos.d/. For example, create a file named logstash.repo. You can copy and paste the contents below to create and populate the file:

sudo tee /etc/yum.repos.d/logstash.repo > /dev/null <<EOF
[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

The repository is ready to use.

3. Install Logstash using the YUM package manager:

sudo yum install logstash

4. Start Logstash and enable it to start automatically on boot:

sudo systemctl start logstash
sudo systemctl enable logstash

Logstash is now installed and running on your system.

Step 2 - Installing the OpenSearch Output Plugin

You can install the OpenSearch output plugin by running the following command:

/usr/share/logstash/bin/logstash-plugin install logstash-output-opensearch
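To confirm the plugin was installed, you can list Logstash’s installed plugins and filter for it:

/usr/share/logstash/bin/logstash-plugin list | grep opensearch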

You can find more information about the plugin in the logstash-output-opensearch plugin repository.

Step 3 - Configuring Logstash to Send Nginx Logs to OpenSearch

A Logstash pipeline consists of three main stages: input, filter, and output. Logstash pipelines make use of plugins. You can use community plugins or create your own.

  • Input: This stage collects data from various sources. Logstash supports many input plugins to handle data sources like log files, databases, message queues, and cloud services.
  • Filter: This stage processes and transforms the data collected in the input stage. Filters can modify, enrich, and structure the data to make it more useful and easier to analyze.
  • Output: This stage sends the processed data to a destination. Destinations can include databases, files, and data stores like OpenSearch. A minimal sketch of this structure follows the list below.
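As a minimal sketch (not part of this tutorial’s setup), here is a pipeline that reads lines from stdin, applies no filtering, and prints each event to stdout:

input {
  stdin { }
}
filter {
  # transformations would go here
}
output {
  stdout { codec => rubydebug }
}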

Now let’s create a pipeline.

1. Create a Logstash configuration file at /etc/logstash/conf.d/nginx-to-opensearch.conf with the following contents:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    tags => ["nginx_access"]
  }
  file {
    path => "/var/log/nginx/error.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    tags => ["nginx_error"]
  }
}

filter {
  if "nginx_access" in [tags] {
    grok {
      match => { "message" => "%{IPORHOST:client_ip} - %{USER:ident} \[%{HTTPDATE:timestamp}\] \"%{WORD:method} %{URIPATHPARAM:request} HTTP/%{NUMBER:http_version}\" %{NUMBER:response} %{NUMBER:bytes} \"%{DATA:referrer}\" \"%{DATA:user_agent}\"" }
    }
    mutate {
      remove_field => ["message", "[log][file][path]", "[event][original]"]
    }
  } else if "nginx_error" in [tags] {
    grok {
      match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{LOGLEVEL:level}\] \[%{DATA:pid}\] \[%{DATA:tid}\] %{GREEDYDATA:error_message}" }
    }
    mutate {
      remove_field => ["message", "[log][file][path]", "[event][original]"]
    }
  }
}

output {
  if "nginx_access" in [tags] {
    opensearch {
      hosts => ["https://<OpenSearch-Hostname>:25060"]
      user => "doadmin"
      password => "<your_password>"
      index => "nginx_access-%{+YYYY.MM.dd}"
      ssl => true
      ssl_certificate_verification => true
    }
  } else if "nginx_error" in [tags] {
    opensearch {
      hosts => ["https://<OpenSearch-Hostname>:25060"]
      user => "doadmin"
      password => "<your_password>"
      index => "nginx_error-%{+YYYY.MM.dd}"
      ssl => true
      ssl_certificate_verification => true
    }
  }
}

Replace:

  • <OpenSearch-Hostname> with your OpenSearch server’s hostname.
  • <your_password> with your OpenSearch password.

2. Apply the new configuration by restarting Logstash:

sudo systemctl restart logstash

3. Check the Logstash logs to ensure it is processing and forwarding data correctly:

sudo tail -f /var/log/logstash/logstash-plain.log

Breakdown of the nginx-to-opensearch.conf configuration

INPUT

The input block configures two file inputs to read logs:

Nginx Logs:

  • Paths: /var/log/nginx/access.log (for access logs) and /var/log/nginx/error.log (for error logs).
  • Start Position: beginning – reads from the start of the log files.
  • Sincedb Path: /dev/null – disables position tracking so the files are read continuously.
  • Tags: ["nginx_access"] for access logs and ["nginx_error"] for error logs.

Note: Ensure the Logstash service has read access to the input paths.
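A quick way to check this (assuming the package created the default logstash service user) is to try reading a log file as that user:

sudo -u logstash head -n 1 /var/log/nginx/access.log

If this prints a log line, Logstash can read the file; a permission error means you need to adjust the file or directory permissions.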

FILTER

The filter block processes logs based on their tags:

Access Logs: Uses a grok filter to parse the access log format, extracting fields like client_ip, timestamp, method, request, http_version, response, bytes, referrer, and user_agent. Removes the original message and certain metadata fields.
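For example, a combined-format access log line like this hypothetical entry:

192.0.2.1 - alice [10/Oct/2024:08:30:00 +0000] "GET /index.html HTTP/1.1" 200 1024 "-" "Mozilla/5.0"

would be parsed into fields such as client_ip => 192.0.2.1, method => GET, response => 200, bytes => 1024, and user_agent => Mozilla/5.0.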

Error Logs: Checks for the nginx_error tag and applies a grok filter to extract fields such as timestamp, level, pid, tid, and error_message. Also removes the message and metadata fields.

OUTPUT

The output block routes events to OpenSearch based on their tags:

For both access and error logs, it specifies:

  • Hosts: URL of the OpenSearch instance.
  • User: doadmin for authentication.
  • Password: Your OpenSearch password.
  • Index: nginx_access-%{+YYYY.MM.dd} for access logs and nginx_error-%{+YYYY.MM.dd} for error logs.
  • SSL Settings: Enables SSL and certificate verification.

Step 4 - Configure OpenSearch

1. Open your web browser and go to the OpenSearch Dashboard URL:

https://<OpenSearch-Hostname>

Replace <OpenSearch-Hostname> with your OpenSearch server’s hostname.

2. Create an Index Pattern.

  a. On the left sidebar, navigate to Management > Dashboard Management > Index Patterns.
  b. Click on Create index pattern on the top right.
  c. Enter nginx_access-* or nginx_error-* as the index pattern to match all indices created by Logstash and click on Next step.
  d. Click Create index pattern.

3. Ensure the index pattern is successfully created and visible in the Index Patterns list.

4. On the left sidebar, go to Discover and select the index pattern you created (nginx_access-* or nginx_error-*). Verify that log entries are visible and correctly indexed.

5. Create Visualizations and Dashboards. Visit How to Create a Dashboard in OpenSearch for more details.

Troubleshooting

Check Connectivity

You can verify that Logstash can connect to OpenSearch by testing connectivity:

curl -u doadmin:<your_password> -X GET "https://<OpenSearch-Hostname>:25060/_cat/indices?v"

Replace:

  • <OpenSearch-Hostname> with your OpenSearch server’s hostname.
  • <your_password> with your OpenSearch password.

Data Ingestion

You can ensure that the data is properly indexed in OpenSearch using the following curl command:

curl -u doadmin:<your_password> -X GET "https://<OpenSearch-Hostname>:25060/nginx_access-*/_search?pretty"

(Use nginx_error-* to query the error log index.)

Replace:

  • <OpenSearch-Hostname> with your OpenSearch server’s hostname.
  • <your_password> with your OpenSearch password.

Firewall and Network Configuration

Ensure firewall rules and network settings allow traffic between Logstash and OpenSearch on port 25060.
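You can quickly test whether the port is reachable from the Droplet (assuming netcat is installed):

nc -zv <OpenSearch-Hostname> 25060

A "succeeded" or "open" message indicates the connection works; a timeout usually points to a firewall rule or a trusted-sources restriction on the cluster.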

Conclusion

In this guide, you learned how to set up Logstash to collect and forward Nginx logs to OpenSearch.

You reviewed how to use the apt or yum package manager, depending on your Linux distribution, to get Logstash up and running on your Droplet. You also created and adjusted the Logstash configuration file to make sure Nginx logs are correctly parsed and sent to OpenSearch. Then you set up an index pattern in OpenSearch Dashboards to check that the logs are being indexed properly and are visible for analysis.

With these steps completed, you should now have a working setup where Logstash collects Nginx logs and sends them to OpenSearch. This lets you use OpenSearch’s powerful search and visualization tools to analyze your server logs.

If you run into any issues, check out the troubleshooting tips above and refer to the Logstash and OpenSearch documentation for more help. Regular monitoring will keep your logging system running smoothly and effectively.
