Collect and Forward DigitalOcean Kubernetes (DOKS) Logs to DigitalOcean Managed OpenSearch

Oct 29, 2024

Introduction

This tutorial demonstrates how to collect and forward logs from a DigitalOcean Kubernetes (DOKS) cluster to a DigitalOcean Managed OpenSearch instance using AxoSyslog, a scalable security data processor. By following this guide, you’ll learn how to set up a robust logging pipeline that captures and analyzes logs from your Kubernetes applications, making it easier to monitor, troubleshoot, and secure your infrastructure.

In this tutorial, you will use AxoSyslog to forward logs from a Kubernetes cluster to OpenSearch.

Prerequisites

Before getting started, ensure that you have the following prerequisites in place:

  1. You’ll need access to a DigitalOcean Cloud Account to create and manage your Kubernetes and OpenSearch resources.
  2. The DigitalOcean Command Line Interface (CLI) tool, doctl, should be installed and configured on your local machine (an optional verification is shown just after this list).
  3. A running DigitalOcean Kubernetes (DOKS) cluster.
  4. The Kubernetes package manager, Helm, should be installed to manage Kubernetes applications.
  5. Familiarity with Kubernetes, Helm, and DigitalOcean’s managed services.
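
Before continuing, you can quickly confirm that doctl is authenticated and that your kubectl context points at your DOKS cluster. These are standard doctl and kubectl commands, included here only as an optional sanity check:

doctl account get
kubectl get nodes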

Use Case

This setup is ideal for scenarios where you need a centralized logging solution to monitor and analyze logs from various applications running in a Kubernetes cluster. Whether you are managing a small set of applications or a large-scale infrastructure, collecting and forwarding logs to a dedicated OpenSearch cluster helps in:

  • Security Monitoring: Detect and respond to security incidents by analyzing logs in real time.
  • Troubleshooting: Quickly identify and resolve issues within your Kubernetes applications by accessing detailed logs.
  • Compliance: Maintain a log of events for compliance with industry regulations.

By integrating AxoSyslog with DigitalOcean Managed OpenSearch, you can efficiently process and store large volumes of logs, making it easier to extract valuable insights and maintain your systems’ health and security.

Step 1 - Create an OpenSearch cluster

In this step, you’ll set up the core component of your logging system, the OpenSearch cluster. OpenSearch will be the destination for all the logs you collect from your Kubernetes cluster. You’ll create a new OpenSearch instance in your chosen region on DigitalOcean by running the following command.

doctl databases create opensearch-doks --engine opensearch --region lon1 --size db-s-1vcpu-2gb --num-nodes 1

Replace lon1 with your desired region. To list available size slugs, see our API reference documentation.
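
Provisioning the cluster takes a few minutes. As an optional check (not part of the original command set), you can watch the Status column in doctl until the cluster reports online:

doctl databases list --format Name,Status --no-header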

Step 2 - Generate some random logs

Before forwarding logs to OpenSearch, you need some logs to work with. If you don’t have an application already generating logs within your Kubernetes cluster, this step will show you how to deploy a log generator. The log generator will produce a steady stream of sample logs that can be used to test and demonstrate your logging pipeline.

First, add the log generator Helm chart repository and update your local repository index:

helm repo add kube-logging https://kube-logging.github.io/helm-charts
helm repo update

Then, install the log generator using Helm:

helm install --generate-name --wait kube-logging/log-generator

You can verify that the log generator is running by viewing the logs it produces:

kubectl logs -l app.kubernetes.io/name=log-generator
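
If that command returns nothing, the pod may still be starting. As an optional check, you can first confirm the log generator pod is running using the same label selector:

kubectl get pods -l app.kubernetes.io/name=log-generator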

Step 3 - Prepare AxoSyslog Collector for Installation

In this step, you’ll configure the AxoSyslog Collector, which is responsible for collecting logs from your Kubernetes cluster and forwarding them to OpenSearch. This involves providing the correct connection details for your OpenSearch cluster (hostname, user, and password).

We’ll use Helm to install the AxoSyslog Collector and pass custom values.

To configure the AxoSyslog Collector with the correct address, user, and password for your OpenSearch database, follow these steps:

Automated Script

To simplify the configuration, you can use an automated script that fetches the necessary OpenSearch connection details and updates your AxoSyslog configuration file.

Save the following script as update_axoflow_demo.sh:

update_axoflow_demo.sh

#!/bin/bash

# Look up the ID of the opensearch-doks database cluster
DB_ID=$(doctl databases list --format Name,ID --no-header | grep opensearch-doks | awk '{print $2}')

# Fetch the OpenSearch connection details
OPENSEARCHHOSTNAME=$(doctl databases connection $DB_ID --no-header --format Host)
OPENSEARCHUSERNAME=$(doctl databases connection $DB_ID --no-header --format User)
OPENSEARCHPASSWORD=$(doctl databases connection $DB_ID --no-header --format Password)

# Write the values into the AxoSyslog configuration file
yq eval ".config.destinations.opensearch[0].address = \"$OPENSEARCHHOSTNAME\"" -i axoflow-demo.yaml
yq eval ".config.destinations.opensearch[0].user = \"$OPENSEARCHUSERNAME\"" -i axoflow-demo.yaml
yq eval ".config.destinations.opensearch[0].password = \"$OPENSEARCHPASSWORD\"" -i axoflow-demo.yaml

echo "axoflow-demo.yaml has been updated."

Ensure you have execute permission on the script before running it:

chmod +x update_axoflow_demo.sh && ./update_axoflow_demo.sh

This script will fetch the necessary information from your DigitalOcean account using doctl and update your axoflow-demo.yaml file accordingly.
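
The script relies on yq to edit the YAML file in place, so make sure both doctl and yq are available before running it. A minimal check, not part of the original tutorial:

command -v doctl || echo "doctl is not installed"
command -v yq || echo "yq is not installed"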

Manual Steps to Update axoflow-demo.yaml

If you prefer to configure your AxoSyslog Collector manually, follow these steps:

Run the following command to extract the database ID for opensearch-doks:

doctl databases list --format Name,ID --no-header | grep opensearch-doks | awk '{print $2}'

To retrieve the hostname, username, and password, execute the following commands respectively:

doctl databases connection <id> --no-header --format Host
doctl databases connection <id> --no-header --format User
doctl databases connection <id> --no-header --format Password

Now, you need to manually update the axoflow-demo.yaml file:

Open your axoflow-demo.yaml file in a text editor and replace the relevant fields with the extracted values:

axoflow-demo.yaml

config:
  sources:
    kubernetes:
      enabled: true
  destinations:
    opensearch:
      - address: "x.k.db.ondigitalocean.com"
        index: "doks-demo"
        user: "doadmin"
        password: "AVNS_x"
        tls:
          peerVerify: false
        template: "$(format-json --scope rfc5424 --exclude DATE --key ISODATE @timestamp=${ISODATE} k8s=$(format-json .k8s.* --shift-levels 2 --exclude .k8s.log))"

Step 4 - Install AxoSyslog-collector

Now that the configuration is complete, the next step is to deploy the AxoSyslog Collector to your Kubernetes cluster. This will enable the collection and forwarding of logs to OpenSearch.

Add the AxoSyslog Helm repository and install the AxoSyslog Collector using the custom configuration file:

helm repo add AxoSyslog https://axoflow.github.io/AxoSyslog-charts
helm repo update
helm install AxoSyslog -f axoflow-demo.yaml AxoSyslog/AxoSyslog-collector --wait
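
Before adjusting the port, you can optionally confirm that the collector pods started successfully. The label below assumes the AxoSyslog release name used in the install command above:

kubectl get pods -l app=AxoSyslog-AxoSyslog-collector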

To ensure that logs are sent to the correct OpenSearch port, update the AxoSyslog Collector’s configuration by patching its ConfigMap. DigitalOcean Managed OpenSearch listens on port 25060, so the command below rewrites the default 9200 bulk endpoint accordingly:

kubectl get configmap AxoSyslog-AxoSyslog-collector -o yaml | sed 's/9200\/_bulk/25060\/_bulk/' | kubectl apply -f -
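
As an optional check, you can grep the updated ConfigMap to confirm the substitution took effect:

kubectl get configmap AxoSyslog-AxoSyslog-collector -o yaml | grep 25060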

Finally, delete the existing pods so they restart with the updated configuration:

kubectl delete pods -l app=AxoSyslog-AxoSyslog-collector
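
Once the new pods are running, you can optionally verify end to end that logs are reaching OpenSearch. The query below is a minimal sketch that assumes the doks-demo index from the configuration above; replace <host> and <password> with the connection details retrieved in Step 3:

curl -s -u doadmin:<password> "https://<host>:25060/doks-demo/_search?size=1&pretty"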

Conclusion

Setting up a logging pipeline from DigitalOcean Kubernetes to OpenSearch using AxoSyslog not only centralizes your logs but also enhances your ability to monitor, analyze, and secure your applications. With the steps provided in this guide, you can quickly deploy this solution, gaining deeper visibility into your Kubernetes environment and ensuring that your infrastructure remains resilient and compliant.
