Kubernetes Logging Solutions: AWS CloudWatch vs. Elasticsearch, Fluent Bit & Kibana (EFK)

Waq Ahmed
5 min read · Dec 26, 2021

One of the greatest nightmares for a DevOps/Cloud engineer is handling the logs of applications running in a Kubernetes cluster. Containerization has largely outclassed virtualization thanks to its short spin-up time and its ability to scale out and in within seconds, but it introduced new monitoring challenges because containers are ephemeral. For example, if a custom app runs in a Pod, its logs are by default kept inside that Pod only; if the Pod crashes, we lose all the logs and finding the root cause becomes nearly impossible. Thankfully, there are monitoring solutions on the market to remedy this situation. In this article I will discuss AWS CloudWatch for monitoring EKS cluster logs. We will also discuss a third-party tool, Elasticsearch, for the case where the Kubernetes cluster is on-prem.

AWS CloudWatch Logs Insight

CloudWatch Logs Insights enables you to interactively search and analyze your log data in Amazon CloudWatch Logs. You can perform queries to help you more efficiently and effectively respond to operational issues. If an issue occurs, you can use CloudWatch Logs Insights to identify potential causes and validate deployed fixes.
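For example, a Logs Insights query along these lines surfaces the most recent error lines (the field names `kubernetes.pod_name` and `log` are an assumption based on the Container Insights application log group format; adjust them to your log group's schema):

```
fields @timestamp, kubernetes.pod_name, log
| filter log like /error/
| sort @timestamp desc
| limit 20
```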

Setting up Container Insights on Amazon EKS

Fluent Bit is an open-source log processor and forwarder that lets you collect data such as metrics and logs from different sources, enrich it with filters, and send it to multiple destinations (AWS CloudWatch, Elasticsearch, etc.).

Set up Fluent Bit as a DaemonSet (so one agent is deployed on each node) in the EKS cluster. Run the following commands against your EKS cluster, changing ClusterName and RegionName accordingly:

ClusterName=<my-cluster-name>
RegionName=<my-cluster-region>
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off' || FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
curl https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/quickstart/cwagent-fluent-bit-quickstart.yaml | sed 's/{{cluster_name}}/'${ClusterName}'/;s/{{region_name}}/'${RegionName}'/;s/{{http_server_toggle}}/"'${FluentBitHttpServer}'"/;s/{{http_server_port}}/"'${FluentBitHttpPort}'"/;s/{{read_from_head}}/"'${FluentBitReadFromHead}'"/;s/{{read_from_tail}}/"'${FluentBitReadFromTail}'"/' | kubectl apply -f -

Make sure the AWS EKS worker nodes have permission to write logs to CloudWatch: go to the IAM role attached to the worker nodes and attach the CloudWatchAgentServerPolicy (or an equivalent policy granting CloudWatch write access). After a few minutes, Fluent Bit will do its magic and start sending logs to CloudWatch Logs Insights.
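Once the policy is attached, you can check that the agents came up. Assuming the quickstart manifest above (which installs everything into the amazon-cloudwatch namespace), the DaemonSet Pods should show as Running:

```
kubectl get pods -n amazon-cloudwatch
```

You should see one cloudwatch-agent and one fluent-bit Pod per worker node.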

Node Performance Monitoring for EKS Cluster
Monitor per Pod Performance on EKS

One view I really like is visualizing the cluster as a heat map.

Configure Fluent Bit to Send Logs to Elasticsearch

If the Kubernetes cluster is running on-prem and you want to use an open-source logging solution, you can configure Fluent Bit to send logs to Elasticsearch and then visualize them in Kibana.

Pods write their logs to a PersistentVolume (PV); Fluent Bit extracts the logs, optionally processes them, and forwards them to Elasticsearch.
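A minimal Fluent Bit OUTPUT section for this setup might look like the following sketch (the Host value assumes an Elasticsearch Service named elasticsearch in the logging namespace; adjust it to your environment):

```
[OUTPUT]
    Name            es
    Match           *
    Host            elasticsearch.logging.svc.cluster.local
    Port            9200
    Logstash_Format On
    Replace_Dots    On
    Retry_Limit     False
```

Logstash_Format makes Fluent Bit write to date-stamped logstash-* indices, which is what Kibana's default index patterns expect; Replace_Dots avoids mapping conflicts with dotted field names.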

Make sure you have Elasticsearch and Kibana installed as Pods in the same cluster, or use Elastic Cloud as the log destination. Ensure the Elasticsearch Pod runs as part of a StatefulSet with a mounted volume. To install Elasticsearch on a K8s cluster, click here.

To give Fluent Bit permission to read the logs from the PV, create a Fluent Bit ClusterRole with the appropriate ClusterRoleBinding:

$ kubectl create namespace logging
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
$ kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml
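The ClusterRole fetched above grants Fluent Bit read access to Pod and namespace metadata so it can enrich log records. As a rough sketch of what it contains (the name and exact rules here are illustrative; consult the fetched manifest for the authoritative version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs: ["get", "list", "watch"]
```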

Deploy the FluentBit Agent as DaemonSet

Before creating the DaemonSet, make sure the environment variables are set up correctly. The Fluent Bit pipeline contains the following sections, which can be viewed in its ConfigMap:

kubectl describe configmap fluent-bit-config -n logging
  1. Input
  2. Parsers
  3. Filter
  4. Output
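The four sections above can be sketched as a minimal pipeline like this (paths, tags, and parser names are illustrative; the upstream ConfigMap is more elaborate, and parsers live in a separate parsers.conf referenced from the Input section):

```
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    Parser            docker
    Tag               kube.*

[FILTER]
    Name              kubernetes
    Match             kube.*
    Merge_Log         On

[OUTPUT]
    Name              es
    Match             *
    Host              elasticsearch.logging.svc.cluster.local
    Port              9200
    Logstash_Format   On
```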

Click here to view all supported output configuration parameters.

fluentbit-ds.yaml

To verify that Fluent Bit is working as expected, create a test Pod that generates some logs:

counter.yaml
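If you don't have such a manifest handy, the classic counter Pod from the Kubernetes logging docs works well; it prints an incrementing line to stdout every second:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```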

To view all resources running in the cluster:

Open Kibana at http://localhost:5601/ and you will see the collected logs on the Discover page 😎

Different Filter Options available in Kibana Dashboard

The best thing about Kibana is that it provides a Console where you can write queries and aggregate results into metrics. The fields under _source contain the actual log data. By default, Kibana returns the top 10 hits for a query.

To investigate further, query only the fields [logs, master_URL] and search for the container name “counter”.

Show me the container id for all pods named as ‘counter’
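In the Kibana Dev Tools console, a query along these lines does the job (the logstash-* index pattern and the kubernetes.* field names are assumptions based on Fluent Bit's Logstash_Format indices and the kubernetes filter metadata; adjust them to your mapping):

```
GET logstash-*/_search
{
  "_source": ["log", "kubernetes.container_name", "kubernetes.docker_id"],
  "query": {
    "match": { "kubernetes.pod_name": "counter" }
  }
}
```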

I hope you liked this article. If so, please hit the clap icon 👏. Leave a comment if you have any questions or feedback.


Waq Ahmed

I’m a DevOps Engineer with a keen interest and experience in Cloud Computing, Docker, Kubernetes, and infrastructure provisioning tools.