Monitoring Elastic Stack


Gather metrics and statistics from Elastic Stack with Metricbeat and monitor the services using a Kibana dashboard.

What we’re going to build

We’re going to add monitoring functionality to the Elastic Stack services used in the spring-boot-log4j-2-scaffolding application. As a result, we’ll be able to view metrics collected from Elasticsearch, Kibana, Logstash and Filebeat in a Kibana dashboard. All services will run in Docker containers. You can clone the project and test it locally with Docker Compose.

Introducing Metricbeat

Metricbeat collects metrics and statistics from services and ships them to an output – in this example, Elasticsearch – with minimal required configuration. It is the recommended tool for monitoring the Elastic Stack in a production environment:


Metricbeat is the recommended method for collecting and shipping monitoring data to a monitoring cluster. If you have previously configured internal collection, you should migrate to using Metricbeat collection. Use either Metricbeat collection or internal collection; do not use both.

https://www.elastic.co/guide/en/elasticsearch/reference/current/monitoring-production.html

First, we’re going to enclose the entire Metricbeat configuration in a single file – metricbeat.yml. To see all non-deprecated configuration options, visit the reference file. The configuration file used in this article has the following structure:
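The sketch below shows the overall layout of such a file – a modules section with one entry per monitored service, plus an output section. The exact values (hosts, credentials, periods) are illustrative and depend on your setup; the concrete entries are filled in throughout the following sections:

```yaml
# metricbeat.yml – overall structure (values are illustrative)
metricbeat.modules:
  - module: elasticsearch
    # ... elasticsearch monitoring settings ...
  - module: kibana
    # ... kibana monitoring settings ...
  - module: logstash
    # ... logstash monitoring settings ...
  - module: beat
    # ... filebeat monitoring settings ...

output.elasticsearch:
  # where the collected monitoring data is shipped
  hosts: ["http://elasticsearch:9200"]
```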

In addition, in the project repository you’ll find:

Monitor Elastic Stack with Metricbeat

All services described in this article are run with Docker Compose and use the following default environment variables:
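As a rough illustration only – the variable names and values below are assumptions, not taken from this article; check the project repository for the actual defaults – such an environment file might look like:

```yaml
# .env – hypothetical example; names and values are illustrative
ELASTIC_USER=elastic
ELASTIC_PASSWORD=test
```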

Get metrics from Elasticsearch

  • Enable and set up the elasticsearch module in the metricbeat configuration:
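A minimal sketch of such a module entry is shown below. The host name, collection period and credential variables are assumptions based on a typical Docker Compose setup; adjust them to your environment:

```yaml
# metricbeat.yml – elasticsearch module (host and credentials are illustrative)
- module: elasticsearch
  xpack.enabled: true        # ship data in the format expected by the Stack Monitoring UI
  period: 10s
  hosts: ["http://elasticsearch:9200"]
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
```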

Get metrics from Kibana

  • Disable the default metrics collection by setting the monitoring.kibana.collection.enabled option to false in the docker-compose.yml file (as an environment variable, written in capital letters with underscores as separators):
  • Enable and set up the kibana module in the metricbeat configuration:
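The two steps above can be sketched as follows. The environment-variable spelling mirrors the kibana.yml option name, and the host is an assumption based on the default Kibana port:

```yaml
# docker-compose.yml – kibana service (fragment)
kibana:
  environment:
    - MONITORING_KIBANA_COLLECTION_ENABLED=false
```

```yaml
# metricbeat.yml – kibana module (host is illustrative)
- module: kibana
  xpack.enabled: true
  period: 10s
  hosts: ["http://kibana:5601"]
```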

Get metrics from Logstash

  • Enable and set up the logstash module in the metricbeat configuration:
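A minimal sketch of the module entry, assuming Logstash exposes its monitoring API on the default port 9600:

```yaml
# metricbeat.yml – logstash module (host is illustrative)
- module: logstash
  xpack.enabled: true
  period: 10s
  hosts: ["http://logstash:9600"]
```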

Get metrics from Filebeat

  • Disable the default metrics collection by setting the monitoring.enabled option to false and allow external collection of monitoring data by enabling the HTTP endpoint in the filebeat.yml file (documentation):
  • Enable and set up the beat module in the metricbeat configuration:
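The two steps above can be sketched as shown below – first the Filebeat side, then the Metricbeat side. The bind address and port are assumptions based on Filebeat's default HTTP endpoint settings:

```yaml
# filebeat.yml – fragment
monitoring.enabled: false   # disable internal collection
http.enabled: true          # expose the stats endpoint for external collection
http.host: 0.0.0.0
http.port: 5066
```

```yaml
# metricbeat.yml – beat module (host is illustrative)
- module: beat
  xpack.enabled: true
  period: 10s
  hosts: ["http://filebeat:5066"]
```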

Define output for collected metrics

Finally, we can configure the output for collected metrics. In this example project we’re using only a single Elasticsearch node. Therefore, we’re going to send the monitoring data there as well:
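A minimal sketch of the output section, assuming the same host and credential variables used in the module entries:

```yaml
# metricbeat.yml – output (host and credentials are illustrative)
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
  username: ${ELASTIC_USER}
  password: ${ELASTIC_PASSWORD}
```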

Display metrics in a Kibana dashboard

To view monitoring data in a Kibana dashboard we have to use the following code:
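When Kibana and Elasticsearch run in containers, a setting along the following lines is typically needed in kibana.yml so the Stack Monitoring UI accepts the containerized Elasticsearch node (exact settings may differ in the project repository):

```yaml
# kibana.yml – fragment
monitoring.ui.container.elasticsearch.enabled: true
```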

By default, the data will be collected from the cluster specified in the elasticsearch.hosts value in the kibana.yml file – the project uses the default elasticsearch:9200 value which works with our one Elasticsearch node. However, if you run a dedicated cluster for monitoring, don’t forget to set monitoring.ui.elasticsearch.hosts option for the kibana service.

Use a custom Metricbeat image to configure monitoring Elastic Stack

We want to start our metricbeat service in a Docker container. Therefore, we’re going to add the metricbeat service to the Elastic Stack services that were already defined in the docker-compose.yml file. Remember to provide all environment variables that we used in the metricbeat.yml file (the KIBANA_URL is used in our custom entrypoint for this container):
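A sketch of such a service definition is shown below. The build context, variable names and dependency list are assumptions based on the surrounding description; check the project's docker-compose.yml for the exact values:

```yaml
# docker-compose.yml – metricbeat service (fragment, values are illustrative)
metricbeat:
  build:
    context: metricbeat/
  environment:
    - ELASTIC_USER=${ELASTIC_USER}
    - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    - KIBANA_URL=http://kibana:5601   # used by the custom entrypoint script
  depends_on:
    - elasticsearch
    - kibana
```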

As you can see, the context for this build is expected in the metricbeat directory. We want to apply our custom configuration from the metricbeat.yml file and make sure that the metricbeat service will wait for the kibana service. With this purpose in mind, let’s create the following Dockerfile:
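A sketch of such a Dockerfile is shown below. The base-image version is illustrative, and the script name follows the description in the bullets that follow:

```dockerfile
# metricbeat/Dockerfile – sketch (base-image version is illustrative)
FROM docker.elastic.co/beats/metricbeat:7.9.2
COPY metricbeat.yml /usr/share/metricbeat/metricbeat.yml
COPY wait-for-kibana.sh /usr/share/metricbeat/wait-for-kibana.sh
USER root
# the config must be owned by root and not writable by group/others
RUN chown root:metricbeat /usr/share/metricbeat/metricbeat.yml \
    && chmod go-w /usr/share/metricbeat/metricbeat.yml \
    && chmod +x /usr/share/metricbeat/wait-for-kibana.sh
USER metricbeat
ENTRYPOINT ["/usr/share/metricbeat/wait-for-kibana.sh"]
```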

  • First, we copy our custom config into the container and set the proper privileges on the file.
  • Second, this image uses the wait-for-kibana.sh script, which ensures that the metricbeat service starts only after kibana is ready. The KIBANA_URL variable provided in the docker-compose.yml file is required for this script to work. You can read about this part of the configuration in the How to make one Docker container wait for another post.

Verify whether Elastic Stack monitoring works

At last, we can run all services defined in the docker-compose.yml file with the following command:
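Assuming a standard Docker Compose setup, this is the usual command (run from the directory containing docker-compose.yml):

```shell
# start all services defined in docker-compose.yml in the background
$ docker-compose up -d
```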

Wait until the metricbeat service connects to Kibana. You can list the containers by running the docker ps command. The output should contain the following information:

Finally, go to http://localhost:5601/app/monitoring to see the clusters. Since our example Filebeat instance sends data to a Logstash instance, its metrics are displayed in the Standalone cluster. The rest of the Elastic Stack metrics are available under the docker-cluster:

monitor Elastic Stack clusters

In addition, read the Get rid of the Standalone cluster in Kibana monitoring dashboard post if you want to change that.

In the end, you can see the docker-cluster metrics on the image below:

monitor Elastic Stack services in the docker-cluster

Likewise, you can see the Standalone cluster metrics on the image below:

Standalone cluster screenshot

Finally, you can find the code responsible for monitoring Elastic Stack with Metricbeat in the commit c2a6f0d082072a52f56ee3ebc49bcc30ef482b99.

Troubleshooting – when Metricbeat doesn’t monitor Elastic Stack properly

Check out this section in case something goes wrong.

Verify that your Metricbeat instance actually gets data from monitored services

You can test Metricbeat’s connection to a chosen service even when data is not shown in the Kibana dashboard. To do so, enter the container and send a request to the monitored service. Let’s assume we want to verify that the connection to the Filebeat /stats endpoint works. We can do this with the following commands:
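Assuming the container and host names from this setup (both are illustrative – use the names shown by docker ps), the check might look like:

```shell
# open a shell inside the running metricbeat container
$ docker exec -it metricbeat bash
# from inside the container, query the Filebeat monitoring endpoint
$ curl http://filebeat:5066/stats
```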

A correct output may look like the screenshot below:

test filebeat connection screenshot

Specifically, if you got the Connection refused error, make sure that:

  • the http.host and http.enabled options in the filebeat.yml file are correct;
  • the port exposed in the docker-compose.yml file for the filebeat service is correct.

Collect proper metricsets

Secondly, if we list metricsets that are not compatible with the configuration, we can get an error similar to the following one, caused by wrong metricsets in the elasticsearch module configuration:

Fortunately, all required and supported metricsets are listed in the error message. Thanks to that, we know that the elasticsearch module configuration shown below collects the correct metricsets:
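A sketch of such a configuration is shown below. The metricset list reflects what the error message typically reports for the elasticsearch module with xpack-formatted monitoring; treat it as an assumption and copy the exact list from the error message you receive:

```yaml
# metricbeat.yml – elasticsearch module with explicit metricsets (list is illustrative)
- module: elasticsearch
  metricsets:
    - ccr
    - cluster_stats
    - index
    - index_recovery
    - index_summary
    - ml_job
    - node_stats
    - shard
  period: 10s
  hosts: ["http://elasticsearch:9200"]
```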

Verify permissions to the config file

What’s more, when the permissions on the metricbeat.yml file used in the container are invalid, we’ll get the following error:

Of course, it can be fixed by applying the correct permissions to the metricbeat.yml file in the Dockerfile for the metricbeat service:
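The fix boils down to a line along these lines in the Dockerfile (the path matches the default Metricbeat image layout):

```dockerfile
# Metricbeat refuses to run with a config file writable by group/others
RUN chmod go-w /usr/share/metricbeat/metricbeat.yml
```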

Provide credentials used in the Elasticsearch node

Lastly, if you enabled Elasticsearch security features, make sure that you pass the proper Elasticsearch credentials to the Metricbeat and Kibana services. Otherwise, Metricbeat won’t be able to send monitoring data and Kibana won’t be able to read it. Note that the environment variables for the Elasticsearch username and password in Kibana have slightly different names than the ones used in the other elements of the Elastic Stack:
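A sketch of the relevant docker-compose.yml fragments is shown below. The ${ELASTIC_USER}/${ELASTIC_PASSWORD} variables are illustrative; what matters is the naming difference between the services:

```yaml
# docker-compose.yml – fragments (values are illustrative)
elasticsearch:
  environment:
    - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
kibana:
  environment:
    # Kibana expects ELASTICSEARCH_* variable names, unlike Elasticsearch itself
    - ELASTICSEARCH_USERNAME=${ELASTIC_USER}
    - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
```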

Monitor Elastic Stack in a production environment

To sum up, in this article we used only one Elasticsearch node for storing business and monitoring data. However, remember to use a separate cluster for metrics in a production environment, as stated in the docs:

In production, you should send monitoring data to a separate monitoring cluster so that historical data is available even when the nodes you are monitoring are not.

https://www.elastic.co/guide/en/elasticsearch/reference/current/monitoring-production.html

In production, we strongly recommend using a separate monitoring cluster. Using a separate monitoring cluster prevents production cluster outages from impacting your ability to access your monitoring data. It also prevents monitoring activities from impacting the performance of your production cluster. For the same reason, we also recommend using a separate Kibana instance for viewing the monitoring data.

https://www.elastic.co/guide/en/elasticsearch/reference/current/monitoring-overview.html

Learn more about how to monitor the Elastic Stack using Metricbeat

Photo by Lechon Kirb on StockSnap
