Processing logs with Elastic Stack #2 – configure Kibana

Kibana allows us to browse data indexed in Elasticsearch nodes. We’re going to run Kibana in a Docker container and set up its configuration so it displays logs from our example Spring Boot app.

What we are going to build

We’re going to work with the example project described in the Processing logs with Elastic Stack #1 – parse and send various log entries to Elasticsearch post. The Elastic Stack applications run as Docker services. You can find the whole project in the spring-boot-log4j-2-scaffolding repository.

Kibana configuration with Docker Compose

We’re going to configure our kibana service with environment variables. If you need a more customized config, use the kibana.yml file – check out the available settings and the bind-mounted configuration in the documentation. You can see our example configuration added to the docker-compose.yml file below:
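A minimal sketch of such a service definition is shown below – treat the image version, port mapping, and the elasticsearch service name as assumptions and adjust them to your own stack:

    kibana:
      image: docker.elastic.co/kibana/kibana:7.8.0    # assumed version – keep it in sync with Elasticsearch
      environment:
        # Kibana-specific variable names – see the "Set up credentials" section below
        - ELASTICSEARCH_USERNAME=${ELASTIC_USER}
        - ELASTICSEARCH_PASSWORD=${ELASTIC_PASSWORD}
      ports:
        - "5601:5601"                                  # the UI we'll open later at http://localhost:5601
      depends_on:
        - elasticsearch                                # assumed name of the Elasticsearch service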

I’m using the default variables stored in the .env file:
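A minimal sketch of that file, with placeholder values in place of the real credentials:

    # .env – read by Docker Compose and shared between services
    ELASTIC_USER=elastic
    ELASTIC_PASSWORD=changeme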

Set up credentials

Environment variables for credentials in Elasticsearch are:

  • ELASTIC_USER,
  • ELASTIC_PASSWORD.

In Kibana they are called:

  • ELASTICSEARCH_USERNAME,
  • ELASTICSEARCH_PASSWORD.

Using incorrect variable names means that Kibana never receives your credentials. At first this shows up as authentication WARNING entries in the Kibana logs, and in the end you’ll get an ERROR because Kibana can’t authenticate against Elasticsearch.

Write environment variables correctly

Make sure that all environment variables in your docker-compose.yml file are written in capital letters, with underscores as separators. As stated in the documentation:

When the container starts, a helper process checks the environment for variables that can be mapped to Kibana command-line arguments. (…) The helper translates these names to valid Kibana setting names.

https://www.elastic.co/guide/en/kibana/current/docker.html#environment-variable-config

Therefore, write ELASTICSEARCH_PASSWORD instead of elasticsearch.password. The helper will map the variable name to the corresponding Kibana setting.
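For example, these two forms are equivalent (secret is a placeholder value):

    # in kibana.yml:
    elasticsearch.password: secret

    # as a Docker environment variable:
    ELASTICSEARCH_PASSWORD=secret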

Verify whether you use the default Elasticsearch host

If you run Elasticsearch on a host other than the default one, don’t forget to include the ELASTICSEARCH_HOSTS variable in the docker-compose.yml file:
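A sketch of that entry, assuming your Elasticsearch service is named elasticsearch and listens on port 9200:

    kibana:
      environment:
        - ELASTICSEARCH_HOSTS=http://elasticsearch:9200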

Manual Kibana configuration

We have to complete the configuration of our Kibana service manually. Go to http://localhost:5601 to log in to the app:

kibana login screenshot

I already have some data in my Elasticsearch service, as described in this post. Therefore, I’ll choose the Explore on my own option on the next screen:

kibana welcome screenshot

Create index pattern

I want to use the index I already have in Elasticsearch: “spring-boot-app-logs-%{+YYYY.MM.dd}” (defined in my logstash.conf file). Thus, I set spring-boot-app-logs* as the Index pattern:

creating an index in Kibana manual configuration
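For context, this index name is produced by the elasticsearch output in logstash.conf – roughly along these lines (the host and credentials are placeholders):

    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        user => "${ELASTIC_USER}"
        password => "${ELASTIC_PASSWORD}"
        index => "spring-boot-app-logs-%{+YYYY.MM.dd}"
      }
    }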

If you can’t see the Success! Your index pattern matches X indices message:

  • Use ElasticHQ (or the _cat/indices call sketched below this list) to see whether Elasticsearch contains your index.
  • Check the Logstash output to verify which data sets are sent to Elasticsearch.
  • Make sure that the log source is not empty (and contains more than one line if you use Filebeat).
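If you prefer the command line to ElasticHQ, Elasticsearch’s _cat/indices API gives a quick overview of existing indices (assuming the default localhost:9200 host and the credentials from the .env file):

    curl -u "$ELASTIC_USER:$ELASTIC_PASSWORD" "http://localhost:9200/_cat/indices?v"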

You can learn how to do all of the above in the Processing logs with Elastic Stack #1 – parse and send various log entries to Elasticsearch post.

The next step requires choosing the Time filter field name – select @timestamp.

Now we can see all fields included in our index:

index field list

Browse log entries

In the Discover section, adjust the filter to see entries from the desired period:

kibana filter entries screenshot

You can easily flip through all log entries:

kibana browse logs screenshot

The work presented in this article is contained in the commit 52477331a6459bdf63a902d009d4cf190cd05b26.

Learn more about how to set up the Kibana configuration.

Photo by Mel Elías on StockSnap
