Troubleshooting ELK Elasticsearch & Logstash Pt 2 of 2

Troubleshooting Logstash

Logstash is our log parser and shipper: it receives logs, parses them, and writes them to the Elasticsearch database, which creates a daily or weekly index depending on your configuration.

If you are not receiving logs in Kibana, the cause is usually one of the following problems.

Logstash Issue #1: Low disk space

Run the following command:
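For example, on a standard Linux host (df is part of GNU coreutils):

```shell
# Show disk usage for all mounted filesystems, human-readable
df -h
```

Look at the Use% column to spot any filesystem that has filled up.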

If any filesystem shows 100% usage, Logstash will not work.

The most likely scenario is that the log folder on your Logstash server has filled up with error and other related logs.

You can run the following to inspect the log folder, which is located at /var/log/logstash/:
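A common way to find the largest files (assuming GNU du and sort are available):

```shell
# List everything under the Logstash log directory, largest first
du -ah /var/log/logstash/ | sort -rh | head -n 20
```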

If the log folder is the culprit of the low disk space, you may delete the logs inside it.

Once you see which files are taking the most space, you may delete them individually.

Example: rm -rf logstashLogName.log
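Note that if Logstash still holds a log file open, deleting it will not free the space until the process restarts. Truncating the file in place (a common alternative, using the author's placeholder filename) avoids that:

```shell
# Empty the log file without removing it, so open file handles stay valid
: > /var/log/logstash/logstashLogName.log
```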

Logstash Issue #2: Other error messages

Another reason you might not be seeing logs in Elasticsearch is that Logstash is having problems with a plug-in you are currently using, or with parsing in general.

Logstash writes its logs to /var/log/logstash; they are essential for troubleshooting.

To verify that Logstash is running and listening on the port you've specified, you may run the following:
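For example, with ss from iproute2 (5044 is the default Beats input port; substitute whatever port your pipeline actually uses):

```shell
# List all listening TCP sockets with the owning process;
# look for your Logstash input port (5044 is the Beats default)
ss -tlnp
```

Pipe the output through grep 5044 (or your port number) to filter for a specific listener.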

Verify that your ports are listening. If you don't see your port, run the following to see what error messages you are getting.
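The Logstash log file itself is the place to look. On package installs the file is typically named logstash-plain.log, though the name can vary by version:

```shell
# Show the most recent entries from the Logstash log
# (use tail -f instead to follow new messages live)
tail -n 50 /var/log/logstash/logstash-plain.log
```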

You should see the errors related to Logstash, which might be related to:

  • Plug-in errors – issues with plugins such as GeoMapping, CIDR, etc.
  • Grok patterns – you might have to inspect your Grok patterns to ensure they work.
  • Connectivity issues to the Elasticsearch instance – a dead Elasticsearch node
  • Too many open connections
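If a Grok pattern is the suspect, rebuilding it from a minimal match is a reliable way to isolate the failure. A sketch of a grok filter for a hypothetical syslog-style line (the field names and pattern here are illustrative, not taken from your pipeline):

```
filter {
  grok {
    # Matches lines like: "2024-01-01T00:00:00Z web01 connection refused"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{HOSTNAME:hostname} %{GREEDYDATA:msg}" }
  }
}
```

Start with a single pattern like %{GREEDYDATA:msg} that always matches, then add fields one at a time until the match breaks.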

You may simply restart the Logstash service (e.g., sudo systemctl restart logstash on systemd-based systems).

If a restart doesn't fix it, take a look at the error messages and resolve the problem manually.

Thank you.
