Monitor, Graph, Report, Analyze & Alert On All Log Files On Any UNIX Host:

Java, HTTP, Apache, Tomcat (catalina.out), Splunk, MySQL, Oracle, Postfix, Log4j, Mail, WebLogic, GlassFish, system logs, custom logs and much more!

 Avoid scouring the internet for tools that end up inadequate or overkill. LoGrobot delivers reliability, ease of use, and focused specialty!

 

License

LoGrobot

Unlimited UNIX Hosts / Servers

Monitor / Alert / Analyze / Graph / Report

5 Tools in One - Use ONE Utility for All Log Monitoring Tasks!

Real Time Log Monitor - Utilize Graphs to see up to the Minute Details

Remote Agent Included: Enables Centralized monitoring of remote logs!

Monitor Unlimited Log Files on Unlimited UNIX Hosts / Servers

3 Months Free Technical Support Included!

Instant Download!

(Fully Featured Log Monitoring Suite)

Alert on Log Content, Log Size, Log Growth, Timestamps, File count

Generate Excel Reports on LogCheck Alerts, Analyze Log Content

Scan, Monitor & Alert on Multiple Log Files with one log check!

Monitor Multiple different patterns in the same log (with no configs)

Simplify / Automate all day-to-day log monitoring tasks and much more!

(Automated Install & Configuration of Graphite (statsD/collectd) Available)

One-Time Fee / No Recurring Monthly Payments!

Buy Now ($299.95)

Generate Near-Real-Time Graphs on Log Checks... all with a simple ONE-liner! No complex coding, no complicated configuration file changes!

Aggregate Multiple log file Graphs into ONE - Utilize logs to Visualize Application, System & Database Health - Analyze security related events with a simple Glance!

Fully Loaded Solution for Monitoring Log Files - Monitor, Report, Analyze, Alert & Graph

(fully featured, 5 log monitoring tools in ONE - Use as a plugin, standalone, or as a monitoring system)

No time-consuming configurations, Plug n Play - Start monitoring logs in just seconds after download!

24/7 Support Included; Contact Us for Assistance / Custom Development Requests!

Why is LoGrobot the Log Monitoring Tool for you?

State of Monitored Log

Log Entr(ies) Not Found

Show Errors Found in Log

Monitor Large Log Files

Monitor Dynamic Log Files

Alert when Log Not found

Alert on Log Time Stamp

Automatically Send Log File Data to Graphite Directly from the Monitoring Server - Setup Checks in Nagios, Crontab or Any Application, see them in Graphite

Quick References to some of logXray's most popular Log File Monitoring functions

  1. Exclusions - Specify a List of Patterns to Exclude via Filtering

  2. Monitor Log Files for Expected Record of Events - Alert If Not Found!

  3. Detailed Alerting - Show Offending Entries from Monitored Log Files

  4. Less Detailed Alerting - Do NOT show the Offending entries in alerts

  5. Check Dynamic Logs - Take into account Log Rotation and monitor accordingly

  6. Timeframe - Pull information from logs using user specified Time Frame

  7. Apache Log File Analysis (Nagios) - Alert / Report / Graph Apache Access logs

  8. Use one check to monitor multiple strings within a log - Set thresholds per string

  9. Monitor A Directory of Log Files - Avoid specifying each log file separately

  10. Directory File Count Monitoring - Monitor the number of files in a directory

  11. Automatically Generate Live Graphs for all Log File Monitors / Log File Checks

  12. Check Log Time Stamps - Set up Monitoring Checks to Alert when logs stop updating

  13. Check Log File Size - Monitor the disk space consumption of specific files

  14. Show X number of lines before and/or after matching pattern is found

  15. Log Analysis - Alert when a deviation is identified in overall behavior of a log file

  16. Automatically Generate Color-Coded Excel Reports on Log Alert History

  17. Custom Monitoring Agent - Monitor logs with our unique Perl monitoring agent

  18. Log Monitoring Options - Use any one of our monitoring features to alert on logs

  19. Splunk Log Monitoring - Monitor Splunk log files for inactivity

  20. NRPE - Monitor logs using the very common Nagios NRPE monitoring agent

Common Log Monitoring Scenarios

 

  Monitor Specific Log Files in A Specific Directory for New Occurrences of Specific Strings 

Case Scenario:

Monitor all log files in the /var/log directory that have the word 'messages' in their names.  Check each log matching this criterion for new entries containing the string 'ERROR'.

If the number of matching entries found in any 'messages' file in the directory is fewer than 5, exit with an OK status.  If 5 or more but fewer than 10, alert as Warning.  If 10 or more, alert as Critical.

Command:

[root@monitor jbowman]#   ./logrobot  localhost  /var/tmp/logXray  autoblz  /var/log,include:messages  30m  'ERROR'  '.'  5  10  log_mon_3  -ndfoundn
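The threshold logic in this scenario can be approximated with standard tools. The sketch below is purely illustrative (the function names are mine, not logrobot's): it scans whole files with plain grep and has none of logrobot's time-window or state tracking.

```shell
# Illustrative sketch only: count 'ERROR' lines in every log whose
# name contains 'messages', then map a count to a Nagios-style status.
check_count() {
  # $1=count  $2=warning threshold  $3=critical threshold
  if [ "$1" -ge "$3" ]; then echo CRITICAL; return 2
  elif [ "$1" -ge "$2" ]; then echo WARNING; return 1
  else echo OK; return 0
  fi
}

scan_messages() {
  # $1=directory to scan; prints "<file> <ERROR line count>" per log
  for f in "$1"/*messages*; do
    [ -f "$f" ] || continue
    printf '%s %s\n' "$f" "$(grep -c 'ERROR' "$f")"
  done
}
```

For example, `check_count "$(grep -c ERROR /var/log/messages)" 5 10` prints OK, WARNING, or CRITICAL and sets the matching exit code.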

 


 


Monitor log files for user-specified entries, then EXCLUDE specific lines from the results  

Case Scenario:

Within the last 30 minutes, find out how many lines in the log file [ /var/log/app.log ] contained both "ERROR" and "Client". If any lines are found containing these two strings (ERROR.*Client), take note of that.

From the list of lines found, see if there are any lines that also contain the keywords "error 404" OR "updateNumber".  If there are, remove them from the list.  After removing them, show me what is left.  If the number of lines left is between 5 and 9, alert as WARNING.  If equal to or over 10, alert as CRITICAL.  If below 5, do not alert!

Command:

[root@monitor jbowman]#  ./logxray  localhost  /var/tmp/logXray  autonda  /var/log/app.log  30  'ERROR.*Client'  '(error 404|updateNumber)'  5  10  applog_tag  -ndshowexcl
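The include-then-exclude filter described above can be sketched with two chained greps. This is only an illustration of the filter shape (the function name is mine); it ignores the 30-minute window and the alerting thresholds, which logrobot handles itself.

```shell
# Keep lines matching an include pattern, then drop any of those
# that also match an exclude pattern -- the same shape as the
# scenario above, minus the time window and thresholds.
filter_excl() {
  # $1=logfile  $2=include regex  $3=exclude regex
  grep -E "$2" "$1" | grep -Ev "$3"
}
```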

 



  Monitor log files for certain entries - ALERT IF those entries are NOT found!  

Case Scenario:

Within the last 30 minutes, if logrobot does not find at least 2 lines containing the words "Success" and "Client" plus either "returned 200" or "update:OK" in the log file, it must alert.  In other words, each matching line must contain both Success and Client (Success.*Client) AND at least one of the strings "returned 200" and "update:OK".

Command:

[root@monitor jbowman]#  ./logxray  localhost  /var/tmp/logXray  autonda  /var/log/app.log  30  'Success.*Client'  '(returned 200|update:OK)'  2  2  expected_entry_tag  -ndnotfoundn
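The inverted check ("alert when expected entries are missing") amounts to counting matches and failing below a minimum. A rough standalone sketch, assuming a helper name of my own and none of logrobot's time-window handling:

```shell
# Count lines matching BOTH patterns; alert when the count falls
# below a required minimum (sketch of a 'not found' style check).
expect_entries() {
  # $1=logfile  $2=first regex  $3=second regex  $4=required minimum
  n=$(grep -E "$2" "$1" | grep -Ec "$3")
  if [ "$n" -lt "$4" ]; then
    echo "CRITICAL: only $n matching line(s) found"; return 2
  fi
  echo "OK: $n matching line(s) found"; return 0
}
```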

 



Monitor Log files for specific entries - When found, display all offending lines in alert  

 

This is particularly helpful in cases where you might want to see the actual lines that contain the patterns you instructed the tool to search for.

 

 Example:

[root@monitor jbowman]#  ./logxray  localhost  /var/tmp/logXray  autonda  /var/log/app.log  30  'ERROR.*Client'  '(returned 200|update:OK)'  5  10  error_exceptions  -ndshow

 


  Scan log files for minutes, hours, days, weeks or months worth of data  

 

For instance, to pull out 2 weeks of information from within a large log file and to find out how many lines contain certain strings and patterns, you can run a command similar to this:

 

Example:

[root@monitor jbowman]#  ./logxray  localhost  /var/tmp/logXray  autofig  /var/log/app.log  2w  'ERROR|error|panic|fail'  'ERROR|error|panic|fail'  5  10  -foundn

 

Notice the [ 2w ], and also notice the strings being searched for.  I repeated 'ERROR|error|panic|fail' twice because there is no need to specify a different second search term here.  You don't have to repeat the first string; you can just enter a dot in its place for the second string, i.e.:

 

[root@monitor jbowman]#  ./logxray  localhost  /var/tmp/logXray  autofig  /var/log/app.log  2w  'ERROR|error|panic|fail'  '.'  5  10  -foundn

 

In this specific example, I'm telling logrobot that I care about every single line that contains any of the keywords I provided.  The [ 2w ], of course, means 2 weeks.

 

 See below for the different ways of specifying the date range:

5m = 5 minutes (changeable to any number of minutes)

10h = 10 hours (changeable to any number of hours)

2d = 2 days (changeable to any number of days)

2w = 2 weeks (changeable to any number of weeks)

3mo = 3 months (changeable to any number of months)
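For reference, the suffixes above convert to minutes as sketched below. This is only a hedged illustration of how the notation relates (months are approximated as 30 days, which may differ from logrobot's internal handling; the function name is mine):

```shell
# Convert the timeframe notation (5m, 10h, 2d, 2w, 3mo) to minutes.
# Note 'mo' must be matched before 'm'; months approximated as 30 days.
tf_minutes() {
  case $1 in
    *mo) echo $(( ${1%mo} * 30 * 24 * 60 )) ;;
    *m)  echo "${1%m}" ;;
    *h)  echo $(( ${1%h} * 60 )) ;;
    *d)  echo $(( ${1%d} * 24 * 60 )) ;;
    *w)  echo $(( ${1%w} * 7 * 24 * 60 )) ;;
    *)   echo "unrecognized timeframe: $1" >&2; return 1 ;;
  esac
}
```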

 



Monitor log file for specific patterns, use user-specified strings to filter out lines to alert on:  

 

 

Command:

 

[root@monitor jbowman]#  ./logxray localhost /var/tmp/logXray autonda /var/log/syslog 60m 'kernel|panic' 'abrt'  3 5 syslog_check -ndshow

 

Explanation:

  1. Monitor the /var/log/syslog file

  2. Scan the log for any line containing "kernel" or "panic".

  3. When the above lines are found, from those lines select only the lines that also contain the pattern/keyword "abrt"

    • Ignore all lines which do not have 'abrt' on them

  4. If the number of lines found is less than 3, exit with an OK

  5. If the number of lines found is greater than or equal to 3, and less than 5, exit with a WARNING

  6. If the number of lines found is greater than or equal to 5, exit with a CRITICAL

  7. The name of this log check is syslog_check

  8. Include the offending lines (those containing 'kernel' or 'panic' plus 'abrt') in the alert output

    • Indicated by the -ndshow option

  9. While trying to scan this log file, if it is detected that the timestamp of the log itself is older than 60 minutes, abort immediately

 

 


 

Monitoring Splunk with LoGrobot

 

  Monitor Splunk Log Files and Alert if Inactivity is Detected

Case Scenario:

I had a client contact me recently, asking how he can utilize his copy of LoGrobot to monitor Splunk for inactivity.  The solution was pretty easy.  

To resolve the request, I provided the client with two options:

 

Option 1

 

To pull the last 30 minutes' worth of data from a log file, check it for a pattern, AND have the matching lines printed to the screen:
 

[root@monitor jbowman]#   sudo ./logxray localhost /var/tmp/logXray autofig /opt/splunk/var/log/splunk/python.log 30m 'Splunk.*Alert' '.' 1 2 -show | tr '\t' '\n'


To pull the last 30 minutes worth of data from a log file and check it for a pattern, AND ONLY SHOW the stats:
 

[jbowman@pgsfosplunkhead111 ~]$ sudo ./logxray localhost /var/tmp/logXray autofig /opt/splunk/var/log/splunk/python.log 30m 'Splunk.*Alert' '.' 1 2 -notfoundn
0---120---11---(2017-09-22)-(13:12)---(2017-09-22)-(13:40)---ETWNFILF---(2017-09-22)-(13:15)---(2017-09-22)-(13:40) NAGAV
[jbowman@pgsfosplunkhead111 ~]$

 

What this means is:

  1. The time the log check was run was 1:42pm. The log check tried to search 30 minutes back in the log.

  • 30 minutes back would be 1:12pm or 13:12

  • So the log check tried to search from 13:12 to the very last entry in the log

  • The very last entry in the log has a time stamp of 2017-09-22 13:40

  • However, there was no entry in the log for the time 1:12pm (13:12).

  • So the log check grabbed the closest entry it could find in the log

  • That closest entry has a time stamp of ---(2017-09-22)-(13:15)

  • So when the log check scanned the log from: (2017-09-22)-(13:15) TO (2017-09-22)-(13:40), it found 11 entries that have the pattern you are searching for.

 

  2. When you run the log check and you see “ETWNFILF”, that is telling you that the exact time you originally wanted to search for was not found in the log file, so the closest time was grabbed.

    • “ETWNFILF” means “Exact Time Was Not Found In Log File”.

    • When you run the log check and the actual time you wanted was found in the log, you’ll see a response like this:

      • 0---0---16---ATWFILF---(2017-09-22)-(13:20)---(2017-09-22)-(13:50) SEAGM

        • “ATWFILF” means “Actual Time Was Found In Log File”.

  3. Now, when you want to get an alert if the pattern you’re looking for was not found, you’ll run this:
     

[jbowman@pgsfosplunkhead111 ~]$ sudo ./logxray localhost /var/tmp/logXray autofig /opt/splunk/var/log/splunk/python.log 30m 'Splunk.*Alert' '.' 20 30 -notfoundn

2---60---14---(2017-09-22)-(13:21)---(2017-09-22)-(13:50)---ETWNFILF---(2017-09-22)-(13:25)---(2017-09-22)-(13:50) NAGCQ


Note: in the command above, I raised the warning threshold to 20 and the critical threshold to 30 so that this check would alert.

Meaning of the above output:

  • 2 = CRITICAL , 1 = WARNING, 0 = OK

  • 60 – Seconds since the log was last updated by whatever application was writing to it.

  • 14 – This many entries were found matching the pattern you specified.

  • (2017-09-22)-(13:21)---(2017-09-22)-(13:50) – Original timeframe we wanted to search, which was not available.

  • ETWNFILF - The “ETWNFILF” means “Exact Time Was Not Found In Log File”.

  • (2017-09-22)-(13:25)---(2017-09-22)-(13:50) – The alternative time frame we searched instead. This is the closest the log check could find to the original time.

  • NAGCQ/SEAGM – Ignore these. They’re just placeholders I put in the script that help me know where to look if I need to tweak something.
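Since the status line is '---'-delimited, the leading fields can be pulled out with awk for post-processing. This is just a convenience sketch of my own, not part of logrobot:

```shell
# Split the first three '---'-separated fields of a logxray status
# line: exit status, seconds since last update, and match count.
parse_status() {
  echo "$1" | awk -F'---' \
    '{ printf "status=%s secs_since_update=%s matches=%s\n", $1, $2, $3 }'
}
```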
     

 

Option 2

 

A better alternative is to put this check in Nagios (or whatever your monitoring application is) and have the monitoring app only run it once every 30 minutes.

This is actually much more self-explanatory:
 

[jbowman@pgsfosplunkhead111 ~]$ sudo ./logxray localhost /tmp/logXrayMoTest autonda /opt/splunk/var/log/splunk/python.log 15m 'Splunk.*Alert' '.' 1 1 splunkmonitor -ndnotfoundn

SUCCESS: The logXray Directory [ /tmp/logXrayJBTest ] Was Succesfully Created and is now Owned by the User [ root ] - Enjoy!

[jbowman@pgsfosplunkhead111 ~]$
[jbowman@pgsfosplunkhead111 ~]$
[jbowman@pgsfosplunkhead111 ~]$ sudo ./logxray localhost /tmp/logXrayMoTest autonda /opt/splunk/var/log/splunk/python.log 15m 'Splunk.*Alert' '.' 1 1 splunkmonitor -ndnotfoundn

NEW CHECK: Should recover on next run. [ 6598 ] instance(s) of string(s) found. Tag = [ splunkmonitor ]. Date & Time = [ Fri Sep 22 14:16:12 2017 ]. Logfile = [ /opt/splunk/var/log/splunk/python.log ]. Log Size [ 4.1M ]. Actual Logfile Line Count = [ 19521 ]. | New_Patterns=6598;1;1;0; New_Entries=19521;;;;19521 Error_Percent=33.80% Time_Frame=0s;;;;0s Total_Log_Entries=19521;;;;19521 Rate_of_Update=0lps;;;;0lps Log_Size=4.1M;;;;4.1M

[jbowman@pgsfosplunkhead111 ~]$
[jbowman@pgsfosplunkhead111 ~]$
[jbowman@pgsfosplunkhead111 ~]$ sudo ./logxray localhost /tmp/logXrayMoTest autonda /opt/splunk/var/log/splunk/python.log 15m 'Splunk.*Alert' '.' 1 1 splunkmonitor -ndnotfoundn

CRITICAL: [ 0 ] instance(s) of [ Splunk.*Alert.*. ] found in log [ /opt/splunk/var/log/splunk/python.log ]. Scan time = [ Sep-22-(14:16)-2017 (to) Sep-22-(14:16)-2017 ; 10s(time)s ]. Scan range = [ 19521,19521(rnge) ; 0(lnct) ]. Log Size = [ 4.1M ]. Tag = [ splunkmonitor ]. | New_Patterns=0;1;1;0; New_Entries=0;;;;19521 Error_Percent=0% Time_Frame=10s;;;;10s Total_Log_Entries=19521;;;;19521 Rate_of_Update=0lps;;;;0lps Log_Size=4.1M;;;;4.1M

[jbowman@pgsfosplunkhead111 ~]$
[jbowman@pgsfosplunkhead111 ~]$
[jbowman@pgsfosplunkhead111 ~]$
[jbowman@pgsfosplunkhead111 ~]$ sudo ./logxray localhost /tmp/logXrayMoTest autonda /opt/splunk/var/log/splunk/python.log 15m 'Splunk.*Alert' '.' 1 1 splunkmonitor -ndnotfoundn

CRITICAL: [ 0 ] instance(s) of [ Splunk.*Alert.*. ] found in log [ /opt/splunk/var/log/splunk/python.log ]. Scan time = [ Sep-22-(14:16)-2017 (to) Sep-22-(14:16)-2017 ; 23s(time)s ]. Scan range = [ 19521,19521(rnge) ; 0(lnct) ]. Log Size = [ 4.1M ]. Tag = [ splunkmonitor ]. | New_Patterns=0;1;1;0; New_Entries=0;;;;19521 Error_Percent=0% Time_Frame=23s;;;;23s Total_Log_Entries=19521;;;;19521 Rate_of_Update=0lps;;;;0lps Log_Size=4.1M;;;;4.1M

[jbowman@pgsfosplunkhead111 ~]$
[jbowman@pgsfosplunkhead111 ~]$
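If you are scheduling the check from cron instead of Nagios, a crontab entry along these lines would run it every 30 minutes (the install path shown is an assumption; adjust it to wherever your logxray copy lives):

```
# Hypothetical crontab entry: run the Splunk inactivity check every 30 minutes
*/30 * * * * /usr/local/bin/logxray localhost /var/tmp/logXray autonda /opt/splunk/var/log/splunk/python.log 30m 'Splunk.*Alert' '.' 1 1 splunkmonitor -ndnotfoundn
```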


 


 

 

 
