Digesting log data - part 2

ITworld | Open Source, Perl, Unix

Last week, we looked at a script that digests log files by making clever use of Perl's impressive implementation of arrays. This week, we look at a pared-down version of the same script, paying close attention to performance and making some significant efficiency improvements. Though Perl, like Unix in general, provides many ways of accomplishing the same task, some methods are considerably more efficient than others.

The first change we'll make to the script this week is that, instead of reading the log file into an array (i.e., into memory), we'll simply read it and digest one line at a time. This won't make a lot of difference if you're digesting small files, but if you're using the script to digest huge log files on a busy computer, this can make a big difference in the amount of memory the script will use and how long it takes to run.
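
To make the difference concrete, here is a minimal sketch contrasting the two approaches (the file name is just an illustration, borrowed from the /var/adm/messages example later in this column):

# last week's approach: slurp the entire file into an array --
# memory use grows with the size of the log file
open(LOG, "/var/adm/messages") || die "cannot open log: $!";
@lines = <LOG>;

# this week's approach: read and digest one line at a time --
# only a single line is held in memory at any moment
open(LOG, "/var/adm/messages") || die "cannot open log: $!";
while ( <LOG> ) {
    # digest $_ here
}
close(LOG);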

Another change is that we no longer check or manipulate the @ARGV structure. In very terse Perl style, we simply read whatever file is passed on the command line. The only downside to this approach is that, if you don't include a file name on the command line, the script will not warn you. Instead, it will quietly sit there reading standard input, waiting for you to realize what you did and press control-C.
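
If you want the script to complain instead, a one-line check (an addition of ours, not part of the pared-down script) restores the warning:

die "usage: $0 logfile\n" unless @ARGV;   # exit with a usage message if no file is named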

We continue to use an associative array (or "hash") to count up the number of duplicate messages. After all, this proved to be a very convenient way to keep track of each unique message type. But we no longer bother with removing duplicates from the original array, both because the original array no longer exists (we're reading the file directly instead) and, more importantly, because the keys of the hash already contain a single entry for each message type.

Now, let's look at the extremely simplified script. In its barest form, the script to digest a log file while removing digits looks like this:

#!/bin/perl

while ( <> ) {
    s/\d+/#/g;    # change digits to # signs
    $count{$_}++; # count repeats
}

for $line ( sort keys %count ) {
    print "$count{$line}: $line";
}

This script reads the log file, changes digits to # signs, and builds a hash with each message type as a key and the count as its value. It then runs through the hash, printing out each message type along with its count. This is about the simplest and fastest way to digest a log file down to its unique record types. However, for some log files, this might still leave you with too much data to examine. Let's look at one reason why this might be the case.
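
To see the effect, suppose (hypothetically) the log contained these two syslog-style lines:

Dec  2 16:06:44 host sendmail[1234]: message accepted
Dec  3 09:12:01 host sendmail[5678]: message accepted

Once each run of digits is collapsed to a single # sign, both lines become the same message type, and the script would print:

2: Dec  # #:#:# host sendmail[#]: message accepted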

Complications

The script that we've been looking at turns unique messages into message types by removing the numbers that, under many circumstances, make messages unique without adding to their value in a log file summary. Certain log files (such as the /var/adm/messages file) also contain date information, such as the day of week and month of the year, in a non-numeric format. Neutralizing these fields can similarly remove a lot of detail that is of little value at summary time. Adding lines such as these to our script:

s/Mon|Tue|Wed|Thu|Fri|Sat|Sun/DAY/;
s/Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec/MON/;

will nullify differences between messages of the same type for different days by replacing the day of week string with "DAY" and the month with "MON". Remove the DAY and MON strings if you prefer to simply remove this information instead.
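
If you take that route, the substitutions would simply use an empty replacement:

s/Mon|Tue|Wed|Thu|Fri|Sat|Sun//;
s/Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec//;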

With the modifications to replace the day of week and the month of year in each record, the script is only slightly more complex. It would look like this:

#!/bin/perl

while ( <> ) {
    s/\d+/#/g;                                               # change digits to # signs
    s/Mon|Tue|Wed|Thu|Fri|Sat|Sun/DAY/;                      # neutralize day of week
    s/Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec/MON/;  # neutralize month
    $count{$_}++;                                            # count repeats
}

for $line ( sort keys %count ) {
    print "$count{$line}: $line";
}

The risk of replacing date information in this way is that these substitutions could alter other data if it happens to contain one of the specified strings and appears before the date information (without the /g modifier, each substitution changes only the first match it finds). This is not, however, a concern with most log files, in which the date appears first. Alternatively, you could use your knowledge of the record format to remove this information. For example, log files maintained by syslog start with dates in the "Dec 2 16:06:44" format. The following line would remove the month from the front of each line:

s/^\S+//;

Apache log files generally start with a date in this format:

[Tue Dec 10

This information could be stripped with a line such as this:

s/^\[\S+ \S+//;
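
If you'd rather discard the entire bracketed date at once, a broader pattern (our alternative, assuming the date always ends with a closing bracket) would do it:

s/^\[[^\]]+\]\s*//;   # remove everything from the opening [ through the closing ]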

Digested log files can provide you with a summary of what's happening on your system. In many cases, you will still want to go back to the original log file to determine when and how often problems are occurring. However, without the summary, reading the detailed log file is not likely to seem like a good investment of your time.

The performance improvements and reduction in complexity in this new version of the script could not have happened without the insightful comments of these readers:

Brian Hatch, Jack Hawk, Pete Peterson, John Wiersba, Josh English, Dan Kubb, Dennis Pereira, and someone named Matt.
