Unix commands: The best tool for the job

You can't live very long with a general purpose handyman without hearing how important it is to have the proper tool for nearly any task you're about to undertake. You don't use a knife when you need a saw and you don't use a hammer when you need a punch.

Well, I've written far too many scripts that used awkward syntax when an easier way was available. Here are some examples of when the right command is clearly the right choice for the task at hand.

If you need to count how many times a particular string appears in a file, your gut impulse might be to do something like this:

boson> grep "my string" myfile | wc -l
       37

This works very well, but you can use a simpler syntax and maybe even save yourself some precious milliseconds. If you use the -c argument with grep, you can get grep to do the counting for you:

boson> grep -c "my string" myfile
37

The "grep -c" might even save a little time on particularly large files, though for smaller files, the times are likely to be about the same.

boson> time grep -c logo.gif access_log
6528

real    0m0.592s
user    0m0.370s
sys     0m0.140s
boson> time grep logo.gif access_log | wc -l
    6528

real    0m0.639s
user    0m0.440s
sys     0m0.190s

There are some particularly nice advantages when you're looking through a number of files for your string. For example, notice how grep -c gives you a per-file count:

boson> grep -c "file system full" messages*
messages:848
messages.1:155
messages.2:7

Now that's handy. We see that we've been getting "file system full" messages over the span of several messages files and how many times the messages have appeared in each. Try doing that with a pipe to wc -l. It's a lot more trouble.

boson> grep "file system full" messages* | wc -l
    1010

No, that's not right.

boson> for file in mess*
> do
>     grep "file system full" $file | wc -l
> done
     848
     155
      7

That's closer, but not quite right.

boson> for file in mess*
> do
>     count=`grep "file system full" $file | wc -l`
>     echo $file: $count
> done
messages: 848
messages.1: 155
messages.2: 7

That's right, but that's a lot of trouble. The grep -c approach makes a lot more sense for counting lines across a number of files.
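One caveat worth keeping in mind: grep -c counts matching *lines*, not total occurrences, so a line that contains the string twice still counts only once. If you need every occurrence, GNU and BSD grep offer a -o option that prints each match on its own line, which you can then pipe to wc -l. Here's a quick sketch using a throwaway file (sample.txt is invented for illustration):

```shell
# Create a small test file; the first line contains "error" twice.
printf 'error error\nok\nerror\n' > sample.txt

# -c counts lines that match: 2
grep -c "error" sample.txt

# -o prints each match on its own line, so wc -l counts occurrences: 3
grep -o "error" sample.txt | wc -l

# Clean up the throwaway file.
rm -f sample.txt
```

For log files where each message occupies its own line, the two counts come out the same, and plain grep -c is all you need.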

The similar -c option for the uniq command gets my vote as the right tool on many occasions. Any time I need to figure out how many unique values I have in a list and how many of each, I do something like this:

boson> awk '{print $NF}' access_log | sort -n | uniq -c
  35707 -
      6 2
   6991 43
    400 46
   6113 49
  11176 52
     51 62
  18056 64
  17512 66
     25 67
     10 85
    389 103
    391 125
      1 177
      1 201
      7 225
...

Here, we're looking at the sizes of objects (files, etc.) returned by a web server to its clients.
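The same sort | uniq -c pattern works for any field you can pull out of a log. As a hedged variation, here's a sketch that tallies HTTP status codes instead of sizes; it assumes the common log format, where the status code is the second-to-last field, so $(NF-1) picks it out (adjust the field number if your format differs). A tiny fake access_log.sample stands in for a real log:

```shell
# Fake up a minimal log file for illustration; in the common log
# format the status code sits just before the byte count.
printf '%s\n' \
  'GET /index.html 200 1043' \
  'GET /logo.gif 200 6528' \
  'GET /missing 404 -' > access_log.sample

# Pull out the status field, tally each value, then sort the tallies
# so the most frequent codes float to the top.
awk '{print $(NF-1)}' access_log.sample | sort | uniq -c | sort -rn

# Clean up the throwaway file.
rm -f access_log.sample
```

The final sort -rn is the handy twist: uniq -c puts the count first, so a reverse numeric sort ranks the values by how often they occur.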
