Tuning the security analysts

Monitoring of the SIEM has gone offshore, but the overseas analysts are escalating a lot of events that aren't really worth investigating.

I have told you about the need to tune our security information and event management (SIEM) tool. Now we need to tune the analysts who are monitoring the SIEM.

We had no problem when SIEM monitoring was done in-house. My security team consists of two Level 3 analysts, both well seasoned, very familiar with our company and each with more than seven years of experience in information security. But it was recently decided that SIEM monitoring was something we could offshore. There's no arguing with a decision like that. The cost of offshore services such as this is compellingly competitive, and we've had good experiences with offshoring our help desk, network operations and development. Besides, my team wouldn't be laid off but instead freed to do more pressing things.

And so we found ourselves executing the statement of work and creating dozens of operating procedures, an escalation matrix, incident response training materials and a fairly comprehensive incident analysis reporting tool. We conducted countless interviews and endured weeks of knowledge transfers, several overseas trips, negotiations, dog and pony shows and other activities, bringing us to where we are now: disappointed.

The thing about SIEM analysts is that they need to be judicious. Hundreds of devices report their logs to the SIEM, which currently is identifying about 1 billion events per quarter. Clearly, we can't investigate them all, and in fact we average about 1,500 events of interest per quarter. We do far fewer investigations than that, though, since some events that wouldn't be worthy of investigation on their own become of interest when they are correlated with others to show a pattern. But when you're dealing with a billion events every quarter, managing false positives is essential, and that's why it's necessary to properly tune both the SIEM and the analysts.
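The correlation idea described above — events that are harmless on their own becoming interesting as part of a pattern — can be sketched as a simple grouping rule. This is only an illustration; the event types and the pattern threshold are hypothetical, not the rules my team actually runs.

```python
from collections import defaultdict

# Hypothetical threshold: how many distinct low-severity event types
# clustered around one source make the source worth investigating.
PATTERN_THRESHOLD = 3

def correlate(events, threshold=PATTERN_THRESHOLD):
    """events: iterable of (src_ip, event_type) pairs.

    Returns the set of sources showing a multi-signal pattern --
    several different low-severity event types from the same source --
    even though no single event would be escalated on its own.
    """
    types_by_src = defaultdict(set)
    for src, etype in events:
        types_by_src[src].add(etype)
    return {src for src, types in types_by_src.items()
            if len(types) >= threshold}
```

A source that trips only one rule stays below the radar; one that trips several different rules surfaces for investigation.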

For example, like most companies, mine is constantly getting scanned. Many years ago, when port scanning was less common, a single incident might have warranted action. Today, though, if we responded to every port scan, we would need an army of analysts. Therefore, we've created rules to trigger only when a port scan exceeds certain thresholds, suggesting that someone is being very persistent, versus just "driving by." So when one of the new offshore analysts began escalating dozens of port scans that didn't meet our pre-defined criteria, we had to spend time tuning the analyst.
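The port-scan rule works roughly like a sliding-window count: escalate only when one source probes more distinct ports in a short window than a casual "drive-by" would. A minimal sketch, with made-up threshold and window values (the article doesn't give the real criteria):

```python
from collections import defaultdict

# Hypothetical escalation criteria -- illustrative values only.
PORTS_THRESHOLD = 100   # distinct ports probed by a single source
WINDOW_SECONDS = 300    # within a five-minute sliding window

def persistent_scanners(events, ports_threshold=PORTS_THRESHOLD,
                        window=WINDOW_SECONDS):
    """Return source IPs whose scanning exceeds the escalation threshold.

    `events` is an iterable of (timestamp, src_ip, dst_port) tuples,
    assumed sorted by timestamp.
    """
    seen = defaultdict(list)   # src_ip -> [(timestamp, dst_port), ...]
    flagged = set()
    for ts, src, port in events:
        seen[src].append((ts, port))
        # Keep only hits still inside the sliding window.
        seen[src] = hits = [(t, p) for t, p in seen[src]
                            if ts - t <= window]
        if len({p for _, p in hits}) > ports_threshold:
            flagged.add(src)   # persistent scan: worth escalating
    return flagged
```

Scans below the threshold never surface, which is exactly the tuning the new analysts were bypassing when they escalated drive-by scans by hand.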

Another example had to do with one of the analysts responding to a single FTP "change working directory" alert. This event triggers when someone who has accessed an FTP server attempts to enter more than a certain number of characters when changing to a different working directory. Such activity can be a prelude to a denial-of-service or buffer overflow attack. However, none of our FTP servers are vulnerable to attacks related to misuse of the FTP CWD command. The analyst should have made sure of this before escalating the event.
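The missing step in that escalation — checking whether the target is actually vulnerable before raising the alarm — can be expressed as a simple guard. The length limit and the asset list here are hypothetical placeholders, not our real inventory:

```python
# Hypothetical values: the real CWD length threshold and the
# vulnerable-asset inventory are not given in the article.
MAX_CWD_LEN = 255
VULNERABLE_FTP_HOSTS: set = set()   # none of our FTP servers qualify

def should_escalate_cwd_alert(host, cwd_arg,
                              max_len=MAX_CWD_LEN,
                              vulnerable=VULNERABLE_FTP_HOSTS):
    """Escalate a long-CWD alert only when the target host is actually
    susceptible to FTP CWD abuse -- the check the analyst skipped.

    An oversized CWD argument is only a prelude to a buffer overflow
    or denial of service if the server can be exploited that way.
    """
    return len(cwd_arg) > max_len and host in vulnerable
```

With an empty vulnerable-host list, the alert never escalates, no matter how long the CWD argument — which is the conclusion the analyst should have reached.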

The problem with false positives, of course, is that my staff, instead of having extra time to respond to real events now that they have been freed from SIEM monitoring, are losing time by having to look into events that don't warrant the attention. And some of these ludicrous security incidents are forwarded to IT operations team members for action. If they see too many of these false positives, they could end up with "the boy who cried wolf" syndrome and begin ignoring incidents. Besides all that, false positives are tracked; when you have a lot of them, management receives the wrong message.

All we can do, it seems, is to hold lots of training sessions and reduce the number of false positives coming from overseas. Just as important, of course, is the need to do that while making sure that the analysts don't miss the events that really do deserve our attention. When you think about it, going too far the other way would be even worse.

This week's journal is written by a real security manager, "Mathias Thurman," whose name and employer have been disguised for obvious reasons. Contact him at mathias_thurman@yahoo.com.


This story, "Tuning the security analysts" was originally published by Computerworld.
