Security professionals and developers don't think alike. And, yes, I am making a sweeping generalization with that observation. There are exceptions, of course. But for the most part the characterization is true, and that has a lot to do with why security professionals often can't comprehend developers' attitude toward security matters, and developers can't stomach the ways security professionals want to mess with their creations.
Developers look at systems, apps and other software tools and are impressed by the cool things they can do, and maybe by the economy with which those things were achieved. They marvel at features and innovation. In software parlance, they focus on their products' functional specifications (or user stories, for you agile folks).
Security professionals look at those same things and immediately analyze them for what can go awry. We have a healthy presumption that things will go wrong more often than not. We are always trying to anticipate how we can respond to the things that go wrong and thinking about how we can keep them from going wrong in the first place. For security professionals, the coolness factor isn't all that meaningful if it's overshadowed by the risk factor.
I spend a great deal of my time working directly with software developers, architects and other people whose focus is on building things. As a class, they are full of optimism, and that trait probably has a lot to do with why they are good at building things. On my side of the table, though, pessimism reigns, and the builders often complain that we security folk are always raining on their parade by constantly focusing on negative things.
But you cannot shake a security professional's default attitude: We assume everything is dangerous until proven safe. We know that when we live by that tenet, we end up with more secure results. Say that I'm presented with an app that takes user input. I am going to presume that the input data is poisonous and will not be shaken in that presumption until it can be proved to be safe. I know that software that immediately acts on input is often susceptible to injection attacks, cross-site scripting (XSS) and other security defects. So I demand proof that none of those things can happen. When I do this, we end up with software that's more resilient.
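The input-poisoning presumption can be made concrete. Here is a minimal sketch, using Python's standard `sqlite3` module and a made-up `users` table, of the difference between code that splices user input directly into a query (the kind of thing a security reviewer presumes is dangerous) and code that passes it as a bound parameter:

```python
import sqlite3

# In-memory database with a hypothetical users table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_unsafe(name):
    # Vulnerable: the input is spliced into the SQL text itself,
    # so a crafted value can change the query's meaning.
    query = "SELECT role FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized: the driver treats the input strictly as data,
    # never as SQL syntax.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

malicious = "nobody' OR '1'='1"
print(find_user_unsafe(malicious))  # the filter is bypassed; every row comes back
print(find_user_safe(malicious))    # no user has that literal name; nothing returns
```

The unsafe version hands the attacker's `OR '1'='1'` clause to the database as executable SQL; the safe version cannot be tricked that way, which is exactly the kind of proof a skeptical reviewer is asking for.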
So how do we instill in our junior software staff a healthy sense of mistrust? We need them to take a lesson from squirrels. Let me explain. When I ride my mountain bike, I often encounter deer, foxes, raccoons, squirrels and other wildlife. In all of them, the survival instinct is strong. As soon as they see me approaching, they presume me to be a threat and immediately scurry away. It's a reasonable reaction -- and not just because it's me.
And how do we get developers to emulate squirrels? One answer might be to expose them to a knowledge base of security issues in a way that they can internalize. Don't hand them a bunch of papers on SQL injection, XSS, etc., and hope for miracles.
I've found that developers, like most people, learn best by hands-on experience. I like exposing them to tools like OWASP's venerable WebGoat and having them work through exercises where they perform attacks like SQL injection and XSS themselves. Once they see what can go wrong when untrustworthy data inputs poison an application and get the application to misbehave in sometimes spectacular ways, they tend to internalize the issues thoroughly.
Most computer scientists who enter the workforce have had little security training, if any at all. When you hire these folks, invest the time and energy to show them firsthand what can go wrong. When they can see those sorts of things with their own eyes, they're far more likely to have the right sort of attitude about software security. You'll end up with not just developers who act a bit like squirrels (in a good way), but highly sarcastic squirrels, who will look at the requirements for a very cool piece of software, roll their eyes and say, "What could possibly go wrong?"
With more than 20 years in the information security field, Kenneth van Wyk has worked at Carnegie Mellon University's CERT/CC, the U.S. Department of Defense, Para-Protect and others. He has published two books on information security and is working on a third. He is the president and principal consultant at KRvW Associates LLC in Alexandria, Va.
This story, "If you want developers to give a hoot about security, take a lesson from the squirrels" was originally published by Computerworld.