Linux Code Security: Today's Top Code Quality Issue

Don Marti is chair of Open Source World (previously LinuxWorld). A Linux user since 1994, Don has been a writer, editor, professional services consultant, and conference organizer. Marti walks us through Linux security from the developer perspective and shares his thoughts on today’s biggest code quality issues, open source advantages and best practices for proprietary and OSS teams.

Q. So you’ve been coding with Linux since practically the dawn of the kernel. What is the appeal for you? Is Linux an obvious choice for certain developers and, if so, for what reasons?

A. The advantage of Linux is that you can keep your suppliers honest by being able to go elsewhere. Senescent IT companies either fail, or squeeze their users for short-term revenue at the expense of quality, or both. Linux lets the actual product survive a company’s end. The other advantage is that you can strip out the functionality you don’t need. Your security and quality risks are lower when you can turn stuff off, either at the source code level or the install level.

Q. In the time that you’ve been involved with Linux, have you observed whether the “strength” of the code base has grown or changed?

A. I’ve been running Linux since 1994, and in that time almost everything has been rewritten a couple of times. The complexity has grown a lot, but the level of work you can get done with an out-of-the-box install has grown way more, so I can’t complain.

Q. From your experience, what has been the Linux community’s perception of quality and security in the Linux code base? Has concern been voiced and addressed, and if so, how?

A. There has always been tension between fast feature changes and code quality. Everybody has a different comfort level, and almost all users want a version that’s stable and tested and never changes — except, of course, for the one essential feature I want.

Q. Has concern for Linux code security been tackled from a community manager perspective?

A. For Linux proper, the kernel, no. There’s no “community manager” for the kernel developers, and I wouldn’t wish that job on anybody.

If you expand the question to include all the software subsystems that go into a “Linux” install, then yes. Many of them have community managers and well-established security protocols.

The active distributions are doing a good job on security advisories and software update services, but users who choose to run software outside a distribution’s package manager are still going to be playing the same security game that users of other OSs have to play. In the long run, as more and more software gets delivered through the distribution, the administrator’s job gets easier, but running a full-service distribution gets harder.

Q. As you know, Coverity is a big advocate of software integrity, by which we mean addressing performance, quality and security as a holistic effort. That philosophy underlies the Scan Project. What do you think about these concerns? Does any one of them trump the others?

A. If you don't have functionality, including performance, security doesn't matter because nobody wants the software. If you don't have quality, you don't know if you have security or not. The difference between just a bug and a security bug is somebody coming up with a way to exploit it.

Being able to test and deploy a bug-fixed version is critical.

Besides performance, quality, and security, another factor to look at is usability. If a user can easily turn off some unused functionality in an application, that could cut risks. Usability in administration tools is a security feature — make administration hard, and time-pressed administrators will just open stuff up until the errors stop coming.
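One concrete shape for that kind of off switch is a configuration-driven feature flag, where a disabled feature is never wired up at all. Here is a minimal sketch in Python; the feature names and the idea of loading them from a config file at startup are hypothetical:

    # Hypothetical feature flags, e.g. loaded from a config file at startup.
    FEATURES = {
        "discussion_board": False,  # off: forum routes never registered
        "file_upload": False,       # off: upload-parsing code unreachable
        "search": True,
    }

    def feature_enabled(name: str) -> bool:
        # Unknown features default to off, the safe direction.
        return FEATURES.get(name, False)

    if feature_enabled("file_upload"):
        pass  # register the upload handler; skipped entirely when disabled

The point is that disabled code paths are not merely hidden from the UI; they are never reachable, so their bugs are not exploitable.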

(A lot of malware spreads because the instructions for spreading it are less confusing than many of the instructions for using legit software. The secure way to do something and the easy way to do it need to overlap as much as possible.)

Q. As a developer, what do you see as the biggest code quality issues/concerns?

A. Web applications. It’s much, much easier to turn your static HTML site into an application — a fun discussion board, game, mashup, or whatever — than it is to learn about cross-site scripting attacks, SQL injection, and all the risks of running a web service. The web opened up development to lots of people, but you can’t do a new developer’s security learning for him or her.
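To make the injection risk concrete, here is a minimal sketch using Python’s built-in sqlite3 module; the schema and the attack string are hypothetical, but the defense (binding user input as a parameter instead of pasting it into the SQL string) is the standard one:

    import sqlite3

    # Hypothetical schema for illustration.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "alice@example.com"))

    user_input = "alice' OR '1'='1"  # a classic injection attempt

    # Vulnerable: string formatting lets the input rewrite the query:
    #   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

    # Safe: the driver binds the value as data, never as SQL.
    rows = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())  # [] -- the attack string matches no real name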

It’s going to be a challenge for the authors and packagers of infrastructure software to give all those creative but inexperienced web developers a safer environment. Virtualization and SELinux work are both important here, but again the challenge is making the administration straightforward enough that people really use it.

Q. In your opinion, is there an open source advantage when it comes to maintaining code integrity?

A. Yes, for end users who do their homework. If you just download an open source application, deploy it, and forget it, there’s no real security advantage.

If you have software installed, you need a source of trusted updates. You could have an individual download that scored very high on any integrity metric you want at the time you downloaded it, but if there’s one exploitable flaw discovered later, it’s a problem for you.
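Verifying what you downloaded is the easy half of that. A minimal sketch, assuming the publisher distributes a SHA-256 checksum over a trusted channel; the file name and digest below are placeholders:

    import hashlib

    # Placeholders: substitute the real file and the publisher's digest.
    EXPECTED_SHA256 = "0" * 64
    PATH = "downloaded-app.tar.gz"

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    if sha256_of(PATH) != EXPECTED_SHA256:
        raise SystemExit("checksum mismatch: do not install")

Passing this check only proves the bits match what was published at that moment; it says nothing about a flaw disclosed next week.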

You can’t just look at integrity at one point in time. First, you have to be able to practice “software avoidance” by configuring your system to remove or disable unused software. Then, for the software you do need, you have to think about the whole process by which you would become aware of a problem and fix it.

Q. What could proprietary software development learn from open source development?

A. Open source projects are often more willing than proprietary ones to throw out a homegrown component in favor of a better-tested, externally developed one, a good practice that contributors to open source communities generally understand.

The main advantage open source has is culture and training. Proprietary vendors should encourage employees to participate in open source projects that don’t conflict with their work NDAs. (Having employees in the open source scene is also a good recruiting tool.)

Some proprietary companies are catching on already: look at what Apple is doing with SQLite as a storage back-end for proprietary software, and see if the same approach makes sense for you.

Q. What best practices do you feel the OSS community and proprietary software developers should adopt from each other with respect to code integrity?

A. Everybody should be using a distributed revision control system with strong hashes for all changes back to the beginning of the project, and everyone who’s allowed to see the code should have a full tree checked out. There’s no excuse for “one server gets compromised, we don’t know what got changed.”

Anyone on the project should be able to do a full integrity check, build, and test quickly.
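As a sketch of that workflow, assuming Git as the distributed revision control system and a Makefile-driven build (both are assumptions, not anything prescribed here):

    import subprocess

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Verify the object store: every commit back to the project root is
    # covered by the hash chain, so tampering on this clone is detectable.
    run("git", "fsck", "--full", "--strict")

    # Build and test from the verified tree (hypothetical make targets).
    run("make", "clean")
    run("make")
    run("make", "test")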

Q. Any final thoughts on what’s next for software development?

A. The biggest change, starting in the mid/late 1990s, was the rise of open source companies. Now that there’s so much “commercial open source,” you can’t really use “commercial” as the opposite of “open source” anymore. And many vendors with an essentially proprietary business model will release the parts customers don’t pay for as open source.

Very few users are going to be all one or all the other, so there won’t be much value in being the best in one category if there’s an excellent choice on the other side.

Everyone is going to be using outside-developed components as much as possible, with the level of “possible” defined by how well you can fit test-driven development and distributed revision control into your processes.
