Can new software testing frameworks bring us to provably correct software?

By Matthew Heusser, CIO

Brooks argues that any radical change to a single activity, such as eliminating testing by proving software correct, can only modestly decrease the cost of delivering an entire system. Instead of trying to automate the task of programming, something he equates to finding a "silver bullet" to slay the werewolf of runaway costs, Brooks suggests a collection of "bronze bullets": small improvements to each activity that combine for a large effect.

For proof of correctness, that means moving from a single technique to a set of techniques, each designed to improve confidence in the software--preferably in a way that can be automated and repeated with the next build.

Dave Nicolette, an independent consultant and agile coach in Cleveland, recently published his list of techniques in Delivering Provably Correct Code. Like Brooks, Nicolette gives up on the idea of formal proof and instead suggests four other approaches.

Leveraging higher-level frameworks. Once a tag cloud is demonstrated correct (enough) for a half-dozen customers, then testing the search becomes a matter of checking examples and inputs, not measuring font sizes with a ruler and calculator. The same can be said of a search indexer, GPS code library or key-value NoSQL tool. Focusing on delivering the right inputs to a library, and checking its outcomes and edge conditions, can be much less expensive than writing it from scratch. Nicolette suggests frameworks, code generators and libraries as three places to start.
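As a minimal sketch of that idea in Java, the JDK's TreeMap stands in here for an already-trusted key-value library (the class and test names are illustrative, not from Nicolette's article). The tests make no attempt to prove the data structure correct; they only check the inputs, outcomes and edge conditions the application actually relies on.

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;

import java.util.TreeMap;
import org.junit.Test;

// Illustrative sketch: the JDK's TreeMap plays the role of the already-trusted
// "higher-level framework" (think key-value store). We don't re-verify the tree;
// we only check the inputs, outcomes and edge conditions our code depends on.
public class KeyValueUsageTest {

    @Test
    public void storesAndRetrievesAValue() {
        TreeMap<String, String> store = new TreeMap<>();
        store.put("user:42", "Ada");
        assertEquals("Ada", store.get("user:42"));
    }

    @Test
    public void missingKeyIsAnEdgeConditionWeHandle() {
        TreeMap<String, String> store = new TreeMap<>();
        // Edge condition: an absent key must come back as null, not an exception.
        assertNull(store.get("user:does-not-exist"));
    }

    @Test
    public void lastWriteWinsOnDuplicateKeys() {
        TreeMap<String, String> store = new TreeMap<>();
        store.put("config:timeout", "30");
        store.put("config:timeout", "60");
        // Outcome we rely on: overwrite semantics, one entry per key.
        assertEquals("60", store.get("config:timeout"));
        assertEquals(1, store.size());
    }
}
```

Written this way, the checks run with every build, which is exactly the "automated and repeated" property the bronze-bullet approach calls for.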

Programming language constructs. Nicolette starts with an example of declaring a typed list in Java, to ensure that the list can only contain members of a specific type, and goes on to recommend static code analysis and automated unit tests. Static code analyzers can, for example, find errors such as an assignment in a comparison ("if a=b" instead of "if a==b"), and they are often free and easy to add to a continuous integration framework. Automated unit tests, most commonly written with an xUnit framework, provide examples of a class in operational use and are also straightforward to add to a build framework.
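Here is a short Java sketch of those two constructs, with assumptions flagged: the typed (generic) list gets its guarantee from the compiler, and the JUnit 4 annotations stand in for whatever xUnit flavor a team actually uses.

```java
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;
import org.junit.Test;

// Illustrative sketch, not from Nicolette's article: a typed list plus
// xUnit-style tests that double as examples of the class in operational use.
public class TypedListTest {

    @Test
    public void typedListOnlyAcceptsTheDeclaredType() {
        // The generic parameter makes the guarantee at compile time.
        List<Integer> scores = new ArrayList<>();
        scores.add(42);
        // scores.add("forty-two");  // would not compile: incompatible types
        assertEquals(1, scores.size());
        assertEquals(Integer.valueOf(42), scores.get(0));
    }

    @Test
    public void unitTestsDoubleAsLivingExamples() {
        List<String> names = new ArrayList<>();
        names.add("Ada");
        names.add("Grace");
        // The test is both a check and a small, always-current usage example
        // that a build server can rerun on every commit.
        assertEquals(2, names.size());
    }
}
```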

Collaborative work environments. Even if we could create a provably correct requirements document, there still remains the risk that the requirement itself contains an error, or that an architect misunderstood a phrase in an English document as he translated it into a program. Collaborative environments address this problem--not through code, but by having people sit together and talk through issues and examples before putting their hands on a keyboard. These types of environments reduce the risk of defects through conversation, examples, questions, answers and feedback.

