Supercharging QA: The Power of Metrics

By Jeffery Gainer, executivebrief.com

Learn how you can solidify your process improvement strategy by collecting vital data at each quality checkpoint. Study these best practices for implementing a program that will stand the test of time!

In our previous article, Mitigating Risk with Quality Checkpoints, we described how to identify and exploit quality checkpoints. As mentioned there, a quality checkpoint is simply an opportunity within a project sub-process to identify and remove defects, or “bugs.” These defects may be found through walkthroughs, reviews, or testing.

Quality checkpoints, however, are only the beginning when it comes to quality assurance. Once checkpoints have been identified within the project lifecycle, these checkpoints can then be used to extract objective metrics data. A client once dubbed this end product the “Metrics Report Card” – an opportunity to use the Quality Checkpoint sub-process to capture objective metrics that can evaluate quality, scheduling, or productivity.

Why Measure?

What business value can your organization expect from a measurement program?

An effective measurement and analysis program can help evaluate current performance and quality as well as predict future performance and quality. Further, a measurement program can communicate these assessments to others, help an organization focus on the areas that require process improvement, and identify the steps that must be taken to implement those improvements.

Please keep in mind that measurement itself is not a goal. Measurement and the associated metrics are diagnostic tools, and therefore simply a means to an end. To be most effective, measurement activities must focus on corrective actions that build a foundation for continuous process improvement within an organization.

What to Measure?

At its simplest, “metrics” can be defined as a collection of measurements that provide information about a product and/or a process. An overly ambitious metrics program that attempts to measure too much can collapse under its own weight. Similar results can be expected if an organization does not first evaluate why a given metric will be collected or how it will be used.

For a new metrics initiative, I typically recommend a “back to basics” approach that concentrates on collecting the following four base types of metrics:
1. The size of the project or product
2. The effort required to make the product or complete a project
3. The quality of the product
4. The schedule of the project

Further, it is essential that stakeholders understand how they will use the metrics and how they will benefit from them. A well-designed metrics program will:
1. Provide motivational and tangible goals for management and staff
2. Provide information for managing current projects
3. Provide baseline information for planning future projects
4. Identify opportunities for improvement (i.e. continuous process improvement)
5. Eliminate rework

Types of Metrics – Project and Process

Two interrelated categories of raw metric data should be tracked, recorded, and assessed. From this raw data, derived information can demonstrate whether the project is on schedule, on budget, and within acceptable levels of quality.
The first category encompasses project measures: metrics concerned with the operational aspects of cost, schedule, staffing, and quality. Project measures also cover longer-term issues such as cost, schedule, and quality trend analysis, and future estimates.

The second category involves process measures: metrics that allow an organization to assess its capabilities, evaluate ongoing progress, and set goals for further process improvement.

The Basic Metrics Set

As noted above, the most common reason new metrics projects fail is that they try to measure too much.
I recommend that new metrics projects focus on the following four basic measures:

Schedule

Typical schedule metrics are estimated vs. actual schedule and performance measured against service level agreements. Historical data from previous projects can be used as input when estimating future projects.
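
To make this concrete, here is a minimal sketch, in Python, of deriving a schedule-variance metric from estimated vs. actual milestone dates. The milestone names and dates are hypothetical, invented purely for illustration.

    from datetime import date

    # Hypothetical milestones: (name, estimated completion, actual completion)
    milestones = [
        ("Requirements sign-off", date(2024, 2, 1), date(2024, 2, 8)),
        ("Design review", date(2024, 3, 15), date(2024, 3, 20)),
        ("Code complete", date(2024, 5, 1), date(2024, 5, 18)),
    ]

    for name, estimated, actual in milestones:
        variance = (actual - estimated).days  # positive = behind schedule
        print(f"{name}: {variance:+d} days vs. estimate")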

Size

A size metric might be the number of requirements, lines of code, function points, or cost measures. Other sizing measures can involve screens, objects, or the number of objects per screen. The important issue for sizing metrics, however, is to use a consistent counting approach throughout.
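
The counting-consistency point can be illustrated with a short Python sketch that applies one fixed rule, counting non-blank, non-comment lines, to a source file. The rule itself is an assumption chosen for the example; any rule works, provided it is applied identically to every product being compared.

    # Count non-blank, non-comment lines in a Python source file.
    # The counting rule is an illustrative assumption; what matters
    # is applying the same rule consistently across all projects.
    def count_source_lines(path):
        count = 0
        with open(path) as f:
            for line in f:
                stripped = line.strip()
                if stripped and not stripped.startswith("#"):
                    count += 1
        return count

    print(count_source_lines("billing_module.py"))  # hypothetical file name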

Effort

Work effort is typically measured in person-hours. Productivity metrics are then derived by measuring work effort against other base measures, such as size, scheduled activities, or defect counts.
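
As a worked example, the sketch below derives one common productivity figure by dividing a size measure (function points, in this case) by recorded person-hours. All figures are hypothetical.

    # Hypothetical size and effort data for two completed projects.
    projects = {
        "Project A": {"function_points": 120, "person_hours": 960},
        "Project B": {"function_points": 200, "person_hours": 1400},
    }

    for name, data in projects.items():
        productivity = data["function_points"] / data["person_hours"]
        print(f"{name}: {productivity:.3f} function points per person-hour")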

Quality

To measure the quality of a product, the density of defects is measured at various points during the lifecycle. Defect density is defined as the number of defects divided by a size measure, e.g., lines of code, requirements, use cases, function points, objects, or user interface screens. Over time, accumulated metrics can be used to track trends, including identifying areas for improvement and performing root cause analysis.
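
As a sketch of the calculation, the snippet below computes defect density at a few lifecycle checkpoints, using thousands of lines of code (KLOC) as the size measure. The checkpoint names and counts are hypothetical; the divisor could equally be requirements, function points, or screens, as long as it stays consistent.

    # Defect density = defects found / size measure (here, KLOC).
    # All checkpoint names and counts are hypothetical.
    checkpoints = [
        ("Code review", 35, 48.0),   # (checkpoint, defects found, KLOC)
        ("Unit test", 52, 48.0),
        ("System test", 19, 52.5),
    ]

    for name, defects, kloc in checkpoints:
        print(f"{name}: {defects / kloc:.2f} defects per KLOC")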

Best Practices for Implementing Analysis and Measurement

As previously mentioned, the most common mistake when setting up a metrics program is trying to measure too much too soon. Indeed, one key measure for evaluating whether a metrics program is successful is how long it has remained in place. Howard Rubin, a noted expert in this field, sets a minimum of two years as the first benchmark criterion for success. That said, nearly 78% of all software measurement programs fail or are abandoned within those first two years.

Lastly, most industry literature focuses on the technical aspects of software metrics but falls short in addressing the “cultural,” or human, side of implementing and maintaining a metrics program. These important issues must therefore be addressed as well.

To properly implement and maintain a metrics program, the following best practices should be followed:
1. Do not try to measure everything. Based on your organizational needs, establish a core set of raw metrics that your teams will track. Then, remember to stick to collecting these core measures alone.
2. Set clear objectives with deliverable-based milestones. Treat developing and maintaining a metrics program as a development project, with clearly defined phases of requirements, design, implementation, and reporting.
3. Contrast metrics against one another. Raw metric data becomes useful information only when it is contrasted against another metric, such as schedule, size, or actual vs. planned effort.
4. Make metrics collection and analysis an integral part of the organizational process. Although collecting metrics inevitably creates additional work, that work can be minimized to a few minutes a day. Wherever possible, automate metrics collection (a minimal sketch of this follows the list), and otherwise make the process as “transparent” as possible, or at least an integral part of the work process. As an aside, where a good process is already in place, the data will likely already be readily available.
5. Avoid blame. Make metrics unthreatening and impersonal. Establish early on that metrics will never be used punitively to “punish” under-performing individuals or teams, or as a decision source for downsizing. Personnel should not feel threatened by metrics collection: the purpose of collecting and analyzing metrics is to improve the software development process and its products, not to evaluate individuals, and using metrics to find and punish sub-par performers is a misuse of metrics.
6. Metrics belong to the organization. Rather than applying metrics to individuals, metrics should be applied only to projects, products, and the organization as a whole. Make project-level metrics, particularly quality metrics and project progress vs. plan, readily available to project stakeholders and participants (including front-line designers, developers and testers).
7. Promote metrics actively with an empowered process improvement group.
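
As promised in point 4 above, here is a minimal sketch of automating one piece of metrics collection: it walks a source tree, applies the same line-counting rule used earlier, and appends a dated size record to a running CSV log that can later be contrasted against effort or defect data. The directory and file names are hypothetical.

    import csv
    import os
    from datetime import date

    # Walk a source tree, count non-blank, non-comment Python lines,
    # and append one dated size record to a running CSV log.
    # "src" and "size_metrics.csv" are hypothetical names.
    def collect_size_metric(root="src", log_file="size_metrics.csv"):
        total = 0
        for dirpath, _dirs, files in os.walk(root):
            for file_name in files:
                if not file_name.endswith(".py"):
                    continue
                with open(os.path.join(dirpath, file_name)) as f:
                    for line in f:
                        stripped = line.strip()
                        if stripped and not stripped.startswith("#"):
                            total += 1
        with open(log_file, "a", newline="") as out:
            csv.writer(out).writerow([date.today().isoformat(), total])
        return total

    collect_size_metric()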

Jeffery Gainer specializes in software process improvement and automated testing, consulting and lecturing primarily in the United States and Canada, with some engagements in Siberia, Thailand, and Vietnam. He has written over 50 management and technical articles for numerous publications, including Cutter IT Journal, .NET Magazine, Enterprise Development, Visual Basic Programmer’s Journal, and The Denver Post. He can be contacted at gainerj@jeffgainer.com; http://www.jeffgainer.com.

ExecutiveBrief, the technology management resource for business leaders, offers proven tips, techniques, and action plans that companies can use to better manage people, processes and tools – the keys to improving their business performance. To learn more, please visit: www.executivebrief.com.
