In April 1999, one of Capital Blue Cross's health-care insurance plans had been in the field for three years but hadn't performed as well as expected for the Harrisburg, Pa.-based company. The ratio of premiums to claims payments wasn't meeting historic norms. In order to revamp the product features or pricing to boost performance, the company needed to understand why it was underperforming. To do this, it needed to analyze relational customer usage data -- medical claims, drug claims and demographics of its 1.5 million customers -- to figure out the problem and where to make adjustments. Unfortunately, that data was dispersed in transactional systems, stored in flat files. It had been a struggle to make meaningful deductions using existing methods of reentering transactional flat file data into spreadsheets.
External deadlines added to the pressure. Before Capital could make any product modifications, it had to submit the proposed changes for approval by state regulators, and the submission cutoff date was February 2000 -- only 10 months away. Miss the date, and Capital would have to wait six months to a year for another chance to submit.
Because any change to the insurance product would affect the finance and operations groups and involve data, the major stakeholders were the CFO, the executive vice president of operations, and CIO and Senior Vice President Ted DellaVecchia. (DellaVecchia recently left this post to become CIO and senior vice president at Starbucks Coffee Co.) These three, plus members from the user teams, sat down last April to do the Concept Exploration, facilitated by Director of Quality Mick Meckler and his team.
Identify the Business Need
The stakeholders came to the discussion already knowing they needed better extraction and analysis of usage data in order to understand product shortcomings and recommend improvements. "We hadn't gotten at the answers and the underlying issues," DellaVecchia says. "The true facts weren't there to work with."
After listening to input from the user teams, the stakeholders proposed three options. One was to persevere with the current manual method of pulling data from flat files via ad hoc reports and retyping it into spreadsheets. This wasn't merely a straw-man alternative, DellaVecchia points out.
The second option was to write a program to dynamically mine the needed data from Capital's customer information control system (CICS). While the system was processing claims, for instance, the program would pull out up-to-the-minute data at a given point in time for users to analyze. (In retrospect, this option probably would have proved technologically impractical, DellaVecchia says, but it was a legitimate alternative at the time.)
The third alternative was to develop a decision-support system to allow users to make relational queries from a data mart containing a replication of the relevant claims and customer data.
Each of these alternatives was evaluated on cost, benefits, risks and intangibles.
Estimate Cost Versus Value
Capital's valuation incorporates cost and payback in dollars, the relative quality of the outputs the systems would deliver to users, the levels of risk and the intangibles.
Dollars: Capital bases cost on dollars that would be spent on resources and time, including hardware and software licenses and development time spent by users, IT staff and consultants (calculated from fully weighted compensation for full-time equivalents). In this case, the team chose to factor costs over five years to avoid the fallacy of choosing an alternative based on misleading short-term costs.
The team came up with initial cost estimates quickly in order to expedite the analysis and choice of alternatives. After the analysis period, when the final approach was selected, Capital did additional ROI analysis to help establish a budget for the initiative. This budgeting process took four to five weeks because of the time needed to research the vendor's technology offerings. Because this project was expected to be expensive, extra care was devoted to this part of the analysis (the final budget for the project amounted to 4 percent of Capital's annual IT budget).
To estimate dollar payback, the team used numbers from two areas: internal savings and the return that the health-care plan modifications would generate in the field. Depending on the option considered, internal savings could derive from the number of IT people freed up from support and redeployed, unnecessary software licenses that could be jettisoned and reduced need for hardware such as direct access storage devices (DASDs). Pursuing the decision-support option and its requisite data mart would also yield savings and cost avoidance by allowing Capital to cancel development of a costlier, redundant, less-focused data warehouse project underway at the time.
Estimates of health-care plan returns from the field were based on historical "medical/loss ratios," a metric insurers use to calculate premiums coming in against claim payments going out. If plan adjustments resulted in the current ratio improving to meet or exceed Capital's traditional ratio, the resulting value would be directly attributable to the new system. The difference between the current ratio and the target ratio was the number applied to the ROI estimate, DellaVecchia explains. He admits that there is a big assumption behind this number -- that the marketplace would react as anticipated to any changes in the health-care plan.
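The ratio arithmetic described above can be sketched as follows. The article reports no actual figures, so every dollar amount and ratio below is an invented placeholder used only to illustrate the calculation.

```python
# Hypothetical sketch of the medical/loss-ratio payback estimate described
# above. All dollar figures and ratios are invented placeholders.

def medical_loss_ratio(claims_paid: float, premiums_earned: float) -> float:
    """Claims paid out as a fraction of premiums collected."""
    return claims_paid / premiums_earned

def attributable_value(premiums: float, current_ratio: float,
                       target_ratio: float) -> float:
    """Dollar value credited to the new system if plan changes move the
    ratio from current_ratio down to the traditional target_ratio."""
    return premiums * (current_ratio - target_ratio)

premiums = 100_000_000                                # assumed annual premiums
current = medical_loss_ratio(88_000_000, premiums)    # 0.88
target = 0.84                                         # assumed historical norm
print(attributable_value(premiums, current, target))  # roughly 4 million
```

The difference between the two ratios, applied to premium volume, is the number that would feed the ROI estimate -- which is why the market-reaction assumption DellaVecchia flags matters so much.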
The manual, existing method came up high on cost because of the long-term expenses of writing and maintaining ad hoc data reporting programs and maintaining staff with CICS expertise, a skill set DellaVecchia wanted to transition away from.
The second option, writing a dynamic transactional data extraction program, would also incur high costs because the program would have to be ever-changing in order to accommodate user requests for different slices of data.
The third option, the decision-support data mart system, had the highest short-term costs because of the pricey software license and the costs of hiring consultants to assist in the implementation.
The payback calculations were as follows: The decision-support system option would return 12 percent annually over five years, based on anticipated internal savings and improvement in the medical/loss ratio. Capital did not bother to spend much time calculating full returns for the other two options because it became apparent that they were far outclassed by the decision-support system based on other factors. "If all three options were equally viable, we would have done a full business case for each," DellaVecchia says. In this instance, it didn't make sense to expend that effort.
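The 12 percent figure can be reproduced with simple annualization arithmetic; the cost and benefit totals below are assumptions chosen only to make the calculation concrete.

```python
# Simple annualized-return sketch for the five-year payback comparison.
# The 12 percent result is the article's figure; the inputs are assumed.

def annualized_roi(total_benefit: float, total_cost: float, years: int) -> float:
    """Net benefit over cost, spread evenly across the evaluation period."""
    return (total_benefit - total_cost) / total_cost / years

cost = 5_000_000       # assumed five-year cost of the decision-support option
benefit = 8_000_000    # assumed savings plus medical/loss-ratio improvement
print(f"{annualized_roi(benefit, cost, 5):.0%} per year")  # 12% per year
```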
Quality: This is a measure of the data's accuracy and its relative value to the business functions using it for management decisions. For different projects, the impact of quality may vary, increasing or decreasing its importance in the Concept Exploration equation. In this instance, quality was the key decision factor -- without high-quality data to base decisions on, the health-care plan changes might be ineffective, or even worsen the situation. Best of breed was important to this effort, but if the company had wanted a quick-and-dirty, one-time solution, quality might not have been weighted as heavily.
With the manual method, inaccurate rekeying of report data into spreadsheets could easily undermine data integrity, and the processes would be virtually unauditable. For the transactional extraction program alternative, the data would presumably be accurate, but it would not be practical and perhaps not even possible for the program to provide relationships between the data, reducing its value. The decision-support data mart, because of its auditable processes, data cleansing and relational capability, would yield the highest level of data quality, making it much more likely that user decision making would be sound.
Risk: The ideal risk rating for any alternative under Concept Exploration would be low-low, meaning a low risk of project failure and, in the case of failure, a low impact on the business. Capital calculates the impact of a failed system using estimates of the costs of downtime, lost sales, fines and even lawsuits. The risk of a given project failing is mainly relative -- how much more likely is this project to founder than that project, based on its complexity, past experience, the availability of appropriate skill sets, the viability of the vendors and so on.
In the case of the three data analysis alternatives, "they were all risky," says DellaVecchia. "Continuing to do things manually was risky because we might never have gotten the information we needed. Creating a proprietary program to extract data from a dynamic transaction system that wasn't auditable was risky. Building a new environment -- a relational data mart with a customer interface -- would be risky because we had never done it before." And the tight timing mandated by the state regulators upped the ante for all three options. Each one rated a medium-medium risk, so this category of evaluation was not a differentiator.
Intangibles: The Concept Exploration team did not identify any intangible benefits for the alternative approaches. "The other differentiators were so overwhelming, we didn't feel we needed to consider intangibles," Meckler says. When the other decision factors seem relatively equal, intangibles get more consideration to help guide the decision.
In hindsight, DellaVecchia sees several choice intangibles from the decision-support system, including an infectious demand for data marts throughout the enterprise and a generally stronger regard for the power of information for competitive advantage.
Quality emerged as the big differentiator in this Concept Exploration and, along with the dollar valuation, was the deciding factor in the stakeholders' choice to pursue the decision-support system option. The other two options, Meckler says, "boiled down to throwing more resources at the problem or automating the wrong thing." The decision-support data mart went live at press time, and Capital appears on track to meet the state regulator's deadline for change submissions this month.
Critical Analysis by Douglas Hubbard -- Be More Actuarial
Capital Blue Cross starts off on the right foot. Though the process of identifying business needs and options for fulfilling them is often ad hoc, Capital uses a deliberate, formal method. I would recommend that Capital and others employ this process not only when they have a specific problem to react to but also to identify strategic opportunities.
Capital felt that only one of the proposed alternatives -- the data mart -- deserved more measurement attention. DellaVecchia and his group show good intuition here; measurement matters most where the uncertainty is greatest. But it's important to devote as much attention to measuring benefits as to estimating costs. It's common in IT to spend much more time on costs, but that is usually a misallocation of effort. However uncertain you think you might be about costs, you are usually even more uncertain about benefits. Our research shows that reducing uncertainty about benefits, as opposed to costs, usually has a larger effect on the likelihood of making the best investment decision.
Categorizing a system's risk as high, medium or low isn't really a measurement of risk. Risk is too important to leave up to vague labels. IT groups -- even in insurance companies -- don't often speak about or measure risk in ways that are actuarially sound. Insurance company actuaries measure risk probabilistically, and IT should be able to borrow that discipline to quantify risk. Once you start measuring risk rather than labeling it, you might discover that return-on-investment expectations, such as Capital's 12 percent, may be too low to justify the risk. (See "Hurdling Risk," by Douglas Hubbard, CIO Enterprise, June 15, 1998.)
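A minimal illustration of what measuring risk probabilistically might look like, in the spirit of Hubbard's point: instead of a "medium" label, simulate ranges for cost and benefit and report a probability of loss. The distributions and parameter ranges below are invented for the sketch, not drawn from Capital's project.

```python
# Monte Carlo sketch: turn a risk "label" into a probability of loss.
# All distributions and parameter ranges are illustrative assumptions.
import random

def probability_of_loss(trials: int = 10_000, seed: int = 42) -> float:
    """Fraction of simulated outcomes in which benefits fail to cover costs."""
    rng = random.Random(seed)
    losses = 0
    for _ in range(trials):
        cost = rng.triangular(4e6, 7e6, 5e6)        # low, high, most likely
        benefit = rng.triangular(2e6, 12e6, 8e6)    # low, high, most likely
        if benefit < cost:
            losses += 1
    return losses / trials

print(f"Probability of negative return: {probability_of_loss():.0%}")
```

A number like this can then be weighed directly against the expected return, which is exactly the comparison a high/medium/low label cannot support.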
As for intangibles, if you think something falls under this classification, ask yourself, "How do I know when I have it?" If it is truly undetectable in any way, then it probably is of no consequence to your company. But if you can identify how to observe it, you are halfway to measuring it. Labeling something "intangible" removes some of the most important benefits of IT from the ROI calculation.
Even Capital's evaluation of quality is quantifiable. Quality of information should be represented in Capital's "dollars" estimate (ROI) as the economic consequences of better decisions resulting from reduced data errors. Capital's quality focus is important, but it needs to end up in the ROI calculation.
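Hubbard's suggestion -- expressing quality as the economic consequence of fewer data-driven mistakes -- could be sketched like this. Every input below is an assumption chosen for illustration, not a figure from Capital.

```python
# Hedged sketch of folding data quality into ROI: translate a reduced error
# rate into the expected annual cost of bad decisions avoided.
# All inputs are illustrative assumptions.

def quality_dollar_value(decisions_per_year: int,
                         error_rate_old: float,
                         error_rate_new: float,
                         cost_per_bad_decision: float) -> float:
    """Expected annual savings from decisions no longer based on bad data."""
    avoided = decisions_per_year * (error_rate_old - error_rate_new)
    return avoided * cost_per_bad_decision

print(quality_dollar_value(200, 0.05, 0.01, 50_000))  # roughly 400,000
```

Once quality is expressed this way, it rolls directly into the dollars estimate rather than standing beside it as a separate, unweighable factor.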
The bottom line: Capital's Concept Exploration focuses on identifying the right investment, and most firms would do well to emulate it. But these efforts will be more accurate and effective when the team spends at least as much time calculating benefits as it does cost, and finds a quantitative representation for risk, intangibles and quality that can be rolled up in the ROI figure.
This story, "Capital Blue Cross 80," was originally published by CIO.