IT Metrics For Success
When done right,
project-management measurement can boost a company's productivity
By Deborah Asbrand
(http://www.informationweek.com/)
Despite years of coaxing, IT organizations remain largely reluctant to
institute project-management metrics programs. IT departments at small and
midsize companies in particular seem disinclined to adopt policies to
track the performance of development projects. With limited management
resources available and the pressure to quickly deploy new business
solutions, productivity measurement often becomes a low priority for many
organizations.
But some companies are bucking this trend--and
reaping the benefits of improved productivity.
Belk Inc., a
national retailer, was forced to adopt productivity metrics as a means of
staving off devastating system failures. Conda Lashley, the veteran IT
consultant whom Belk hired, was used to nursing client organizations
through crashes that periodically downed their systems. But nothing could
prepare Lashley for the failure rate at Belk. Soon after joining the
company as senior VP for systems development, Lashley discovered that
Belk's batch systems went down an astounding 800 times a month.
The
Charlotte, N.C., outfit, a private company with estimated annual revenue
of $1.7 billion, paid a heavy price for the constant bandaging: In 1997,
Belk spent $1.1 million of its $30 million IT budget on unplanned
maintenance.
To steady the systems, Lashley instituted a series of
tracking measures. Programmers began logging their time. Software function
points were carefully counted in application development projects. Belk
compared its cycle time, defect rates, and productivity with competitors'
figures. And systems managers were required to draw up blueprints for
reducing the crashes--with the results reviewed in their performance
evaluations.
Costs Under Control
The transition to tracking the IT department's performance was painful but
worthwhile,
Lashley says. Belk's systems are becoming more stable--monthly disruptions
are down to 480 incidents, a figure Lashley hopes to slash by another 30%
this year. Unplanned maintenance costs also have been brought under
control. Belk has cut unplanned maintenance expenses by $800,000 so far
this year.
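The figures above imply a simple trajectory; a minimal sketch, assuming the
planned 30% cut applies to the current monthly count:

```python
# Belk's numbers, from the article: 800 monthly batch failures initially,
# 480 now, with another 30% reduction targeted this year.
initial_incidents = 800
current_incidents = 480
target_cut = 0.30

reduction_so_far = 1 - current_incidents / initial_incidents
target_incidents = current_incidents * (1 - target_cut)
print(f"Reduction so far: {reduction_so_far:.0%}")          # 40%
print(f"Target monthly incidents: {target_incidents:.0f}")  # 336
```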
Among IT organizations, however, Belk's is the
exception. Stern warnings and parental cajoling from consultants and
academics on the importance of productivity measurements have been largely
ignored for years by many organizations' IT executives. The adoption rates
for productivity measurement tools and procedures are bleak. The perceived
ease of object-oriented application design and the Wild West atmosphere of
Internet development further discourage the discipline that the use of
metrics requires, say observers. Still, experts hold out hope that more
organizations will get the message about the importance of tracking
performance metrics.
The advocates of application development
metrics have their work cut out for them. In the United States, fewer than
9% of companies use metrics to measure and monitor software development,
according to a 1997 poll of 1,100 companies by market research firm Rubin
Systems Inc. That's not surprising given metrics' dismal ranking among
technology priorities. On a list of 19 issues that included recruitment,
productivity, and project management, metrics rated dead
last.
What's more, three out of four measurement programs fail,
according to research by the Yankee Group, translating into 1.5% to 3.7%
of IT expenditures being wasted. Many IT management teams lack either the
experience or the will to ensure the success of a performance tracking
system. IT shops that succeed at measuring development output often bring
some outside consulting expertise onboard to get things rolling. Once the
initial implementation is under way, the consultants transfer knowledge to
the IT manager so that he or she can use the metrics in future
projects.
Why aren't more shops successful? "Application
development is poorly managed--period," says Alan Gonchar, president of
Compass America, a business-performance consulting group. Most
organizations do a poor job of time reporting, he says. Even basic
metrics--such as how and where programmers spend their time, and how many
lines of code the organization maintains--aren't done, he adds. One of the
first steps of year 2000 remediation, for example, is to figure out how
much of the system must be fixed. "If application development was properly
managed in the first place, you would already know that," Gonchar
says.
Indeed, metrics make a lot of sense. In other parts of a
company, the use of various performance yardsticks is routine. Sales
representatives are compensated based on their ability to meet quotas.
Marketing departments and other units that service internal customers
regularly track their billable hours in an effort to account for their
time and gauge their productivity. Profit and loss statements serve as
universal benchmarks for overall company performance.
Air Of Mystery
With failures so common, why is there such stubborn
resistance to applying a few measurements to application development,
which typically consumes 10% of overall IT budgets? For starters, software
development has long had an air of mystery about it. Software developers
are often viewed as creative types who should be left to their own
devices. Many developers who pride themselves on creating elegant
technical solutions chafe at the notion of measuring their
output.
Application development has historically bypassed
benchmarking efforts because it has been perceived as a back-office
function, particularly in large companies. "The key thing that has been
lacking in the software development arena is that it hasn't been looked at
as a necessary asset," says Dennis Huber, who, as VP of business
information and technology solutions for Sprint, oversees 2,200
programmers and contractors.
That fundamental lack of understanding
of where development fits into the corporate structure and how best to
manage it helps explain the cost-cutting roller coaster that IT
departments have experienced over the years. As the economy dips and
rises, so do budgets. When senior management cuts IT budgets, efforts in
time reporting, evaluations, and data-gathering--all of which should lead
to greater efficiencies--are among the first areas to be cut.
It's
an IT tradition that strikes many as paradoxical. "If I were a CFO, the
first thing I'd put in place would be a measurement system," says Malcolm
Slovin, VP and service director for performance engineering and
measurement strategies at Meta Group Inc., an IT advisory firm. "I'd need
to know inventory levels, backlogs, outflow, and what my competition is
doing. The last thing I would get rid of is measurement. In IT, it's the
opposite, which is a crazy thing when you think about it."
The
explosion of the Internet and electronic commerce has only worsened the
situation, convincing organizations that quick development times require
less analysis of procedures. "In the age of the Internet, people don't
think they need to do things" like measurement, says Howard Rubin, CEO of
Rubin Systems, and chairman of the computer sciences department at Hunter
College in New York. "There's a cowboy mentality that says you don't need
this stuff to execute the work."
Metrics often seem like so much
theory, worlds removed from the intense daily pressures and high stakes of
large IT organizations. Advocates of IT and software measurement say
efforts to coax businesses into adopting metrics have been hurt by the
academic nature of many techniques. Lines of code, function points, and
information economics have each held the title of newest and best
barometer of IT productivity. Function points are features and functions
that users request and receive in the applications the IT department
builds. In 1998, balanced business scorecards and business value metrics
are in vogue. These measurement theories track IT functions against the
value chain of the organization.
No Single Best Method
The problem is that there isn't a single best method, and
the various measurements offer little information without analysis.
Several years ago, the Mutual Life Insurance Co. of New York plowed
enormous resources into creating an inventory of function points from its
120 million lines of code. Was the effort useful? "Not really," says E.P.
Rogers, CIO for MONY. "It wasn't telling us what we wanted to know. It
gave us a lot of information. But there were such wide swings that it
wasn't terribly meaningful." Someone could spend five hours on a job that
had 100 function points, and someone else could spend 100 hours on a job
with five function points, he adds. The trouble is that raw data alone
doesn't tell IT organizations what they need to know.
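Rogers' example shows why raw function-point counts mislead without
analysis. A sketch using his hypothetical numbers (not MONY's actual
project data):

```python
# Productivity measured as function points per hour swings wildly when
# the raw counts ignore job complexity. Figures are Rogers' hypothetical
# example from the article.
jobs = [
    {"name": "job A", "function_points": 100, "hours": 5},
    {"name": "job B", "function_points": 5, "hours": 100},
]

for job in jobs:
    rate = job["function_points"] / job["hours"]
    print(f"{job['name']}: {rate:.2f} function points per hour")

# job A: 20.00 function points per hour
# job B: 0.05 function points per hour -- a 400x spread from raw data alone
```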
However,
sticking with the effort shows substantial payback. The Software
Engineering Institute's Capability Maturity Model estimates returns of
four- or five-to-one for successful metrics programs. The CMM is a set of
processes and procedures that assist development teams in progressing
upward through five levels of quality and achievement. Its adoption is
proceeding, albeit slowly, through large development teams.
Among
the companies that have adopted CMM is the U.S. banking division at EDS.
The division's returns using CMM are substantially higher than the 5:1
ratios seen across U.S. corporations, says chief technology officer Bill
Wilkerson. The unit's 200 programmers are split into teams of 10 to 100
members, and there are two CMM Level 2-certified teams.
But
convincing EDS clients of the value of CMM is another matter. The banks
and insurers among the division's 30 clients have been slow to accept
metrics. Of the 600 companies surveyed by the SEI in June 1997, only 3%
were financial institutions.
Given a 20% return, the 5% investment
of talent that CMM demands from most organizations seems a shrewd move,
Wilkerson argues. He estimates that financial institutions require 35
software engineers to support every $1 billion in assets, and that annual
overhead for each engineer is as much as $150,000 for taxable benefits,
office space, and computing horsepower. For an institution that employs
several thousand programmers, "if you can affect that by 20%, that's
compelling," Wilkerson says.
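Wilkerson's back-of-the-envelope math can be sketched as follows; the
3,000-programmer headcount is an illustrative assumption (the article says
only "several thousand"), while the other figures are his estimates:

```python
# Wilkerson's estimate: each software engineer costs up to $150,000 a year
# in benefits, office space, and computing. Headcount is an assumption.
overhead_per_engineer = 150_000  # annual overhead per engineer, dollars
programmers = 3_000              # assumed: "several thousand programmers"
improvement = 0.20               # the 20% effect Wilkerson cites

annual_overhead = programmers * overhead_per_engineer
savings = annual_overhead * improvement
print(f"Annual overhead: ${annual_overhead:,}")       # $450,000,000
print(f"Potential savings at 20%: ${savings:,.0f}")   # $90,000,000
```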
MONY agrees. After dissolving its
outsourcing agreement and bringing its IT operations back in-house last
year, the company began to rethink its IT function and how it was using
its 260 staff programmers. "We needed to know whether we were delivering
what the business needed us to deliver," says CIO Rogers. The stakes are
extraordinarily high: MONY's 1998 IT budget is $60 million, 50% higher
than the insurer's usual figure. This year's allocations to IT have been
increased to cover year 2000 fixes, as well as the costs associated with
its plan to become a public company.
Rather than adhering to one
measurement philosophy, MONY has combined several. MONY still tracks lines
of code and function points on larger projects, but it has stopped using
function points for code maintenance and support. It has implemented a
balanced business scorecard, a management framework that relies on a broad
range of indicators--customer perspective, internal processes, learning
and growth, and financials--to reveal whether the organization is moving
toward its strategic goals.
"We've focused on percentage of
projects completed on schedule and on budget, defect rates, and customer
satisfaction--issues that the customer is more worried about than internal
IT is," says Rogers. "Our percentage of projects delivered on time has
gone way up."
Sprint, too, has adopted an amalgam of function
points and balanced-scorecard methods to measure its software efforts. The
mixture is critical to success in a market as competitive as long-distance
services, says VP Huber. But so is the reality check that Sprint's
software effort gets by focusing on five interrelated areas: financials,
process metrics, employee metrics, customer metrics, and
leadership.
Pursuing just one area could hamstring development,
says Huber. For example, speed to market is a critical measure. "But if I
solve the problem for the marketplace and meet the customer's date and in
the process work people 99 hours a week, then I destroy morale," he
says.
Measuring Objects
Object-oriented and component-based design holds out tantalizing benefits
for development shops--eventually. But as programmers adjust to the impact
of reusable
code, organizations want to measure whether they're getting any real value
from the new technology. The newness of objects and components is
precisely the reason for trying to measure the technology's efficiency and
effectiveness, say metrics experts. Objects "don't eliminate the need for
measurement," says Chris Kemerer, a member of the information systems
faculty at the University of Pittsburgh. "It's not magic." In fact,
Kemerer says organizations' need for evaluation of object-oriented design
is a major driver behind the adoption of metrics programs by many
companies.
Retailer Belk will decide by November whether it will
make a large-scale investment in object-oriented design. This summer, the
company is evaluating the feasibility of harvesting some of the complex
software controls now used in its legacy systems--some of which are two
decades old--for use in component libraries.
Senior VP Lashley
expects reusable software to improve the quality of Belk's systems because
the programs have already been tested. He also expects reusable software
to allow programmers to deliver new applications more quickly. "Why build
something new when we already know this works and has been tested?" he
asks.
In software, as in all things, attitude is everything. Part
of the reason Huber's efforts at Sprint have been so successful is because
he sees software development as essential to Sprint's business, and he
conveys that to top management. "We're a phone company, but what it really
comes down to is we're a software company," he says. "Everything we do is
based on software. We're in one of the most competitive industries out
there. For us to be successful, we have to have application-development
metrics."
One of Huber's credos could be a mantra for all metrics
advocates: "If you keep doing what you're doing, you're going to keep
getting what you're getting."