Managing IT (information technology) costs has always been of paramount importance in software projects, and more so in the last couple of years owing to shrinking corporate budgets globally, says Vanaja Arvind, Executive Director, Thinksoft Global Services Ltd, Chennai. The performance of, and return on, such investments are also scrutinised much more closely now than in the past, she adds, during a recent interaction with eWorld.
Alarmingly, over the last decade, software project failure costs in the US are estimated at around $50-80 billion annually (The Standish Group's report, 'CHAOS Summary 2009').
Going by the 1994-2009 numbers that Vanaja cites, project successes hover around a third, while the 'failed' category peaked at 40 per cent and the 'challenged' set at almost 50 per cent.
In 2010, in the US, software represents 34 per cent of enterprise technology spending, but nearly 55 per cent of the applications budget is consumed by maintenance and supporting ongoing operations, according to Forrester Research, she notes.
"At the same time there is a tremendous need to move away from legacy systems to reduce maintenance costs; hence there will be a need for new systems, introducing new features to improve competitiveness and time-to-market. But budgets for new project rollouts will be very limited and will come under greater control to reduce failure costs."
Excerpts from the interview.
What are the reasons for cost overrun?
The single-most important factor behind cost overrun is the resource cost or the project effort.
The outsourcing wave to low-cost destinations to a certain extent mitigated that risk for a few years. Even as the low-cost factor has become the norm for budgets, software projects are back to square one with ballooning effort!
It is critical to note that, as with an iceberg, this entire effort is not visible on the surface when budgeting is done, even with advanced estimation methods and techniques. The most visible part of the iceberg can be compared to its 'drift' and/or 'schedule delays.'
The less visible, underwater part represents 'poor quality resulting in significant rework,' 'cost overrun due to high rework and idle times,' and 'customer dissatisfaction.'
These are what sink 'Titanic' rollouts, unless there is a system for studying, and safeguarding against, what lies beneath the surface.
Indian service providers were largely insulated from this 'iceberg' as the engagement model was mostly one of time and material (T&M).
But of late there has been major resistance to such an engagement model from clients, who are insisting on a fixed-price or unit-based pricing model, both of which require domain expertise, strong project management, and Six Sigma estimation techniques to minimise revenue erosion.
Hence it has become imperative to analyse why software projects go out of control in terms of effort, and to avoid the same traps going forward.
Can you name the top three reasons why projects get out of hand?
First, software requirements definitions are ambiguous to begin with and keep changing throughout the project lifecycle, sometimes even after implementation, resulting in substantial rework. Poorly-defined applications contribute to a 66 per cent project failure rate, costing the US $30 billion every year, as per Forrester Research estimates.
Second, the lack of business domain knowledge and incomplete understanding of requirements by the development team contribute to increased rework throughout the project lifecycle. For instance, NIST (the National Institute of Standards and Technology) reports that identifying and correcting defects account for 80 per cent of development costs.
Our experience has shown that requirements review and gap analysis of functional specifications against business requirements by an independent domain team can save 25 per cent of rework costs in the entire software development lifecycle.
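As a back-of-the-envelope illustration of how those two figures interact, the arithmetic can be sketched as below. The budget amount is a hypothetical placeholder, not Thinksoft or NIST data; only the 80 per cent rework share (NIST) and the 25 per cent review saving (cited in the interview) come from the text.

```python
# Hypothetical illustration of the rework-savings arithmetic described above.
# The $1,000,000 budget is an assumed placeholder, not actual project data.

def rework_savings(total_cost, rework_share, review_saving_rate):
    """Estimate savings from an independent requirements review.

    total_cost          -- total development budget
    rework_share        -- fraction of the budget consumed by finding and
                           fixing defects (NIST puts this at 80 per cent)
    review_saving_rate  -- fraction of that rework avoided by an upfront
                           requirements review (the interview cites 25%)
    """
    rework_cost = total_cost * rework_share
    return rework_cost * review_saving_rate

# Example: a $1,000,000 project where 80% of cost is defect-related rework
saved = rework_savings(1_000_000, 0.80, 0.25)
print(f"Estimated saving: ${saved:,.0f}")  # Estimated saving: $200,000
```

On these assumed numbers, an upfront review that trims a quarter of the rework would free roughly a fifth of the whole budget, which is why the effort is invisible in estimates yet dominant in outcomes.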
Third, inefficiencies in managing the large and complex technical infrastructure by organisations result in ballooning resource idle times.
For example, test execution downtimes can arise due to infrastructure problems, leading to project delays.
Do timelines too get blamed, at times?
True, increased global competition forces software changes at a faster pace to retain the competitive edge, resulting in infeasible timelines being adopted.
It is not unusual, in many organisations, for marketing to announce the rollout dates for new product features, with IT expected to treat these as drop-dead dates.
This results in project requirements being documented in a hurry, triggering a vicious cycle of requirements changes that cascades down to all stages of the development cycle, engendering delays, rework, poor quality, and so on.
Also, there is an increasing trend of M&A (mergers and acquisitions) in the global market. A merger with another institution requires integration of systems that are fragmented and complex on both sides. Market pressures will necessitate quick fixes, which will result in errors and significant rework.
What about the manpower problems?
Yes, projects can be impacted by the deteriorating quality and productivity of manpower.
Rapidly-mobile IT resources lead to gaps in domain/product/application knowledge.
Sample this: 80 per cent of fresh graduates coming out of Indian colleges lack business skills and a good understanding of commercial applications; they face a steep learning curve before they become productive.
The mobility of technical resources is so high that they get no opportunity to build expertise; hence they continue to have low productivity even after two to three years in the market.
In spite of heavy 'front-ending' by domain experts, issues seem to crop up since the bulk of software development work is still done by technically qualified people with little appreciation of the intricacies of business processes and transaction flows.