IT STRATEGY - Just what a busy FD needs: another column about the millennium bug

It's far too late to do anything new to fix your Y2K-afflicted systems, says Richard Young. But making sure you're ready for anything in the run-up to 2000 is absolutely essential.

According to most of the experts, it’s pretty much too late to do anything more about the millennium bug. It’s a bit like a cold – you took the vitamin C, avoided the rain and got a few early nights, but the thing’s here now, so the best bet is to take a couple of aspirin, snuggle under the blankets and hope it’s gone in the morning.

The trouble with the millennium bug is that it’s still something of an enigma. As Taskforce 2000, the privately-run Y2K awareness organisation, said in a recent report, “No one knows what the impact of the problem will be. But it is necessary to make an assessment of outcomes so that organisations can implement effective contingency planning.” In other words, this is a chicken-and-egg situation. You know you’ve got to be prepared in case the thing hits you, but you can’t prepare properly unless you know how it will make an impact – and you don’t.

Rather ominously, even those companies that have gone to great lengths – and huge expense – to get their systems fixed may be far from safe. Software consultancy Cap Gemini has analysed the so-called “repaired” code of more than 100 US companies and found that around 10% of it still has serious errors, which makes contingency plans even more important but even harder to design. God forbid the “new” code has new and different glitches in it.

In line with Taskforce 2000’s warning about the importance of having contingency plans, it transpires that 85% of Fortune 1000 companies are building Y2K “command centres” to cope with the potential fallout from the date change. This is sensible – hell, it would be sensible to have a disaster recovery plan and off-site command and control even if Y2K had never existed – but it will be interesting to see how the centres perform in the heat of battle.

To quote again from the Taskforce 2000 report: “Death by a thousand cuts is more of a threat than one single catastrophic failure.” So perhaps the big nightmare – payroll ceasing to function or your accounting system shutting down – isn’t the real danger; in any case, if you’ve gone through the whole millennium bug testing process, those were probably the first things to have been vaccinated. Rather, it is the small, unnoticed parts of the IT system, the parts that are either taken for granted or seldom looked at, which will blindside and perhaps even cripple unwary companies.

In this vein, Taskforce 2000 warns against the focus that many organisations are placing on the actual night of the change. The big stories all seem to revolve around the price of hiring programmers and systems analysts for fire-fighting duties on the crucial evening. But Gartner Group has estimated that 60% of failures will occur this year, and other problems may not arise until well into 2000 or even beyond.

According to the Cap Gemini survey, 72% of the Fortune 1000 had suffered some kind of Y2K-related outage in their IT systems in April alone, a huge increase from the 55% recorded three months earlier. On top of this, some errors in data or code may already have occurred but not been noticed, either because the data hasn’t been used or because a program – which may only come into action at a specific time or for a particular task – hasn’t been run. Gartner warns that while a limited number of Y2K failures can be managed internally with no impact on business functionality, when the cumulative total reaches a critical mass, disruption will increase exponentially.
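For readers who want to see why “repaired” code can still fail, the short sketch below is purely illustrative – it is not taken from the article or from any of the systems Cap Gemini audited, and the function names and the 1930–2029 “pivot window” are assumptions made up for the example. It shows the classic two-digit-year comparison bug alongside a typical windowing fix, and why that fix can itself misread dates that fall outside the chosen window – a new and different glitch.

    /* Illustrative C sketch only: a two-digit-year bug and a "windowing" repair. */
    #include <stdio.h>

    /* Original logic: years stored as two digits, so "00" (2000) sorts before
     * "99" (1999) and an invoice due in 1999 never shows up as overdue in 2000. */
    int is_overdue_buggy(int due_yy, int today_yy) {
        return today_yy > due_yy;   /* 0 > 99 is false: the overdue invoice vanishes */
    }

    /* A common repair: a fixed pivot maps 00-29 to 2000-2029 and 30-99 to
     * 1930-1999. Correct for most records, but any genuine pre-1930 or
     * post-2029 date is silently misinterpreted. */
    int expand_year(int yy) {
        return (yy < 30) ? 2000 + yy : 1900 + yy;
    }

    int is_overdue_windowed(int due_yy, int today_yy) {
        return expand_year(today_yy) > expand_year(due_yy);
    }

    int main(void) {
        /* Due in 1999, checked in 2000. */
        printf("buggy:    %d\n", is_overdue_buggy(99, 0));     /* prints 0 - wrong */
        printf("windowed: %d\n", is_overdue_windowed(99, 0));  /* prints 1 - right */
        return 0;
    }

The point of the sketch is simply that remediation swaps one set of assumptions for another, which is exactly why contingency planning matters even for “fixed” systems.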
So the run-up to the new year is likely to be fraught, and the need for coordinated action within and between companies will be acute. Taskforce 2000 has outlined two worst-case scenarios, both with a high probability of occurring.

The “death by attrition” scenario rests on well-documented failure rates for IT projects, and what companies will have to do to make up for them: “A project to replace the finance system, one of a dozen within the Year 2000 programme to be delivered within three months, fails to go live on time. Time to recovery is estimated at two months but takes four. The project scope, already reduced to just five modules of the software, is further reduced to three modules. Some staff are diverted to manual procedures, some temporary staff are hired and some reports and automated reconciliations are scrapped. A month later, another project delivery date is missed, more routine reports are scrapped, a semi-manual work-around is agreed and more temporary staff are hired. At this point, build up of paper documents and intake of temporary staff has filled all available office space … and the backlog of paper documents for subsequent computer input is now unrecoverable by existing permanent staff within the financial year. A third failure occurs …” Grim indeed.

The second scenario, “death by a thousand cuts”, starts with the failure of a finance system in one or more companies in a supply chain, which ripples through to leave those companies most exposed to cashflow fluctuation high and dry. (Pre-millennium stockpiling by companies will almost certainly have a similar effect on cashflow-sensitive firms in the first months of 2000.) Taskforce 2000 also cites the airline industry as a potential victim of this scenario: “Virtually all individual failures in this tightly integrated industry produce an immediate deterioration in traffic flow, but one that is generally recoverable within two to three days. Multiple individual failures, even of different types but overlapping in time at, say, multiple major European airports, would quickly produce a high level of chaos with a correspondingly lengthy recovery time.”

So producing detailed contingency plans that allow for a huge variety of failures, alone and in combination, is now the key task for directors charged with business assurance. Knowing how to convert to manual procedures within the constraints of available manpower, having good contacts with all the members of your supply chains and sharing your contingency plans with them are all musts. It’s simply too late to be sure that any other solution will work in time.
