
Painful lessons from IT outsourcing gone bad

Ephraim Schwartz | Aug. 26, 2008
Outsourcing has worked well for many companies, but it can also lead to business-damaging nightmares, says Larry Harding, founder and president of High Street Partners.

While there was plenty of blame to go around at EDS, the Navy took its share as well. One of the Navy's major problems was that the buck stopped nowhere: There was no single person or entity who could help EDS determine which legacy applications were needed and which could be excised. EDS, for example, found 11 different label-making applications, but no one could say which 10 to eliminate.

Most companies will never face outsourcing problems on the scale of the Navy and EDS. But many will face their own horrors on systems and projects just as critical. Consider these four modern examples and what lessons the companies painfully learned.

Horror No. 1: A medical firm's configuration management surprise

When Fran Schmidt, now a configuration engineer with Orasi Consulting, was told at her previous job in the medical industry to head up a team to outsource the existing in-house development and quality assurance IT departments, she faced the usual problems.

"There was one Indian fellow no one could understand over the phone. It took us months to figure out what he was saying," Schmidt recalls with a smile.

That was expected. But what the medical firm didn't count on was that its existing configuration management tool, Virtual Source Space, which worked fine locally, would be a total bust when used collaboratively between two groups 8,000 miles apart. It took the remote teams in India an average of 13 hours to get source-code updates. And with a time difference of about 11 hours, the outsourcers were effectively a full day's work behind.

"When we hit the [Send] button, there was no code done by the previous shift the entire time they were at work," recalls Schmidt. Not having immediate access to what was previously done the day before caused major problems for in-house developers. "All our progress schedules were behind. It's a domino effect with everyone playing catch-up." And the firm's customers paid the price: They were upset because they were not getting the same level of care that they expected.

The medical firm ultimately switched its configuration management tool to AccuRev, cutting the transoceanic update time from 13 hours to about an hour and a half. All told, it took around six months to recover from the disaster, Schmidt recalls.

The obvious lesson was the need to test your infrastructure before going live in an offshoring scenario. But the medical firm also learned another hard lesson: The desire to save big bucks so blinded the executives that they didn't realize they were replacing a group of people experienced with the product with a group of people seeing it for the first time. "We underestimated the loss of knowledge that would take place during the transition," Schmidt says.

 

