“It remains with the engineers to decide if the answer is right, wrong or relevant. An answer doesn’t mean it’s exactly right,” she said. “That’s why it’s a research assistant, rather than the decision maker.”
Watson is now in production at Woodside, where it is used by hundreds of people and has ingested 20,000 documents. This doesn’t happen at the push of a button, however, and Jordan said it took between six and eight months to train Watson.
“Part of that was it required a lot of collaboration from the business,” she said. “We needed a number of relevant questions engineers could ask. We asked the business and they gave us questions, but importantly, they came and gave up their time to train Watson.
“There were hours and hours of training sessions where Watson would go out with a number of rules-based machine learning algorithms and come back with proposed answers, then engineers would evaluate them. It’s then that we applied machine learning algorithms on top of those supervised scenarios where we know what the right answers should be, or know it’s not a good answer.”
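The loop Jordan describes is a standard supervised pattern: a rules-based system proposes candidate answers, engineers label them as good or bad, and a model is then fit to those labels. The sketch below is purely illustrative and not Woodside's or IBM's actual system; the overlap feature, the example data, and the threshold-learning step are all assumptions standing in for Watson's far richer pipeline.

```python
# Illustrative sketch of the supervised loop described above.
# All names and data are hypothetical, not the actual Watson pipeline.

def overlap_score(question, answer):
    """Fraction of question words appearing in the answer — a stand-in
    for the rules-based relevance features that propose candidates."""
    q = set(question.lower().split())
    a = set(answer.lower().split())
    return len(q & a) / len(q) if q else 0.0

def learn_threshold(labelled):
    """Pick the score threshold that best separates engineer-approved
    answers from rejected ones — a minimal supervised learning step."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted({overlap_score(q, a) for q, a, _ in labelled}):
        acc = sum((overlap_score(q, a) >= t) == good
                  for q, a, good in labelled) / len(labelled)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Engineers' evaluations of proposed answers: (question, answer, approved?)
labelled = [
    ("valve maintenance interval",
     "the valve maintenance interval is 12 months", True),
    ("valve maintenance interval",
     "weather report for the region", False),
    ("pump seal failure causes",
     "common pump seal failure causes include misalignment", True),
    ("pump seal failure causes",
     "annual staff survey results", False),
]

threshold = learn_threshold(labelled)
```

In a real deployment the hand-rolled threshold would be replaced by a proper classifier over many features, but the division of labour is the same: the rules propose, the engineers judge, and the model learns from those judgements.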
As a result, success is dependent on the enthusiasm and input of the rest of the organisation, Jordan said.
“The other lesson is that not everything needs to be that big,” she commented. “You don’t need an engagement adviser for every problem in the company. We are now deploying a number of other solutions with other Watson products which don’t require that amount of training and time.”
Ultimately, the investment in Watson is being quantified by the confidence it’s giving teams in data-driven decision making, Jordan said.
“This allows us to look at all of our history and expertise; we don’t reinvent the wheel and don’t want to make the same mistake again,” she added. “If we can avoid making the same mistake 10 years later, then there’s the value.”