Tomorrow's computers will constantly improve their understanding of the data they work with, which in turn will help them provide users with more appropriate information, predicted the software mastermind behind IBM's Watson system.
Computers in the future "will learn through interacting with us. They will not necessarily require us to sit down and explicitly program them, but through continuous interaction with humans they will start to understand the kind of data and the kind of computation we need," said IBM Fellow David Ferrucci, who was IBM's principal investigator for Watson technologies. Ferrucci spoke at the IBM Smarter Computing Executive Forum, held Wednesday in New York.
"This notion of learning through collaboration and interaction is where we think computing is going," he said.
IBM's Watson project was an exercise for the company in building machines that can better anticipate user needs.
IBM researchers spent four years developing Watson, a supercomputer designed specifically to compete on the TV quiz show "Jeopardy," a contest that took place last year. On "Jeopardy," contestants are asked questions across a wide variety of topics.
"Jeopardy" proved to be a formidable challenge for IBM, even more so than building a chess-playing computer that could beat world chess champion Garry Kasparov, which IBM's Deep Blue did in 1997.
Chess is a finite mathematical problem -- albeit a very large math problem -- whereas succeeding on "Jeopardy" requires a deeper understanding of language, Ferrucci explained. With "Jeopardy," "we don't even know what the questions [are that] we will get," Ferrucci said. The information in a database cannot be "carefully aligned" ahead of time to the questions to be asked. Of course, Watson was loaded with many sources of information, such as encyclopedias and dictionaries. But Watson also needed to map the questions, which were often worded in ambiguous ways, to the data it had.
Complicating the task even further is that words, unlike chess pieces, can change their meaning depending on how they are used. "The usage of words really defines their meaning. And the usage of words happens in human context. This is not a mathematic, well-defined search space. Computers have to do a lot more analysis to get a handle of what these words mean," Ferrucci said.
The research team looked at 20,000 "Jeopardy" questions to determine their structures. They found that the vast majority of questions were too unpredictable to easily model. The only way to generate a plausible answer was to analyze the question in multiple ways, generate multiple answers, and then rank the probability that each answer is correct. And this is what Watson did.
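The pipeline Ferrucci describes can be sketched in a few lines of Python. This is an illustrative toy, not IBM's actual code: the candidate sources, evidence scorers, and the simple averaged confidence below are all stand-ins for the many analysis components a real question-answering system would use.

```python
# Toy sketch of Watson's approach as described above: generate several
# candidate answers for a clue, score each against multiple pieces of
# evidence, and rank the candidates by estimated confidence.

def generate_candidates(clue, sources):
    """Each source proposes candidate answers for the clue (hypothetical)."""
    candidates = set()
    for source in sources:
        candidates.update(source(clue))
    return candidates

def rank_answers(clue, sources, scorers):
    """Score every candidate with every evidence scorer, combine the scores
    into one confidence value, and return candidates sorted best-first."""
    ranked = []
    for answer in generate_candidates(clue, sources):
        # Simple average of evidence scores; a real system would learn
        # how to weight and combine many heterogeneous scorers.
        confidence = sum(scorer(clue, answer) for scorer in scorers) / len(scorers)
        ranked.append((answer, confidence))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

# Demonstration with made-up sources and scorers.
clue = "This 'father of the computer' designed the Analytical Engine."
sources = [
    lambda c: {"Charles Babbage", "Alan Turing"},   # e.g. encyclopedia lookup
    lambda c: {"Charles Babbage", "Ada Lovelace"},  # e.g. passage search
]
scorers = [
    lambda c, a: 1.0 if "Babbage" in a else 0.2,                    # stand-in evidence check
    lambda c, a: 0.8 if a != "Ada Lovelace" else 0.3,               # stand-in type check
]
best, confidence = rank_answers(clue, sources, scorers)[0]
```

The point of the structure is the one Ferrucci makes: no single interpretation of the clue is trusted. Every candidate survives to the ranking stage, and only the combined evidence decides which answer the system commits to.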