While Hawking and Musk have both said A.I. could lead to the annihilation of the human race, Dietterich, Gil and Darrell are quick to point out that artificial intelligence is not a threshold phenomenon.
"It's not like today they're not as powerful as people and then boom they're vastly more powerful than we are," said Dietterich. "We won't hit a threshold and wake up one day to find they've become super-intelligent, conscious or sentient."
Darrell, meanwhile, said he's glad there's enough concern to prompt a discussion of the issue.
"There are perils of each point," he said. "The peril of full autonomy is the science fiction idea where we cede control to some imaginary robotic or alien race. There's the peril of deciding to never use technology and then someone else overtakes us. There are no simple answers, but there are no simple fears. We shouldn't be blindly afraid of anything."