Artificial intelligence could be given “common sense” within ten years, according to Google’s leading AI scientist. Professor Geoff Hinton, the man tasked with developing intelligent operating systems for Google, claims the company is close to artificially replicating the human capacity for logic, natural conversation, and even emotive behaviours such as flirting, within a machine.
A new type of algorithm that Hinton and his team are working on, described as thought vectoring, encodes thoughts as sequences of numbers. Though still in its infancy, early experiments in thought vectoring have already suggested that human-like reasoning can be replicated by a computer or, as Hinton puts it, “Basically, they’ll have common sense.”
“There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” Hinton adds. “But there’s no reason why not. I think you can capture a thought by a vector.”
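The idea of capturing a thought as a vector can be made concrete with a toy sketch. The snippet below is purely illustrative and is not Hinton's method: real thought vectors are learned by neural networks, whereas here each word is simply hashed into a bucket of a fixed-length vector, so that sentences with similar wording end up with similar vectors (the function names `thought_vector` and `cosine` are our own, made up for this example).

```python
import hashlib
import math

def thought_vector(sentence, dim=16):
    # Toy stand-in for a learned embedding: hash each word into one of
    # `dim` buckets and count occurrences, yielding a vector of numbers.
    vec = [0.0] * dim
    for word in sentence.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, near 0 for unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

v1 = thought_vector("the cat sat on the mat")
v2 = thought_vector("the cat sat on a mat")
v3 = thought_vector("stock prices fell sharply today")

# Sentences sharing words land closer together in the vector space.
print(cosine(v1, v2))
print(cosine(v1, v3))
```

Even this crude scheme shows the core intuition: once a sentence is a vector, "how alike are these two thoughts?" becomes an ordinary geometric question.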
Hinton will present his findings to the Royal Society in London on Friday, and he thinks that thought vectoring will break computers out of being simply tools or appliances and into actual relationships with their owners. “It’s not that far-fetched,” he said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”
As an intellectual exercise, Hinton’s work is fascinating, but do we really want our computers to be anything other than functional tools? Does your laptop need a personality? Hinton isn’t scared of the development of AI itself – unlike Elon Musk, Apple co-founder Steve Wozniak, Bill Gates, and Professor Stephen Hawking – but he does fear how it could be misused.
“I’m more scared about the things that have already happened,” Hinton said. “The NSA is already bugging everything that everybody does. Each time there’s a new revelation from Snowden, you realise the extent of it.”
“I am scared that if you make the technology work better, you help the NSA misuse it more. I’d be more worried about that than about autonomous killer robots.”
Thank you, The Guardian, for providing us with this information.