‘Godfather of AI’ Geoffrey Hinton quits Google with stark warning

His concerns include misuse of AI by human operators and the ‘existential risk’ posed to us by intelligent tech. 

Artificial intelligence (AI) is in the news a lot these days. We recently reported that one company used AI-driven software to stop retailers from dumping 11m kg of food. Meanwhile, the London Office of Technology and Innovation (LOTI) is conducting a survey on how local government can and should use generative AI. Now debate has been reignited by the sensational departure from Google of neural network pioneer Geoffrey Hinton. 

Dr Hinton is an acclaimed cognitive psychologist and computer scientist – and a leading authority on AI. As far back as 1986, he co-authored an influential paper on using ‘back-propagation’ to train AI systems. In 2012, he worked with two of his students to develop AlexNet, a key development in the ability of AI to ‘see’ images. 

He has previously been optimistic about the potential of AI. For Hinton of all people to now step back from such technology is, of course, highly concerning. 

But after a decade working for Google on ever-better AI, he now feels the technology is ‘scary’ and liable to abuse by ‘bad actors’. He told the BBC this week that such technology could, for example, ‘allow authoritarian leaders to manipulate their electorates.’ 

People already have difficulty distinguishing photographs of real incidents from images generated entirely by AI.  

There is also the risk that AI will replace people in a variety of administrative jobs. This recognised phenomenon is known as ‘technological unemployment.’ Hinton told the New York Times that AI has the potential to take over more than just the ‘drudge work’ and could ‘upend the job market’ as a whole. 

What’s more, Hinton thinks that as well as the threat posed by other humans misusing this AI, there’s a more profound risk to us from AI itself. There is, he said, an ‘existential risk of what happens when these things get more intelligent than us.’ 

The problem is that AI systems learn in a very different way from us. We learn individually, but a chatbot can pick up a piece of information and immediately share it with 10,000 other AIs. The result is that such systems can learn a great deal extremely quickly. 
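To make the contrast concrete, here is a toy sketch of the idea Hinton describes: because digital models are just sets of numerical weights, whatever one copy learns can be duplicated to thousands of replicas instantly. All names here (`TinyModel`, `broadcast`) are hypothetical illustrations, not any real system's API.

```python
import copy

class TinyModel:
    """A stand-in for a neural network: its 'knowledge' is just numbers."""
    def __init__(self):
        self.weights = {"fact_score": 0.0}

    def learn(self, delta):
        # One copy adjusts its weights after seeing new data.
        self.weights["fact_score"] += delta

def broadcast(source, replicas):
    # Unlike a human teaching other humans one by one, the learned
    # weights can simply be copied to every replica at once.
    for r in replicas:
        r.weights = copy.deepcopy(source.weights)

teacher = TinyModel()
fleet = [TinyModel() for _ in range(10_000)]

teacher.learn(1.0)         # one model picks up a new piece of information
broadcast(teacher, fleet)  # ...and 10,000 copies now have it too
```

Human learning has no equivalent of `broadcast`: each of us has to acquire knowledge separately, which is why collective machine learning can outpace us so dramatically.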

In recent news, a House of Lords select committee asked why robotics and automation haven’t revolutionised farming – and the answer is revealing about developments in tech generally. 

