‘Godfather’ of Artificial Intelligence sees ‘danger’ in it

Geoffrey Hinton, who is known as the ‘Godfather’ of Artificial Intelligence (AI), has resigned from his position at tech giant Google so that he can speak more openly about the dangers of AI. Meanwhile, there is speculation that tech giants are silencing the most qualified people in the field, those who might inform the public about the technology’s possible adverse effects.

Back in 2012, Geoffrey Hinton and two of his students, Ilya Sutskever and Alex Krizhevsky, built a convolutional neural network that revolutionized the field of computer vision. Later that year, Hinton, Sutskever, and Krizhevsky turned their technology into a company, DNN Research, which Google bought at auction for US$44 million.

After more than a decade at Google, the 75-year-old Hinton, who has been dubbed the ‘Godfather of AI’, is leaving the tech giant.

In a tweet, Geoffrey Hinton said: “I left so that I could talk about the dangers of AI without considering how this impacts Google”.

A New York Times report first broke the news of Hinton’s departure. Speaking to the paper, Hinton said that the latest Artificial Intelligence (AI) boom, triggered by the sudden and immense popularity of ChatGPT and the growing prominence of Bing Chat and Google Bard, had made him worried.

The issues that concern Hinton range from the internet being inundated with AI-generated images, videos, and text to the point that people won’t know “what is true anymore”, to wide-scale impacts on jobs, and even to the threat of AI gaining unexpected properties as it begins to write and run its own code.

“The idea that this stuff could actually get smarter than people – a few people believed that”, he told the New York Times.

“But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously I no longer think that”.

Geoffrey Hinton was notably absent from the open letter calling for a six-month freeze on AI development to give regulators time to catch up, a letter whose many signatories adhere to the longtermist ideology. But with his time at Google officially over, Hinton is publicly warning that the arms race emerging between tech giants poses potentially unforeseen risks to humanity.

“I don’t think they should scale this up more until they have understood whether they can control it”, he said.

The arrival of ChatGPT sent shockwaves through Google, whose executives saw the AI-powered chatbot as a direct threat to the company’s lucrative search business.

When Microsoft announced it would add AI to Bing, Google quickly followed suit, much to shareholders’ chagrin.

The conflict between commercial interests and ethical AI development has been a long time coming.

Back in 2020, Google sacked prominent AI ethics researcher Timnit Gebru after she co-authored a paper outlining four main risks associated with developing large language models like those that power Bing Chat, ChatGPT, and Google Bard.

Those concerns were about environmental costs, embedding bias into AI, the decision to build systems that exist purely to serve business needs, and the potential for mass misinformation.

Gebru went on to found an AI ethics institute and co-authored a critical response to the AI pause letter alongside fellow AI ethicist Margaret Mitchell, who was sacked by Google in 2021.

In response to the news about Hinton, Mitchell issued a dire warning in a tweet: “the most qualified researchers in the world are not able to say what the future might hold for AI because they are gagged, implicitly/culturally if not directly censored, by short-term profit”.
