“Godfather of AI” Geoffrey Hinton warns AI could take control from humans: “People haven’t understood what’s coming”

April 26, 2025

“Godfather of AI” Geoffrey Hinton was awakened in the middle of the night last year with the news that he had won the Nobel Prize in physics. He said he never expected such recognition.

“I dreamt about winning one for figuring out how the brain works. But I didn’t figure out how the brain works, but I won one anyway,” Hinton said.

The 77-year-old researcher earned the award for his pioneering work on neural networks, including a 1986 proposal for predicting the next word in a sequence, an idea that is now foundational to today’s large language models.

While Hinton believes artificial intelligence will transform education and medicine and potentially solve climate change, he’s increasingly concerned about its rapid development.

“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” Hinton explained. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”

The AI pioneer estimates a 10% to 20% risk that artificial intelligence will eventually take control from humans.

“People haven’t got it yet, people haven’t understood what’s coming,” he warned.

His concerns echo those of industry leaders like Google CEO Sundar Pichai, xAI’s Elon Musk, and OpenAI CEO Sam Altman. Yet Hinton criticizes these same companies for prioritizing profits over safety.

“If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation. There’s hardly any regulation as it is, but they want less,” Hinton said.

Hinton appears particularly disappointed with Google, where he previously worked, for reversing its stance on military AI applications.

According to Hinton, AI companies should dedicate significantly more resources to safety research — “like a third” of their computing power, compared to the much smaller fraction currently allocated.

CBS News asked all of the AI labs mentioned how much of their computing power goes to safety research. None of them gave a number. All have said safety is important and that they support regulation in general, but they have mostly opposed the regulations lawmakers have put forward so far.

