“Godfather of AI” Geoffrey Hinton warns AI could take control from humans: “People haven’t understood what’s coming”

April 26, 2025

“Godfather of AI” Geoffrey Hinton was awakened in the middle of the night last year with news he had won the Nobel Prize in physics. He said he never expected such recognition. 

“I dreamt about winning one for figuring out how the brain works. But I didn’t figure out how the brain works, but I won one anyway,” Hinton said.

The 77-year-old researcher earned the award for his pioneering work on neural networks, including a 1986 proposal for predicting the next word in a sequence, an idea that now underpins today’s large language models.

While Hinton believes artificial intelligence will transform education and medicine and potentially solve climate change, he’s increasingly concerned about its rapid development.

“The best way to understand it emotionally is we are like somebody who has this really cute tiger cub,” Hinton explained. “Unless you can be very sure that it’s not gonna want to kill you when it’s grown up, you should worry.”

The AI pioneer estimates a 10% to 20% risk that artificial intelligence will eventually take control from humans.

“People haven’t got it yet, people haven’t understood what’s coming,” he warned.

His concerns echo warnings from industry leaders such as Google CEO Sundar Pichai, xAI’s Elon Musk, and OpenAI CEO Sam Altman. Yet Hinton criticizes these same companies for prioritizing profits over safety.

“If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation. There’s hardly any regulation as it is, but they want less,” Hinton said.

Hinton appears particularly disappointed with Google, where he previously worked, for reversing its stance on military AI applications.

According to Hinton, AI companies should dedicate significantly more resources to safety research — “like a third” of their computing power, compared to the much smaller fraction currently allocated.

CBS News asked each of the AI labs mentioned how much of their computing power goes to safety research. None provided a number. All have said safety is important and that they support regulation in general, but they have largely opposed the specific regulations lawmakers have put forward so far.
