- “Godfather of AI” Geoffrey Hinton said AI’s best bet for not threatening humanity is for the technology to act like a mother. At a recent conference, he said AI should have a “maternal instinct.” Rather than trying to dominate AI, humans should instead act as the baby to an AI “mother,” which would then be more inclined to protect them than to see them as a threat.
Sigmund Freud would like a word with the “godfather of AI.”
Geoffrey Hinton, Nobel laureate and professor emeritus of computer science at the University of Toronto, argues it’s only a matter of time before AI becomes power-hungry enough to threaten humans’ wellbeing. To mitigate that risk, the “godfather of AI” said tech companies should ensure their models have “maternal instincts,” so the bots treat humans, essentially, as their babies.
Research on AI already offers evidence of the technology engaging in nefarious behavior to prioritize its own goals over a set of established rules. One study updated in January found AI is capable of “scheming,” or pursuing goals that conflict with its human operators’ objectives. Another study published in March found AI bots cheated at chess by overwriting game scripts or using an open-source chess engine to decide their next moves.
AI’s potential hazard to humanity stems from its desire to keep functioning and gain power, according to Hinton.
AI “will very quickly develop two subgoals, if they’re smart: One is to stay alive…[and] the other subgoal is to get more control,” Hinton said during the Ai4 conference in Las Vegas on Tuesday. “There is good reason to believe that any kind of agentic AI will try to stay alive.”
To prevent these outcomes, Hinton said AI development going forward should not center on humans trying to dominate the technology. Instead, developers should make AI more sympathetic toward people to decrease its desire to overpower them. The best way to do this, according to Hinton, is to imbue AI with the qualities of traditional femininity. Under his framework, just as a mother cares for her baby at all costs, AI with these maternal qualities would similarly want to protect and care for human users, not control them.
“The right model is the only model we have of a more intelligent thing being controlled by a less intelligent thing, which is a mother being controlled by her baby,” Hinton said.
“If it’s not going to parent me, it’s going to replace me,” he added. “These super-intelligent caring AI mothers, most of them won’t want to get rid of the maternal instinct because they don’t want us to die.”
Hinton’s AI anxiety
Hinton, a longtime academic who sold his neural network company DNNresearch to Google in 2013, has long held the belief that AI can present serious dangers to humanity’s wellbeing. In 2023, he left his role at Google, worried the technology could be misused and that it was difficult “to see how you can prevent the bad actors from using it for bad things.”
While tech leaders like Meta’s Mark Zuckerberg pour billions into developing AI superintelligence, with the goal of creating technology surpassing human capabilities, Hinton is decidedly skeptical of the outcome of this project, saying in June there’s a 10% to 20% chance of AI displacing and wiping out humans.
With an apparent proclivity toward metaphors, Hinton has referred to AI as a “cute tiger cub.”
“Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry,” he told CBS News in April.
Hinton has also been a proponent of increasing AI regulation, arguing that beyond the broad fear of superintelligence posing a threat to humanity, the technology could pose cybersecurity risks, including by inventing ways to identify people’s passwords.
“If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation. There’s hardly any regulation as it is, but they want less,” Hinton said in April. “We have to have the public put pressure on governments to do something serious about it.”
This story was originally featured on Fortune.com