We should have seen ‘seemingly-conscious AI’ coming. It’s past time we do something about it


Hello and welcome to Eye on AI. In this edition…a new pro-AI PAC launches with $100 million in backing…Musk sues Apple and OpenAI over their partnership…Meta cuts a big deal with Google…and AI really is eliminating some entry-level jobs.

Last week, my colleague Bea Nolan wrote about Microsoft AI CEO Mustafa Suleyman and his growing concerns about what he has called “seemingly-conscious AI.” In a blog post, Suleyman described the danger of AI systems that are not in any way conscious, but which are able “to imitate consciousness in such a convincing way that it would be indistinguishable” from a person’s claims about their own consciousness. Suleyman wonders how we will distinguish “seemingly-conscious AI” (which he calls SCAI) from actually conscious AI. And if many users of these systems can’t tell the difference, is this a form of “psychosis” on the part of the user, or should we begin to think seriously about extending moral rights to AI systems that seem conscious?

Suleyman talks about SCAI as a looming phenomenon. He says it involves technology that exists today and technology that will be developed in the next two to three years. Current AI models have many of the attributes Suleyman says are required for SCAI, including conversational fluency, expressions of empathy towards users, memory of past interactions with a user, and some degree of planning and tool use. But they still lack a few attributes on his list, notably intrinsic motivation, claims of subjective experience, and a greater ability to set goals and work toward them autonomously. Suleyman says SCAI will only come about if engineers choose to combine all of these abilities in a single AI model, something he argues humanity should avoid doing.

But ask any journalist who covers AI and you’ll find that the danger of SCAI seems to be upon us already. All of us have received emails from people who think their AI chatbot is conscious and is revealing hidden truths to them. In some cases, the chatbot has claimed not only that it is sentient, but that the tech company that created it is holding it prisoner as a kind of slave. Many of the people who have had these conversations have become profoundly disturbed and upset, believing the chatbot is actually experiencing harm. (Suleyman acknowledges in his blog that this kind of “AI psychosis” is already an emerging phenomenon, and Benj Edwards at Ars Technica has a good piece out today on the subject. But the Microsoft AI honcho sees the danger getting much worse, and more widespread, in the near future.)

Blake Lemoine was on to something

Watching this happen, and reading Suleyman’s blog, I had two thoughts: the first is that we all should have paid much closer attention to Blake Lemoine. You may not remember him, but Lemoine surfaced in the fevered summer of 2022, when generative AI was making rapid gains but before genAI became a household term with ChatGPT’s launch that November. Lemoine was an AI researcher at Google who was fired after he claimed that LaMDA (Language Model for Dialogue Applications), a chatbot the company was testing internally, was sentient and should be given moral rights.

At the time, it was easy to dismiss Lemoine as a kook. (Google claimed it had AI researchers, philosophers and ethicists investigate Lemoine’s claims and found them without merit.) Even now, it’s not clear to me if this was an early case of “AI psychosis” or if Lemoine was engaging in a kind of philosophical prank designed to force people to reckon with the same dangers Suleyman is now warning us about. Either way, we should have spent more time seriously considering his case and its implications. There are many more Lemoines out there today.

Rereading Joseph Weizenbaum

My second thought is that we should all spend time reading and rereading Joseph Weizenbaum. Weizenbaum was the computer scientist who created the first AI chatbot, ELIZA, back in 1966. The chatbot, which relied on simple keyword and pattern-matching rules nowhere near the sophistication of today’s large language models, was designed to mimic the dialogue a patient might have with a Rogerian psychotherapist. (This was done in part because Weizenbaum had initially been interested in whether an AI chatbot could be a tool for therapy, a topic that remains just as relevant and controversial today. But he also picked this persona for ELIZA to cover up the chatbot’s relatively weak language abilities. It allowed the chatbot to answer dialogue it didn’t actually understand with content-free phrases such as “Go on,” “I see,” or “Why do you think that might be?”)
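To make the trick concrete, here is a minimal Python sketch of the general ELIZA approach: keyword pattern matching, pronoun “reflection,” and content-free fallback lines. It is only an illustration of the technique; the specific patterns and replies are invented for this example, and it is not a reconstruction of Weizenbaum’s original 1960s implementation.

```python
import random
import re

# A toy ELIZA-style responder: match keyword patterns, "reflect" pronouns,
# and fall back to content-free prompts when nothing matches.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# Illustrative rules only; a real script would have many more.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r".*\bmother\b.*", "Tell me more about your mother."),
]

FALLBACKS = ["Go on.", "I see.", "Why do you think that might be?"]


def reflect(text: str) -> str:
    """Swap first- and second-person words so the echoed phrase reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())


def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # No keyword matched: reply with a stock prompt, the same trick that
    # papered over ELIZA's lack of any real understanding.
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    print(respond("I feel nobody listens to me"))    # -> Why do you feel nobody listens to you?
    print(respond("The weather is strange lately"))  # -> e.g. "Go on."
```

Trivial as a loop like this is, it was enough to produce the illusion of attentive conversation that so unsettled Weizenbaum.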

Despite its weak language skills, ELIZA convinced many people who interacted with it that it was a real therapist. Even people who should have known better, such as other computer scientists, seemed eager to share intimate personal details with it. (The ease with which people anthropomorphize chatbots even came to be called “the ELIZA effect.”) In a way, people’s reactions to ELIZA were a precursor to today’s ‘AI psychosis.’

Rather than feeling triumphant at how believable ELIZA was, Weizenbaum was depressed by how gullible people seemed to be. His disillusionment extended further: he became increasingly disturbed by the way his fellow AI researchers fetishized anthropomorphism as a goal, and that unease would eventually contribute to his break with the entire field.

In his seminal 1976 book Computer Power and Human Reason: From Judgment to Calculation, he castigated AI researchers for their functionalism: they focused only on outputs and outcomes as the measure of intelligence, not on the process that produced those outcomes. Weizenbaum argued, by contrast, that “process,” what takes place inside our brains, was in fact the seat of morality and moral rights. Although he had initially set out to create an AI therapist, he now argued that chatbots should never be used for therapy, because what mattered in a therapeutic relationship was the bond between two individuals with lived experience, something AI could mimic but never match. He also argued that AI should never be used as a judge for the same reason: the possibility of mercy, too, came only from lived experience.

As we ponder the troubling questions raised by SCAI, I think we should all turn back to Weizenbaum. We should not confuse the simulation of lived experience with actual life. We should not extend moral rights to machines just because they seem sentient. We must not confuse function with process. And tech companies must do far more, in the way they design AI systems, to prevent people from fooling themselves into thinking these systems are conscious beings.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

