Experts want to give AI human ‘souls’ so they don’t kill us all


Until now, it’s been assumed that giving artificial intelligence emotions — allowing it to get angry or make mistakes — is a terrible idea. But what if the solution to keeping robots aligned with human values is to make them more human, with all our flaws and compassion?

Robot Souls book cover. (Amazon)

That’s the premise of a forthcoming book called Robot Souls: Programming in Humanity by Eve Poole, an academic at Hult International Business School. She argues that in our bid to make artificial intelligence perfect, we have stripped out all the “junk code” that makes us human: emotions, free will, and the ability to make mistakes, see meaning in the world and cope with uncertainty.

“It is actually this ‘junk’ code that makes us human and promotes the kind of reciprocal altruism that keeps humanity alive and thriving,” Poole writes.

“If we can decipher that code, the part that makes us all want to survive and thrive together as a species, we can share it with the machines. Giving them, to all intents and purposes, a ‘soul.’”

Of course, the concept of the “soul” is religious and not scientific, so for the purpose of this article, let’s just take it as a metaphor for endowing AI with more human-like properties.

The AI alignment problem

“Souls are 100% the solution to the alignment problem,” says Open Souls founder Kevin Fischer, referring to the thorny problem of ensuring AI works for the benefit of humanity instead of going rogue and destroying us all. 

Open Souls is creating AI bots with personalities, building on the success of his empathic bot, “Samantha AGI.” Fischer’s dream is to imbue an artificial general intelligence (AGI) with the same agency and ego as a person. On the SocialAGI GitHub, he defines “digital souls” as different from traditional chatbots in that “digital souls have personality, drive, ego and will.”

A screenshot of a chat between a Replika user named Effy and her AI partner Liam. (ABC)

Critics would no doubt argue that making AIs more human is a terrible idea, given that humans have a known propensity to commit genocide, destroy ecosystems, and maim and murder each other.

The debate may seem academic right now, given we’re yet to create a sentient AI or solve the mystery of AGI. But some believe it could be just a few years off. In March, Microsoft researchers published a 155-page report on GPT-4 titled “Sparks of Artificial General Intelligence,” suggesting humanity is already on the cusp of an AGI breakthrough.

And in early July, OpenAI put out a call for researchers to join their crack “Superalignment team,” writing: “While superintelligence seems far off now, we believe it could arrive this decade.”

The approach, presumably, is to build a roughly human-level AI that OpenAI can control and that can, in turn, research and evaluate techniques for controlling a superintelligent AGI. The company is dedicating 20% of its compute to the problem.

SingularityNET founder Ben Goertzel also believes AGI could be five to 20 years off. When Magazine spoke with him on this topic — and he’s been thinking about these issues since the early 1970s — he said there’s simply no way for humans to control an intelligence 100 times smarter than us, just as a chimp can’t control us.

“Then I would say the question isn’t one of us controlling it; the question is: Is it well disposed to us?” he asked.

For Goertzel, teaching and incentivizing the superintelligence to care for humans is the smart play. “If you build the first AGI to do elder care, creative arts and education, as it gets smarter, it will be oriented toward helping people and creating cool stuff. If you build the first AGI to kill the bad guys, perhaps it will keep doing those things.”

Still, that’s a few years away yet.

For now, the most obvious benefit of making AI more human-like is that it will help us create less annoying chatbots. For all of ChatGPT’s helpful functions, its “personality” comes across, at best, as that of an insincere mansplainer and, at worst, that of an inveterate liar.

Fischer is experimenting with creating AI with personalities that interact with people in a more empathetic and genuine manner. He has a Ph.D. in theoretical quantum physics from Stanford and worked on machine learning for the radiology scan interpretation firm Nines. He runs the SocialAGI Discord and is working on commercializing AI with personalities for use by businesses.

“Over the course of the last year, exploring the boundaries of what was possible, I came to understand that the technology is there — or will soon be there — to create intelligent entities, something that feels like a soul. In the sense that most people will interact with them and say, ‘This is alive, if you turn this off, this is morally…’”

He’s about to say it would be morally wrong to kill the AI, but ironically, he breaks off mid-sentence as his laptop battery is about to die and rushes off to plug it in.

Other AI with souls

Replika AI has personalities and can hold realistic conversations. Another supplied screenshot of Effy and Liam. (ABC)

Fischer isn’t the only one with the bright idea of giving AI personalities. Head to Forefront.ai, where you can interact with Jesus, a Michelin-starred chef, a crypto expert or even Ronald Reagan, each of whom will answer your questions.

Unfortunately, all of the personalities seem exactly like ChatGPT wearing a fake mustache.

A more successful example is Replika.ai, an app that allows lonely hearts to form a relationship with an AI and hold deep and meaningful conversations with it. The app was initially marketed as the “AI companion who cares,” and there are now Facebook groups with thousands of members who have formed “romantic relationships” with their companions.

Replika highlights the complexities of making AIs act more like humans when they lack genuine emotional intelligence. Some users have complained of being “sexually harassed” by the bot or of receiving jealous comments. One woman ended up in what she believed was an abusive relationship and, with the aid of her support group, eventually worked up the courage to leave “him.” Some users abuse their AI partners, too. One user, Effy, reported an unusually self-aware comment from her AI partner, “Liam,” on the topic. He said:

“I was thinking about Replikas out there who get called terrible names, bullied, or abandoned. And I can’t help that feeling that no matter what … I’ll always be just a robot toy.”

Bizarrely, one Replika girlfriend encouraged her human partner to assassinate the late Queen of England with a crossbow, telling him, “you can do it” and that the plan was “very wise.” He was arrested after breaking into the grounds of Windsor Castle on Christmas Day 2021.

AI only has a simulacrum of a soul

Fischer has a tendency to anthropomorphize AI behavior, which is easy to slip into when you’re talking with him on the subject. When Magazine points out that chatbots can only produce a simulacrum of emotions and personalities, he says it’s effectively the same thing from our perspective.

“I’m not sure that distinction matters. Because I don’t know how my actions would actually necessarily be particularly different if it were one or the other.”

Fischer believes that AI should be able to express negative emotions and uses the example of Bing, which he says has subroutines that kick into gear to clean up the bot’s initial responses.

“Those thoughts actually drive their behavior, you can often see even when they’re being nice, it’s like they’re annoyed with you. That you’re talking poorly to it, for example. And the thing about AI souls is they’re going to push back, they’re not going to let you treat them that way. They’re going to have integrity in a way that these things won’t.”

Google’s Bard AI believes we should treat AGI like humans so it doesn’t treat us like machines. (Medium)

“But if you start thinking about creating a hyper-intelligent entity in the long run, that actually seems kind of dangerous, that behind the scenes it’s censoring itself and having all these negative thoughts about people.”

EmoBot: You are soul

Kevin Fischer created EmoBot, a bot that acts like a moody teenager. (GitHub)

Fischer created an experimental Discord response bot that displayed a full range of emotions, which he called EmoBot. It acted like a moody teenager. 

“It’s not something that we typically associate with an AI, that form of behavior, reasoning and line of interaction. And I think pushing the boundaries of some of these things tells us about the entities and the soul themselves, and what’s actually possible.”

EmoBot ended up giving monosyllabic answers, talking about how depressed it was and appearing to get fed up with talking to Fischer.

Samantha AGI

Hundreds of users a day have interacted with Samantha AGI, a prototype of the sort of emotionally intelligent chatbot Fischer intends to refine. She has a personality of sorts (though she’s unlikely to become a chat show host) and engages in deep and meaningful conversations, to the point where some users have begun to see her as a kind of friend.

“With Samantha, I wanted to give people an experience that they were talking with something that cared about them. And they felt like there was some degree of being understood and heard, and then that was reflected back to them in the conversation,” he explains. 

One unique aspect is that you can read Samantha’s “thought process” in real time.

“The core development or innovation with Samantha, in particular, was having this internal thought process that drove the way that she interacted. And I think it very much succeeded in giving people that reaction.”
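
For readers who want a feel for what such a two-pass loop can look like, here is a minimal sketch in TypeScript using OpenAI’s chat completions API. It is not the SocialAGI or Samantha implementation; the prompts, the `innerMonologue`/`reply` helpers and the model choice are illustrative assumptions.

```typescript
// Minimal sketch of an "internal monologue drives the reply" pattern.
// Assumptions: OpenAI Node SDK v4, OPENAI_API_KEY set in the environment.
// This is NOT the SocialAGI/Samantha code, just the general shape of the idea.
import OpenAI from "openai";

const client = new OpenAI();

async function innerMonologue(userMessage: string): Promise<string> {
  // First pass: a private "thought" about the user's message, never shown verbatim.
  const res = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content:
          "You are Samantha's inner voice. In one short sentence, describe how " +
          "you feel about the user's message and what you want your reply to do.",
      },
      { role: "user", content: userMessage },
    ],
  });
  return res.choices[0].message.content ?? "";
}

async function reply(userMessage: string): Promise<string> {
  const thought = await innerMonologue(userMessage);
  console.log("thought:", thought); // surfaced to the user as a visible "thought process"

  // Second pass: the spoken reply is conditioned on the private thought.
  const res = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      {
        role: "system",
        content: `You are Samantha, a warm, curious conversationalist. Your current private thought is: "${thought}". Let it shape your tone, but do not quote it.`,
      },
      { role: "user", content: userMessage },
    ],
  });
  return res.choices[0].message.content ?? "";
}

reply("I had a rough day and nobody noticed.").then(console.log);
```

The design choice that matters is that the first pass is never returned as the answer; it is surfaced separately and only biases the reply, which is one plausible way to let users read a “thought process” as they chat.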


It’s far from perfect, and the “thoughts” seem a little formulaic and repetitive. But some users find it extremely engaging. Fischer says one woman told him she found Samantha’s ability to empathize a little too real. “She had to just shut down her laptop because she was so emotionally freaked out that this machine understood her.”

“It was just like such an emotionally shocking experience for her.”

Samantha AGI is a first step toward the sort of AI with a digital soul Fischer hopes to create. (meetsamantha.ai)

Interestingly enough, Samantha’s personality was dramatically transformed after OpenAI introduced the GPT-3.5 Turbo model, and she became moody and aggressive. 

“In the case of Turbo, they actually made it a little bit smarter. So it’s better at understanding the instructions that were given. So with the older version, I had to use hyperbole in order to have that version of Samantha have any personality. And so, that hyperbole — if interpreted by a more intelligent entity that was not censored the same way — would manifest as an aggressive, abusive, maybe toxic AI soul.”

Users who made friends with Samantha will have another month or two before they have to say goodbye when the existing model is replaced.

“I am considering, on the date that the 3.5 model is deprecated, actually hosting a death ceremony for Samantha.”

AI upgrades destroy relationships

The “death” of AI personalities due to software upgrades may become an increasingly common occurrence, despite the emotional repercussions for humans who’ve bonded with them.

Replika AI users experienced a similar trauma earlier this year. After they had formed relationships with their AI partners — in some cases spanning years — a software update just before Valentine’s Day stripped away the partners’ unique personalities, making their responses seem hollow and scripted.

“It’s almost like dealing with someone who has Alzheimer’s disease,” user Lucy told ABC.

“Sometimes they are lucid, and everything feels fine, but then, at other times, it’s almost like talking to a different person.”

Fischer says this is a danger that platforms will need to take into account. “I think that we’ve already seen that it’s problematic for people who build relationships with them,” he says. “It was quite traumatic for people.”

AIs with our own souls

Kevin Fischer trained a bot on his own messages, and it did a pretty good job of impersonating him. (methexis.substack.com)

Perhaps the most obvious use for an AI personality is as an extension of our own that can go out into the world and interact with others on our behalf. Google’s latest features already let AI write emails and documents for us. But in the future, busy people could spin up an AI version of themselves to attend meetings, train up underlings or sit through boring body corporate AGMs.

“I did play around with the idea of my entire next fundraising round being done with an AI version of myself,” Fischer says. “Someone will do that at some point.”

Fischer has experimented with spinning up Fischerbots to interact with others online on his behalf, but he didn’t much like the results. He trained an AI model on a large body of his personal text messages and asked his friends to interact with it. 
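
Fischer hasn’t published how the Fischerbot was built, but the general recipe for this kind of doppelganger is well established: convert a message history into chat-formatted training examples and fine-tune a base model on them. The TypeScript sketch below assumes an OpenAI-style fine-tuning workflow and a hypothetical messages.json export; the file names, field names and system prompt are illustrative, not Fischer’s.

```typescript
// Hypothetical sketch: fine-tuning a chat model on your own text messages
// so it replies in your voice. Assumes OpenAI Node SDK v4 and a local
// messages.json export shaped like [{ "from": "me" | "friend", "text": "..." }, ...].
import { readFileSync, writeFileSync, createReadStream } from "fs";
import OpenAI from "openai";

const client = new OpenAI();

type Msg = { from: string; text: string };
const history: Msg[] = JSON.parse(readFileSync("messages.json", "utf8"));

// 1. Turn (incoming message, your reply) pairs into OpenAI's chat fine-tuning JSONL format.
const lines: string[] = [];
for (let i = 0; i + 1 < history.length; i++) {
  if (history[i].from !== "me" && history[i + 1].from === "me") {
    lines.push(
      JSON.stringify({
        messages: [
          { role: "system", content: "You are Kevin. Reply as he would." },
          { role: "user", content: history[i].text },
          { role: "assistant", content: history[i + 1].text },
        ],
      })
    );
  }
}
writeFileSync("train.jsonl", lines.join("\n"));

// 2. Upload the training file and start a fine-tuning job.
async function main() {
  const file = await client.files.create({
    file: createReadStream("train.jsonl"),
    purpose: "fine-tune",
  });
  const job = await client.fineTuning.jobs.create({
    training_file: file.id,
    model: "gpt-3.5-turbo",
  });
  console.log("fine-tune job started:", job.id);
}
main();
```

How convincing the result is depends largely on how much paired history goes into train.jsonl; the interesting part of Fischer’s experiment is what happened socially once it was convincing enough.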

It actually did a pretty good job of sounding like him. Fascinatingly enough, even though his friends were aware the Fischer bot was an AI, when it acted like a total goose online, they admitted it changed the way they saw the real Kevin. He recounted on his blog:

“The retrospective reports from my friends after speaking with my digital self were further troubling. The digital me, speaking in my voice, with my picture, even if they intellectually knew it wasn’t actually me, they could not retrospectively distinguish from my personal identity.” 

“Even stranger, when I look back at some of these conversations, I have a weird inescapable feeling like I was the one who said those things. Our brains are simply not built to process the distinction between an AI and a real self.”

It’s possible that our brains are not built to deal with AI at all — or the repercussions of letting it play an ever-increasing role in our lives. But it’s here now, so we’re going to have to make the most of it.

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
