The Problem With
‘AI Psychosis’
The increasingly popular term for describing people’s interactions with artificial intelligence chatbots is unhelpful and inaccurate – even if the outcomes of such relationships can be alarming, writes Kate Devlin
Mustafa Suleyman, the DeepMind co-founder who is now CEO of AI at Microsoft, recently voiced his concerns about ‘AI psychosis’. The phrase, which has been gaining traction for a while now, is not a clinical diagnosis and has no psychiatric meaning.
“I’m growing more and more concerned about what is becoming known as the ‘psychosis risk’ and a bunch of related issues”, Suleyman wrote. “I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship”.
He isn’t alone. Sam Altman, CEO of OpenAI, said in August that he was worried about young people’s emotional over-reliance on AI.