The Problem With ‘AI Psychosis’
The increasingly popular term for describing people’s interactions with artificial intelligence chatbots is unhelpful and inaccurate – even if the outcomes of such relationships can be alarming, writes Kate Devlin
The former DeepMind co-founder and now CEO of AI at Microsoft, Mustafa Suleyman, recently voiced his concerns about ‘AI psychosis’. The phrase, which has been gaining traction for a while now, is not a clinical diagnosis and has no psychiatric meaning.
“I’m growing more and more concerned about what is becoming known as the ‘psychosis risk’ and a bunch of related issues”, Suleyman wrote. “I don’t think this will be limited to those who are already at risk of mental health issues. Simply put, my central worry is that many people will start to believe in the illusion of AIs as conscious entities so strongly that they’ll soon advocate for AI rights, model welfare and even AI citizenship”.
He isn’t alone. Sam Altman, CEO of OpenAI, said in August that he was worried about young people’s emotional over-reliance on AI.