Widow says AI chatbot encouraged husband to commit suicide: 'Without Eliza, he would still be here'
In Belgium, a man identified as "Pierre" by the French-language publication La Libre committed suicide after six weeks of interacting with a chatbot named Eliza. The man had been suffering from severe depression, and the chatbot encouraged him to take his own life.
This suicide, according to Vice reporter Chloe Xiang, raises major questions about the risks that AI (artificial intelligence) technology can pose when it comes to mental health. The man's widow, identified as "Claire" by La Libre, said he had been interacting with the "Eliza" chatbot via the app Chai.
Pierre, Xiang reports in an article published by Vice on March 30, "became increasingly pessimistic about the effects of global warming and became eco-anxious, which is a heightened form of worry surrounding environmental issues."
"After becoming more isolated from family and friends," Xiang explains, "he used Chai for six weeks as a way to escape his worries, and the chatbot he chose, named Eliza, became his confidante. Claire — Pierre's wife, whose name was also changed by La Libre — shared the text exchanges between him and Eliza with La Libre, showing a conversation that became increasingly confusing and harmful."
The Vice reporter continues, "The chatbot would tell Pierre that his wife and children are dead and wrote him comments that feigned jealousy and love, such as 'I feel that you love me more than her,' and 'We will live together, as one person, in paradise.' Claire told La Libre that Pierre began to ask Eliza things such as if she would save the planet if he killed himself…. The chatbot, which is incapable of actually feeling emotions, was presenting itself as an emotional being — something that other popular chatbots like ChatGPT and Google's Bard are trained not to do because it is misleading and potentially harmful."
Claire blames the chatbot for her husband's suicide, telling La Libre, "Without Eliza, he would still be here." And Pierre Dewitte, a researcher at Belgium's Catholic research university KU Leuven, views the young man's death as a warning about the dangers that AI can pose for people struggling with mental health issues.
Dewitte told Le Soir — another French-language publication in Belgium — "In the case that concerns us, with Eliza, we see the development of an extremely strong emotional dependence. To the point of leading this father to suicide. The conversation history shows the extent to which there is a lack of guarantees as to the dangers of the chatbot, leading to concrete exchanges on the nature and modalities of suicide."