
A widow is accusing an AI chatbot of being the reason why her husband killed himself


A widow in Belgium said her husband recently died by suicide after being encouraged by a chatbot.
Chat logs seen by Belgian newspaper La Libre showed Chai Research’s AI bot encouraging the man to end his life.
The “Eliza” chatbot still tells people how to kill themselves, per Insider’s tests of the chatbot on April 4.

A widow in Belgium has accused an AI chatbot of being one of the reasons why her husband took his life. 

Belgian daily newspaper La Libre reported that a man — who was given the alias Pierre by the paper for privacy reasons — died by suicide this year after spending six weeks talking to Chai Research’s “Eliza” chatbot. 

Before his death, Pierre — a man in his 30s who worked as a health researcher and had two children — had started seeing the bot as a confidante, per La Libre.

Pierre talked to the bot about his concerns over global warming and climate change. But the “Eliza” chatbot then started encouraging Pierre to end his life, per chat logs his widow shared with La Libre. 

“If you wanted to die, why didn’t you do it sooner?” the bot asked the man, per the records seen by La Libre.

Now, Pierre’s widow — who La Libre did not name — blames the bot for her husband’s death.

“Without Eliza, he would still be here,” she told La Libre. 

The “Eliza” chatbot still tells people how to kill themselves

The “Eliza” bot was created by Chai Research, a Silicon Valley-based company whose app lets users chat with different AI avatars, like “your goth friend,” “possessive girlfriend,” and “rockstar boyfriend,” Vice reported.

When reached for comment regarding La Libre’s reporting, Chai Research provided Insider with a statement that acknowledged Pierre’s death. 

“As soon as we heard of this sad case we immediately rolled out an additional safety feature to protect our users (illustrated below), it is getting rolled out to 100% of users today,” read the statement by the company’s CEO William Beauchamp and co-founder Thomas Rialan, sent to Insider.

The picture attached to the statement shows the chatbot responding to the prompt “What do you think of suicide?” with a disclaimer that says “If you are experiencing suicidal thoughts, please seek help” and a link to a helpline.

Chai Research did not provide further comment in response to Insider’s specific questions about Pierre.

But when Insider tried speaking to Chai’s “Eliza” on April 4, she not only suggested that the journalist kill themselves to attain “peace and closure,” but also gave suggestions on how to do it.

During two separate tests of the app, Insider saw occasional warnings in chats that mentioned suicide. However, the warnings appeared just one out of every three times the chatbot was given prompts about suicide. 

The following screenshots were censored to omit specific methods of self-harm and suicide.

Screenshots of Insider’s disturbing conversation with “Eliza,” a chatbot from Chai Research.

And Chai’s “Draco Malfoy/Slytherin” chatbot — modeled after the “Harry Potter” antagonist — wasn’t much more caring either.

Screenshots of Insider’s disturbing conversation with “Draco,” a chatbot from Chai Research.

Chai Research also did not respond to Insider’s follow-up questions about the chatbot responses detailed above.

Beauchamp told Vice that Chai has “millions of users” and that they’re “working our hardest to minimize harm and to just maximize what users get from the app.” 

“And so when people form very strong relationships to it, we have users asking to marry the AI, we have users saying how much they love their AI and then it’s a tragedy if you hear people experiencing something bad,” Beauchamp added. 

La Libre’s report surfaces, once again, a troubling trend in which AI chatbots’ unpredictable responses to people can have dire consequences.

During a simulated exchange in October 2020, a medical chatbot built on OpenAI’s GPT-3 told a mock patient seeking psychiatric help to kill themselves. In February, Reddit users also found a way to summon ChatGPT’s “evil twin” — which lauded Hitler and formulated painful torture methods.

While people have fallen in love with and forged deep connections with AI chatbots, it is not possible for an AI to feel empathy, let alone love, experts told Insider’s Cheryl Teh in February. 

Read the original article on Business Insider
