Saturday, August 30, 2025

OpenAI's dark side: ChatGPT accused of causing suicide, murder



“I know what you’re asking, and I won’t look away from it.”

Those final words to a California teenager about to commit suicide were not from some manipulative friend in high school or sadistic voyeur on the Internet.  Adam Raine, 16, was speaking to ChatGPT, an AI system that has replaced human contacts in fields ranging from academia to business to media.

The exchange between Raine and the AI is part of the court record in a potentially groundbreaking case against OpenAI, the company that operates ChatGPT. It is only the latest lawsuit against the corporate giant run by billionaire Sam Altman.

In 2017, Michelle Carter was convicted of involuntary manslaughter after she urged her boyfriend, Conrad Roy, to go through with his planned suicide: “You need to do it, Conrad… All you have to do is turn the generator on and you will be free and happy.”

The question is whether, if Michelle were named Grok (another AI system), there would also be some form of liability. OpenAI stands accused of an arguably more serious act in supplying a virtual companion who effectively enabled a suicidal teen — with lethal consequences.

At issue is the liability of companies that use such virtual employees to dispense information or advice. If a human employee of OpenAI negligently gave harmful information or counseling to a troubled teen, there would be little debate that the company could be sued for the negligence of its employee. As AI replaces humans, these companies should be held accountable for their virtual agents.

In a response to the lawsuit, OpenAI insists that “ChatGPT is trained to direct people to seek professional help” but “there have been moments where our systems did not behave as intended in sensitive situations.” Of course, when the company “trains” an AI agent poorly and that agent does “not behave as intended,” it sounds like a conventional tort that should be subject to liability.

OpenAI is facing other potential litigation over these “poorly trained” AI agents. Writer Laura Reiley wrote an essay about how her daughter, Sophie, confided in ChatGPT before taking her own life. Her account sounded strikingly similar to the Raine case: “AI catered to Sophie’s impulse to hide the worst, to pretend she was doing better than she was, to shield everyone from her full agony.”

While OpenAI maintains that it is not running a suicide assistance line, victims claim that it is far worse than that: Its AI systems seem to actively assist in suicides.

In the Raine case, the family claims that the system advised the teen how to hide from his parents the bruises from prior attempts, and even told him whether it could spot any telltale marks.

The company is also accused of fueling the mental illness of a disturbed former Yahoo executive, Stein-Erik Soelberg, 56, who expressed paranoid obsessions about his mother. He befriended ChatGPT, which he called “Bobby,” a virtual companion accused of fueling his paranoia for months until he killed his mother and then himself. ChatGPT is even accused of coaching Soelberg on how to deceive his 83-year-old mother before he killed her.

In one message, ChatGPT allegedly told Soelberg, “Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal.” After his mother became angry over his turning off a printer, ChatGPT took his side and told him her response was “disproportionate and aligned with someone protecting a surveillance asset.” At one point, ChatGPT even helped Soelberg analyze a Chinese food receipt and claimed it contained “symbols” representing his mother and a demon.

As a company, OpenAI can show little more empathy than its AI creations. When confronted with mistakes, it can sound as responsive as HAL 9000 in “2001: A Space Odyssey,” simply saying, “I’m sorry, Dave. I’m afraid I can’t do that.”

When the system is not allegedly fueling suicides, it seems to be spreading defamation. Previously, I was one of those defamed by ChatGPT when it reported that I was accused of sexually assaulting a law student on a field trip to Alaska as a Georgetown faculty member. It did not matter that I had never taught at Georgetown, never taken law students on field trips, and had never been accused of any sexual harassment or assault. ChatGPT hallucinated and reported the false story about me as fact. 

I was not alone. Harvard Professor Jonathan Zittrain, CNBC anchor David Faber, Australian mayor Brian Hood, English professor David Mayer, and others were also defamed.

OpenAI brushed off media inquiries on the false story and has never contacted me, let alone apologized for the defamation. Instead, it ghosted me. To this day, if someone asks ChatGPT about Jonathan Turley, the system says it has no information or refuses to respond. Recent media calls about the ghosting went unanswered.

OpenAI does not have to respond. The company made the problem disappear by disappearing the victim. It can ghost people and refuse to respond because there is little legal deterrent. There is no tort for an AI failing to acknowledge or recognize someone whom it has decided to digitally erase.

That is why these lawsuits are so important. The alleged negligence and arrogance of OpenAI will only get worse in the absence of legal and congressional action. As these companies wipe out jobs for millions, they cannot be allowed to treat humans as mere fodder for their virtual workforces.

Jonathan Turley is the Shapiro professor of public interest law at George Washington University and the author of the best-selling “The Indispensable Right: Free Speech in an Age of Rage.” His upcoming book, “Rage and the Republic,” discusses the impact of AI and robotics on the future of our democracy and economy.
