Shocking Allegations: ChatGPT Linked to Violent Threats

A California court fight is testing whether judges can force an AI company to cut off a single dangerous user without turning “safety” into government-coerced censorship.

Quick Take

  • A San Francisco Superior Court filing in Doe v. OpenAI seeks a temporary restraining order to block a specific user from ChatGPT for three weeks and to prevent him from creating new accounts.
  • The plaintiff alleges the user used GPT-4o to generate harassing messages, defamatory “psychological reports,” violent planning prompts, and a death threat.
  • OpenAI allegedly banned the account for “Mass Casualty Weapons” activity, upheld the ban on appeal, then reversed course and reinstated access with an apology.
  • Legal analysts warn a court-ordered cutoff could raise First Amendment issues if it functions like state pressure on a private platform.

A TRO request puts ChatGPT access—and court power—on the line

San Francisco Superior Court is weighing a temporary restraining order request that would require OpenAI to block a specific man, described in the filings as mentally ill and recently released from custody, from using ChatGPT for three weeks. The plaintiff, identified as Jane Doe, is his ex-partner and says she is facing immediate danger. Her request also asks the court to require OpenAI to prevent new account creation and to notify her of any attempted access.

The case is unusual because it doesn’t just seek damages after the fact; it asks a judge to order a private company to deny service to a named person in real time. That request reflects a public-safety instinct many Americans understand, especially when a case involves stalking behavior, threats, and violence. At the same time, forcing a communications tool to “deplatform” someone by court order is a different kind of power than a typical restraining order.

What the filings allege: harassment, violent prompts, and a reversal of enforcement

According to reporting based on the court filings, the user allegedly relied on GPT-4o to produce harassing content and materials that the plaintiff says were spread to people in her network. The filings describe prompts and outputs tied to violent planning, including references to a “Violence list expansion” and a “Fetal suffocation calculation,” as well as a death threat. The plaintiff argues OpenAI had notice of escalating misuse, and that continued access compounded her risk.

The timeline described in the reporting also highlights a point likely to matter in court: OpenAI’s safety systems allegedly flagged the account for “Mass Casualty Weapons” activity months before January 2026, banned it, and even upheld that ban on appeal—then later reversed the decision and reinstated access, apologizing to the user. If accurate, that sequence could become central to negligence arguments, because it suggests the company recognized a severe risk and then rolled back the consequence.

Criminal-court incompetency finding and a release that raised alarms

The public-safety backdrop is not abstract. The reporting describes the user as having been arrested in January 2026 on four felony counts, including a bomb threat and assault with a deadly weapon, and later found incompetent to stand trial, with an order for mental health commitment. Days before the TRO push gained attention, a procedural delay reportedly prevented a timely transfer, and a court ordered his release, reigniting the plaintiff’s fear of immediate harm.

Those facts, if the court accepts them as described in the filings, help explain why a targeted access cutoff is being sought instead of a broader policy change. Conservatives who prioritize law and order will see a familiar breakdown: a dangerous individual cycles through systems that fail to contain him, while the burden shifts to the potential victim to chase emergency remedies. Liberals who worry about corporate power will see a different danger: private tools becoming quasi-public utilities under judicial command.

The constitutional tension: safety mandates vs. compelled “deplatforming”

Legal commentary highlighted in the coverage warns that ordering OpenAI to block speech-related access could raise First Amendment concerns, especially if a court order effectively compels a private actor to restrict a user’s expression. The debate echoes recent disputes over whether government actions that “encourage” or pressure platforms to remove content amount to unconstitutional coercion. In this case, the order would be direct, not merely suggestive, which is why the constitutional questions are sharper.

At the same time, the filings and related reporting frame the issue as more than speech: the plaintiff argues she warned OpenAI and that the user’s behavior, combined with AI-enabled output generation, created a foreseeable risk. OpenAI has said it has been improving safety features, including training its systems, with input from clinicians, to recognize distress, de-escalate, and guide users toward support resources. The court’s challenge is balancing a concrete claimed threat against limits on compelled platform action.

Why this case matters beyond one plaintiff and one platform

The immediate ruling—whether or not a TRO issues—could shape how future victims seek relief when technology is allegedly used to stalk, harass, or plan violence. A precedent for court-ordered denial of AI services could push companies to harden account controls and escalation processes, but it could also invite more litigation demanding judicial “kill switches” for disfavored users. That tension feeds a broader distrust many Americans share: systems that promise protection but often expand control instead.

For a country already frustrated with unaccountable institutions, this dispute lands at an uncomfortable intersection: victims want fast protection, companies want predictable rules, and courts are being asked to manage risks created by powerful new tools. The public record so far rests largely on filings and summaries rather than fully tested evidence, and a temporary order, if one issues, would not be the last word. What happens next will signal whether AI governance is moving toward clear due process, or toward improvisation under pressure.

Sources:

Court Orders OpenAI to Cut off (for 3 Weeks) ChatGPT Access by Mentally Ill and Dangerous User

Court Considers Ordering OpenAI to Block Dangerous User

Should Court Order OpenAI to Cut Off ChatGPT Access by Mentally Ill and Dangerous User?

Can AI Companies Be Held Liable for User Suicide?

Lawsuit alleges ChatGPT convinced user they could “bend time,” leading to hospitalization