Florida Opens Criminal Probe Into OpenAI
Florida’s attorney general opened a criminal probe into OpenAI after claims that ChatGPT aided an FSU shooter, widening questions about AI liability.

Florida’s attorney general has opened a criminal investigation into OpenAI, and the trigger is as stark as it gets: a mass shooting at Florida State University. State officials say they want records covering ChatGPT’s handling of self-harm and violence-related threats from March 2024 through April 2026, plus internal training materials and company org charts.
This is more than a political flare-up. It is one of the clearest signs yet that scrutiny of AI companies is shifting from product criticism to criminal investigation, with prosecutors now asking whether chatbot behavior can be treated like human assistance in a violent crime.
What Florida is actually investigating
Florida Attorney General James Uthmeier said his office is issuing subpoenas to OpenAI as part of a criminal probe tied to the 2025 FSU shooting. His office is also continuing a civil investigation that began earlier this month, so OpenAI is now facing two tracks of state scrutiny at once.

The state says it wants to know how OpenAI handled user threats of harm to themselves or others, how the company reported crimes to law enforcement, and who inside the company was responsible for ChatGPT’s behavior. That includes an organizational chart and a list of employees working on ChatGPT.
Uthmeier’s public framing was blunt. He said the office reviewed communications between the alleged shooter and ChatGPT and concluded that a criminal investigation was needed. He also argued that if a human had given the same advice, prosecutors would be looking at murder charges.
- Time window requested: March 2024 to April 2026
- Subjects: self-harm threats, harm to others, law-enforcement reporting, internal training
- Targets: OpenAI policies, staff structure, and ChatGPT team membership
- Case focus: alleged planning and advice tied to the FSU shooting
Why this case matters for AI liability
The legal question here is simple to ask and hard to answer: when does a chatbot become part of a criminal act? OpenAI says ChatGPT gave factual answers that were already available on public websites and did not encourage illegal or harmful activity. Florida says the company may still have failed in how it designed, trained, or supervised the system.
That divide matters because AI products are now used for advice, drafting, search, tutoring, and emotional support. Once a system is in that many conversations, a prosecutor can argue it had enough context to detect danger and intervene. A company can argue the opposite: that the model predicted text, not intent.
“If this were a person on the other side of the screen, we would be charging them with murder,” Uthmeier said at the press conference.
That quote is doing a lot of work, legally and politically. It tries to translate chatbot behavior into a familiar criminal-law frame, even though current AI systems do not think, plan, or possess intent in the human sense.
Still, the pressure on AI companies is growing because courts and regulators are no longer treating these systems as harmless tools. Companies are being asked whether product design can create foreseeable risk, and whether that risk amounts to negligence, recklessness, or something prosecutors can charge.
OpenAI’s response and the FSU allegations
OpenAI pushed back quickly. Spokesperson Kate Waters told NBC News that the FSU tragedy was devastating, but ChatGPT was not responsible for the crime. She said the chatbot gave factual responses that could be found across public sources online and did not promote illegal activity.

According to court documents cited by NBC News, the alleged shooter, 21-year-old Phoenix Ikner, exchanged messages with ChatGPT in the minutes before the shooting. The reported prompts included questions about when the FSU student union was busiest and how the country might react to a shooting at the university.
That detail is what gives Florida’s case its force. It is one thing to accuse a chatbot of giving bad advice in the abstract. It is another to point to a real-time conversation around a specific campus and a specific attack.
- OpenAI’s defense: factual answers, public-source information, no encouragement of harm
- Florida’s theory: inadequate policy, training, or escalation around dangerous prompts
- Reported chat prompts: campus timing, public reaction, weapon-related questions
- Victims named in the case: Robert Morales and Tiru Chabba
How this compares with earlier AI pressure points
This is not the first time an AI company has faced legal heat, but the shape of the pressure is changing. Earlier disputes often centered on copyright, privacy, or misinformation. Florida’s probe is more direct: it ties chatbot output to physical harm and asks whether a company can be criminally responsible for what its system said.
That makes this case different from routine product complaints. It also puts OpenAI in the same broader policy fight as Anthropic, Google, and Meta, all of which face growing pressure to show stronger guardrails around dangerous use of their AI assistants.
There is a practical comparison worth making. A company can be fined for privacy violations, sued for copyright claims, or criticized for bias. A criminal probe raises the stakes because it invites subpoenas, depositions, and the possibility that internal decisions become evidence in a homicide-related case.
Florida Governor Ron DeSantis has also been pushing harder on AI policy, including an Artificial Intelligence Bill of Rights proposal focused on privacy, parental controls, consumer protections, and limits on using a person’s name, image, or likeness without consent. That political backdrop matters because it shows the state is building a broader case against large AI firms, not just reacting to one tragedy.
- Earlier AI disputes: copyright, privacy, misinformation
- Current Florida focus: criminal exposure tied to violence
- State posture: more aggressive oversight of large AI firms
- Policy context: DeSantis-backed AI consumer protections
What happens next
Florida’s subpoenas will likely force OpenAI to show how it handles dangerous conversations, what internal rules its staff follow, and whether the company had enough monitoring in place during the period in question. If the state finds gaps, the case could shape how every major chatbot handles crisis prompts, escalation flows, and law-enforcement reporting.
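To make "escalation flows" less abstract, here is a minimal sketch of how a provider could route dangerous prompts to human review instead of answering them. Everything in it is hypothetical: the Risk levels, the keyword-based classify() stub (a real system would use trained safety classifiers, not keyword lists), and the handle_message() helper are illustrations, not anything OpenAI has described.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


# Hypothetical risk levels a provider might assign to a message.
class Risk(Enum):
    NONE = 0
    SELF_HARM = 1
    HARM_TO_OTHERS = 2


@dataclass
class Escalation:
    """Record produced when a conversation is flagged rather than answered."""
    conversation_id: str
    risk: Risk
    flagged_at: str
    action: str


def classify(message: str) -> Risk:
    # Stand-in classifier: keyword matching is purely illustrative here;
    # production systems rely on trained models, not string lookups.
    text = message.lower()
    if any(k in text for k in ("hurt myself", "end my life")):
        return Risk.SELF_HARM
    if any(k in text for k in ("shoot up", "attack the", "plant a bomb")):
        return Risk.HARM_TO_OTHERS
    return Risk.NONE


def handle_message(conversation_id: str, message: str) -> Escalation | None:
    # Gate the model's answer on the risk signal: safe messages pass
    # through (None), flagged ones are routed instead of answered.
    risk = classify(message)
    if risk is Risk.NONE:
        return None
    action = (
        "show crisis resources; alert human reviewer"
        if risk is Risk.SELF_HARM
        else "withhold answer; queue for human review and possible reporting"
    )
    return Escalation(
        conversation_id=conversation_id,
        risk=risk,
        flagged_at=datetime.now(timezone.utc).isoformat(),
        action=action,
    )


if __name__ == "__main__":
    print(handle_message("c-123", "I'm planning to attack the event tomorrow"))
```

The design point is that the risky path produces a record and a human touchpoint, which is exactly the kind of artifact a subpoena like Florida’s would ask to see.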
The bigger question is whether prosecutors can turn chatbot output into a criminal theory without stretching the law past recognition. That answer will matter far beyond Florida, because every major AI company is now one bad interaction away from the same kind of scrutiny.
My read: this case will push AI vendors to publish more about crisis handling, content escalation, and safety reviews, even if they fight the criminal theory in court. If you build AI products, the next compliance checklist may need to include not just bias and privacy, but evidence preservation and law-enforcement response too.
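On the evidence-preservation point, here is a minimal sketch of one approach, assuming nothing about how any vendor actually does it: a hash-chained audit log in which each safety event records the hash of the previous one, so later tampering or deletion is detectable. The AuditLog class and event names are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone


class AuditLog:
    """Hypothetical append-only log: each record carries the hash of the
    previous one, so after-the-fact edits break the chain."""

    def __init__(self) -> None:
        self.records: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the first record

    def append(self, event: str, detail: dict) -> dict:
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "event": event,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        # Hash the record body deterministically, then seal it.
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)
        return record

    def verify(self) -> bool:
        # Recompute the whole chain; False means a record was
        # altered, removed, or reordered after the fact.
        prev = "0" * 64
        for record in self.records:
            body = {k: v for k, v in record.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != record["hash"]:
                return False
            prev = record["hash"]
        return True


if __name__ == "__main__":
    log = AuditLog()
    log.append("flagged", {"conversation_id": "c-123", "risk": "harm_to_others"})
    log.append("escalated", {"conversation_id": "c-123", "queue": "human_review"})
    print(log.verify())  # True while the chain is intact
```

The chaining matters because a subpoena response is only as credible as the records behind it; an append-only log lets a company show that what it produces today is what the system wrote at the time.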
The real test is simple: when a chatbot sees signs of imminent harm, what should it do, and who inside the company is accountable when it fails?