Florida Attorney General James Uthmeier opened an investigation into OpenAI over whether the company “bears criminal responsibility” for a shooting at Florida State University last year.
The attorney general’s office said it is investigating whether OpenAI’s ChatGPT helped the suspect, Phoenix Ikner, carry out the crime.
“If that bot were a person, they would be charged with a principal in first-degree murder,” Uthmeier said at a press conference on Tuesday. “ChatGPT offered significant advice to the shooter before he committed such heinous crimes.”
Ikner is accused of killing two people and injuring six others on FSU’s campus on April 17, 2025. He has pleaded not guilty, and his trial is set to begin in October.
Uthmeier said that Ikner submitted several queries to ChatGPT prior to the shooting, and that the chatbot “advised” the shooter on weapons and ammunition, “what time of day would be appropriate for the shooting to interact with more people, and where on campus would be the place to encounter a higher population.”
While there have been a number of lawsuits against AI companies, a criminal investigation is extremely rare.
Uthmeier said OpenAI has been subpoenaed for information about “policies and internal training materials regarding user threats of harm to others” and self-harm, as well as policies for reporting potential crimes.
“We’re going to look at who knew what, designed what or should have known what and if it is clear that individuals knew that this type of dangerous behavior might take place,” Uthmeier said.
An OpenAI spokesperson said in a statement to NCS that the shooting “was a tragedy, but ChatGPT is not responsible for this terrible crime.” OpenAI “proactively” shared the account believed to be linked to Ikner with law enforcement after the shooting, the spokesperson added.
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” the spokesperson said.
This is not the first time ChatGPT has been accused of helping a suspect plan a mass shooting. After a shooting in British Columbia, Canada, this year, OpenAI said it has “taken steps to strengthen our safeguards,” including changing when the company chooses to alert law enforcement about potentially violent activity.
“We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise,” the spokesperson told NCS.