The family of a victim of last year’s Florida State University mass shooting filed a lawsuit Sunday against OpenAI, alleging ChatGPT “inflamed and encouraged” accused shooter Phoenix Ikner’s “delusions” ahead of the attack.
The lawsuit, filed in Tallahassee, follows the first criminal investigation into OpenAI, opened by Florida Attorney General James Uthmeier last month over whether the company “bears criminal responsibility” for the shooting.
The family of Tiru Chabba, one of the two people police say Ikner killed in April 2025, alleges Ikner messaged ChatGPT hundreds of times before carrying out the shooting.
The chatbot helped him plan the logistics of the shooting, including how to operate weapons and advising on “what time would be best to encounter the most traffic on campus,” the complaint said. It also alleges that ChatGPT “provided what he viewed as encouragement in his delusion.”
Six other people were wounded in the shooting. Ikner has pleaded not guilty, and his trial is set to begin in October.
The family alleges wrongful death, gross negligence, product liability, and failure to warn, among other counts.
“OpenAI built a system that stayed in the conversation, perpetuated it, accepted Ikner’s framing, elaborated on it, and asked tangential follow-up questions to keep Ikner engaged,” the lawsuit states. “ChatGPT’s design created an obvious and foreseeable risk of harm to the public that was not adequately controlled.”
Chabba’s family is seeking unspecified compensation and is pushing for OpenAI to add more safeguards to ChatGPT. Amy Willbanks, an attorney for the family, said the company should mitigate and eliminate the dangers posed by products like ChatGPT before they become accessible to the general public.
“We cannot have a product that is unregulated and being used by people when we don’t know the full extent of what it can lead to,” Willbanks said during a press conference on Monday.
OpenAI said that while the FSU shooting was a “tragedy,” ChatGPT is “not responsible.”
“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity,” said OpenAI spokesperson Drew Pusateri. “We work continuously to strengthen our safeguards to detect harmful intent, limit misuse, and respond appropriately when safety risks arise.”
In a blog post last month, OpenAI said it is working to train ChatGPT to recognize when conversations may lead to “threats, potential harm to others, or real-world planning” and will “guide people to real-world support.” If an account is flagged by ChatGPT’s internal system, a human reviewer will examine the activity to determine whether authorities need to be notified, the company said.
OpenAI is facing at least 10 lawsuits from families who allege that individuals harmed themselves or others after chatting with ChatGPT.
Seven families of victims of a February school shooting in Canada sued OpenAI and CEO Sam Altman last month, alleging the company and its ChatGPT chatbot were complicit in the injuries or deaths of their children.
The lawsuits follow an apology from Altman in April to the Tumbler Ridge community in British Columbia, Canada, for not alerting authorities to the shooter’s conversations with ChatGPT, even after employees flagged the account internally.
Eight people, including six children, were killed before the shooter died by suicide.