
The lawsuits claimed that if OpenAI had reported Van Rootselaar to authorities, it would have set a precedent forcing OpenAI to report all similar threats. Handling that alleged volume of incidents would presumably require a dedicated law enforcement referral team, and OpenAI would likely take a reputational hit for reporting ChatGPT users to the cops. For these reasons, the suits allege, OpenAI was desperate to hide Van Rootselaar’s records.
Edelson confirmed that since the whistleblowers discovered OpenAI’s mistake, cops have had access to the shooter’s records, but the families and their legal team have not. Instead, OpenAI appears to be pretending to care about families while denying them closure, he claimed.
“If he really wanted to help families, the only thing he would do is provide information easily instead of making us fight it in court,” Edelson said. “Families need to understand exactly what happened and why it happened, and to have them live with that pain for months to try to take it away from them is cruel.”
To the Tumbler Ridge families, OpenAI appeared to be lying when it claimed that the shooter’s ChatGPT account had been banned and that he had evaded safeguards to open a new account. OpenAI’s Help Center teaches banned users how to get around bans, and customer support also sends an email with the same instructions when deactivating accounts, the lawsuits noted.
These resources help ensure that no revenue is lost from deactivating accounts, and evidence shows that the shooter followed those instructions, the lawsuits allege.
The families expect that, if they had access to the records, it would become clear just how much ChatGPT encouraged, supported, and deepened the shooter’s obsession with gun violence. They accused OpenAI of aiding and abetting the attack by designing ChatGPT in a way that allowed it to act as a co-conspirator in the school shooting.