This man was killed four years ago. His AI clone just spoke in court.

AI continues to trickle into courtrooms, from 'hallucinated' court cases to deepfaked videos. The post This man was killed four years ago. His AI clone just spoke in court. appeared first on Popular Science.

May 7, 2025 - 21:15

People just can’t stop using generative AI tools in legal proceedings, despite repeated pushback from frustrated judges. While AI initially appeared in courtrooms through bogus “hallucinated” cases, the trend has taken a turn—driven by increasingly sophisticated AI video and audio tools. In some instances, AI is even being used to seemingly bring victims back from the dead.

This week, a crime victim’s family presented a brief video in an Arizona courtroom depicting an AI version of 37-year-old Chris Pelkey. Pelkey was shot and killed in 2021 in a road rage incident. Now, four years later, the AI-generated “clone” appeared to address his alleged killer in court. The video, first reported by local outlet ABC15, appears to be the first known example of a generative AI deepfake used in a victim impact statement.

“To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances,” the AI replica of Pelkey says in the video. “In another life, we probably could have been friends.”

The video shows the AI version of Pelkey—a burly, bearded Army veteran—wearing a green hoodie and gray baseball cap. Pelkey’s family reportedly created the video by training an AI model on various clips of Pelkey. An “old age” filter was then applied to simulate what Pelkey might look like today. In the end, the judge sentenced Horcasitas to 10.5 years in prison for manslaughter, a decision he said was at least partly influenced by the AI-generated impact statement.

“This is the best I can ever give you of what I would have looked like if I got the chance to grow old,” the Pelkey deepfake said. “Remember, getting old is a gift that not everybody has, so embrace it and stop worrying about those wrinkles.”

A New York man used an AI deepfake to help argue his case 

The AI-generated impact statement comes just a month after a defendant in New York State court, 74-year-old Jerome Dewald, used a deepfake video to assist in delivering his own legal defense. When Dewald appeared in court over a contract dispute with a former employer, he presented a video showing a man in a sweater and blue dress shirt speaking directly to the camera. The judge, confused by the video, asked Dewald if the person on screen was his attorney. In reality, it was an AI-generated deepfake.

“I generated that,” Dewald said according to The New York Times. “That is not a real person.”

The judge wasn’t pleased and reprimanded Dewald for failing to disclose that he had used AI software to aid his defense. Speaking with the NYT after the hearing, Dewald claimed he hadn’t intended to mislead the court but used the AI tool as a way to more clearly articulate his defense. He said he initially planned to have the deepfake resemble himself but switched to the version shown in court after encountering technical difficulties.

“My intent was never to deceive but rather to present my arguments in the most efficient manner possible,” Dewald reportedly said in a letter to the judges. 

Related: [This AI chatbot will be playing attorney in a real US court]

AI models have ‘hallucinated’ fake legal cases

The two cases represent the latest examples of generative AI seeping into courtrooms, a trend that began gaining traction several years ago following the surge of public interest in popular chatbots like OpenAI’s ChatGPT. Lawyers across the country have reportedly used these large language models to help draft legal filings and collect information. That has led to some embarrassing instances where models have “hallucinated” entirely fabricated case names and facts that eventually make their way into legal proceedings.

In 2023, two New York-based lawyers were sanctioned by a judge after they submitted a brief containing six fake case citations generated by ChatGPT. Michael Cohen, the former personal lawyer of President Donald Trump, reportedly sent fake AI-generated legal cases to his attorney that ended up in a motion submitted to federal judges. Another lawyer in Colorado was suspended after reportedly submitting AI-generated legal cases. OpenAI has even been sued by a Georgia radio host who claimed a ChatGPT response accused him of being involved in a real embezzlement case he had nothing to do with. 

Get ready for more AI in courtrooms 

Though courts have punished attorneys and defendants for using AI in ways that appear deceptive, the rules around whether it’s ever acceptable to use these tools remain murky. Just last week, a federal judicial panel voted 8–1 to seek public comment on a draft rule aimed at ensuring that AI-assisted evidence meets the same standards as evidence presented by human expert witnesses. Supreme Court Chief Justice John Roberts also addressed the issue in his 2023 annual report, noting both the potential benefits and drawbacks of allowing more generative AI in the courtroom. On one hand, he observed, AI could make it easier for people with limited financial resources to defend themselves. At the same time, he warned that the technology risks “invading privacy interests and dehumanizing the law.”

One thing seems certain: We haven’t seen the last of AI deepfakes in courtrooms.