In Chandler, Arizona, family and friends of Christopher Pelkey submitted numerous statements to the court for the sentencing of the man found guilty of shooting him in a road rage incident. The statements offered insights into Pelkey’s humor, character, and military service. But one stood out: an AI-generated version of Pelkey himself was presented during the sentencing.
Pelkey’s family used artificial intelligence to create a video of his likeness and give him a voice at the hearing, a groundbreaking step for U.S. courts. The AI rendering of Pelkey spoke of forgiveness, faith, and the importance of cherishing each day and showing love to one another. Victim impact statements are an established legal tool in Arizona’s court system, allowing information to reach the judge outside the traditional evidentiary phases of a trial; delivering one through AI is a new approach.
Judge Todd Lang, who oversaw the road rage case, said he believed Pelkey, had he been alive, would have approved of the video’s message. The video also reflected the sentiments of Pelkey’s family, who sought justice for his death. The shooter, Gabriel Horcasitas, was convicted of manslaughter and sentenced to 10.5 years in prison.
Pelkey’s sister, Stacey Wales, came up with the idea of letting her brother speak for himself through AI. She found solace in imagining what he would have said to the shooter, a message centered on forgiveness. Victims in Arizona are allowed to present their impact statements in a variety of digital formats, according to victims’ rights attorney Jessica Gattuso.
While the use of AI in courtrooms is expanding, concerns have been raised about the potential misuse of deepfake technology to generate false evidence. Gary Marchant, a law professor at Arizona State University, cautioned about the risks of AI-generated evidence proliferating in legal proceedings. The rise of the technology has prompted discussions within the court system about establishing guidelines for its responsible use.
In a recent case in New York, a man without legal representation used an AI-generated avatar to present his case in a lawsuit via video. The judges on the appeals court quickly realized that the man on the screen was not real. In the Arizona case, by contrast, the AI-generated video was effective because the judge had already received numerous letters from Pelkey’s loved ones that supported its message. According to Wales, those letters shared a common theme that captured who her brother truly was, and the video succeeded because it portrayed his character accurately.

Yamat reported from Las Vegas. Additional reporting by Susan Montoya Bryan in Albuquerque, New Mexico.