Federal judges across the country are confronting a growing challenge: how to handle evidence produced or analyzed by artificial intelligence in criminal proceedings. A series of recent rulings has produced conflicting standards, prompting calls for federal guidelines.
Emerging Precedents
In one closely watched case, a federal judge admitted AI-enhanced surveillance footage but required the prosecution to disclose the enhancement algorithm and its known error rates. In another jurisdiction, a judge excluded a deepfake detection report, ruling that the underlying technology had not been sufficiently peer-reviewed.
The rules of evidence were written for a world of fingerprints and eyewitnesses, critics argue; the age of algorithms demands a principled framework of its own.
The Judicial Conference is expected to issue advisory guidance before the end of the year.