San Francisco has unveiled a “bias mitigation tool” that uses basic AI techniques to automatically redact information in police reports that could identify a suspect’s race. It’s designed to keep prosecutors from being influenced by racial bias when deciding whether someone gets charged with a crime. The tool is due to be implemented on July 1st.
According to the SF district attorney’s office, the tool strips out not only explicit descriptions of race but also descriptors such as eye color and hair color. It also removes people’s names, locations, and neighborhoods, any of which might consciously or unconsciously signal to a prosecutor that a suspect is of a certain racial background.
A DA spokesperson said the tool will remove details about police officers, too, including their badge numbers, in case a prosecutor happens to know an officer and might be biased toward or against their report.
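The article doesn’t describe how the tool is actually implemented, but the redaction step it outlines can be sketched as pattern replacement over report text. The term lists below are hypothetical placeholders for illustration only, not the real system’s vocabulary, and a production tool would likely rely on machine-learned named-entity recognition rather than fixed lists:

```python
import re

# Hypothetical patterns for demonstration; the actual tool's rules are
# not public. Each matched descriptor is replaced with a neutral tag.
REDACT_PATTERNS = {
    r"\b(black|white|hispanic|asian)\b": "[RACE]",
    r"\b(brown|blue|green) eyes\b": "[EYE COLOR]",
    r"\b(Mission District|Bayview)\b": "[NEIGHBORHOOD]",
    r"\bbadge\s*#?\s*\d+\b": "[BADGE NUMBER]",
}

def redact(report: str) -> str:
    """Replace race-identifying descriptors and officer details
    with neutral placeholders."""
    for pattern, placeholder in REDACT_PATTERNS.items():
        report = re.sub(pattern, placeholder, report, flags=re.IGNORECASE)
    return report

print(redact("Suspect is a white male with blue eyes, last seen in the "
             "Mission District. Reporting officer badge #4521."))
```

This simple substitution approach would miss misspellings, indirect descriptions, and names, which is one reason the real system reportedly needed machine learning rather than a keyword list.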
Presently, San Francisco uses a much more limited manual process to try to keep prosecutors from seeing these details: the city simply removes the first two pages of the document, but prosecutors see the entire rest of the report. District Attorney George Gascón said the office had to build machine learning around this process, and that the technology would be the first of its kind in the nation.
The tool was developed by Alex Chohlas-Wood and his team at the Stanford Computational Policy Lab, which also helped develop the NYPD’s Patternizr system, a tool that automatically searches through case files to find patterns of crime.