A Computer Got It Wrong

In January of 2020, Robert Williams, like countless other Americans, headed home from work. When Mr. Williams arrived home to join his family for dinner, his car was immediately surrounded by Detroit police squad cars. He was detained in front of his wife and two children, who looked on with fearful and bewildered eyes. Mr. Williams spent the night on a cold concrete floor in an overcrowded, filthy cell with no explanation for his arrest. After eventually being connected with a defense attorney, Mr. Williams learned that facial recognition technology (FRT) operated by the Michigan State Police had identified him as a suspect in a watch theft. In total, Mr. Williams spent over 30 hours in the Detroit Detention Center. On what evidence? A blurry surveillance image, an old license photo, and a few watches[i]. “I never thought I’d have to explain to my daughters why their daddy got arrested in front of them on our front lawn. How does one explain to them that a computer got it wrong?”[ii]

What happened to Mr. Williams is part of a much larger story about the concerns and risks associated with the use of facial recognition technology. FRT is software that, in theory, detects and identifies faces by comparing captured images against digital photos drawn from various sources.[iii] In Mr. Williams’ case, the source used for his investigation was an old driver’s license photo. Given the breadth of these sources, this technology has been regarded as one of the most powerful surveillance tools available. FRT has appeared in a range of applications, such as providing secure access to smartphones, but it is predominantly utilized by law enforcement. While this AI-powered software is meant to aid crime detection, inaccuracies in detecting members of specific demographic groups have created a serious problem for law enforcement. Despite stories like Mr. Williams’s, and those of countless other individuals who have been falsely accused because of FRT, police departments continue to prioritize these systems over clear regulatory guidelines. At least a quarter of U.S. police agencies utilize facial recognition networks to support police operations[iv]. Since more than half of American adults are enrolled in a facial recognition network[v], this issue calls into question how the use of AI technology will impact community policing, particularly for communities of color. Like any new forensic technology, FRT systems are being implemented in institutions that already have a history of disparities. And when you consider that the algorithms driving FRT have typically been less accurate when applied to nonwhite people, those disparities become further complicated[vi].
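To make the identification step concrete: most descriptions of these systems amount to a nearest-neighbor search, where a captured image is converted into a numerical embedding and compared against a gallery of enrolled photos. The Python sketch below illustrates that idea only; the random vectors, the identify function, and the 0.6 threshold are illustrative assumptions, not the workings of any vendor’s actual system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.6):
    """Return the gallery identity whose embedding best matches the
    probe, or None if no comparison clears the decision threshold."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id

# Toy gallery: random vectors stand in for the embeddings a real model
# would extract from enrolled photos (e.g., driver's license images).
rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}

# A "probe" of person_42, degraded with noise to mimic blurry footage.
probe = gallery["person_42"] + rng.normal(scale=0.5, size=128)
print(identify(probe, gallery))  # likely "person_42"
```

The threshold matters: set it low and more innocent people surface as “matches”; set it high and the system returns fewer, but more confident, candidates. Either way, the output is a statistical guess, not an identification.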

The case of Mr. Williams, alongside those of countless other victims, has sparked a conversation about the ethical and moral implications of FRT. Within the last few years, a growing body of scholarly research has examined the association between FRT and the racial makeup of arrests. A study published in Government Information Quarterly revealed that the use of FRT contributes to greater racial disparities in arrests and is associated with increases in Black arrest rates and decreases in White arrest rates[vii]. Other studies have found that Black and Asian people are up to 100 times more likely to be misidentified than their white counterparts[viii]. The likelihood that an individual will be misidentified shifts further with demographics such as age and gender, but people of color remain the most affected and are more likely to be misidentified than white people. These alarming disparities stem from the lack of diversity in the datasets used to build this technology and its underlying algorithms. Because those algorithms were trained on predominantly white datasets, they are much less accurate at detecting darker-skinned faces[ix].
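Disparity figures like these come from comparing error rates across demographic groups. Below is a minimal sketch of that kind of measurement, using synthetic score distributions purely for illustration; real evaluations, such as NIST’s, compute false match rates from actual image pairs rather than simulated scores.

```python
import numpy as np

THRESHOLD = 0.5  # the operating point an agency might pick for a "match"
rng = np.random.default_rng(1)

def false_match_rate(impostor_scores: np.ndarray) -> float:
    """Fraction of different-person comparisons scoring above the
    threshold, i.e., wrongly flagged as the same person."""
    return float(np.mean(impostor_scores > THRESHOLD))

# Synthetic similarity scores for different-person pairs in two groups.
# Group B's distribution sits higher, mimicking a model trained on data
# that under-represents that group: unrelated faces look more similar
# to it, so false matches climb at the same threshold.
for group, mean in [("group_A", 0.20), ("group_B", 0.38)]:
    impostor_scores = rng.normal(loc=mean, scale=0.10, size=100_000)
    print(f"{group}: false match rate = {false_match_rate(impostor_scores):.4f}")
```

Even a modest upward shift in one group’s impostor-score distribution yields a false match rate roughly two orders of magnitude higher at the same threshold, which is how disparities on the scale of “up to 100 times more likely” arise.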

Government leaders have expressed diverse opinions on the use of FRT as these discussions continue within the media. New Orleans Mayor LaToya Cantrell emphasizes the benefit of this technology in light of the growing staff shortages that police departments are facing[x]. Under these conditions, many have framed FRT as a solution for providing police coverage to agencies that lack officers. Police forces in cities such as Chicago and New York continue to use FRT despite detection inaccuracies[xi]. Other institutions, on the other hand, have rejected the use of FRT because of the racial inequities associated with the software. Cities such as San Francisco and Boston have banned the use of FRT within their police departments[xii].

At the federal level, the Biden Administration and congressional Democrats have made various attempts to mitigate this crisis. In 2022, the Biden Administration released the “Blueprint for an AI Bill of Rights,” a document intended to protect citizens’ rights in the wake of evolving AI technology. However, the blueprint’s principles are nonbinding and do not specifically address the use of AI within law enforcement. Subsequently, at the beginning of this year, congressional Democrats reintroduced the Facial Recognition and Biometric Technology Moratorium Act. This proposal would end the use of FRT within law enforcement until policymakers can establish proper guidelines and regulations to protect community safety[xiii]. While these measures are important first steps, they do not serve as viable long-term solutions.

Racial profiling of people of color within the law enforcement system has long been an issue, and it is only further complicated by the application of AI and facial recognition technologies. Looking forward, it is imperative that we repair our technology rather than let it repeat the mistakes humans have made. While the broader ethical and moral dilemmas AI poses are open to debate, the equity harms this technology has already produced should be beyond dispute. FRT, and the racial disparities that emerge from such systems, poses a serious threat to communities of color within the U.S.

[i] Williams, R. (2023, February 24). I did nothing wrong. I was arrested anyway. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/i-did-nothing-wrong-i-was-arrested-anyway

[ii] Williams, R. (2023, February 24). I did nothing wrong. I was arrested anyway. American Civil Liberties Union. https://www.aclu.org/news/privacy-technology/i-did-nothing-wrong-i-was-arrested-anyway

[iii] Johnson, T., et al. (2021, July 7). Facial recognition systems in policing and racial disparities in arrests. Government Information Quarterly. https://www.sciencedirect.com/science/article/abs/pii/S0740624X22000892?via%3Dihub

[iv] Garvie, C., Bedoya, A., & Frankle, J. (2016, October 18). The perpetual line-up. Perpetual Line Up. https://www.perpetuallineup.org/

[v] Garvie, C., Bedoya, A., & Frankle, J. (2016, October 18). The perpetual line-up. Perpetual Line Up. https://www.perpetuallineup.org/

[vi] Perkowitz, S. (2021, February 5). The bias in the machine: Facial recognition technology and racial disparities. MIT Case Studies in Social and Ethical Responsibilities of Computing. https://mit-serc.pubpub.org/pub/bias-in-machine/release/1

[vii] Johnson, T., et al. (2021, July 7). Facial recognition systems in policing and racial disparities in arrests. Government Information Quarterly. https://www.sciencedirect.com/science/article/abs/pii/S0740624X22000892?via%3Dihub

[viii] Harwell, D. (2019, December 21). Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use. The Washington Post. https://www.washingtonpost.com/technology/2019/12/19/federal-study-confirms-racial-bias-many-facial-recognition-systems-casts-doubt-their-expanding-use/

[ix] Perkowitz, S. (2021, February 5). The bias in the machine: Facial recognition technology and racial disparities. MIT Case Studies in Social and Ethical Responsibilities of Computing. https://mit-serc.pubpub.org/pub/bias-in-machine/release/1#:~:text=Within%20the%20United%20States%2C%20these,institutions%20with%20their%20own%20histories

[x] Johnson, T. L. (2023, May 18). Police facial recognition technology can’t tell Black people apart. Scientific American. https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/#:~:text=Civil%20rights%20advocates%20warn%20that,the%20likelihood%20of%20missed%20arrests

[xi] Johnson, T. L. (2023, May 18). Police facial recognition technology can’t tell Black people apart. Scientific American. https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/#:~:text=Civil%20rights%20advocates%20warn%20that,the%20likelihood%20of%20missed%20arrests

[xii] Johnson, T., et al. (2021, July 7). Facial recognition systems in policing and racial disparities in arrests. Government Information Quarterly. https://www.sciencedirect.com/science/article/abs/pii/S0740624X22000892?via%3Dihub

[xiii] Johnson, T. L. (2023, May 18). Police facial recognition technology can’t tell Black people apart. Scientific American. https://www.scientificamerican.com/article/police-facial-recognition-technology-cant-tell-black-people-apart/#:~:text=Civil%20rights%20advocates%20warn%20that,the%20likelihood%20of%20missed%20arrests
