Human Rights News

WGEPAD discusses racial bias in machine learning algorithms

Mar 28, 2019

This week, the Working Group of Experts on People of African Descent (WGEPAD) is meeting in Geneva, Switzerland for its 24th session, whose theme is “Data for Racial Justice”.


One of the critical discussions to take place this week is on the topic of artificial intelligence and its perpetuation of systemic racism and discrimination against people of African descent in North America.


During the Thematic Discussion on People of African Descent in North America, attorney Dominique Day argued that technological innovations such as artificial intelligence are not working to eliminate racial injustice; rather, they often perpetuate, and even create, the very injustices that human rights defenders are fighting to eradicate.


Particular attention was given to research by Joy Buolamwini, a digital activist at the M.I.T. Media Lab. Her work reveals a stark disparity in the error rates of facial recognition software between white or fair-skinned men and women of color. The most advanced machine learning and artificial intelligence software is able to correctly identify the gender of a white man about 99 percent of the time, while the same software misidentifies the gender of a woman of color around 35 percent of the time.


Currently, no regulation prohibits software with high error rates in recognizing people of color from entering the commercial marketplace. As artificial intelligence and machine learning technologies are increasingly deployed in the public arena, people of color are disproportionately misidentified, for example as criminally suspect, owing to their overrepresentation in law enforcement databases combined with these technologies' high error rates in recognizing them.


Related: AI, Ain’t I A Woman? Video by digital activist and computer scientist Joy Buolamwini