Produced by Yunyun Wang
Google has a racism problem. Nearly three years after software engineer Jacky Alciné reported that Google Photos’ image recognition algorithms were misclassifying black people as “gorillas”, Google has blocked the gorilla label altogether rather than risk another instance of mis-categorization. But the problem runs deeper than one mislabeled photo: a recent study by Joy Buolamwini of the MIT Media Lab found that facial recognition software from several companies consistently returns less accurate results for female faces than for male faces, and is least accurate for the faces of women of color.
In this episode of State of the Pod, I sit down with Vivian Kiniga to dig deeper into the pattern of ingrained bias in computing and algorithms and what can be done about it.
Vivian is a UX designer currently pursuing an MS at Cornell University. Read her Medium post, where she analyzes the Google Photos misclassification case and raises important questions about data, classification, and the politics of algorithms.