This week’s blog is all about MIT graduate student Joy Buolamwini’s journey and realization in the world of algorithmic bias. One would assume that bias can only occur in sectors such as jobs and education, but unfortunately, with the growing need for software come issues of skin color and a lack of inclusion. Buolamwini often worked with robots and other forms of technology to bring convenience to certain parts of life. However, while on that search for technological convenience, she discovered that her social robot had a problem: it couldn’t recognize her face. Is it because it didn’t know her? No, it’s because it never tried to get to know her. And how could it? The algorithms behind the software were built on bias.
Bias that stems from digital platforms and software spreads rapidly around the world; Buolamwini expressed that it can travel as quickly as downloading files off the internet. When she designed the Aspire Mirror, which allowed the user to wear various digital masks to reflect how they feel or how they want to feel, Buolamwini realized that it would not work unless she put on a white facial mask. If the user was white, they would have no problem. But as a dark brown-skinned young woman, she found the software couldn’t even detect her face, let alone retain any information about it. She and some of her peers travelled to Hong Kong for an entrepreneurship tour, to explore some of the start-ups in that area. When she used the social robot there, it couldn’t recognize her face either. Algorithmic bias had struck again. Where was the algorithmic design actually faulty? In the training sets, the collections of example faces used to teach the software what a face looks like. With barely any diversity in those training sets, lighter-skinned faces became the primary, and often the only, faces the software could detect.
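To make the training-set problem concrete, here is a minimal sketch (not Buolamwini’s code, and not from the original post) of how someone might measure the kind of detection gap she describes: run an off-the-shelf face detector over photos grouped by skin tone and compare detection rates. The folder layout (`faces/<group>/*.jpg`) and group names are assumptions for illustration only.

```python
# Sketch: compare face-detection rates across groups of photos.
# The dataset layout ("faces/<group>/*.jpg") is hypothetical.
import os
import glob
import cv2

# OpenCV ships a pre-trained Haar cascade face detector; like any model,
# it can only detect well what its training data prepared it for.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detection_rate(image_paths):
    """Fraction of images in which at least one face is detected."""
    detected = 0
    for path in image_paths:
        image = cv2.imread(path)
        if image is None:
            continue
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            detected += 1
    return detected / max(len(image_paths), 1)

# One folder per group, e.g. faces/darker_skin, faces/lighter_skin (hypothetical names).
for group_dir in sorted(glob.glob("faces/*/")):
    group = os.path.basename(os.path.dirname(group_dir))
    paths = glob.glob(os.path.join(group_dir, "*.jpg"))
    print(f"{group}: detected faces in {detection_rate(paths):.0%} of {len(paths)} images")
```

If the detector’s training data skewed toward lighter skin, the printed rates will skew the same way, which is exactly the gap Buolamwini ran into with her own face.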

As data scientist Cathy O’Neil puts it, algorithms are “widespread, mysterious and destructive.” If they are not handled and used correctly and inclusively, they can create damage. It won’t be destructive immediately, but over years it will slowly and steadily divide the human race. Whether it’s failing to recognize someone with dark skin or misidentifying a suspected criminal, facial recognition software has a lot of flaws. It’s still rather new; in human terms, it’s still crawling. It’s going to stand up and fall down many times, because it has been designed by humans. If it’s made by us, then it can’t be perfect, because we are not perfect. Our flaws and mistakes will be reflected in our software. Too often, wrongly accused people spend years in jail, again most likely because of some kind of bias. It starts with the biased facial recognition software and carries on into the court system. Again, the mistakes are reflective of the creators and their intentions.
There is no limit to the variety of skin tones and facial structures we have all come across. Especially now, with interracial marriages, features typical of one culture or ethnicity are being blended with others. Nothing stays “original” anymore, and that’s a good thing, because it makes us all realize we are the same. No one can judge who someone is, what background they come from, or hold any pre-conceived biases or assumptions just by looking at them. You have to get to know the person well in order to feel anything about them. But with this growing diversity and racial mixing, the people who code this software need to keep up. They must learn how to adapt and change their algorithms to make sure they meet the needs of all the people who will use them. So what matters in the end? Who codes matters, how we code matters and, most importantly, why we code matters. Don’t let your visibility be ruined by the fault of the coders. Get rid of the coded gaze and see with a clear and colorful mind, accepting and including all the colors of the world.
One reply on “The Coded Gaze”
It seems more like a long sly stare than a coded gaze! Looking forward to this discussion. The challenge is that we never know or hear of the thinking that goes into the design of these systems; they are only revealed as people investigate, compare, and ultimately share on social media.
It’s only after getting called out that responses happen.
This one went back to 2015, but showed there is bias in soap dispensers! https://www.mic.com/articles/124899/the-reason-this-racist-soap-dispenser-doesn-t-work-on-black-skin
And a colleague of mine, Colin Madland (also a PhD student), noticed and exposed an issue with how Twitter decides to crop images in tweets.
https://techcrunch.com/2020/09/21/twitter-and-zoom-algorithmic-bias-issues/
How can this cycle of discovery, outrage, publicity, and THEN correction be changed?