I’ve watched both videos for next week’s discussion on Artificial Intelligence, Race & Technology, and there is a lot to unpack when we meet.
We’ve already talked about the strange hold technology has on what we can do, and it shows no signs of stopping. I think as a society we’ve reached a point where the rate of growth is so far ahead of us, and moving so rapidly, that we cannot properly regulate it; I made a similar point earlier about NFTs.
Of the two videos we had to watch, How I’m fighting bias in algorithms shocked me the most. In it, Joy Buolamwini discusses the way technology is shaping perception (or the lack thereof) in society, and how, if left unaddressed, it could have damaging effects.
Her story of dealing with face-recognition software is something I had heard about before, but I didn’t realize how far it reached. A computer’s inability to recognize all faces is disturbing, but what I found more disturbing is realizing that it is just following the orders it has been given.
“However, if the training sets aren’t really that diverse, any face that deviates too much from the established norm will be harder to detect, which is what was happening to me.” Why is it that, as a society, lighter faces are considered the norm? The world is full of diversity, and the United States is no different. But when we treat one appearance as the ‘norm’, it damages the perception of other cultures, which are then measured against American ideals like this one. Because this norm is now extending to computer AIs, it could entrench biased perceptions that will be very difficult to undo.
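Buolamwini doesn’t show any code in the talk, but to make the idea concrete for myself, here is a rough sketch of how that kind of disparity could be measured: run the same detector over images labeled by group and compare the detection rates. The detector and the labeled data here are hypothetical stand-ins, not anything from the talk.

```python
from collections import defaultdict

def audit_detection_rates(detector, labeled_images):
    """Compare a face detector's hit rate across groups.

    labeled_images: list of (image, group) pairs, where group is a
    label such as a skin-tone category; detector(image) returns True
    if the model found a face. Both are hypothetical stand-ins.
    """
    found = defaultdict(int)
    total = defaultdict(int)
    for image, group in labeled_images:
        total[group] += 1
        if detector(image):
            found[group] += 1
    # Detection rate per group; a large gap between groups is
    # exactly the disparity Buolamwini describes experiencing.
    return {group: found[group] / total[group] for group in total}
```

If a check like this reports something like 0.99 for one group and 0.65 for another, the “norm” baked into the training set is showing through.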
This reliance on technology is also starting to seep into areas where it isn’t needed. Another quote from Buolamwini that sent chills down my spine was:
“Law enforcement is also starting to use machine learning for predictive policing. Some judges use machine-generated risk scores to determine how long an individual is going to spend in prison. So we really have to think about these decisions. Are they fair? And we’ve seen that algorithmic bias doesn’t necessarily always lead to fair outcomes.”
I can’t be the only one who finds that off-putting. Lives can be put in jeopardy by the wrong AI. We’ve established that these systems have to be coded by someone first, and I suspect bias can be built into them just as it was with face-scanning. Humans are complicated creatures, so there needs to be room for nuance and complex understanding that cannot be handled digitally. I’m really starting to think that important facets of humanity are being discarded for digital convenience. Fixing this feels realistically possible, though, since code can always be changed.
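Since I suspect bias can be coded into risk scores the way it was into face-scanning, here is one more hypothetical sketch of what checking for that could look like: compare how often people who did not reoffend were still flagged as high risk, group by group. The records and field names are my own invention, not any real system’s.

```python
def false_positive_rate(records, group):
    """Share of a group's non-reoffenders who were still flagged high risk.

    records: list of dicts with keys 'group', 'high_risk', and
    'reoffended' (all hypothetical field names for this sketch).
    """
    flagged = sum(1 for r in records
                  if r["group"] == group and r["high_risk"] and not r["reoffended"])
    did_not_reoffend = sum(1 for r in records
                           if r["group"] == group and not r["reoffended"])
    return flagged / did_not_reoffend if did_not_reoffend else 0.0
```

If that rate comes out far higher for one group than another, the score is punishing one group’s non-reoffenders more often, which is the kind of unfair outcome Buolamwini warns about when judges rely on these numbers.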