I am glad we have spent some time thinking about the everyday consequences of data surveillance, and I think that discussing the “Screening Surveillance” films has been a great way to get at the issues (moving beyond mere conceptual analysis) as we start to think about how this ubiquitous reality is shaping our everyday lives. The conversations we had about “A Model Employee” and, more recently, about “Blaxites” led to some meaningful insights on your part. Speculating together (imagining different versions of the story, or how small plot changes could lead to very different outcomes) has also been a way for us to understand what is at stake.
Earlier, Maura highlighted a few critical concepts from Shoshana Zuboff’s analysis, including the ways machine learning and tech companies have “outrun public understanding” and the key concept of “anticipatory conformity” – which should lead us all to question how we can be sure our beliefs and behaviors are truly our own (rather than the product of perceptions ultimately shaped by the algorithms that frame our world via the internet).
We have also had a chance to look at the TED talk by Joy Buolamwini:
Buolamwini zeroes in on the role that data plays in algorithmic injustice, helping us understand how our human blind spots (i.e., racism) can very well become “baked-in” blind spots when data is mobilized via machine learning. She illustrates the flaws that may exist in data sets, and then expands her analysis to include the problems that result from the practices and policies shaped by the use of artificial intelligence. How does algorithmic bias lead to discriminatory practices? When algorithms become the basis for things like predicting performance in school, predictive policing, and overall societal access (admissions, loans, insurance, etc.), our future(s) are indeed cast by data patterns. Buolamwini also calls for “inclusive code” and inclusive coding practices. As she states, this starts with considering the people writing the code (who is included here?) and the ethics of the design teams employed to build new models. Can we “factor in fairness” before we scale our systems for efficiency with machine learning, when the world itself is already rife with inequity? In bringing up the recent case of researcher Dr. Timnit Gebru at Google, I meant to point us all towards both the currency and urgency of these questions.
…And just a reminder to check out Alan’s post which has given us all much to play with regarding AI exploration:
And also check out this thread:
Next Thursday
I am looking forward to our time together next Thursday! #NetNarr is pleased to host another Studio Visit with special guest Chris Gilliard. We will engage Chris in a lively discussion of race, data surveillance, and artificial intelligence. Please check out Chris Gilliard @hypervisible.
Please register for this event here.
After we connect with Chris, we will jump back into our regular #NetNarr Zoom room to hear from Tom, who will be pathfinding based on Ruha Benjamin’s “Catching Our Breath: Critical Race STS and the Carceral Imagination,” Engaging Science, Technology, and Society 2 (2016): 145-156. It will be an engaging evening!
Finally, just a reminder that we have all agreed to make Thursday, April 8th a free evening. I hope you will take the “breather” to refresh and recuperate from this dense and challenging stretch of time.
See you all Thursday!
Dr. Zamora