Remember that Vine with the cat 'saying' "yas" to wanting a treat? If you don't, the description says it all. I remember watching this video with someone who had a hard time fully believing that it was someone voicing over the cat's meow and not actually the cat saying 'yas.' This simple exchange shocked me at the time in an oddly deep way. How could you question something that was so obviously contrived and fake? Even now when I remember the conversation, it leaves me with a horrible feeling inside.
The feeling is reminiscent of the time I witnessed my little brother fall into a pool when he was around 3 or 4. The moment of danger was short thanks to my dad being right there to reach in and grab him out, but the realization that someone so vulnerable could be so easily harmed by their environment if no one was there to protect them left me with a residual terror that gets stirred up anytime I catch a hint of a similar dynamic. You would think the terror of a child falling into a pool wouldn't compare to someone being a little too convinced by a silly video, but the vulnerability present in both situations is all too terrifying. What if that person approached most of their online activity, use of technology, and consumption of media that way and was taken in by something more insidious than a silly cat video? Say, for instance, the belief that a secret occult ring of pedophile government officials was operating out of the basement of a pizza parlor?
As I read Autumm Caines' essay "The Zoom Gaze," I couldn't help but feel a sudden wave of this terror of vulnerability. In some ways, it almost feels conspiratorial in and of itself to think that the technology we use to communicate could have such dark repercussions if we don't think critically enough about how it positions us in terms of visibility, both to ourselves and to those around us. I mean, is it really that bad? Can't we just use Zoom and not have to think about whatever the "man behind the curtain" is doing? If the company tells me it has created something intended to be a helpful, life-altering tool, can't it be just that if I believe it to be so? The fact that I even want to ask these questions makes me realize the terror I feel from this article stems from my own potential to be overly vulnerable in the online and technological environment we are living in right now.
Caines' article is important because it doesn't take for granted that the questions above might be on a lot of different people's minds; in light of this, she demands a critical consideration of the forces behind the tools we are becoming so accustomed to using in our everyday lives – videoconferencing platforms in particular. Caines argues that understanding how the 'Zoom gaze' impacts our perception and our ideas of control sheds light on how old oppressive hierarchical patterns can be recreated in new ways. By entering into a new system of surveillance, we are exposed to a new way of perceiving who we are and how we present ourselves. Because we are being placed under this new 'gaze,' we feel a need to control what we do, how we look, and what our environment says about us in new ways. At the same time, our ability to actually control these elements is challenged by features of videoconferencing like having one person be the host who can decide to record, mute, track eye movement, and so on. According to Caines, these features not only challenge privacy, but they can also create power dynamics that – whether the intention behind them is good or bad – seem to lean more toward the bad than the good in implementation. These dangers, and the deeper issues of what is actually happening in Zoom fatigue and the Zoom gaze, are at times hard to see when the companies that create these platforms are adept at promoting everything in a way that erases the potential for harm.
I am intrigued by this idea of how a tool is created and what its 'intended' purpose is versus how it really plays out in its implementation. I think the 'intention behind the tool' dynamic goes back to some thoughts I had last semester about what the driving force is behind things. Most tools seem like empty containers that we can then imbue with moral and ethical meaning and purpose. Isn't Zoom just one more empty container that we are now seeing can be implemented in harmful ways… or does it go deeper than that? Is its very makeup corrupt in ways that go deeper than a simple platform that allows us to communicate? Caines considers this dynamic:
“In video conferencing, the software itself can assign power relations that may or may not map onto existing social relations. The Zoom gaze ultimately comprises how the software’s programmers see users in the abstract, a perspective that can condition all the other possible perspectives within a video conference.”
Here we see that the so-called 'empty container' isn't so empty after all; there is a mind within the tool already dictating what it is supposed to be and do. I think in many ways this is the complexity that *modern* technology has brought into our lives: underneath the tool is the software, the "brain" of the tool, upholding invisible forces of power. We can no longer approach any kind of modern technology – phone, car, smart TV, Alexa, Siri – with the assumption that it is an empty container to be filled. It has been pre-filled with a preexisting notion of what the user and the world are like. I emphasize the "modern" in modern technology because certain technologies of the past were more in alignment with the idea of the empty container. You couldn't really say in the past that – for instance – a car's machinery was itself marginalizing or creating power structures. Certainly, the car became imbued with symbols of hierarchy and was used in sinister ways, but the car itself? It was a tool without any real insidious built-in technology.
Caines is asking her readers to think critically about how the very makeup of Zoom and the resulting Zoom gaze have larger implications for privacy, power, and equity. When I consider this need for critical thinking in light of the 'empty container' notion, I almost feel like maybe we need a new critical thinking model to go with her request. I realize that technology and software have been around for a long time and that people have been thinking critically about them much longer than I have been around. But the fact that we are still having issues like Black skin being washed out on video platforms and programs that punish people for their eye movement says that maybe people aren't thinking as critically as we would want. Maybe there is still too much of a tendency to treat technology as simply a tool that we assign good or bad intention to, instead of recognizing that the intention is already built in.
I almost feel a little paranoid saying all that, but what this conversation on the Zoom gaze makes me think about is how I was raised and taught to think about technology. I wasn't thinking about whether the makeup of the programs I was using could be harmful; I just used them and assumed they were helpful if that is the way I used them. Until the last few years, I knew nothing about how even code can be racist. In light of this introspection, a rabbit hole I would want to go down is the idea of the forces behind technology, especially from the angle of what I'll call the 'Boomer/Gen X gaze.' I know even the mention of Boomer stirs up some tense feelings, and I truly don't want to come off as ageist in saying that, so I will add disclaimers to try to soothe the Boomer or Gen Xer who might have some ruffled feathers. I realize not ALL Boomers and Gen Xers fall into the category of not being critical about the forces behind technology. In fact, I am learning plenty from people in those generations about how to be more skilled in my understanding and use of technology. That said – what happens when millennials like me were raised and taught by Boomers and Gen Xers who look at technology through the 'empty container' lens? (Interestingly, this question stems out of another seemingly unrelated question – my suspicion is that they are related – that I've been mulling over about the color-blind generation and its impact on millennials. Going into that now would get a little off topic, though, so I hope to explore it another time.)
Something from Boyd's podcast last week fits this conversation about the change in modern technology and the forces at work behind it. Boyd mentioned how helping "kids" think more critically about the internet and technology was important to helping them become more thoughtful about their use of online tools and platforms. She felt like the older, more 'mature' generation had these critical thinking skills to offer kids. I agree that kids need to learn critical thinking, but I don't think they are the only ones; and this "more mature" generation that is supposed to teach them? I'm sorry, but they aren't always the ones who have the tools to teach the type of critical thinking needed for modern technology. Again – not all of the older generation. But if some of that generation doesn't believe in, or simply doesn't talk about, things like systemic racism, or the idea that a 'tool' could be racist or sexist or ableist in and of itself… can they really offer the right way of thinking critically about technology?
"The Zoom Gaze" opens a conversation about the way we approach technology and how the critical thinking skills we need to engage with modern technology must be updated like the very software that has made it all so complex. This conversation is vital for the Post-Pandemic University because our universities will be filled with the people who are supposed to use and build these technologies. They will be the new generation critically thinking about and altering the invisible forces behind our tools of technology. They will be the ones to answer the question: if we are able to create technology whose makeup is harmful, can't we also create technology whose makeup is good? What if there really were a way to shift the meaning we put into things so that the default is to actually do good?
I realize that the forces at work in these technologies go beyond the software, and that tools can still be used in good or bad ways regardless of software – but my main point is that we need to approach our understanding of technology with a critical thinking that first considers the invisible forces behind the tool, without assuming it will be whatever we want it to be. If we assume that the intentions we bring to it will determine the nature of its implementation (empty container thinking), we may remain blind to the larger forces at play that Caines is discussing. The less we see those forces and the more we grow accustomed to their influence, the more vulnerable we will be to their harmful effects. No wonder we are all so fatigued – our world has been opened more than ever to forces that we have to be prepared to question and guard ourselves and others from. So, in the words of a talking cat, "yas," you need to think twice about the Zoom gaze.