
Persuasion, Politics, and the Post-Pandemic University

In my past couple of blog posts, I’ve mentioned how data collection and a lack of privacy are looming concerns that I try not to think too hard about while I’m using the internet for work, school, and entertainment. I haven’t really been able to articulate exactly why the thought of companies collecting my data upsets me—obviously, I want to keep some parts of myself private, but there’s more to it than that. Zeynep Tufekci’s TED Talk “We’re Building a Dystopia Just to Make People Click on Ads” does a better job than I ever could of explaining why allowing corporations to have unlimited access to our data is so dangerous.

Tufekci describes how companies like Facebook and Google capture their users’ data and use it to sell targeted advertising, constantly refining their “persuasion architecture.” Like the candy next to the cash register that convinces a mom to spend money on her sweet-toothed kid, online persuasion architecture is designed to convince people to spend money by showing them ads based on their previous searches, the groups they’re part of, the posts they interact with, etc. This in itself is kind of creepy; I don’t like to think about how Facebook has read all of my so-called “private” messages and viewed hundreds of my photos. And the specificity of the ads is pretty unsettling, too. Sometimes I worry my phone is reading my mind because the advertisements it’s showing me are so relevant.
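Out of curiosity, I tried to imagine what this kind of targeting might look like at its most basic. Below is a toy sketch I wrote myself. It is definitely not how Facebook or Google actually do it (the profile-building and scoring functions are entirely my own invention), but it captures the core idea: match ads against a profile built from someone’s searches and interactions.

```python
# Toy illustration of ad targeting via a user "interest profile".
# This is a simplification written for this post, not any company's real system.

from collections import Counter

def build_profile(searches, liked_posts):
    """Count keywords from a user's searches and interactions."""
    profile = Counter()
    for text in searches + liked_posts:
        profile.update(text.lower().split())
    return profile

def score_ad(profile, ad_keywords):
    """Score an ad by how much it overlaps with the user's profile."""
    return sum(profile[word] for word in ad_keywords)

profile = build_profile(
    searches=["cheap flights to denver", "hiking boots"],
    liked_posts=["national parks road trip", "best hiking trails"],
)
ads = {
    "outdoor-gear": ["hiking", "boots", "trails"],
    "tax-software": ["taxes", "filing", "refund"],
}
# The ad that best matches the profile wins the slot.
best = max(ads, key=lambda name: score_ad(profile, ads[name]))
print(best)  # outdoor-gear
```

Even this crude version feels invasive once you imagine the profile containing years of searches and messages instead of four strings.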

But, as Tufekci explains, the more sobering issue is how this persuasion architecture can be applied to fields other than advertising—like politics. She outlines some truly terrifying scenarios to show how artificial intelligence and machine learning can do things like funnel viewers toward white-supremacist content and quietly influence voter turnout. These systems collect our data and use algorithms to curate feeds of information that confirm and intensify our political beliefs.
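To see why these feeds intensify beliefs rather than just reflect them, I sketched a caricature of an engagement-driven recommender. None of this is real platform code; the “intensity” scale and the engagement probability are assumptions I made up for this post. The point is the ratchet: if more intense content holds attention better, and the system optimizes for attention, the feed only ever escalates.

```python
import random

# A made-up caricature of an engagement-maximizing recommender.
# "Intensity" runs from 0 (mild) to 10 (extreme); both the scale and the
# 80% engagement rate are assumptions invented for this illustration.

def recommend(last_engaged_intensity):
    # Show something at least as intense as the user's last engagement,
    # because more intense content tends to hold attention longer.
    return min(10, last_engaged_intensity + random.choice([0, 1]))

intensity = 2  # start with fairly mild political content
history = [intensity]
for _ in range(20):
    shown = recommend(intensity)
    if random.random() < 0.8:  # the user usually engages with what's shown
        intensity = shown
    history.append(intensity)

print(history)  # intensity only ratchets upward; it never drifts back down
```

Run it a few times and the pattern is the same: wherever you start, the simulated feed ends up far more extreme than where it began, without anyone ever choosing to be radicalized.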

Tufekci’s point about the growing impossibility of public debate rings true for me; the feeds of (mis)information we receive are so specific and radical that it feels like each of us is living in a different reality. For example, the rapid spread of disinformation on social media has caused a staggering number of Trump supporters to believe that antifa was responsible for the insurrection at the Capitol on January 6, and the riots themselves were the result of the false narrative of a stolen election. How can you have a conversation with someone whose warped perception of reality is so contrary to the truth? 

And how can we even be sure our own perceptions of what’s true are accurate? Although I’m intellectually aware of how these algorithms curate feeds of information, I also have to acknowledge that I’m being subconsciously influenced by the people and pages on my own social media feeds. I mostly follow and interact with people whose politics I agree with, and it’s all too easy to get caught in an echo chamber without realizing it. I do my best to take a step back every once in a while and evaluate my beliefs objectively, but I can’t pretend I’m completely unbiased.  

No matter how objective or unbiased you try to be, the truth is that the information the algorithm feeds you will influence your actions, beliefs, and emotions. We can’t rely on tech companies to mitigate the harm they’re doing because we live in a capitalist society; users’ continued engagement with this persuasion architecture makes a lot of money for corporations, and the government (as I’ve mentioned in previous blogs) is too out of touch and ignorant to step in and regulate how our data is collected, shared, and sold. Tufekci suggests the solution is to rebuild AI so that it works to help humanity instead of harming it, but I’ve seen too many sci-fi movies to be convinced that adding more superintelligent computers into the mix will have positive results.

[Image: promotional still from Ex Machina]
I’m not the only one who watched Ex Machina, right?

As I wonder about possible solutions to the problems AIs are creating, I also wonder how the post-pandemic university will be affected. The pandemic has provided more opportunities for data collection in the form of test proctoring software that records students in their homes, but even before instruction moved entirely online, I suspect students’ data was already being catalogued by companies that provide learning management systems. How will software like Blackboard, Canvas, Google Classroom, etc., use the data collected from students’ submitted papers, discussion questions, and presentations? In the post-pandemic university, we must question what data schools are collecting, who they’re sharing that data with, and how that data can be used against us.
