Okay, so Zeynep Tufekci’s TED Talk gave us a lot to talk about regarding artificial intelligence and its use to influence human behavior online. She opens with online ads and the intent behind them, which of course we can all guess is to make us buy more stuff…but it’s apparently more than that. As Tufekci explains, these online ads that keep popping up track whether or not we interact with them, and can even predict our next moves. This is what she refers to as persuasion architectures. In her talk, Tufekci explains: “In the digital world, though, persuasion architectures can be built at the scale of billions and they can target, infer, understand and be deployed at individuals one by one by figuring out your weaknesses, and they can be sent to everyone’s phone private screen, so it’s not visible to us.”
It doesn’t stop there, either. But we already guessed this, didn’t we?
Tufekci goes on to explain that these machine learning algorithms take what they learn about human characteristics and apply it to other people, in order to uncover their weaknesses too. That’s why, for example, Facebook collects as much data on you as possible, from your date of birth down to your statuses. They collect this data so they can serve you ads and content that will keep you on their site longer. You can see the same thing in the algorithm YouTube builds for you as you watch videos, to the point where videos start repeating in your autoplay queue. Before you know it, you’ve been sitting on YouTube for two hours watching creepypasta videos about ancient Egyptian cats.
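To make the idea concrete, here’s a toy sketch of an engagement-maximizing feed ranker. Every name and number here is a made-up illustration, not any real platform’s code: the point is just that ranking by *predicted* engagement naturally feeds you more of what already hooks you.

```python
# Toy sketch of an engagement-maximizing feed ranker.
# All names and weights are hypothetical illustrations,
# not any real platform's algorithm.

def predicted_engagement(post, user_profile):
    """Guess how long a user will linger on a post, based on
    overlap with topics they've engaged with before."""
    overlap = len(set(post["topics"]) & set(user_profile["liked_topics"]))
    return post["base_watch_time"] * (1 + overlap)

def rank_feed(posts, user_profile):
    """Order the feed so the 'stickiest' content comes first."""
    return sorted(
        posts,
        key=lambda p: predicted_engagement(p, user_profile),
        reverse=True,
    )

user = {"liked_topics": {"cats", "ancient egypt"}}
posts = [
    {"id": 1, "topics": ["news"], "base_watch_time": 30},
    {"id": 2, "topics": ["cats", "ancient egypt"], "base_watch_time": 30},
]
feed = rank_feed(posts, user)
# The cat video outranks the news post for this user, even though
# both would hold a random viewer for the same 30 seconds.
```

Notice the feedback loop: the more the system shows you what it predicts you’ll watch, the more data it gets confirming that prediction.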
But wait, there’s more!
Not only do these machine learning algorithms use persuasion architectures to keep showing you what you like; they also try to push you toward extremes and toward views opposed to your own. AI can even infer a person’s sexual orientation from pictures (I’d like this to be further explained, but I’ll leave it be for now), along with their political moods, mental stability, and so on.
In short, the ads we avoid and the algorithms we end up tangled in are also organizing our political, personal and social information: how it flows, how it can be encouraged in one direction and discouraged in another. In her talk, Tufekci touches on manipulation during the 2016 election: “So last year, Donald Trump’s social media manager disclosed that they were using Facebook dark posts to demobilize people, not to persuade them, but to convince them not to vote at all. And to do that, they targeted specifically, for example, African-American men in key cities like Philadelphia.”
Wow. What can even be said about that?
Honestly, it’s easier to ignore this issue when it’s not in your face and when no one is talking about it.
But who’s to say that they’re not? This makes me wonder more and more about shadowbanning: blocking a user’s content on a social media site in a way that the user who posted it has no idea it’s happening. The user, therefore, is the only person who can see their own posts. I’ve often heard people talk about no longer seeing a certain person’s content on their feed, or watched people post on one platform about switching to it because they were blocked on another…with TED Talks like Tufekci’s, this reality only becomes all the more scary.
Sometimes, when I’m scrolling through an app and suddenly see a post from someone who hasn’t shown up in my feed in ages, I wonder why their posts disappeared in the first place. It always leads me to think about how many useless promoted posts I see on my feed instead of content from the people I actually care about.
Take a minute to pick up your phone and open the Instagram app. If you scroll for a few minutes, I can promise you’ll see an ad, whether it interests you or not.
As I wrote this, my partner was scrolling through Instagram and suddenly expressed “Why would I give a crap about this?”
Which only made me think even more: what exactly is being done with the information these algorithms collect about us? Why show us what we’re not interested in, and hide the content we actually want to see from other people? It only makes me want to use the app less and pull myself further and further into the confines of a safe space where no one can see me.
But then that also makes me wonder: is that also on the algorithm’s agenda? To weed me, a possible anomaly in its understanding of human characteristics, out of the equation so it can better reach its target crowd?
Either way, why? And why are all of these companies switching to machine learning algorithms? Do they all share the same agenda, and if so, what is it?
To control us? To manipulate what we buy and see? To what extent? Am I thinking too deeply into this? Does that make me a borderline conspiracy theorist?
Or am I simply looking too closely into it?