
How AI Can Generate Cats, Snacks, and “the Carceral”

I had some fun with AI this week, but I also felt some existential dread as I thought about the dark, wide-reaching implications of this technology. But let's start with the fun stuff: first, I took the "Which Face Is Real?" quiz a few times. I was able to tell the difference between the real faces and the AI-generated ones correctly every time, but it was pretty difficult. When choosing between the faces, I had to rely on my gut feelings rather than on logic. Something about the fake faces looked too airbrushed or blurred to be real; they were right on the cusp of the uncanny valley. The real faces, on the other hand, had more imperfections and unique characteristics.

Speaking of fake faces, I used the website This Person Does Not Exist to generate an ambiguously aged, unpleasant-looking person from the uncanny valley who I am quite glad does not actually exist.

I’m not sure if this is a bratty kid about to throw a tantrum or a Karen about to ask for the manager.

I also tried out This Cat Does Not Exist twice (because of course I did). I was pretty shocked by the results of this one; I would never have guessed that these cats are completely AI-generated.

This one looks like my cat Jon!

This Rental Does Not Exist yielded much less convincing results. The description reads like someone just kept tapping the next suggested word in their phone's predictive-text software, and the rooms' décor is not exactly to my taste.

I also really wasn't a fan of This Snack Does Not Exist. Not only does soaking cotton candy in apple cider sound gross, but it also made me sad, because I started thinking about how, if I tried to pour liquid cider over wispy cotton candy, I'd be just like the poor raccoon in that one video.

These AI-generated images are pretty neat, but the implications of this technology, and the processes it can be used for, are much darker than they appear at first glance. Ruha Benjamin discusses some of the more nefarious and harmful ways AI can be (and is being) used in her article "Catching Our Breath: Critical Race STS and the Carceral Imagination." This article expands on the intersection of race and technology that we discussed last week; namely, how developers bake their own biases into the technology they create, which leads to further oppression of marginalized people.

Benjamin's numerous examples of how technology perpetuates systemic racism are shocking and disheartening. These kinds of systemic injustices just keep happening, and they're only going to get worse as society's biases intersect with ever-evolving technologies. For example, Benjamin discusses "computer-generated risk assessment tools [being] biased against black Americans…[and] falsely flag[ging] black defendants as future criminals" (149). This concept feels like modern-day phrenology to me: we're just using Big Data instead of skull size to predict which crimes a person might commit.

This type of predictive technology has implications outside of the criminal justice system as well. Benjamin explains how numerous institutions in our society, like schools and hospitals, have roots in "the carceral." I wish she'd been a bit more specific about how this carceral ideology has infected so many of society's institutions, because I don't know enough about, say, the healthcare system to understand beyond a surface level how it contributes to the oppression and subjugation of minority communities.

However, as a teacher, I do know that "the carceral" is well documented in the U.S. education system: the presence of police instead of counselors in schools, the student achievement gaps that are currently widening due to the pandemic, and the disproportionate suspension rates for black students all contribute to a disturbing school-to-prison pipeline.

The predictive technology Benjamin describes can make this problem worse. Already, schools are collecting data on their students' grades and standardized test scores and using that information to predict their future academic performance. While this technology could certainly be used positively, as a preventative intervention strategy for at-risk students, there's also the danger that it will keep already marginalized students out of advanced classes or prestigious universities, which will, of course, affect their opportunities later in life.
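To make that danger a little more concrete, here is a minimal, purely hypothetical sketch (in Python, using scikit-learn) of the kind of "at-risk" predictor a district might build from grades and test scores. Every number, column, and label here is invented for illustration; the point is only that if the historical labels reflect biased human decisions, a model trained on them will faithfully reproduce that bias.

```python
# Hypothetical sketch: an "at-risk" predictor trained on historical flags.
# All data, feature names, and labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Features a district might actually store: GPA and a standardized test score.
gpa = rng.uniform(1.5, 4.0, n)
test_score = rng.normal(1000, 150, n)

# A demographic group marker (0/1). In a fair world it should carry no signal.
group = rng.integers(0, 2, n)

# Historical "flagged as at-risk" labels. Suppose past staff flagged group-1
# students more often at the same GPA; that bias is now baked into the labels.
logit = -2.0 * (gpa - 2.5) + 1.2 * group
flagged = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([gpa, test_score, group])
model = LogisticRegression().fit(X, flagged)

# The model reproduces the historical bias: two students with identical grades
# and scores receive different predicted "risk" purely because of group.
same_student = np.array([[3.0, 1000, 0], [3.0, 1000, 1]])
print(model.predict_proba(same_student)[:, 1])
```

Nothing in this toy model "knows" anything about students; it simply learns whatever pattern, fair or not, is already in the records it was given.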

As I've stated in previous blog posts, the question for the post-pandemic university is what data these academic institutions are collecting on their students and how they'll use it. Schools and universities need to be aware of the societal biases that are baked into predictive technology, and they must find ways to mitigate the harm those biases will do if left unchecked.
