Is this the Matrix?: Reality in the era of bots

NPR’s Tom Ashbrook hosts a show called On Point, which covers a multitude of topics ranging from schooling to online dating to genetics to The Beatles’ Sgt. Pepper’s Lonely Hearts Club Band. Available as a podcast, On Point featured a story on August 9th about bots, which I listened to with curiosity and dismay, and with less surprise than I wish I’d had. Bots are essentially automated software programs that run tasks on the Internet, and according to one of the experts on the show, they’ve been around as long as the World Wide Web itself. The show’s focus, however, was much more specific: the use of bots by certain individuals, organizations, and political entities to disseminate propaganda and fake news, or “disinformation,” in order to meddle in electoral politics. The show’s guests discussed the ways bots originating in Russia were used during the 2016 election to influence the U.S. population’s view of the candidates, the issues under discussion, and the general political state of affairs of our country, to which an elected president would, in theory, be a resonating response. Apparently, these bots can generate commentary and content that is, at best, biased and, at worst, patently false.


By Ian McKellar from San Francisco, CA, USA – Elektro and Sparko, taken from: www.maser.org/k8rt/, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=18986910

This is clearly a new era we’re in, because though the use of propaganda is as old as human society itself – propaganda, incidentally, means simply a form of communication intended to sway its audience in favor of or against a given individual or group – the bots employ it in a curious way. Deployed on social media sites like Twitter, Facebook, and Reddit, bots create “news” content whose sheer volume and apparent relevance to one’s own opinions can persuade a reader to adopt that opinion. They function cleverly, or rather are designed cleverly, in that they emulate real people by patterning their language on that of actual participants, and they appear to confirm the reader’s views by offering information that fits our established beliefs, persuading us via confirmation bias. Given the magnitude of these bots’ influence (their numbers appear to run into the thousands across popular social media sites), it may not be too much to suggest that our view of the world, at least the view we draw from our screens and hear echoed in the mouths of our colleagues and loved ones, is no longer a simple wake-up-and-see-what’s-true-today process.
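To make the “patterning off of language” idea concrete, here is a toy sketch of one simple technique such a bot could use: a first-order Markov chain that learns which words tend to follow which in an existing feed, then emits superficially human-sounding text by sampling from those patterns. The function names (`build_chain`, `generate`) are my own for illustration; real bot networks are of course far more sophisticated than this.

```python
import random
from collections import defaultdict


def build_chain(corpus):
    """Learn, for each word in the corpus, which words follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain


def generate(chain, seed, length=10):
    """Emit text by repeatedly sampling a plausible next word."""
    out = [seed]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the last word never appeared mid-sentence
        out.append(random.choice(followers))
    return " ".join(out)


# Train on a tiny "feed" and produce bot-like output in its style.
chain = build_chain("the bots post news the bots post opinions")
print(generate(chain, "the", length=5))
```

Because the output recycles the vocabulary and phrasing of the source feed, it reads as if it came from another participant in the conversation, which is precisely what makes such content persuasive at scale.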

Or is it? I’m no technophobe, but I do come from a generation raised without the Internet, without screens (excepting only a half hour of TV a day, for which I’m still grateful), and without the nagging sense that I might at any moment be missing something on a screen awaiting my attention. I remember rotary phones and folded-up maps stuffed in the glove box. This is not simple nostalgia, however. I’m actually asking what we might do about something all of us, as deeply smitten phone lovers, are well aware of.


By Aditya19472001 (Own work) [CC BY-SA 4.0 (http://creativecommons.org/licenses/by-sa/4.0)], via Wikimedia Commons

I suppose what I’m asking is: how did we develop critical literacy and media literacy in the past? How did we think about the information presented to us, sort through it, and determine what was of value, not because it made us feel warm and safe but because it showed us what was actually happening in the world? The American-born poet T.S. Eliot apparently distrusted even newspapers, believing that those who read them were easily manipulated away from a true engagement with the world. I’m not suggesting we take in no information from news sources, which we now tend to read online, but rather that we return to asking where we get our “news” from. And this is really the key when we think about social media. Baudrillard’s hyperreality was one in which, as in The Matrix, individuals are completely enveloped by the worldview they consume as true (that is, my belief about my reality is created and given to me outside of my own influence). Under this social logic, we are mere consumers of our reality, not participants in it. This is not unlike the consumer posture we are encouraged to take as we scroll past the ads and clickbait that accompany photos of our cousin’s new baby. We may not realize that our reality, our political agency, is slowly being pushed behind a curtain and replaced by blurps and blips that confirm our perspectives and comfort us that we are right, that we are looking at what’s “real.” The battle, it seems, is philosophical and psychological as well as political and technological.

To close with the questions Eliot asks in his famous modernist masterpiece, The Love Song of J. Alfred Prufrock:

To wonder, “Do I dare?” and, “Do I dare?”…
Do I dare…Disturb the universe?

Do we dare to do this? Do we dare to put the phone away, close the Twitter feed, log off of Facebook, even for a moment, a moment when we might miss something…a something which might be worse than taking in nothing at all?
