Constructing belief in the post-truth era.


What happens when you detach information from materiality? It’s a question I’ve been considering in my work for a few years, and one that digital humanists and archivists know to be important. Hilary Jenkinson believed the archivist ‘is perhaps the most selfless devotee of Truth the modern world produces’ [Jenkinson, 1947], because they are unobtrusive custodians of the real. But if we really have passed through a Baudrillardian mirror, and the image is now superior to the written word, what appears online takes on a new authority. What does that mean for how we construct belief?

My stepdaughter is 8 years old, and a huge Minecraft fan. She now only plays intermittently, and when she was visiting a month or two ago I asked her why she didn’t play quite so much anymore. She asked me if I had heard of Herobrine. Herobrine is the product of a Creepypasta story: he appears in worlds constructed by Minecraft players, manipulating them and sometimes deleting them entirely. He takes on the persona of Steve, but has white eyes that glow in the darkness. He stalks players across the digital landscape.

There are myriad discussion threads on the subject of Herobrine. Minecraft players seem to delight in perpetuating his “ghost story”, particularly to new users of the game. My stepdaughter had obviously discussed him with other players, and this had led her to question the point of playing if Herobrine was likely to delete the worlds she had laboriously created. I suspect she was also a little afraid that he would loom out of the Minecraft mist one day whilst she was playing and scare her.

It struck me then that my stepdaughter had no means of truly verifying whether Herobrine existed. The discussion threads on which his existence is disputed are without reliable authorial attribution. Her pleasure and enjoyment of the game had been fundamentally affected by the myth.

Herobrine was probably influenced by the Slender Man phenomenon.

In 2014, two 12-year-old girls lured their friend into the woods in their hometown of Waukesha in Wisconsin, and stabbed her 19 times. They did this as a sacrifice to Slender Man, a character who was created for an Internet competition on the website Something Awful.

The idea was to see who could use their Photoshop skills to create the best new mythological creature…In the first of two photos, an unnaturally tall and spectral being in a prim black suit is seen in the shadows behind a group of young teenagers, followed by the vague caption: “‘We didn’t want to go, we didn’t want to kill them…’ -1983, photographer unknown, presumed dead.”

Knudsen’s second photo was stamped with a fake library seal…several children smile towards the camera, while those in the back gather around a tall figure in a suit, summoning them with long and eerie arms. This time, the caption reads: “One of two recovered photographs from the Stirling City Library blaze. Notable for being taken the day which fourteen children vanished and for what is referred to as ‘The Slender Man’…Fire at library occurred one week later. Actual photograph confiscated as evidence. – 1986, photographer: Mary Thomas, missing since June 13th, 1986.”

Slender Man: From Horror Meme to Inspiration for Murder | Rolling Stone Magazine, 2016

The development of the Slender Man meme was taken up by users of YouTube and 4chan, and a participatory relationship developed around the story. By 2011, the Slender Man had acquired Creepypasta status. The myth was made so real that Morgan Geyser and Anissa Weier were prepared to stab one of their peers to death as a sacrifice to him. Both girls are to be tried in court as adults, because the judicial system deems them capable of recognising right from wrong.

But are they? Once the usual referentials are discarded, and a perfect double of the real exists in the digital domain, how do we distinguish truth from fiction? If we are left with the simulacrum, what happens if the simulacrum tells lies?

There is a growing call for the dissemination of misinformation to be policed more effectively, particularly on sites like Facebook. In light of the recent US election result, Mark Zuckerberg has gone on record to dismiss the idea that Donald Trump’s victory was a result of fake news stories perpetuated on social media.

Facebook wants to publish news and profit from it, but it does not want to act as a traditional news organisation would by separating fiction from facts using human editorial judgment. By relying on algorithms Facebook privileges engagement, not quality. It acts as a publisher without accepting the burdens of doing so. Yet, as Aldous Huxley noted, “facts do not cease to exist because they are ignored”.

The Guardian view on social media: facts need to be labelled as facts | The Guardian, Editorial, 2016

What happens when the only source of information available to the majority is online, and that information is untrue? At best, it drives people away from something that they enjoy. At worst, it leads to murder, or perhaps persuades a nation to vote for someone who espouses alt-right sympathies.

According to The New York Times, we have entered the age of post-truth politics:

According to the cultural historian Mary Poovey, the tendency to represent society in terms of facts first arose in late medieval times with the birth of accounting…it presented a type of truth that could apparently stand alone, without requiring any interpretation or faith on the part of the person reading it…accounting was joined by statistics, economics, surveys and a range of other numerical methods. But even as these methods expanded, they tended to be the preserve of small, tight-knit institutions, academic societies and professional associations who could uphold standards.

The Age of Post-Truth Politics | The New York Times, 2016

The problem is that our critical faculties are continually challenged by the material with which we are presented. That isn’t exclusive to the digital domain, of course: lies can be presented in ink as well as code. But when something like Slender Man, or Herobrine, becomes a participatory event; when people create and elaborate material in order to entrench a lie, and so become part of its origin story; and when subsequent consumers of that material have no recourse to other sources that might contradict the myth, then the way we construct our truth is fundamentally flawed. Meanwhile, the critical skills that are essential to determining truth and authenticity are increasingly lacking.

I started this post with an anecdote about my stepdaughter’s use of Minecraft to construct alternative worlds for herself. We do the same thing with truth: we build it, block by block, and fashion our own hierarchies of understanding. Sometimes, the resulting edifice is destroyed by a lie. In a post-truth era, we should be careful about the foundations on which we rest our understanding.

Geoffrey Rockwell, “On the Evaluation of Digital Media as Scholarship”, Profession 2011, pp. 152-168

“Cathy Davidson has argued that we are entering a second phase that can be loosely connected to social media technologies, often given the Web 2.0 designation (“Humanities 2.0”).9 Blogs and now Twitter are examples of social media that have been adapted for research work in the academy. Such emergent forms are particularly hard to evaluate since they don’t resemble any traditional academic form and they are more about process and relationships than finished content. A good blogger (or team of bloggers), however, does a great service to the community by tracking fast-moving issues, linking to new materials, and commenting on those issues. The better blogs will include short reviews, announcements, interesting interventions, and notes about timely matters like exhibits. Blogs, as I have learned, require habits of attention.

Each post might take half an hour to research and post. Posts may appear to be light and quick, but the good bloggers learn and practice their craft. In some ways running a blog is like moderating a discussion list. How often does Willard McCarty post a provocative note to Humanist to promote discussion?

The work of facilitating the conversations we value in the humanities should not be dismissed as service; it can be closer to journal editing. Disciplines interested in human expression should take seriously new types of expression. What is really at issue is whether scholars should participate in experiments or take a critical or judgmental stance and only comment on, review, and theorize about the creative work of others. We have encoded in our departmental divisions views about the values and differences of academic work that separate the creative work of the artist from the critical work of the art historian, or the creative work of the writer of fiction from the theoretical work of the literary scholar who studies her or him. We aren’t entirely sure if the fine, design, and performing arts should be in the academy, as the language of most tenure and promotion documents shows. Imagine trying to get creative digital work evaluated when you aren’t in the art department.

The split between “interpretation” or “theoretical” or “analytical” work on the one hand and, on the other, “archival work” or “editing” falls apart when we consider the theoretical, interpretive choices that go into decisions about what will be digitized and how. (Davidson, “Data Mining”) In addition to the problem of assessing new media work, there is the perception that at best digital scholarship is essentially community work, editorial work, or a form of translation and therefore theoretically light. It needs to be said over and over that there is nothing a priori untheoretical about digital work; it is rather a form of potential theory. I have argued that specifications, for example, instantiate a particular theory of text, and others have argued that prototypes can reify arguments. Every decision of the TEI about how to encode some phenomenon that we take for granted, like a date, is based on a theory of what a date is for the purposes of textual representation. Every research tool bears a theory about the practice of interpretation and the potential for computer-assisted interpretation. Specifications and tools can be done well and be appropriately theorized, or done poorly without a view to the fabric of humanities knowledge. If we don’t recognize and support well-theorized specifications and tools, we will have to live with those that emerge from other groups with needs and questions other than those we care about. Do we really want our tools to be built only by Google and to thus be geared for handling business documentation? Likewise, if we don’t recognize the care and work that goes into maintaining the research commons through editing, blogging, and other social research activities, then our public intellectual space will be managed by others (or simply not be there).

I will go further and say that practices of theory that do not, where appropriate, take into account their implementation are unethical, especially when consequences are openly discussed. The old way of doing theory is premised on an unexamined view that the way ideas are transmitted is primarily through chains of books by great men. This is simply no longer true, if it ever was. The epidemiology of ideas — the way ideas are transmitted, explored, refined, and forgotten — is complex and changing. The Internet is changing the ecology of transmission. A widely read blog can have measurably more readers than a published book. If what we value is appropriate intervention into the flow of conversation we call the humanities, then we need to be prepared to measure contributions, no matter what their form, in terms of their effectiveness as interventions. Counting peer-reviewed books and articles just doesn’t cut it as a measurement of impact, especially with all the problems of peer review and its particular economy. It should be noted that one relevant feature of the digital is that access to information can be logged and measured in ways that were unthinkable before. Viewing statistics are easy to gather for blogs, Web sites, tools, and hypermedia. The statistics we can gather have far more detail than the crude metric of peer-reviewed page counts. While neither page counts nor Web statistics really tell you whether information is having an effect, one can infer a lot more about readers from Google Analytics than one can from sales of a peer-reviewed book.

Things to discuss: What are the subject and audience of the blog? What is the contribution to the research community of the work? Are there statistics that show the reach and impact of the blog? What are some exemplary posts that show the research focus of the blog? Are there plans to archive the blog or to repurpose parts as publications?”