Constructing belief in the post-truth era.


What happens when you detach information from materiality? It’s a question I’ve been considering in my work for a few years, and one that digital humanists and archivists know to be important. Hilary Jenkinson believed the archivist ‘is perhaps the most selfless devotee of Truth the modern world produces’ [Jenkinson, 1947], because they are unobtrusive custodians of the real. But if we really have passed through a Baudrillardian mirror, and the image is now superior to the written word, what appears online takes on a new authority. What does that mean for how we construct belief?

My stepdaughter is 8 years old, and a huge Minecraft fan. She now plays only intermittently, and when she was visiting a month or two ago I asked her why she didn’t play it quite so much anymore. She asked me whether I had heard of Herobrine. Herobrine is the product of a Creepypasta story: he appears in worlds constructed by Minecraft players, manipulating them and sometimes deleting them entirely. He takes on the persona of Steve, but has white eyes that glow in the darkness. He stalks players across the digital landscape.

There are myriad discussion threads on the subject of Herobrine. Minecraft players seem to delight in perpetuating his “ghost story”, particularly to new users of the game. My stepdaughter had obviously discussed him with other players, and this had led her to question the point of playing if Herobrine was likely to delete the worlds she had laboriously created. Also, I suspect, she was a little afraid that he would loom out of the Minecraft mist one day whilst she was playing, and scare her.

It struck me then that my stepdaughter had no means of truly checking the veracity of Herobrine’s existence. The discussion threads on which his existence is disputed are without reliable authorial attribution. Her pleasure and enjoyment of the game had been fundamentally affected by the myth.

Herobrine was probably influenced by the Slender Man phenomenon.

In 2014, two 12-year-old girls lured their friend into the woods in their hometown of Waukesha in Wisconsin, and stabbed her 19 times. They did this as a sacrifice to Slender Man, a character who was created for an Internet competition on the website Something Awful.

The idea was to see who could use their Photoshop skills to create the best new mythological creature…In the first of two photos, an unnaturally tall and spectral being in a prim black suit is seen in the shadows behind a group of young teenagers, followed by the vague caption: “‘We didn’t want to go, we didn’t want to kill them…’ -1983, photographer unknown, presumed dead.”

Knudsen’s second photo was stamped with a fake library seal…several children smile towards the camera, while those in the back gather around a tall figure in a suit, summoning them with long and eerie arms. This time, the caption reads: “One of two recovered photographs from the Stirling City Library blaze. Notable for being taken the day which fourteen children vanished and for what is referred to as ‘The Slender Man’…Fire at library occurred one week later. Actual photograph confiscated as evidence. – 1986, photographer: Mary Thomas, missing since June 13th, 1986.”

Slender Man: From Horror Meme to Inspiration for Murder | Rolling Stone Magazine, 2016

The development of the Slender Man meme was taken up by users of YouTube and 4chan, and a participatory relationship developed around the story. By 2011, the Slender Man had acquired Creepypasta status. The myth was made so real that Morgan Geyser and Anissa Weier were prepared to stab one of their peers to death as a sacrifice to him. Both girls are to be tried in court as adults, because the judicial system deems them capable of recognising right from wrong.

But are they? Once the usual referentials are discarded, and a perfect double of the real exists in the digital domain, how do we distinguish truth from fiction? If we are left with the simulacrum, what happens if the simulacrum tells lies?

There is a growing call for the dissemination of misinformation to be policed more effectively, particularly on sites like Facebook. In light of the recent US election result, Mark Zuckerberg has gone on record to dismiss the idea that Donald Trump’s victory was a result of fake news stories perpetuated on social media.

Facebook wants to publish news and profit from it, but it does not want to act as a traditional news organisation would by separating fiction from facts using human editorial judgment. By relying on algorithms Facebook privileges engagement, not quality. It acts as a publisher without accepting the burdens of doing so. Yet, as Aldous Huxley noted, “facts do not cease to exist because they are ignored”.

The Guardian view on social media: facts need to be labelled as facts | The Guardian, Editorial, 2016

What happens when the only source of information available to the majority is online, and that information is untrue? The least-worst scenario is that it drives people away from something they enjoy. In the worst cases it leads to murder, and perhaps persuades a nation to vote in someone who espouses alt-right sympathies.

According to The New York Times, we have entered the age of post-truth politics:

According to the cultural historian Mary Poovey, the tendency to represent society in terms of facts first arose in late medieval times with the birth of accounting…it presented a type of truth that could apparently stand alone, without requiring any interpretation or faith on the part of the person reading it…accounting was joined by statistics, economics, surveys and a range of other numerical methods. But even as these methods expanded, they tended to be the preserve of small, tight-knit institutions, academic societies and professional associations who could uphold standards.

The Age of Post-Truth Politics | The New York Times, 2016

The problem is that our critical faculties are continuously challenged by the material with which we are presented. That isn’t exclusive to the digital domain, of course: lies can be presented in ink as well as code. But when something like Slender Man, or Herobrine, becomes a participatory event, in which people create and develop material in order to entrench a lie and become part of its origin story, and subsequent consumers of that material have no recourse to other sources that might contradict the myth, then how we construct our truth is fundamentally flawed. In addition, the critical skills that are essential to determine truth and authenticity are increasingly lacking.

I started this post with an anecdote about my stepdaughter’s use of Minecraft to construct alternative worlds for herself. We do the same thing with truth: we build it, block by block, and fashion our own hierarchies of understanding. Sometimes, the resulting edifice is destroyed by a lie. In a post-truth era, we should be careful about the foundations on which we rest our understanding.

Social Machines: Research on the Web | Professor David DeRoure

Wednesday morning’s plenary lecture was given by Professor David DeRoure, a Fellow of the British Computer Society and interim Director and Professor of e-Research at the Oxford e-Research Centre. He is also the National Strategic Director for the Digital Social Research project. Professor DeRoure epitomises what I believe the digital humanities are all about, in that he is unafraid to collaborate with multiple disciplines to ask new questions, and seek new answers.

I have to confess that his illustrious position makes me feel rather better about the fact that I struggled with some of the concepts he was explaining, but actually this ongoing struggle is the basis for my attending this week’s conference. Coming from a literature background I sometimes find it difficult to engage with techniques which owe more to the Social Sciences, or to IT, but the application of these research techniques and the language they use is something I need to engage with, and to become comfortable with. Ho hum, I digress. 

Professor DeRoure began by asking us to consider the Web in a variety of different ways: as an infrastructure of research, as a source of data, as a subject of research, and as a web of scholarly discourse. He commented that the data deluge is no longer an issue only for social scientists and scientists generally, but it is the science community who have reacted to this emphasis on data-led research by announcing a paradigm shift away from hypothesis-driven research and towards data-driven research (the Fourth Paradigm I mentioned in my previous blog post). In fact, Wired magazine went as far as to announce the end of theory (which rather brings to mind the great Mark Twain quote “the report of my death was an exaggeration”).

Supporting this Big Data, as I understood it, are computers with sophisticated-enough technology to sort through the masses of data – or Big Compute, as it’s known. And as the Wired article explains, we need to have this kind of technology: we’re children of the Petabyte Age, and we need to adapt accordingly. The Web should be about co-evolution – society and technology working together.

The problem with Big Data is the temptation to work within a sub-set that concentrates on proving your own personal theories. But we simply can’t work in that way anymore. We are, in the words of Nick Cave, merely “a microscopic cog”. We need to realise we can’t work in isolation, and we can’t ignore other data simply because it doesn’t say what we want it to. One of the ways DeRoure suggests this can be avoided is through the use of linked data. Linked data enables us to discover more things; we need to realise that our questions are often similar to those being asked within other disciplines, and that linked data can broaden our areas of understanding.

“Wait a second, back up!”, I hear you wail. What’s linked data? There’s a possibility the librarians amongst you will have started to sit up and take notice, as the idea of linked data is closely related to concepts like controlled headings in library catalogues. Essentially, the idea of linked data is that information can become linked, and therefore more useful. It needs a standard format, one which is “reachable and manageable by Semantic Web tools”, along with tools that make either bulk conversion or on-demand conversion achievable (for further clarification, the Semantic Web is simply “a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web”).
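To make that a little more concrete, here is a minimal sketch of linked data in practice, written with the Python rdflib library. The example.org URIs and property names are my own inventions for illustration, not anything Professor DeRoure showed us; the point is simply that every statement is a subject-predicate-object triple built from shared identifiers, so datasets that use the same identifiers can be merged and queried together.

```python
# A minimal sketch of linked data, assuming the rdflib library is installed.
# The example.org URIs and property names are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/")

g = Graph()
g.bind("ex", EX)

song = URIRef("http://example.org/songs/into-my-arms")
artist = URIRef("http://example.org/artists/nick-cave")

# Each statement is a subject-predicate-object triple.
g.add((song, RDF.type, EX.Song))
g.add((song, RDFS.label, Literal("Into My Arms")))
g.add((song, EX.performedBy, artist))
g.add((artist, RDFS.label, Literal("Nick Cave")))

# Because the data uses shared identifiers (URIs), another dataset that
# refers to the same artist URI can be merged and queried alongside it.
print(g.serialize(format="turtle"))
```

The design choice that matters here is the shared identifier: because the artist is named the same way everywhere, a musicology dataset and a library catalogue can point at the same thing without any central co-ordination.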


An example of a project which will be published as linked data is SALAMI (Structural Analysis of Large Amounts of Music Information). SALAMI will analyse 23,000 hours of digitised music in order to build a resource for musicologists, drawing on a range of music from the Internet Archive. Students will annotate the structure of songs based on what Professor DeRoure termed their “ground truth”: meaning, what those annotators say the structure of the song is at the time they are annotating it.
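As a rough illustration of what a “ground truth” annotation might amount to, here is a hypothetical sketch in Python. The field names and identifiers are my own invention rather than SALAMI’s actual data format; the point is that each annotator simply records where they hear the sections fall, and two annotators may legitimately disagree.

```python
# A hypothetical sketch of a single "ground truth" structural annotation.
# The field names are invented for illustration, not SALAMI's real format.
from dataclasses import dataclass

@dataclass
class Segment:
    start_seconds: float
    end_seconds: float
    label: str  # e.g. "intro", "verse", "chorus"

# One annotator's reading of a song's structure at the time of annotation.
annotation = {
    "recording_id": "internet-archive-item-0001",  # placeholder identifier
    "annotator": "student-42",
    "segments": [
        Segment(0.0, 14.5, "intro"),
        Segment(14.5, 52.0, "verse"),
        Segment(52.0, 78.3, "chorus"),
    ],
}

# A second annotator may segment the same recording differently; the
# "ground truth" is simply what each annotator says the structure is.
print(annotation["segments"])
```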

In addition to the sheer scale of the information we’re receiving, the nature of that data is changing. Twitter has generated a lot of energy within the social sciences community over how useful the data collected from it can be. Some areas of the field have rejected the data on the basis that it was not collected correctly, or collated properly, but other areas have embraced this rich new source of data, and are establishing new methods to deal with it. Professor DeRoure suggested that it may very well be a case of whether you consider your data cup half full, or half empty.

Whichever way you look at it, the loop is closing – social theory is being used to describe the data. But the need to create intermediaries for this form of data remains. The Web Science Trust proposes the creation of a global community which looks not at how you represent your data, but at how you describe it.

So, let’s take a breath. I need one, frankly, and I’m guessing you do too. I’m omitting great swathes of Professor DeRoure’s lecture and endeavouring to stick with the nuts and bolts of what I think he was saying, so you will have to bear with me whilst I process my thoughts. I think, at this juncture, what he is suggesting is that because of Big Data and the need to process large amounts and different kinds of data (such as the information one could glean from collating tweets, for example), we are more in need than ever of a linked, coherent system that communicates with itself and doesn’t come up against any barriers in the learning process. I’m thinking now of the Web as a maze, in which one suddenly finds oneself at a dead end simply because a program is hosted on a different system, for example. Obviously the concepts of the Semantic Web and linked data are steps towards enabling this open process.

We are links in that chain too – it’s not just the machinery of the computer. We are as much a part of it as the hard drive is. Our interaction with computers is changing. SOCIAM (the Theory & Practice of Social Machines) is a project proposed by the University of Southampton which will attempt to research the best means of supporting “purposeful” human interaction on the Web. This interaction, they claim, is:

“…characterised by a new kind of emergent, collective problem solving, in which we see (i) problems solved by a very large scale human participation via the Web (ii) access to, or the ability to generate, large amounts of relevant data using open data standards (iii) confidence in the quality of data and (iv) intuitive interfaces.”

The boundary between programmers and users has been dissolved by the Web, and by our participation in it. This is mainly typified by social websites such as Facebook and Twitter. We are now merely a component of the Social Machine.

[Image: Ory Okolloh, founder and executive director of Ushahidi]

The picture here is of Ory Okolloh, the founder and executive director of Ushahidi: an example, cited by Professor DeRoure, of the social machine in action. Developed shortly after the Kenyan elections of 27th December 2007, Ushahidi was created to map incidents of violence and peacekeeping in the country after the elections, based on reports submitted by mobile phone and via the Web. The post-election crisis was a catalyst for the website team to realise that there was a need for a platform which could be used worldwide in the same way. Ushahidi (Swahili for “testimony”), the social machine, was born.

An example of the way in which the website is used was given in The Spectator magazine in 2011:

“At 6:54 pm the first bomb went off at Zaveri Bazaar, a crowded marketplace in South Mumbai. In the next 12 minutes two more followed in different locations in the city…The attacks added to the confusion just as millions of people were returning home from work. With telephone lines jammed, many Mumbaikars turned to a familiar alternative: they posted their whereabouts, and sought those of their close ones, on social networks.

Facebook doubled up as a discussion forum…users on Twitter, meanwhile, exchanged important real-time updates. Moments after the explosions, a link to an editable Google Docs spreadsheet was circulated frantically on the microblogging site. It carried names, addresses and phone numbers of people offering their houses as a refuge to those left stranded. The document was created by Nitin Sagar, an IT engineer in Delhi, 1,200km (720 miles) away.”

Problems (of any description, be they the classification of galaxies or a bomb going off in a city centre) are solved by the scale of human participation on the Web and the timely mobilisation of people, technology and information resources. And the websites which reject the traditional idea of the “layperson [as] irrational, ignorant…even intellectually vacuous” are the most successful: the ones that tell people what they’re about, and treat participants as collaborators, not as subjects. We are even coming to a stage where we consider human interaction with the machine as a sub-routine: a human-based computation, outsourcing certain steps to humans. Professor DeRoure cited Wikipedia as a good case in point – an interesting combination of automation and assistance rather than the replacement of the human.
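The idea of the human step as a sub-routine can be sketched very simply. The toy Python example below is my own illustration rather than anything from the lecture: an invented classifier hands an item over to a person whenever its own confidence falls below a threshold, so the human becomes one step in the larger computation.

```python
# A toy sketch of "human-based computation": the program outsources one
# step to a person when its own confidence is too low. The classifier and
# threshold are invented for illustration.
def classify_automatically(item):
    # Stand-in for a real model: returns a (label, confidence) pair.
    return "galaxy", 0.4

def classify(item, threshold=0.8):
    label, confidence = classify_automatically(item)
    if confidence >= threshold:
        return label
    # Below the threshold, the step is outsourced to a person:
    # the human acts as a sub-routine in the larger computation.
    return input("Not sure about %r - what label would you give it? " % item)

if __name__ == "__main__":
    print(classify("image-001.png"))
```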

And there are many dimensions to our social machines: the number of people, and of machines; the scale and variety of data – and how does one measure the success of a social machine? By the way it empowers groups, individuals, crowds? We are moving away from the idea of the Turing machine to one in which humans and machines are brought together seamlessly. 

We are at Big Data/Big Compute right now. In fact, if I understand Professor DeRoure correctly, WE are Big Compute: “The users of a website, the website and the interactions between them, together form our fundamental notion of a ‘machine’”. Thus, we find ourselves on the edge of a new frontier. Technology alone isn’t transforming society; people are transforming it too, and the behaviour of machines will evolve over time because of their involvement with humans. In order to facilitate those changes we need to understand how to design social computations, provide seamless access to a web of data, and consider how accountable and trusted the components should be. Ultimately, we are citizen-scientists and human-computer integrations.

I hope this has made some sense to you – please feel free to comment via my Twitter profile and let me know whether you think I’ve accurately assessed the tone of Professor DeRoure’s lecture, or whether I’m barking up entirely the wrong digital facsimile of a tree.