Quartz, the digital news outlet, recently published an interview by Adrienne Matei with Peter Kahn, a psychology professor at the University of Washington. In it, they discuss how technology is affecting our lives and becoming a means of mediating the real world. The interview references some of the research that Kahn and his colleagues at the Human Interaction with Nature and Technological Systems Lab (HINTS) have undertaken, aspects of which have direct relevance for understanding technology within archaeology. They raise issues such as the limitations of technological devices, questions of authenticity, changing perspectives, and what they call the ‘shifting baseline problem’, all of which have their echoes within digital archaeology.
Infrastructures are all around us. They make the modern world work – whether we’re thinking of infrastructures in terms of gas, electricity or water supply, telephony, fibre networks, road and rail systems, or organisations such as Google and Amazon. Infrastructures are also what we are building in archaeology. Data distribution systems have increasingly become an integral part of the archaeological toolkit, and the creation of a digital infrastructure – or cyberinfrastructure – underpins the set of grand challenges for archaeology laid out by Keith Kintigh and colleagues (2015), for example. But what are the consequences and challenges associated with these kinds of infrastructures? What are we knowingly or unknowingly constructing?
Patrik Svensson (2015) has pointed to a lack of critical work and an absence of systemic awareness surrounding the development of infrastructures within the humanities. While he points to archaeology as one of the more developed disciplines in infrastructural terms, this isn’t necessarily a ‘good thing’ in the light of his critique. As he says, “Humanists do not … necessarily think of what they do as situated and conditioned in terms of infrastructures” (2015, 337) and consequently:
“A real risk … is that new humanities infrastructures will be based on existing infrastructures, often filtered through the technological side of the humanities or through the predominant models from science and engineering, rather than being based on the core and central needs of the humanities.” (2015, 337).
In 2014 the European Union determined that a person’s ‘right to be forgotten’ by Google’s search engine was a basic human right, but it remains the subject of dispute. If requested, Google currently removes specific search results relating to an individual from any Google domain accessed from within Europe, and from any European Google domain regardless of where it is accessed. Google is currently appealing against a proposed extension which would require the right to be forgotten to apply to searches across all Google domains regardless of location, so that something which might be perfectly legal in one country would be removed from sight because of the laws of another. Not surprisingly, Google sees this as a fundamental challenge to the accessibility of information.
As if the ‘right to be forgotten’ were not problematic enough, the EU has recently published its General Data Protection Regulation 2016/679, to be introduced from 2018, which places limits on the use of automated processing for decisions taken concerning individuals and requires explanations to be provided where an adverse effect on an individual can be demonstrated (Goodman and Flaxman 2016). This seems like a good idea on the face of it – shouldn’t a self-driving car be able to explain the circumstances behind a collision? Why wouldn’t we want a computer system to explain its reasoning, whether it concerns access to credit or the acquisition of an insurance policy or the classification of an archaeological object?
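To make the idea of an explainable classification concrete, here is a minimal sketch of what a system that accounts for its own reasoning might look like. Everything in it – the rules, the thresholds, the fabric names – is invented purely for illustration; real archaeological classification systems are of course far more complex, and often far more opaque.

```python
# A hypothetical rule-based classifier that returns an explanation
# alongside every decision. All rules and thresholds are invented.

def classify_sherd(rim_diameter_mm, fabric):
    """Classify a pottery sherd, returning (label, explanation)."""
    if fabric == "samian":
        return ("fine ware",
                "fabric identified as samian, so classed as fine ware")
    if rim_diameter_mm > 300:
        return ("storage vessel",
                f"rim diameter {rim_diameter_mm}mm exceeds the 300mm threshold")
    return ("tableware",
            f"coarse fabric with rim diameter {rim_diameter_mm}mm, "
            "at or below the 300mm threshold")

label, explanation = classify_sherd(320, "greyware")
print(label)        # storage vessel
print(explanation)  # rim diameter 320mm exceeds the 300mm threshold
```

A hand-written rule set like this can explain itself trivially; the difficulty Goodman and Flaxman point to arises when the decision comes instead from a trained statistical model, where no such human-readable rule exists to report.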
[To interrupt the blogging hiatus, here’s the introduction to a recently published paper …]
Since the mid-1990s the development of online access to archaeological information has been revolutionary. Easy availability of data has changed the starting point for archaeological enquiry, and the openness, quantity, range and scope of online digital data have long since passed the tipping point at which online access became useful, even essential. However, this transformative access to archaeological data has not itself been examined in a critical manner. Access is good, exploitation is an essential component of preservation, openness is desirable, comparability is a requirement – but what are the implications for archaeological research of this flow – some would say deluge – of information?
In an earlier post I wrote about the importance of understanding the legibility, agency and negotiability of archaeological data as we increasingly depend on online data delivery as the basis for the archaeologies we write, especially as those archaeologies show signs of being partly written by the delivery systems themselves.
A simple illustration of this is the idea of filter bubbles. This term was coined in 2011 by Eli Pariser to describe the way in which search algorithms selectively return results depending on their knowledge of the person who asked the question. It’s an idea previously flagged by, amongst others, Jaron Lanier who wrote about ‘agents of alienation’ in 1995, but it came to the fore through the recognition of the personalisation of Google results and Facebook feeds (and is the counter-selling point of the alternative search engine, DuckDuckGo, for example). So can we see this happening with archaeological data? Perhaps not to the extent described by Pariser, Lanier and others, but still …
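The mechanism behind a filter bubble is easy to sketch: the same query returns differently ordered results for different people, depending on what the system knows about them. The sketch below is purely illustrative – the documents, tags and interest profiles are all invented, and real search personalisation draws on vastly richer behavioural signals – but it shows how two researchers asking an identical question can be shown different archaeologies first.

```python
# A minimal, hypothetical sketch of personalised search ranking.
# Document records and user interest profiles are invented for illustration.

def personalised_search(query, documents, user_interests):
    """Return documents matching the query, ranked so that items
    sharing tags with the user's interest profile come first."""
    matches = [d for d in documents if query in d["keywords"]]
    # Score each match by how many tags it shares with the user's interests.
    return sorted(matches,
                  key=lambda d: len(set(d["tags"]) & set(user_interests)),
                  reverse=True)

documents = [
    {"title": "Roman villa excavation",
     "keywords": {"excavation"}, "tags": {"roman"}},
    {"title": "Neolithic enclosure excavation",
     "keywords": {"excavation"}, "tags": {"neolithic"}},
]

# Two users ask the identical question and see different orderings.
romanist = personalised_search("excavation", documents, {"roman"})
prehistorian = personalised_search("excavation", documents, {"neolithic"})
print(romanist[0]["title"])      # Roman villa excavation
print(prehistorian[0]["title"])  # Neolithic enclosure excavation
```

The point of the toy example is that neither user ever sees a neutral result list – each ranking is already shaped by a profile they may not know exists.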
As the end of 2014 approaches, Facebook has unleashed its new “Year in Review” app, purporting to show the highlights of your year. In my case, it did little other than demonstrate a more or less complete lack of Facebook activity on my part, apart from some conference photos a colleague had posted to my wall; in Eric Meyer’s case, it presented him with a picture of his daughter, who had died earlier in the year. In a thoughtful and thought-provoking piece, he describes this as ‘Inadvertent Algorithmic Cruelty’. It wasn’t deliberate on the part of Facebook (who have now apologised), and for many people it worked well, as evidenced by the numbers who opted to include it on their timelines, but it lacked an opt-in facility and what Meyer calls ‘empathetic design’. Om Malik picks up on this, pointing to the way Facebook now has an ‘Empathy Team’ apparently intended to make designers understand what it is like to be a user (sorry, a person). Yet Facebook’s ability to highlight what people see as important is driven by crude data such as the number of ‘likes’ and comments, without any understanding of the underlying meanings.
Emma Bryce (2014) has recently written about her autistic brother’s interest in technology – something that is quite commonly associated with folk on the spectrum. I deliberately wound up a conference audience some years ago by characterising computer usage amongst archaeologists as fetishistic, but I’m not about to claim that digital archaeologists are autistic. However, one phrase at the end of her article jumped out at me: that regardless of where we are, on or off the spectrum, we all use technology as a form of comfort and security.
“By its very structure, technology invites us to practice repetitive behaviours and keep familiar habits alive. It transports us to places we feel comfortable…”