Deep-fried archaeological data

Deep fried Mars bar

I’ve borrowed the idea of ‘deep-fried data’ from the title of a presentation by Maciej Cegłowski to the Collections as Data conference at the Library of Congress last month. As an archaeologist living and working in Scotland for 26 years, the idea of deep-fried data spoke to me, not least of course because of Scotland’s culinary reputation for deep-frying anything and everything. Deep-fried Mars bars, deep-fried Crème eggs, deep-fried butter balls in Irn Bru batter, deep-fried pizza, deep-fried steak pies, and so it goes on (see some more not entirely serious examples).

Hardened arteries aside, what does deep-fried data mean, and how is this relevant to the archaeological situation? In fact, you don’t have to look too hard to see that cooking is often used as a metaphor for our relationship with and use of data.

Continue reading

A Digital Afterlife

Data Afterlife

Solutions to the crisis in archaeological archives in an environment of shrinking resources often involve selection and discard of the physical material and an increased reliance on the digital. For instance, several presentations to a recent day conference on Selection, De-selection and Rationalisation organised by the Archaeological Archives Group implicitly or explicitly refer to the effective replacement of physical items with data records, where either deselected items were removed from the archive or else material was never selected for inclusion in the first place because of its perceived ‘low research potential’. Indeed, Historic England are currently tendering for research into what they call the ‘rationalisation’ of museum archaeology collections

“… which ensures that those archives that are transferred to museums contain only material that has value, mainly in the potential to inform future research.” (Historic England 2016, 2)

Historic England anticipate that these procedures may also be applied retrospectively to existing collections. It remains too early to say, but it seems more than likely a key approach to the mitigation of such rationalisation will be the use of digital records. In this way, atoms are quite literally converted into bits (to borrow from Nicholas Negroponte) and the digital remains become the sole surrogate for material that, for whatever reason, was not considered worthy of physical preservation. What are the implications of the digital coming to the rescue of the physical archive in this way?

Continue reading

Digital Data Realities

The Cost of Digital Data (Ainsley Seago via Wikimedia Commons) CC BY 4.0

The UK is suddenly waking from the reality distortion field created by politicians on both sides, and only now beginning to appreciate the consequences of Brexit – our imminent departure from the European Union. But – without forcing the metaphor – are we operating within some kind of archaeological reality distortion field in relation to digital data?

Undoubtedly one of the big successes of digital archaeology in recent years has been the development of digital data repositories and, correspondingly, increased access to archaeological information. Here in the UK we’ve been fortunate enough to have seen this develop over the past twenty years in the shape of the Archaeology Data Service, which offers search tools, access to digital back-issues of journals, monograph series and grey literature reports, and downloadable datasets from a variety of field and research projects. In the past, large-scale syntheses took years to complete (for instance, Richard Bradley’s synthesis of British and Irish prehistory took four years’ paid research leave, with three years of research assistant support, in order to travel the country seeking out grey literature reports accumulated over 20 years (Bradley 2006, 10)). At this moment, there are almost 38,000 such reports in the Archaeology Data Service digital library, with more added each month (a more than five-fold increase since January 2011, for example). The appearance of projects of synthesis such as the Rural Settlement of Roman Britain is starting to provide evidence of the value of access to such online digital resources. And, of course, other countries increasingly have their own equivalents of the ADS – tDAR and OpenContext in the USA, DANS in the Netherlands, and the Hungarian National Museum’s Archaeology Database, for instance.
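As a concrete illustration of what this kind of access makes possible, the sketch below harvests basic record metadata over OAI-PMH, the standard protocol that repositories of this kind commonly expose for bulk metadata access. It is a minimal example only: the endpoint URL is a placeholder rather than any real repository address, and it assumes the third-party sickle library is installed.

```python
# Minimal sketch: harvest Dublin Core metadata over OAI-PMH.
# The endpoint below is a placeholder, not a real repository address.
from sickle import Sickle

OAI_ENDPOINT = "https://repository.example.org/oai"  # placeholder

client = Sickle(OAI_ENDPOINT)

# Iterate over records (sickle follows resumption tokens automatically)
# and print each record's first title, stopping after a small sample.
for i, record in enumerate(client.ListRecords(metadataPrefix="oai_dc")):
    titles = record.metadata.get("title", [])
    print(titles[0] if titles else "(untitled)")
    if i >= 24:  # just a sample for illustration
        break
```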

But all is not as rosy in the archaeological digital data world as it might be.

Continue reading

Biggish Data

Big Data 😉

Big Data is (are?) old hat … Big Data dropped off Gartner’s Emerging Technologies Hype Cycle altogether in 2015, having slipped into the ‘Trough of Disillusionment’ in 2014 (Gartner Inc. 2014, 2015a). The reason given for this was simply that it had evolved and become the new normal – the high-volume, high-velocity, high-variety types of information that classically defined ‘big data’ were becoming embedded in a range of different practices (e.g. Heudecker 2015).

At the same time, some of the assumptions behind Big Data were being questioned. It was no longer quite so straightforward to claim that ‘big data’ could overcome ‘small data’ by throwing computer power at a problem, or that quantity outweighed quality such that the large size of a dataset offset any problems of errors and inaccuracies in the data (e.g. Mayer-Schönberger and Cukier 2013, 33), or that these data could be analysed in the absence of any hypotheses (Anderson 2008).

For instance, boyd and Crawford had highlighted the mythical status of ‘big data’; in particular, the belief that it somehow provided a higher order of intelligence that could create insights that were otherwise impossible, and that carried an aura of truth, objectivity and accuracy (2012, 663). Others followed suit. For example, McFarland and McFarland (2015) have recently shown how most Big Data analyses give rise to “precisely inaccurate” results simply because the sample size is so large that almost any difference appears statistically highly significant (hence the debacle over Google Flu Trends – for example, Lazer and Kennedy 2015). Similarly, Pechenick et al. (2015) showed how, counter-intuitively, results from Google’s Books Corpus could easily be distorted by a single prolific author, or by the marked increase in scientific articles included in the corpus after the 1960s. Indeed, Peter Sondergaard, a senior vice president at Gartner and global head of Research, underlined that data (big or otherwise) are inherently dumb without algorithms to work on them (Gartner Inc. 2015b). In this regard, one might claim that Big Data have been superseded in many respects by Big Algorithms.
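The “precisely inaccurate” point is easy to demonstrate. The sketch below is my own toy illustration, not taken from McFarland and McFarland: it compares two groups whose true means differ by a practically negligible amount, and shows that with a small sample the difference is statistically invisible while with a million observations the same trivial difference returns a vanishingly small p-value.

```python
# Toy illustration: a trivially small effect becomes "highly significant"
# once the sample is large enough. Assumes only numpy is available.
import math
import numpy as np

rng = np.random.default_rng(42)

def two_sample_z(a, b):
    """Two-sample z-test on means (adequate for large n); returns (diff, p)."""
    diff = a.mean() - b.mean()
    se = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    p = math.erfc(abs(diff / se) / math.sqrt(2))  # two-sided p-value
    return diff, p

for n in (100, 1_000_000):
    a = rng.normal(0.00, 1.0, n)  # group A
    b = rng.normal(0.01, 1.0, n)  # group B: true mean differs by 0.01 sd
    diff, p = two_sample_z(a, b)
    print(f"n = {n:>9,}  observed difference = {diff:+.4f}  p = {p:.3g}")
```

The estimate in the large-sample case is ‘precise’ in the sense of being tightly constrained, but a difference of a hundredth of a standard deviation is practically meaningless – which is exactly the trap the large sample invites.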

Continue reading

Preservation by record

Preservation by record is very much in the news at the moment in relation to attempts by ISIS to destroy cultural heritage in Iraq and Syria in places like Nineveh, Nimrud and Hatra, and the present threat to Palmyra. In some instances the archaeological response has entailed excavations; in others it has involved using crowd-sourced imagery to digitally reconstruct heritage that has already been destroyed, or using satellite and aerial imagery to map unrecorded and endangered sites.

Laser scanning at Merv
(original by CyArk, CC BY-SA 3.0)

Emma Cunliffe, from the Endangered Archaeology of the Middle East and North Africa (EAMENA) project, has suggested that “in some extreme, and particularly devastating, cases, the records may be the only thing left of a culture, in which case we owe it to them to preserve something, anything”. Hard to argue with that, and the article in which she is quoted goes on to suggest that one approach to the preservation of these sites is the use of archaeological technology to record monuments in high resolution in those areas which are still accessible (Foyle 2015).

Continue reading

Filter bubbles

In an earlier post I wrote about the importance of understanding the legibility, agency and negotiability of archaeological data as we increasingly depend on online data delivery as the basis for the archaeologies we write and especially as those archaeologies show signs of being partly written by the delivery systems themselves.

A simple illustration of this is the idea of filter bubbles. This term was coined in 2011 by Eli Pariser to describe the way in which search algorithms selectively return results depending on their knowledge of the person who asked the question. It’s an idea previously flagged by, amongst others, Jaron Lanier, who wrote about ‘agents of alienation’ in 1995, but it came to the fore through the recognition of the personalisation of Google results and Facebook feeds (the absence of such personalisation is the selling point of the alternative search engine DuckDuckGo, for example). So can we see this happening with archaeological data? Perhaps not to the extent described by Pariser, Lanier and others, but still …
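For a sense of the mechanism (though not of how Google or Facebook actually implement it), the toy sketch below re-ranks an identical set of search results for two different users according to their recorded interests; all of the data and the weighting are invented purely for illustration.

```python
# Toy personalisation: identical query, identical results, different ordering
# per user. Purely illustrative; not any real search engine's algorithm.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    topics: set
    base_score: float  # relevance to the query before personalisation

def personalised_rank(results, user_interests, weight=0.5):
    """Boost results whose topics overlap with the user's recorded interests."""
    return sorted(
        results,
        key=lambda r: r.base_score + weight * len(r.topics & user_interests),
        reverse=True,
    )

results = [
    Result("Roman villa excavation report", {"roman", "excavation"}, 1.0),
    Result("Neolithic pottery assemblage dataset", {"neolithic", "pottery"}, 1.0),
    Result("Lidar survey of upland hillforts", {"lidar", "survey"}, 1.0),
]

# Two users issue exactly the same query but see different 'top' results.
print([r.title for r in personalised_rank(results, {"roman"})])
print([r.title for r in personalised_rank(results, {"lidar", "survey"})])
```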

Continue reading

Big Data Analytics

It was only a matter of time before a ‘big data’ company latched onto archaeology for commercial purposes. As reported in a New Scientist article last week (with an unfortunate focus on ‘treasure’), a UK data analytics start-up called Democrata is incorporating archaeological data into a system that allows engineering and construction firms to predict the likelihood of encountering archaeological remains. This, of course, is what local authority archaeologists already do, alongside the environmental impact assessments undertaken by commercial archaeology units. But this isn’t (yet) an argument about a potential threat to archaeological jobs.
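We don’t know what Democrata’s system actually looks like, but the general shape of such ‘predictive’ analytics is familiar enough. The sketch below is a purely hypothetical illustration: invented predictors (distance to the nearest known site, proximity to a Roman road, floodplain location), synthetic data, and a simple logistic regression standing in for whatever model a commercial provider might use.

```python
# Hypothetical sketch of predicting the likelihood of encountering remains.
# All features, data and coefficients are invented for illustration only;
# this is not Democrata's system. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Invented predictors for n development parcels.
dist_to_known_site = rng.exponential(2.0, n)   # km to nearest recorded site
near_roman_road = rng.integers(0, 2, n)        # 1 if close to a Roman road
floodplain = rng.integers(0, 2, n)             # 1 if on a floodplain
X = np.column_stack([dist_to_known_site, near_roman_road, floodplain])

# Synthetic 'ground truth': remains more likely near known sites and roads.
logit = 1.0 - 0.8 * dist_to_known_site + 1.2 * near_roman_road - 0.3 * floodplain
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# Score a single hypothetical parcel: 0.5 km from a known site, near a road.
parcel = np.array([[0.5, 1, 0]])
print(f"Estimated probability of remains: {model.predict_proba(parcel)[0, 1]:.2f}")
```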

Continue reading