When we hear of augmentation in digital terms, these days we more often than not think of augmented or mixed reality, where digital information, imagery, and the like are overlaid on our view of the real world around us. This is, as yet, a relatively specialised field in archaeology (e.g. see Eve 2012). But digital augmentation of archaeology goes far beyond this. Our archaeological memory is augmented by digital cameras and data archives; our archaeological recording is augmented by everything from digital measuring devices through to camera drones and laser scanners; our archaeological illustration is augmented by a host of tools including CAD, GIS, and – potentially – neural networks to support drawing (e.g. Ha and Eck 2017); and our archaeological authorship is augmented by a battery of writing aids, if not (yet) to the extent that data structure reports and their like are written automatically for us.
I’ve commented here and here about the question of data reuse (or more accurately, the lack of it) and the implications for archaeological digital repositories. It’s frequently argued that the key incentive for making data available for reuse is providing credit through citation. So how’s that going? I’ve not seen any attempt to actually quantify this, so out of curiosity I thought I’d have a go.
A logical starting point is the Thomson Reuters Data Citation Index. According to its owners (it’s a licensed rather than a public resource), it indexes the contents of a large number of the world’s leading data repositories, and, on checking, the UK’s Archaeology Data Service (ADS) appears among them. So far so good.
We often hear of the active archive, but what about an idle one? In a post on Digital Data Realities, I suggested that, although we might wish otherwise, our digital archaeological data repositories seemed relatively little-used. The Archaeology Data Service access statistics did not suggest a large uptake for the project archives it holds, and the ADS had not found it easy to attract entries to its Digital Data Reuse Awards in the past. In that light, I commented that it would be interesting to see how the OpenContext & Carleton Prize for Archaeological Visualization would get on. Well, the jury is now in, and the winner is … the ‘Poggio Civitate VR Data Viewer’, an impressive-looking data viewer, though as it requires an HTC Vive to use, I can sadly only watch the video rather than experience it myself …
However, just as interesting are Shawn Graham’s reflections on the experience of organising the contest:
“We offered real money – up to a $1000 in prizes. We promoted the hang out of it. We made films, we wrote tutorials, we contacted professors across the anglosphere. We had very little uptake.”
(accompanied in his presentation by an image of tumbleweed) … Indeed, only the one winner was announced, for the team prize – the individual and student prizes originally intended were not awarded at all. So what’s going on?
We’re accustomed to the fact that much archaeology is collaborative in nature: we work with and rely on the work of others all the time to achieve our archaeological ends. However, what we overlook is the way in which much of what we do as archaeologists is dependent upon invisible collaborators – people who are absent, distanced, even uninterested. And these aren’t archaeologists working remotely and accessing the same virtual research environment as us in real time, although some of them may be archaeologists who developed the specialist software we have chosen to use. The majority of these are people we will never know, cannot know, who themselves will be ignorant of the context in which we have chosen to apply their products and, to compound things, will generally be unaware of each other. They are, quite literally, the ghosts in the machine.
Visualisation is much in vogue at present, especially with the increasing availability and accessibility of virtual reality devices such as the Oculus Rift and the HTC Vive, plus cheaper consumer alternatives including the Google Daydream and Sony’s PlayStation VR – and there’s always Google Cardboard. We’re told that enhancing our virtual senses will increase knowledge, especially when we move into a virtual world in which we are interconnected with others (e.g. Martinez 2016), and the future is anticipated to bring sensors that go beyond vision and hearing to transmit movement, smells, and textures.
Hyperbole aside, we generally recognise (even if our audiences might not) that our archaeological digital visualisations are interpretative in nature, although how (or whether) we incorporate this in the visualisation is still a matter of debate. Equally, we understand that the data we base our visualisations upon are all too often incomplete, ambiguous, equivocal, contradictory, and potentially misleading, whether or not we choose to represent this explicitly within the visualisation. I won’t rehearse the arguments about authority, authenticity etc. here (see Jeffrey 2015, Watterson 2015, and Frankland and Earl 2015 (pdf), amongst others).