“Come with us now on a journey through Time and Space…”

I’ve always wanted to say that. Well, I have since I first saw The Mighty Boosh anyway, and last week I finally got the chance, and completely failed to use it. I was running a symposium/workshop entitled Space & Time: Methods in Geospatial Computing for Mapping the Past with Stuart Dunn and, all things considered, it went quite well. It was funded by the Methods Network (thanks guys) and run in conjunction with the Arts & Humanities E-Science Theme lectures on Space and Time. There were three sessions, on Scale, Heterogeneity, and Standards and Metadata, with a lot of time set aside for discussion. Another nice aspect was the high proportion of participants from other disciplines, although I confess to not having spent nearly enough time collaring them for a chat during the coffee breaks. I even flagrantly seized the opportunity to bandy about some thoughts of my own, of which (slightly) more later.

So what came out of it? The two most interesting developments for me were a growing interest in Agent-Based Modelling (ABM) and an increasing realisation that there are massive developments going on in geocomputing which threaten to leave the Humanities community behind. (There’s also discussion of these and other topics over at www.arts-humanities.net)

Agents Myth

I’ll confess to not having previously been one of ABM’s greatest fans, but the presentations by Tony Wilkinson and Mark Lake in the session on scale have given me some serious food for thought. For those who haven’t come across it, ABM is a type of computational modelling in which programmatic ‘agents’ are given a rule set and a framework in which to operate, and are then left to their own devices in order to observe how they behave. In this way researchers can attempt to model real-world behaviour and see how large-scale patterns can ‘emerge’ from smaller ones. Part of my scepticism was due to a (perhaps) unfounded perception that ABM was attempting to make ‘strong claims’ about its results: obviously not to the degree of individual agent histories, but to the extent that one might say, ‘these are the factors which led to outcome X and this is how they interoperate’. Criticisms of this view range from the problem of equifinality (i.e. that a variety of different processes might potentially lead to X, and it’s not possible to ascertain which), to whether such massive generalisations over complex systems can provide any meaningful results at all. Mark gave a particularly impressive critique of the process which, rather than undermining its validity, actually left me thinking that (at least in the hands of a reflective and reflexive research community) it could be a valuable tool. When used as an experimental ‘laboratory’, it can throw up all sorts of interesting insights and possibilities which might not otherwise occur to the researcher.
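
To make that slightly more concrete, here’s a toy sketch of my own (nothing to do with Tony’s or Mark’s actual models, and with the grid size, agent types and ‘tolerance’ threshold invented purely for illustration). A few hundred agents of two kinds sit on a grid, each following a single local rule about its neighbours, and large-scale clustering emerges without anyone having planned it:

    import random

    SIZE, EMPTY, THRESHOLD, STEPS = 20, 0.2, 0.5, 50

    def neighbours(grid, x, y):
        # the eight surrounding cells, wrapping round the edges of the grid
        cells = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if (dx, dy) != (0, 0):
                    cells.append(grid[(x + dx) % SIZE][(y + dy) % SIZE])
        return [c for c in cells if c is not None]

    def unhappy(grid, x, y):
        # an agent is unhappy if fewer than THRESHOLD of its neighbours match it
        agent, near = grid[x][y], neighbours(grid, x, y)
        if agent is None or not near:
            return False
        return sum(1 for n in near if n == agent) / len(near) < THRESHOLD

    # random starting grid: None marks an empty cell, 'A'/'B' the two agent types
    grid = [[None if random.random() < EMPTY else random.choice('AB')
             for _ in range(SIZE)] for _ in range(SIZE)]

    for step in range(STEPS):
        movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(grid, x, y)]
        empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
        random.shuffle(movers)
        for x, y in movers:
            if not empties:
                break
            # unhappy agents jump to a randomly chosen empty cell
            ex, ey = empties.pop(random.randrange(len(empties)))
            grid[ex][ey], grid[x][y] = grid[x][y], None
            empties.append((x, y))

    # print the final grid: the visible blocks of As and Bs are the emergent pattern
    for row in grid:
        print(''.join(cell or '.' for cell in row))

Run it a few times and blocks of As and Bs appear even though no individual agent ‘intends’ anything of the sort, which is exactly the kind of emergent pattern (and the kind of equifinality headache) the session kept circling back to.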

This resonated with me quite strongly because it feels as though Network Analysis (NA) faces much the same challenges. There’s a fairly small user community, all using related but distinct approaches, and there is a danger of attempting to ‘over-sell’ the method, thereby putting off colleagues who might otherwise be interested. It seems to me that NA in the humanities has also reached the stage where we need to be brave enough to critique our own methodologies in order to establish firm theoretical ground. If we don’t, it may unnecessarily become a passing fad.

Mass Mashing

The second interesting development for me was the widely made observation that not only has basic geocomputing become available to the public, but that in a very short space of time it has become hugely influential in the web (and thereby cultural) sphere. Huge quantities of spatial data are becoming available all the time, often created by people with their own implicit or explicit agenda. Whilst professional archaeologists shouldn’t turn their backs on traditional methods of dissemination, if we don’t utilise common dialects (such as KML and GeoRSS) as well, there’s a real danger that we will not participate in that wider public dialogue. Nobody owns the past, and our role as academics and professionals can only influence, not direct, other people’s views, but it is important to make sure that we don’t just end up talking to ourselves.
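
By way of a (completely hypothetical) illustration, here’s roughly what publishing a couple of point records in one of those dialects might look like, using nothing more exotic than the Python standard library; the site names and coordinates below are made up:

    import xml.etree.ElementTree as ET

    # two invented point records standing in for real site data
    sites = [
        {"name": "Example Villa", "lon": -1.2577, "lat": 51.7520},
        {"name": "Example Barrow", "lon": -1.4860, "lat": 51.1789},
    ]

    KML_NS = "http://www.opengis.net/kml/2.2"
    ET.register_namespace("", KML_NS)

    kml = ET.Element("{%s}kml" % KML_NS)
    doc = ET.SubElement(kml, "{%s}Document" % KML_NS)

    for site in sites:
        placemark = ET.SubElement(doc, "{%s}Placemark" % KML_NS)
        ET.SubElement(placemark, "{%s}name" % KML_NS).text = site["name"]
        point = ET.SubElement(placemark, "{%s}Point" % KML_NS)
        # note that KML wants "longitude,latitude", not the other way round
        coords = ET.SubElement(point, "{%s}coordinates" % KML_NS)
        coords.text = "%f,%f" % (site["lon"], site["lat"])

    # write out a file that any KML-aware viewer can open
    ET.ElementTree(kml).write("sites.kml", xml_declaration=True, encoding="UTF-8")

GeoRSS would be much the same exercise, with the points embedded in the entries of an ordinary feed rather than a KML Document, and either file can then be picked up by Google Earth or Maps, mashed up, or syndicated without anyone needing a GIS licence.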

The Truth, the Whole Truth and Nothing Like the Truth

During the session on Heterogeneity, I gave a paper entitled ‘Ptolemy’s Error: Truths and Falsehoods in Heterogeneous Spatial Data’. It brought together a few thoughts I’ve had recently on the nature of ‘truth’ in mapping, along with some observations on Ptolemy’s World Map. I don’t know how much sense it made, but it seemed to be well received, so I’ll post it online here shortly. All feedback greatly welcomed :-)

One Comment

  1. re: Mass Mashing – this is so true. It raises lots of questions about ‘authority’, how we evaluate data based on source publication, and how or whether the public does the same thing; but we have the data so we might as well repurpose and republish it (or make it available for others to mash up if we don’t have the resources to do anything other than just publish a repository of data).


One Trackback/Pingback

  1. By Yorkie Talkie « Archaetech on 11 Feb 2008 at 8:37 pm

    [...] received so the main reason for this post is to upload an older version of it that I gave at the Methods Network Workshop on Geospatial Computing last year. There’s a bit more gubbins in this about Ptolemy’s Geography which is [...]
