
Survey Archaeology Finds as Data

One of the small arguments that I make in our paper re-analyzing the survey finds from the Ohio Boeotia Project is that changes in technology have influenced the way that data were recorded in archaeology. One thing that is particularly noticeable is that little effort was made to normalize the finds data. This is not because the project did not imagine that its data could be analyzed quantitatively. In fact, the density or distributional data were collected on paper forms suitable for direct entry into a spreadsheet-type program.
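The kind of tabulation those forms anticipated is simple enough to sketch. The transect names, counts, and areas below are invented for illustration; the point is only that density per unit area is a trivially spreadsheet-ready calculation.

```python
# Hypothetical transect records of the sort a survey recording form
# might capture: an ID, a sherd count, and the area walked.
transects = [
    {"transect": "T01", "sherds": 42, "area_m2": 1500},
    {"transect": "T02", "sherds": 7, "area_m2": 1200},
]

for t in transects:
    # Express density as sherds per hectare (10,000 square meters).
    t["density_ha"] = round(t["sherds"] / t["area_m2"] * 10_000, 1)

for t in transects:
    print(t["transect"], t["density_ha"])  # T01 280.0, T02 58.3
```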

OBE Field Recording Sheet

The finds data, on the other hand, were collected in a way suited to a text-style catalogue. Such catalogues have been a standard part of archaeological documentation for a century. They typically include measurements for the object and then a textual description of its fabric, shape, decoration, and date, with some notes on comparanda. Such thorough textual descriptions are useful for establishing the identity of fossil types in an excavation context. In other words, these objects serve to date stratigraphic layers in excavation, and describing them accurately is important for establishing the validity of the identification of the object.

With the introduction of “New Archaeology” (or processual archaeology) in the early 1960s, there emerged a greater interest in quantification and quantitative methods for documenting past human activities. The tools to perform these kinds of quantitative analyses, however, were expensive and time consuming (often involving punch cards, costly mainframe computer time, or tedious and error-prone hand calculations). The increasing availability of personal computers in the first years of the 1980s paralleled the development of important software packages for organizing data. IBM’s DB2, released in 1983, was an early commercial SQL relational database, and the same year saw the introduction of a powerful new version of the longstanding statistics package SPSS (SPSS-X). Moreover, the portability of both hardware and software made it possible to enter data in the field, bringing data collection into direct contact with data entry (if not on-the-fly analysis). The desktop computer, SQL-driven database software, and new statistics packages put complex, statistically driven archaeological research in the hands of even the smallest intensive survey project.

Apple Computer Advertisement from 1985

The Ohio Boeotia Expedition, which concluded in 1982, worked on the cusp of these significant changes. As a result, the project collected quantitative data on artifact densities (which could easily be calculated by hand) but did not record the finds data in a rigorously normalized way. This is not to say that the data were not collected systematically. In fact, the systematic and robust collection of finds data has made it possible to normalize significant parts of the finds notebooks. The results can then be projected across the transects that we remapped into our GIS.
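Normalizing a finds notebook of this kind amounts to pulling controlled-vocabulary fields out of free-text entries. The sketch below is purely illustrative: the entry text, the period and shape vocabularies, and the field names are all invented, and real notebook entries are far messier than a keyword match can handle.

```python
import re

# Invented controlled vocabularies for illustration only.
PERIODS = ["Classical", "Hellenistic", "Late Roman"]
SHAPES = ["amphora", "bowl", "tile"]

def normalize(entry: str) -> dict:
    """Return a flat, tabulation-ready record extracted from a
    free-text notebook entry, matching the first recognized term."""
    record = {"period": None, "shape": None}
    for period in PERIODS:
        if re.search(period, entry, re.IGNORECASE):
            record["period"] = period
            break
    for shape in SHAPES:
        if re.search(shape, entry, re.IGNORECASE):
            record["shape"] = shape
            break
    return record

print(normalize("Rim of a Late Roman amphora, coarse red fabric."))
# {'period': 'Late Roman', 'shape': 'amphora'}
```

Records in this flat form can then be joined to transect geometry in a GIS and mapped by period, which is essentially what projecting the notebook data across the remapped transects involves.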

Classical Period Finds

Hellenistic Period Finds

Late Roman Period Finds

With time and creativity, these data could be translated into chronotype data. The chronotype system is the systematic recording system that we used to document finds from the Eastern Korinthia Archaeological Survey and the Pyla-Koutsopetria Archaeological Project (as well as several other significant survey projects). We are gradually translating the context pottery from the Ohio State Excavations at Isthmia into the same system, which will create a foundation for certain kinds of cross-project analysis. At the same time, it will not eliminate the need for careful catalogue entries. The practice of recording careful descriptions of artifacts central to chronological and functional arguments will remain central to archaeological documentation. In fact, improved desktop database software and “natural language” search engines will make these descriptions increasingly susceptible to the same kinds of quantitative analysis as more standardized (and in most cases abbreviated) forms of notation.
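In spirit, translating normalized fields into chronotype data is a lookup against a shared typology, with artifacts assigned at the most specific level the evidence supports. The codes and categories below are invented stand-ins; the actual chronotype typology used by EKAS and PKAP is much more detailed.

```python
# Hypothetical chronotype-style lookup table (codes are invented).
CHRONOTYPES = {
    ("Late Roman", "amphora"): "LR-Amphora",
    ("Classical", "bowl"): "CL-FineBowl",
}

def chronotype(period: str, shape: str) -> str:
    # Fall back to a broad period-level type when the shape is not in
    # the table, echoing the principle of recording each artifact at
    # the most specific identifiable level.
    return CHRONOTYPES.get((period, shape), f"{period}-Unknown")

print(chronotype("Late Roman", "amphora"))  # LR-Amphora
print(chronotype("Hellenistic", "tile"))    # Hellenistic-Unknown
```

Because every project using such a scheme draws on the same keys, records from different surveys can be aggregated directly, which is what makes cross-project analysis feasible.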
